Internet-Draft Bhuvaneswaran Vengainathan
Network Working Group Anton Basil
Intended Status: Informational Veryx Technologies
Expires: April 18, 2016 Mark Tassinari
Hewlett-Packard
Vishwas Manral
Ionos Corp
Sarah Banks
VSS Monitoring
October 19, 2015
Benchmarking Methodology for SDN Controller Performance
draft-ietf-bmwg-sdn-controller-benchmark-meth-00
Abstract
This document defines the methodologies for benchmarking performance
of SDN controllers. Terminology related to benchmarking SDN
controllers is described in the companion terminology document. SDN
controllers have been implemented with many varying designs in order
to achieve their intended network functionality. Hence, the authors
have taken the approach of considering an SDN controller as a black
box, defining the methodology in a manner that is agnostic to
protocols and network services supported by controllers. The intent
of this document is to provide a standard mechanism to measure the
performance of all controller implementations.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 18, 2016.
Copyright Notice
Copyright (c) 2015 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction
2. Scope
3. Test Setup
   3.1. Test setup - Controller working in Standalone Mode
   3.2. Test setup - Controller working in Cluster Mode
4. Test Considerations
   4.1. Network Topology
   4.2. Test Traffic
   4.3. Connection Setup
   4.4. Measurement Point Specification and Recommendation
   4.5. Connectivity Recommendation
   4.6. Test Repeatability
5. Benchmarking Tests
   5.1. Performance
        5.1.1. Network Topology Discovery Time
        5.1.2. Asynchronous Message Processing Time
        5.1.3. Asynchronous Message Processing Rate
        5.1.4. Reactive Path Provisioning Time
        5.1.5. Proactive Path Provisioning Time
        5.1.6. Reactive Path Provisioning Rate
        5.1.7. Proactive Path Provisioning Rate
        5.1.8. Network Topology Change Detection Time
   5.2. Scalability
        5.2.1. Control Session Capacity
        5.2.2. Network Discovery Size
        5.2.3. Forwarding Table Capacity
   5.3. Security
        5.3.1. Exception Handling
        5.3.2. Denial of Service Handling
   5.4. Reliability
        5.4.1. Controller Failover Time
        5.4.2. Network Re-Provisioning Time
6. References
   6.1. Normative References
   6.2. Informative References
7. IANA Considerations
8. Security Considerations
9. Acknowledgments
Appendix A. Example Test Topologies
   A.1. Leaf-Spine Topology - Three Tier Network Architecture
   A.2. Leaf-Spine Topology - Two Tier Network Architecture
Appendix B. Benchmarking Methodology using OpenFlow Controllers
   B.1. Protocol Overview
   B.2. Messages Overview
   B.3. Connection Overview
   B.4. Performance Benchmarking Tests
        B.4.1. Network Topology Discovery Time
        B.4.2. Asynchronous Message Processing Time
        B.4.3. Asynchronous Message Processing Rate
        B.4.4. Reactive Path Provisioning Time
        B.4.5. Proactive Path Provisioning Time
        B.4.6. Reactive Path Provisioning Rate
        B.4.7. Proactive Path Provisioning Rate
        B.4.8. Network Topology Change Detection Time
   B.5. Scalability
        B.5.1. Control Sessions Capacity
        B.5.2. Network Discovery Size
        B.5.3. Forwarding Table Capacity
   B.6. Security
        B.6.1. Exception Handling
        B.6.2. Denial of Service Handling
   B.7. Reliability
        B.7.1. Controller Failover Time
        B.7.2. Network Re-Provisioning Time
Authors' Addresses
1. Introduction
This document provides generic methodologies for benchmarking SDN
controller performance. An SDN controller may support many
northbound and southbound protocols, implement a wide range of
applications, and work solely, or as a group to achieve the desired
functionality. This document considers an SDN controller as a black
box, regardless of design and implementation. The tests defined in
this document can be used to benchmark an SDN controller for
performance, scalability, reliability, and security, independent of
northbound and southbound protocols. These tests can be performed on
an SDN controller running as a virtual machine (VM) instance or on a
bare metal server. This document is intended for those who want to
measure SDN controller performance as well as compare the
performance of various SDN controllers.
Conventions used in this document
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.
2. Scope
This document defines a methodology to measure the networking
metrics of SDN controllers. For the purpose of this memo, the SDN
controller is a function that manages and controls SDN nodes. Any
SDN controller without a control capability is out of scope for this
memo. The tests defined in this document enable benchmarking of SDN
controllers in two ways: as a standalone controller and as a cluster
of homogeneous controllers. These tests are recommended for
execution in lab environments rather than in live network
deployments. Performance benchmarking of a federation of controllers
is beyond the scope of this document.
3. Test Setup
The tests defined in this document enable measurement of an SDN
controller's performance in standalone mode and cluster mode. This
section defines common reference topologies that are later referred
to in individual tests.
3.1. Test setup - Controller working in Standalone Mode
+-----------------------------------------------------------+
| Management Plane Test Emulator |
| |
| -------------------- |
| | SDN Applications | |
| -------------------- |
| |
+-----------------------------+(I2)-------------------------+
|
|
| (Northbound interface)
+-------------------------------+
| +----------------+ |
| | SDN Controller | |
| +----------------+ |
| |
| Device Under Test (DUT) |
+-------------------------------+
| (Southbound interface)
|
|
+-----------------------------+(I1)-------------------------+
| |
| +---------+ +---------+ |
| | SDN |l1 ln-1| SDN | |
| | Node 1 |----- .... -----| Node n | |
| +---------+ +---------+ |
| |l0 |ln |
| | | |
| | | |
| +---------------+ +---------------+ |
| | Test Traffic | | Test Traffic | |
| | Generator | | Generator | |
| | (TP1) | | (TP2) | |
| +---------------+ +---------------+ |
| |
| Forwarding Plane Test Emulator |
+-----------------------------------------------------------+
Figure 1
3.2. Test setup - Controller working in Cluster Mode
+-----------------------------------------------------------+
| Management Plane Test Emulator |
| |
| -------------------- |
| | SDN Applications | |
| -------------------- |
| |
+-----------------------------+(I2)-------------------------+
|
|
| (Northbound interface)
+---------------------------------------------------------+
| |
| ------------------ ------------------ |
| | SDN Controller 1 | <--E/W--> | SDN Controller n | |
| ------------------ ------------------ |
| |
| Device Under Test (DUT) |
+---------------------------------------------------------+
| (Southbound interface)
|
|
+-----------------------------+(I1)-------------------------+
| |
| +---------+ +---------+ |
| | SDN |l1 ln-1| SDN | |
| | Node 1 |----- .... -----| Node n | |
| +---------+ +---------+ |
| |l0 |ln |
| | | |
| | | |
| +---------------+ +---------------+ |
| | Test Traffic | | Test Traffic | |
| | Generator | | Generator | |
| | (TP1) | | (TP2) | |
| +---------------+ +---------------+ |
| |
| Forwarding Plane Test Emulator |
+-----------------------------------------------------------+
Figure 2
4. Test Considerations
4.1. Network Topology
The test cases SHOULD use Leaf-Spine topology with at least 1 SDN
node in the topology for benchmarking. The test traffic generators
TP1 and TP2 SHOULD be connected to the first and the last SDN leaf
node. If a test case uses a test topology with a single SDN node,
the test traffic generators TP1 and TP2 SHOULD be connected to the
same node. However, to achieve a complete performance
characterization of the SDN controller, it is recommended that the
controller be benchmarked with many network topologies and a varying
number of SDN nodes. This document includes a few sample test
topologies, defined in Appendix A, for reference. Further, care
should be taken to ensure that a loop prevention mechanism is
enabled either in the SDN
controller, or in the network when the topology contains redundant
network paths.
4.2. Test Traffic
Test traffic is used to notify the controller about the arrival of
new flows. The test cases SHOULD use multiple frame sizes as
recommended in RFC 2544 for benchmarking.
4.3. Connection Setup
There may be controller implementations that support unencrypted and
encrypted network connections with SDN nodes. Further, the
controller may have backward compatibility with SDN nodes running
older versions of southbound protocols. It is recommended that the
controller performance be measured with one or more applicable
connection setup methods defined below.
1. Unencrypted connection with SDN nodes, running the same protocol
   version.
2. Unencrypted connection with SDN nodes, running different
   protocol versions.
   Example:
   a. Controller running the current protocol version and switch
      running an older protocol version
   b. Controller running an older protocol version and switch
      running the current protocol version
3. Encrypted connection with SDN nodes, running the same protocol
   version.
4. Encrypted connection with SDN nodes, running different protocol
   versions.
   Example:
   a. Controller running the current protocol version and switch
      running an older protocol version
   b. Controller running an older protocol version and switch
      running the current protocol version
4.4. Measurement Point Specification and Recommendation
The measurement accuracy depends on several factors including the
point of observation where the indications are captured. For
example, the notification can be observed at the controller or test
emulator. The test operator SHOULD make the observations and
measurements at the interfaces of the test emulator unless
explicitly mentioned otherwise in the individual test.
4.5. Connectivity Recommendation
The SDN controller in the test setup SHOULD be connected directly
with the forwarding and the management plane test emulators to avoid
any delays or failures introduced by intermediate devices during
benchmarking tests.
4.6. Test Repeatability
To increase confidence in the measured results, each test SHOULD be
repeated a minimum of 10 times.
Test Reporting
Each test has a reporting format that contains some global and
identical reporting components, and some individual components that
are specific to individual tests. The following test configuration
parameters and controller settings parameters MUST be reflected in
the test report.
Test Configuration Parameters:
1. Controller name and version
2. Northbound protocols and versions
3. Southbound protocols and versions
4. Controller redundancy mode (Standalone or Cluster Mode)
5. Connection setup (Unencrypted or Encrypted)
6. Network Topology (Mesh or Tree or Linear)
7. SDN Node Type (Physical or Virtual or Emulated)
8. Number of Nodes
9. Number of Links
10. Test Traffic Type
11. Controller System Configuration (e.g., CPU, memory, operating
    system, interface speed)
12. Reference Test Setup (e.g., Section 3.1)
Controller Settings Parameters:
1. Topology re-discovery timeout
2. Controller redundancy mode (e.g., active-standby)
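As an informal illustration, the sketch below (Python) shows how the
parameters above might be collected into a single report record. All
field names and sample values are assumptions chosen for this
example; they are not normative identifiers.

   # Sketch: assembling the test report parameters listed above.
   # Every field name and value here is illustrative only.
   test_report = {
       "controller_name_version": "ExampleController 1.0",  # assumed
       "northbound_protocols": ["REST"],
       "southbound_protocols": ["OpenFlow 1.4"],
       "redundancy_mode": "Standalone",      # or "Cluster"
       "connection_setup": "Unencrypted",    # or "Encrypted"
       "network_topology": "Tree",           # Mesh, Tree, or Linear
       "sdn_node_type": "Emulated",          # Physical/Virtual/Emulated
       "number_of_nodes": 10,
       "number_of_links": 9,
       "test_traffic_type": "Unicast",
       "controller_system": {"cpu": "4 cores", "memory": "8 GB"},
       "reference_test_setup": "Section 3.1",
       "controller_settings": {
           "topology_rediscovery_timeout_s": 3600,
           "redundancy_mode": "active-standby",
       },
   }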
5. Benchmarking Tests
5.1. Performance
5.1.1. Network Topology Discovery Time
Objective:
Measure the time taken by the SDN controller to discover the network
topology (nodes and links), expressed in milliseconds.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST support network discovery.
2. The tester should be able to retrieve the discovered topology
   information either through the controller's management interface
   or its northbound interface to determine whether the discovery
   was successful and complete.
3. Ensure that the controller's topology re-discovery timeout has
been set to the maximum value to avoid initiation of re-discovery
process in the middle of the test.
Procedure:
1. Ensure that the controller is operational, its network
applications, northbound and southbound interfaces are up and
running.
2. Establish the network connections between controller and SDN
nodes.
3. Record the time for the first discovery message (Tm1) received
from the controller at forwarding plane test emulator interface
I1.
4. Query the controller every 3 seconds to obtain the discovered
network topology information through the northbound interface or
the management interface and compare it with the deployed network
topology information.
5. Stop the test when the discovered topology information matches the
deployed network topology, or when the discovered topology
information for 3 consecutive queries return the same details.
6. Record the time of the last discovery message (Tmn) sent to the
   controller from the forwarding plane test emulator interface (I1)
   when the test completes successfully (i.e., the topology
   matches).
Measurement:
Topology Discovery Time Tr1 = Tmn-Tm1.
Tr1 + Tr2 + Tr3 .. Trn
Average Topology Discovery Time = -----------------------
Total Test Iterations
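The following sketch (Python) illustrates the polling logic of
procedure steps 4 and 5 and the averaging above. The helper
query_topology() and the topology representation are hypothetical
stand-ins for a northbound or management interface query, and the
sample Tr values are illustrative only.

   import time

   def wait_for_discovery(query_topology, deployed, interval_s=3):
       # Poll every 3 seconds; stop when the discovered topology
       # matches the deployed one, or when 3 consecutive queries
       # return the same (non-matching) details.
       previous, repeats = None, 0
       while True:
           discovered = query_topology()
           if discovered == deployed:
               return True
           repeats = repeats + 1 if discovered == previous else 1
           previous = discovered
           if repeats >= 3:
               return False
           time.sleep(interval_s)

   # Per-iteration result: Tr = Tmn - Tm1 (milliseconds, taken from
   # the capture at emulator interface I1), then the average:
   trs = [410.0, 395.0, 402.0]              # sample Tr values
   average_discovery_time = sum(trs) / len(trs)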
Reporting Format:
The Topology Discovery Time results MUST be reported in the format
of a table, with a row for each successful iteration. The last row
of the table indicates the average Topology Discovery Time.
If this test is repeated with a varying number of nodes over the same
topology, the results SHOULD be reported in the form of a graph. The
X coordinate SHOULD be the Number of nodes (N), the Y coordinate
SHOULD be the average Topology Discovery Time.
If this test is repeated with the same number of nodes over different
topologies, the results SHOULD be reported in the form of a graph.
The X coordinate SHOULD be the Topology Type, the Y coordinate
SHOULD be the average Topology Discovery Time.
5.1.2. Asynchronous Message Processing Time
Objective:
Measure the time taken by the SDN controller to process an
asynchronous message, expressed in milliseconds.
Reference Test Setup:
This test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST have completed the network topology discovery
for the connected SDN nodes.
Procedure:
1. Generate asynchronous messages from every connected SDN node to
   the SDN controller, one at a time in series, from the forwarding
   plane test emulator for the test duration.
2. Record every request transmit timestamp (T1) and the
   corresponding response receive timestamp (R1) at the forwarding
   plane test emulator interface (I1) for every successful message
   exchange.
Measurement:
(R1-T1) + (R2-T2)..(Rn-Tn)
Asynchronous Message Processing Time Tr1 = -----------------------
Nrx
Where Nrx is the total number of successful messages exchanged.
Tr1 + Tr2 + Tr3..Trn
Average Asynchronous Message Processing Time= --------------------
Total Test Iterations
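A minimal sketch (Python) of the computation above, assuming the
emulator provides matched (transmit, receive) timestamp pairs per
iteration; the sample values are illustrative only.

   def processing_time(pairs):
       # pairs: (T, R) timestamps of successful message exchanges;
       # Tr = ((R1-T1) + ... + (Rn-Tn)) / Nrx
       return sum(r - t for t, r in pairs) / len(pairs)

   all_iterations = [
       [(0.0, 1.8), (2.0, 3.7)],           # iteration 1, times in ms
       [(0.0, 2.1), (2.0, 4.2)],           # iteration 2
   ]
   per_iteration = [processing_time(p) for p in all_iterations]
   average = sum(per_iteration) / len(per_iteration)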
Reporting Format:
The Asynchronous Message Processing Time results MUST be reported in
the format of a table with a row for each iteration. The last row of
the table indicates the average Asynchronous Message Processing
Time.
The report should capture the following information in addition to
the configuration parameters captured in section 5.

- Successful messages exchanged (Nrx)
If this test is repeated with a varying number of nodes with the same
topology, the results SHOULD be reported in the form of a graph. The
X coordinate SHOULD be the Number of nodes (N), the Y coordinate
SHOULD be the average Asynchronous Message Processing Time.
If this test is repeated with the same number of nodes using different
topologies, the results SHOULD be reported in the form of a graph.
The X coordinate SHOULD be the Topology Type, the Y coordinate
SHOULD be the average Asynchronous Message Processing Time.
5.1.3. Asynchronous Message Processing Rate
Objective:
To measure the maximum rate of asynchronous messages (session
aliveness check message, new flow arrival notification message,
etc.)
a controller can process within the test duration, expressed in
messages processed per second.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST have completed the network topology discovery
for the connected SDN nodes.
Procedure:
1. Generate asynchronous messages continuously at the maximum
possible rate on the established connections from all the
connected SDN nodes in the forwarding plane test emulator for the
Test Duration (Td).
2. Record the total number of responses received from the controller
   (Nrx) as well as the number of messages sent (Ntx) to the
   controller within the test duration (Td) at the forwarding plane
   test emulator interface (I1).
Measurement:
Nrx
Asynchronous Message Processing Rate Tr1 = -----
Td
Tr1 + Tr2 + Tr3..Trn
Average Asynchronous Message Processing Rate= --------------------
Total Test Iterations
Loss Ratio = (Ntx-Nrx)/Ntx.
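A short sketch (Python) of the rate and loss-ratio computation
above, using assumed sample counts collected at interface I1.

   ntx, nrx, td = 120000, 100000, 10.0   # sent, received, Td (s)
   processing_rate = nrx / td            # messages per second
   loss_ratio = (ntx - nrx) / ntx        # fraction of lost messages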
Reporting Format:
The Asynchronous Message Processing Rate results MUST be reported in
the format of a table with a row for each iteration. The last row of
the table indicates the average Asynchronous Message Processing
Rate.
The report should capture the following information in addition to
the configuration parameters captured in section 5.
- Offered rate (Ntx)
- Loss Ratio
If this test is repeated with a varying number of nodes over the same
topology, the results SHOULD be reported in the form of a graph. The
X coordinate SHOULD be the Number of nodes (N), the Y coordinate
SHOULD be the average Asynchronous Message Processing Rate.
If this test is repeated with the same number of nodes over different
topologies, the results SHOULD be reported in the form of a graph.
The X coordinate SHOULD be the Topology Type, the Y coordinate
SHOULD be the average Asynchronous Message Processing Rate.
5.1.4. Reactive Path Provisioning Time
Objective:
To measure the time taken by the controller to set up a path
reactively between source and destination nodes, expressed in
milliseconds.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST contain the network topology information for
the deployed network topology.
2. The controller should have knowledge of the location of the
   destination endpoint for which the path has to be provisioned.
   This can be achieved through dynamic learning or static
   provisioning.
3. Ensure that the default action for 'flow miss' in SDN node is
configured to 'send to controller'.
4. Ensure that each SDN node in a path requires the controller to
   make the forwarding decision while provisioning the entire path.
Procedure:
1. Send a single traffic stream from the test traffic generator TP1
to test traffic generator TP2.
2. Record the time of the first flow provisioning request message
sent to the controller (Tsf1) from the SDN node at the forwarding
plane test emulator interface (I1).
3. Wait for the arrival of the first traffic frame at the test
   traffic generator TP2 or the expiry of the test duration (Td).
4. Record the time of the last flow provisioning response message
received from the controller (Tdf1) to the SDN node at the
forwarding plane test emulator interface (I1).
Measurement:
Reactive Path Provisioning Time Tr1 = Tdf1-Tsf1.
Tr1 + Tr2 + Tr3 .. Trn
Average Reactive Path Provisioning Time = -----------------------
Total Test Iterations
Reporting Format:
The Reactive Path Provisioning Time results MUST be reported in the
format of a table with a row for each iteration. The last row of the
table indicates the Average Reactive Path Provisioning Time.
The report should capture the following information in addition to
the configuration parameters captured in section 5.
- Number of SDN nodes in the path
5.1.5. Proactive Path Provisioning Time
Objective:
To measure the time taken by the controller to set up a path
proactively between source and destination nodes, expressed in
milliseconds.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST contain the network topology information for
the deployed network topology.
2. The controller should have knowledge of the location of the
   destination endpoint for which the path has to be provisioned.
   This can be achieved through dynamic learning or static
   provisioning.
3. Ensure that the default action for flow miss in SDN node is
'drop'.
Procedure:
1. Send a single traffic stream from test traffic generator TP1 to
TP2.
2. Install the flow entries to reach from the test traffic generator
   TP1 to the test traffic generator TP2 through the controller's
   northbound or management interface.
3. Wait for the arrival of first traffic frame at the test traffic
generator TP2 or the expiry of test duration (Td).
4. Record the time when the proactive flow is provisioned in the
Controller (Tsf1) at the management plane test emulator interface
I2.
5. Record the time of the last flow provisioning message received
from the controller (Tdf1) at the forwarding plane test emulator
interface I1.
Measurement:
Proactive Flow Provisioning Time Tr1 = Tdf1-Tsf1.
Tr1 + Tr2 + Tr3 .. Trn
Average Proactive Path Provisioning Time = -----------------------
Total Test Iterations
Reporting Format:
The Proactive Path Provisioning Time results MUST be reported in the
format of a table with a row for each iteration. The last row of the
table indicates the Average Proactive Path Provisioning Time.
The report should capture the following information in addition to
the configuration parameters captured in section 5.
- Number of SDN nodes in the path
5.1.6. Reactive Path Provisioning Rate
Objective:
Measure the maximum number of independent paths a controller can
concurrently establish between source and destination nodes
reactively within the test duration, expressed in paths per second.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST contain the network topology information for
the deployed network topology.
2. The controller should have knowledge of the location of the
   destination addresses for which the paths have to be provisioned.
   This can be achieved through dynamic learning or static
   provisioning.
3. Ensure that the default action for 'flow miss' in SDN node is
configured to 'send to controller'.
4. Ensure that each SDN node in a path requires the controller to
make the forwarding decision while provisioning the entire path.
Procedure:
1. Send traffic with unique source and destination addresses from
test traffic generator TP1.
2. Record the total number of unique traffic frames (Ndf) received
   at the test traffic generator TP2 within the test duration (Td).
Measurement:
Ndf
Reactive Path Provisioning Rate Tr1 = ------
Td
Tr1 + Tr2 + Tr3 .. Trn
Average Reactive Path Provisioning Rate = ------------------------
Total Test Iterations
Reporting Format:
The Reactive Path Provisioning Rate results MUST be reported in the
format of a table with a row for each iteration. The last row of the
table indicates the Average Reactive Path Provisioning Rate.
The report should capture the following information in addition to
the configuration parameters captured in section 5.
- Number of SDN nodes in the path
- Offered rate
5.1.7. Proactive Path Provisioning Rate
Objective:
Measure the maximum number of independent paths a controller can
concurrently establish between source and destination nodes
proactively within the test duration, expressed in paths per second.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST contain the network topology information for
the deployed network topology.
2. The controller should have knowledge of the location of the
   destination addresses for which the paths have to be provisioned.
   This can be achieved through dynamic learning or static
   provisioning.
3. Ensure that the default action for flow miss in SDN node is
'drop'.
Procedure:
1. Send traffic continuously with unique source and destination
addresses from test traffic generator TP1.
2. Install the corresponding flow entries to reach from the
   simulated sources at the test traffic generator TP1 to the
   simulated destinations at the test traffic generator TP2 through
   the controller's northbound or management interface.
3. Record the total number of unique traffic frames received (Ndf)
   at the test traffic generator TP2 within the test duration (Td).
Measurement:
Ndf
Proactive Path Provisioning Rate Tr1 = ------
Td
Tr1 + Tr2 + Tr3 .. Trn
Average Proactive Path Provisioning Rate = -----------------------
Total Test Iterations
Reporting Format:
The Proactive Path Provisioning Rate results MUST be reported in the
format of a table with a row for each iteration. The last row of the
table indicates the Average Proactive Path Provisioning Rate.
The report should capture the following information in addition to
the configuration parameters captured in section 5.
- Number of SDN nodes in the path
- Offered rate
5.1.8. Network Topology Change Detection Time
Objective:
Measure the time taken by the controller to detect any changes in
the network topology, expressed in milliseconds.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST have discovered the network topology
information for the deployed network topology.
2. The periodic network discovery operation should be configured
   with an interval of twice the Test Duration (Td) value.
Procedure:
1. Trigger a topology change event by bringing down an active SDN
node in the topology.
2. Record the time when the first topology change notification is
sent to the controller (Tcn) at the forwarding plane test emulator
interface (I1).
3. Stop the test when the controller sends the first topology re-
   discovery message to the SDN node or at the expiry of the test
   duration (Td).
4. Record the time when the first topology re-discovery message is
   received from the controller (Tcd) at the forwarding plane test
   emulator interface (I1).
Measurement:
Network Topology Change Detection Time Tr1 = Tcd-Tcn.
Tr1 + Tr2 + Tr3 .. Trn
Average Network Topology Change Detection Time = ------------------
Total Test Iterations
Reporting Format:
The Network Topology Change Detection Time results MUST be reported
in the format of a table with a row for each iteration. The last row
of the table indicates the average Network Topology Change Detection
Time.
5.2. Scalability
5.2.1. Control Session Capacity
Objective:
Measure the maximum number of control sessions that the controller
can maintain.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Procedure:
1. Establish control connection with controller from every SDN node
emulated in the forwarding plane test emulator.
2. Stop the test when the controller starts dropping the control
   connections.
3. Record the number of successful connections established with the
controller (CCn) at the forwarding plane test emulator.
Measurement:
Control Sessions Capacity = CCn.
Reporting Format:
The Control Session Capacity results MUST be reported in addition to
the configuration parameters captured in section 5.
5.2.2. Network Discovery Size
Objective:
Measure the network size (number of nodes, links, and hosts) that a
controller can discover.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST support automatic network discovery.
2. The tester should be able to retrieve the discovered topology
   information either through the controller's management interface
   or its northbound interface.
Procedure:
1. Establish the network connections between controller and network
nodes.
2. Query the controller for the discovered network topology
information and compare it with the deployed network topology
information.
3. Increase the number of nodes by 1 when the comparison is
   successful and repeat the test.
4. Decrease the number of nodes by 1 when the comparison fails and
   repeat the test.
5. Continue the test until the comparison in step 4 is successful
   (the search loop is sketched after this procedure).
6. Record the number of nodes for the last iteration (Ns) where the
topology comparison was successful.
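The increase/decrease search in steps 3 to 5 can be sketched as
follows (Python); discovers(n) is a hypothetical predicate that
deploys n nodes and reports whether the discovered and deployed
topologies match.

   def network_discovery_size(discovers, n):
       # Grow while discovery is complete, then shrink while it is
       # not; Ns is the largest n for which discovery succeeds.
       while discovers(n):
           n += 1
       while n > 0 and not discovers(n):
           n -= 1
       return n                            # Ns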
Measurement:
Network Discovery Size = Ns.
Reporting Format:
The Network Discovery Size results MUST be reported in addition to
the configuration parameters captured in section 5.
5.2.3. Forwarding Table Capacity
Objective:
Measure the maximum number of flow entries a controller can manage
in its Forwarding table.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller Forwarding table should be empty.
2. Flow idle time MUST be set to a high or infinite value.
3. The controller MUST have completed network topology discovery.
4. The tester should be able to retrieve the forwarding table
   information either through the controller's management interface
   or its northbound interface.
Procedure:
Reactive Flow Provisioning Mode:
1. Send bi-directional traffic continuously with unique source
   and/or destination addresses from test traffic generators TP1 and
   TP2 at the controller's asynchronous message processing rate.
2. Query the controller at a regular interval (e.g., 5 seconds) for
the number of learnt flow entries from its northbound interface.
3. Stop the test when the retrieved value is constant for three
consecutive iterations and record the value received from the last
query (Nrp).
Proactive Flow Provisioning Mode:
1. Install unique flows continuously through controller's northbound
or management interface until a failure response is received from
the controller.
2. Record the total number of successful responses (Nrp).
Note:
Some controller designs for proactive flow provisioning mode may
require the switch to send flow setup requests in order to generate
flow setup responses. In such cases, it is recommended to generate
bi-directional traffic for the provisioned flows.
Measurement:
Proactive Flow Provisioning Mode:
Max Flow Entries = Total number of flows provisioned (Nrp)
Reactive Flow Provisioning Mode:
Max Flow Entries = Total number of learnt flow entries (Nrp)
Forwarding Table Capacity = Max Flow Entries.
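A sketch (Python) of the reactive-mode stopping rule: poll the
learnt-flow count until three consecutive queries agree. The
query_flow_count() helper is a hypothetical northbound call, and the
5-second interval mirrors the example in the procedure.

   import time

   def forwarding_table_capacity(query_flow_count, interval_s=5):
       history = []
       while True:
           history.append(query_flow_count())
           if len(history) >= 3 and len(set(history[-3:])) == 1:
               return history[-1]          # Nrp: Max Flow Entries
           time.sleep(interval_s)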
Reporting Format:
The Forwarding Table Capacity results MUST be tabulated with the
following information in addition to the configuration parameters
captured in section 5.
- Provisioning Type (Proactive/Reactive)
5.3. Security
5.3.1. Exception Handling
Objective:
Determine the effect of handling error packets and notifications on
performance tests. The impact MUST be measured for the following
performance tests:
a. Path Provisioning Rate
b. Path Provisioning Time
c. Network Topology Change Detection Time
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. This test MUST be performed after obtaining the baseline
measurement results for the above performance tests.
2. Ensure that the invalid messages are not dropped by the
intermediate devices connecting the controller and SDN nodes.
Procedure:
1. Perform the above listed performance tests and send 1% of messages
from the Asynchronous Message Processing Rate as invalid messages
from the connected SDN nodes emulated at the forwarding plane test
emulator.
2. Perform the above listed performance tests and send 2% of messages
from the Asynchronous Message Processing Rate as invalid messages
from the connected SDN nodes emulated at the forwarding plane test
emulator.
Note:
Invalid messages can be frames with incorrect protocol fields or any
form of failure notifications sent towards controller.
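The 1% and 2% mixes in steps 1 and 2 can be produced as in the
sketch below (Python); send_valid() and send_invalid() are
hypothetical emulator hooks for well-formed and malformed messages,
respectively.

   def send_with_exceptions(send_valid, send_invalid, total, percent):
       # Interleave invalid messages at the requested percentage of
       # the offered asynchronous message load.
       invalid_every = round(100 / percent)   # every 100th for 1%
       for i in range(1, total + 1):
           if i % invalid_every == 0:
               send_invalid()
           else:
               send_valid()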
Measurement:
Measurement MUST be done as per the equation defined in the
corresponding performance test measurement section.
Reporting Format:
The Exception Handling results MUST be reported in the format of a
table with a column for each of the parameters below and a row for
each of the listed performance tests.
- Without Exceptions
- With 1% Exceptions
- With 2% Exceptions
5.3.2. Denial of Service Handling
Objective:
Determine the effect of handling DoS attacks on performance and
scalability tests. The impact MUST be measured for the following
tests:
a. Path Provisioning Rate
b. Path Provisioning Time
c. Network Topology Change Detection Time
d. Network Discovery Size
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
This test MUST be performed after obtaining the baseline measurement
results for the above tests.
Procedure:
1. Perform the listed tests and launch a DoS attack towards
controller while the test is running.
Note:
DoS attacks can be launched on one of the following interfaces.
a. Northbound (e.g., Sending a huge number of requests on
northbound interface)
b. Management (e.g., Ping requests to controller's management
interface)
c. Southbound (e.g., TCP SYN messages on southbound interface)
Measurement:
Measurement MUST be done as per the equation defined in the
corresponding test's measurement section.
Reporting Format:
The DoS Attacks Handling results MUST be reported in the format of a
table with a column for each of the parameters below and a row for
each of the listed tests.
- Without any attacks
- With attacks
The report should also specify the nature of attack and the
interface.
5.4. Reliability
5.4.1. Controller Failover Time
Objective:
Measure the time taken to switch from an active controller to the
backup controller, when the controllers work in redundancy mode and
the active controller fails.
Reference Test Setup:
The test SHOULD use the test setup described in section 3.2 of this
document.
Prerequisite:
1. Master controller election MUST be completed.
2. Nodes are connected to the controller cluster as per the
Redundancy Mode (RM).
3. The controller cluster should have completed the network topology
discovery.
4. The SDN node MUST send all new flows to the controller when it
   receives them from the test traffic generator.
5. The controller should have learnt the location of the destination
   (D1) at test traffic generator TP2.
Procedure:
1. Send uni-directional traffic continuously with incremental
   sequence numbers and source addresses from test traffic generator
   TP1 at a rate that the controller can process without any drops.
2. Ensure that there are no packet drops observed at the test traffic
generator TP2.
3. Bring down the active controller.
4. Stop the test when the first frame is received on TP2 after the
   failover operation.
5. Record the time at which the last valid frame was received (T1)
   at test traffic generator TP2 before the sequence error and the
   first valid frame was received (T2) after the sequence error at
   TP2.
Measurement:
Controller Failover Time = (T2 - T1)
Packet Loss = Number of missing packet sequences.
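The sketch below (Python) derives both values from a TP2 capture in
which each frame is assumed to carry a timestamp and a sequence
number; the sample frames are illustrative.

   def failover_metrics(frames):
       # frames: time-ordered (timestamp, sequence) tuples at TP2
       t1 = t2 = None
       missing = 0
       for (ts, seq), (nts, nseq) in zip(frames, frames[1:]):
           if nseq != seq + 1:             # sequence error found
               t1, t2 = ts, nts            # last valid / first after
               missing += nseq - seq - 1
       return (t2 - t1 if t1 is not None else 0.0), missing

   frames = [(0.0, 1), (1.0, 2), (5.0, 7), (6.0, 8)]
   failover_time, packet_loss = failover_metrics(frames)  # 4.0, 4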
Reporting Format:
The Controller Failover Time results MUST be tabulated with the
following information.
- Number of cluster nodes
- Redundancy mode
- Controller Failover Time
- Packet Loss
- Cluster keep-alive interval
5.4.2. Network Re-Provisioning Time
Objective:
Measure the time taken by the controller to re-route traffic when
there is a failure in the existing traffic paths.
Reference Test Setup:
This test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. A network with the given number of nodes and redundant paths MUST
   be deployed.
2. The controller MUST have knowledge of the location of the test
   traffic generators TP1 and TP2.
3. Ensure that the controller does not pre-provision the alternate
path in the emulated SDN nodes at the forwarding plane test
emulator.
Procedure:
1. Send bi-directional traffic continuously with unique sequence
   numbers from TP1 and TP2.
2. Bring down a link or switch in the traffic path.
3. Stop the test after receiving the first frame after network re-
   convergence.
4. Record the time of last received frame prior to the frame loss at
TP2 (TP2-Tlfr) and the time of first frame received after the
frame loss at TP2 (TP2-Tffr).
5. Record the time of last received frame prior to the frame loss at
TP1 (TP1-Tlfr) and the time of first frame received after the
frame loss at TP1 (TP1-Tffr).
Measurement:
Forward Direction Path Re-Provisioning Time (FDRT)
= (TP2-Tffr - TP2-Tlfr)
Reverse Direction Path Re-Provisioning Time (RDRT)
= (TP1-Tffr - TP1-Tlfr)
Network Re-Provisioning Time = (FDRT+RDRT)/2
Forward Direction Packet Loss = Number of missing sequence frames
at TP2

Reverse Direction Packet Loss = Number of missing sequence frames
at TP1
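Combining the per-direction gaps into the final metric is plain
arithmetic, as sketched below (Python) with assumed timestamps from
the TP1 and TP2 captures.

   tp2_tlfr, tp2_tffr = 10.0, 260.0      # forward direction, ms
   tp1_tlfr, tp1_tffr = 12.0, 248.0      # reverse direction, ms

   fdrt = tp2_tffr - tp2_tlfr            # forward re-provisioning
   rdrt = tp1_tffr - tp1_tlfr            # reverse re-provisioning
   network_reprovisioning_time = (fdrt + rdrt) / 2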
Reporting Format:
The Network Re-Provisioning Time results MUST be tabulated with the
following information.
- Number of nodes in the primary path
- Number of nodes in the alternate path
- Network Re-Provisioning Time
- Forward Direction Packet Loss
- Reverse Direction Packet Loss
6. References
6.1. Normative References
[RFC2544] S. Bradner, J. McQuaid, "Benchmarking Methodology for
          Network Interconnect Devices", RFC 2544, March 1999.

[RFC2330] V. Paxson, G. Almes, J. Mahdavi, M. Mathis, "Framework
          for IP Performance Metrics", RFC 2330, May 1998.

[RFC6241] R. Enns, M. Bjorklund, J. Schoenwaelder, A. Bierman,
          "Network Configuration Protocol (NETCONF)", RFC 6241,
          July 2011.

[RFC6020] M. Bjorklund, "YANG - A Data Modeling Language for the
          Network Configuration Protocol (NETCONF)", RFC 6020,
          October 2010.

[RFC5440] JP. Vasseur, JL. Le Roux, "Path Computation Element (PCE)
          Communication Protocol (PCEP)", RFC 5440, March 2009.

[OpenFlow Switch Specification] ONF, "OpenFlow Switch
          Specification" Version 1.4.0 (Wire Protocol 0x05),
          October 14, 2013.

[I-D.sdn-controller-benchmark-term] Bhuvaneswaran Vengainathan,
          Anton Basil, Mark Tassinari, Vishwas Manral, Sarah Banks,
          "Terminology for Benchmarking SDN Controller Performance",
          draft-ietf-bmwg-sdn-controller-benchmark-term-00
          (Work in progress), October 19, 2015.
6.2. Informative References
[I-D.i2rs-architecture] A. Atlas, J. Halpern, S. Hares, D. Ward,
T. Nadeau, "An Architecture for the Interface to the
Routing System", draft-ietf-i2rs-architecture-09
(Work in progress), March 6, 2015
[OpenContrail] Ankur Singla, Bruno Rijsman, "OpenContrail
Architecture Documentation",
http://opencontrail.org/opencontrail-architecture-documentation
[OpenDaylight] OpenDaylight Controller:Architectural Framework,
https://wiki.opendaylight.org/view/OpenDaylight_Controller
7. IANA Considerations
This document does not have any IANA requests.
8. Security Considerations
The benchmarking tests described in this document are limited to the
performance characterization of the controller in a lab environment
with an isolated network.
9. Acknowledgments
The authors would like to thank the following individuals for
providing their valuable comments to the earlier versions of this
document: Al Morton (AT&T), Sandeep Gangadharan (HP),
M. Georgescu (NAIST), Andrew McGregor (Google), Scott Bradner
(Harvard University), Jay Karthik (Cisco), Ramakrishnan (Dell),
Khasanov Boris (Huawei), Brian Castelli (Spirent)
This document was prepared using 2-Word-v2.0.template.dot.
Appendix A. Example Test Topologies
A.1. Leaf-Spine Topology - Three Tier Network Architecture
+----------+
| SDN |
| Node | (Core)
+----------+
/ \
/ \
+------+ +------+
| SDN | | SDN | (Spine)
| Node |.. | Node |
+------+ +------+
/ \ / \
/ \ / \
l1 / / \ ln-1
/ / \ \
+--------+ +-------+
| SDN | | SDN |
| Node |.. | Node | (Leaf)
+--------+ +-------+
A.2. Leaf-Spine Topology - Two Tier Network Architecture
+------+ +------+
| SDN | | SDN | (Spine)
| Node |.. | Node |
+------+ +------+
/ \ / \
/ \ / \
l1 / / \ ln-1
/ / \ \
+--------+ +-------+
| SDN | | SDN |
| Node |.. | Node | (Leaf)
+--------+ +-------+
Appendix B. Benchmarking Methodology using OpenFlow Controllers
This section gives an overview of OpenFlow protocol and provides
test methodology to benchmark SDN controllers supporting OpenFlow
southbound protocol.
B.1. Protocol Overview
OpenFlow is an open standard protocol defined by the Open Networking
Foundation (ONF), used for programming the forwarding plane of
network switches or routers via a centralized controller.
B.2. Messages Overview
The OpenFlow protocol supports three message types, namely
controller-to-switch, asynchronous, and symmetric.
Controller-to-switch messages are initiated by the controller and
used to directly manage or inspect the state of the switch. These
messages allow controllers to query/configure the switch (Features,
Configuration messages), collect information from the switch (Read-
State message), send packets on a specified port of the switch
(Packet-Out message), and modify the switch forwarding plane and
state (Modify-State, Role-Request messages, etc.).
Asynchronous messages are generated by the switch without a
controller soliciting them. These messages allow switches to update
controllers on the arrival of a new flow (Packet-In), switch state
changes (Flow-Removed, Port-Status), and errors (Error).
Symmetric messages are generated in either direction without
solicitation. These messages allow switches and controllers to set
up a connection (Hello), verify liveness (Echo), and offer
additional functionality (Experimenter).
B.3. Connection Overview
The OpenFlow channel is used to exchange OpenFlow messages between
an OpenFlow switch and an OpenFlow controller. The OpenFlow channel
connection can be set up using plain TCP or TLS. By default, a
switch establishes a single connection with the SDN controller. A
switch may establish multiple parallel connections to a single
controller (auxiliary connections) or to multiple controllers to
handle controller failures and load balancing.
B.4. Performance Benchmarking Tests
B.4.1. Network Topology Discovery Time
Procedure:
SDN Nodes OpenFlow SDN
Controller Application
| | |
| |<Initialize controller |
| |app.,NB and SB interfaces> |
| | |
|<Deploy network with | |
| given no. of OF switches> | |
| | |
| OFPT_HELLO Exchange | |
|<-------------------------->| |
| | |
| PACKET_OUT with LLDP | |
| to all switches | |
(Tm1)|<---------------------------| |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-1| |
|--------------------------->| |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-2| |
|--------------------------->| |
| . | |
| . | |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-n| |
(Tmn)|--------------------------->| |
| | |
| | <Wait for the expiry |
| | of Test Duration (Td)>|
| | |
| | Query the controller for|
| | discovered n/w topo.(Di)|
| |<--------------------------|
| | |
| | <Compare the discovered |
| | & offered n/w topology>|
| | |
Legend:
NB: Northbound
SB: Southbound
OF: OpenFlow
Tm1: Time of reception of first LLDP message from controller
Tmn: Time of last LLDP message sent to controller
Discussion:
The Network Topology Discovery Time can be obtained by calculating
the time difference between the first PACKET_OUT with LLDP message
received from the controller (Tm1) and the last PACKET_IN with LLDP
message sent to the controller (Tmn) when the comparison is
successful.
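As an illustration, extracting Tm1 and Tmn from a time-ordered
capture of OpenFlow messages at interface I1 might look like the
sketch below (Python); the (timestamp, type, carries-LLDP) record
format is an assumption of this example.

   def discovery_time(capture):
       # Tm1: first PACKET_OUT with LLDP from the controller
       # Tmn: last PACKET_IN with LLDP sent to the controller
       tm1 = min(ts for ts, mtype, lldp in capture
                 if mtype == "PACKET_OUT" and lldp)
       tmn = max(ts for ts, mtype, lldp in capture
                 if mtype == "PACKET_IN" and lldp)
       return tmn - tm1

   capture = [(0.0, "PACKET_OUT", True), (0.4, "PACKET_IN", True),
              (0.9, "PACKET_IN", True)]
   print(discovery_time(capture))          # 0.9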
B.4.2. Asynchronous Message Processing Time
Procedure:
SDN Nodes OpenFlow SDN
Controller Application
| | |
|PACKET_IN with single | |
|OFP match header | |
(T0)|--------------------------->| |
| | |
| PACKET_OUT with single OFP | |
| action header | |
(R0)|<---------------------------| |
| . | |
| . | |
| . | |
| | |
|PACKET_IN with single OFP | |
|match header | |
(Tn)|--------------------------->| |
| | |
| PACKET_OUT with single OFP | |
| action header| |
(Rn)|<---------------------------| |
| | |
|<Wait for the expiry of | |
|Test Duration> | |
| | |
|<Record the number of | |
|PACKET_INs/PACKET_OUTs | |
|Exchanged (Nrx)> | |
| | |
Legend:
T0,T1, ..Tn are PACKET_IN messages transmit timestamps.
R0,R1, ..Rn are PACKET_OUT messages receive timestamps.
Nrx : Number of successful PACKET_IN/PACKET_OUT message
exchanges
Discussion:
The Asynchronous Message Processing Time will be obtained as
((R0-T0) + (R1-T1) + .. + (Rn-Tn)) / Nrx.
B.4.3. Asynchronous Message Processing Rate
Procedure:
SDN Nodes OpenFlow SDN
Controller Application
| | |
|PACKET_IN with multiple OFP | |
|match headers | |
|--------------------------->| |
| | |
| PACKET_OUT with multiple | |
| OFP action headers| |
|<---------------------------| |
| | |
|PACKET_IN with multiple OFP | |
|match headers | |
|--------------------------->| |
| | |
| PACKET_OUT with multiple | |
| OFP action headers| |
|<---------------------------| |
| . | |
| . | |
| . | |
| | |
|PACKET_IN with multiple OFP | |
|match headers | |
|--------------------------->| |
| | |
| PACKET_OUT with multiple | |
| OFP action headers| |
|<---------------------------| |
| | |
|<Wait for the expiry of | |
|Test Duration> | |
| | |
|<Record the number of OFP | |
(Nrx)|action headers rcvd> | |
| | |
Discussion:
The Asynchronous Message Processing Rate will be obtained by
counting the number of OFP action headers received in all PACKET_OUT
messages during the test duration and dividing that count by the
test duration (Td).
B.4.4. Reactive Path Provisioning Time
Procedure:
Test Traffic Test Traffic SDN Nodes OpenFlow
Generator TP1 Generator TP2 Controller
| | | |
| |G-ARP (D1) | |
| |--------------------->| |
| | | |
| | |PACKET_IN(D1) |
| | |------------------>|
| | | |
|Traffic (S1,D1) | |
(Tsf1)|----------------------------------->| |
| | | |
| | | |
| | | |
| | |PACKET_IN(S1,D1) |
| | |------------------>|
| | | |
| | | FLOW_MOD(D1) |
| | |<------------------|
| | | |
| |Traffic (S1,D1) | |
| (Tdf1)|<---------------------| |
| | | |
Legend:
G-ARP: Gratuitous ARP message.
Tsf1: Time of first frame sent from TP1
Tdf1: Time of first frame received from TP2
Discussion:
The Reactive Path Provisioning Time can be obtained by finding the
time difference between the transmit and receive time of the traffic
(Tdf1-Tsf1).
B.4.5. Proactive Path Provisioning Time
Procedure:
Test Traffic Test Traffic SDN Nodes OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1) | | |
| |-------------->| | |
| | | | |
| | |PACKET_IN(D1) | |
| | |--------------->| |
| | | | |
|Traffic (S1,D1) | | |
(Tsf1)|---------------------------->|               |              |
| | | | |
| | | | <Install flow|
| | | | for S1,D1> |
| | | | |
| | | FLOW_MOD(D1) | |
| | |<---------------| |
| | | | |
| |Traffic (S1,D1)| | |
| (Tdf1)|<--------------| | |
| | | | |
Legend:
G-ARP: Gratuitous ARP message.
Tsf1: Time of first frame sent from TP1
Tdf1: Time of first frame received from TP2
Discussion:
The Proactive Path Provisioning Time can be obtained by finding the
time difference between the transmit and receive time of the traffic
(Tdf1-Tsf1).
B.4.6. Reactive Path Provisioning Rate
Procedure:
Test Traffic Test Traffic SDN Nodes OpenFlow
Generator TP1 Generator TP2 Controller
| | | |
| | | |
| | | |
| |G-ARP (D1..Dn) | |
| |--------------------| |
| | | |
| | |PACKET_IN(D1..Dn) |
| | |--------------------->|
| | | |
|Traffic (S1..Sn,D1..Dn) | |
|--------------------------------->| |
| | | |
| | |PACKET_IN(S1.Sn,D1.Dn)|
| | |--------------------->|
| | | |
| | | FLOW_MOD(S1) |
| | |<---------------------|
| | | |
| | | FLOW_MOD(D1) |
| | |<---------------------|
| | | |
| | | FLOW_MOD(S2) |
| | |<---------------------|
| | | |
| | | FLOW_MOD(D2) |
| | |<---------------------|
| | | . |
| | | . |
| | | |
| | | FLOW_MOD(Sn) |
| | |<---------------------|
| | | |
| | | FLOW_MOD(Dn) |
| | |<---------------------|
| | | |
| | Traffic (S1..Sn, | |
| | D1..Dn)| |
| |<-------------------| |
| | | |
| | | |
Legend:
G-ARP: Gratuitous ARP
D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
Destination Endpoint n
S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
Endpoint n
Discussion:
The Reactive Path Provisioning Rate can be obtained by counting the
total number of frames received at TP2 within the test duration (Td)
and dividing that count by Td.
B.4.7. Proactive Path Provisioning Rate
Procedure:
Test Traffic Test Traffic SDN Nodes OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1..Dn) | | |
| |-------------->| | |
| | | | |
| | |PACKET_IN(D1.Dn)| |
| | |--------------->| |
| | | | |
|Traffic (S1..Sn,D1..Dn) | | |
(Tsf1)|---------------------------->|                |             |
| | | | |
| | | | <Install flow|
| | | | for S1,D1> |
| | | | |
| | | | . |
| | | | <Install flow|
| | | | for Sn,Dn> |
| | | | |
| | | FLOW_MOD(S1) | |
| | |<---------------| |
| | | | |
| | | FLOW_MOD(D1) | |
| | |<---------------| |
| | | | |
| | | . | |
| | | FLOW_MOD(Sn) | |
| | |<---------------| |
| | | | |
| | | FLOW_MOD(Dn) | |
| | |<---------------| |
| | | | |
| |Traffic (S1.Sn,| | |
| | D1.Dn)| | |
| (Tdf1)|<--------------| | |
| | | | |
Legend:
G-ARP: Gratuitous ARP
D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
Destination Endpoint n
S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
Endpoint n
Discussion:
The Proactive Path Provisioning Rate can be obtained by counting the
total number of frames received at TP2 within the test duration (Td)
and dividing that count by Td.
B.4.8. Network Topology Change Detection Time
Procedure:
SDN Nodes OpenFlow SDN
Controller Application
| | |
| | <Bring down a link in |
| | switch S1>|
| | |
T0 |PORT_STATUS with link down | |
| from S1 | |
|--------------------------->| |
| | |
|First PACKET_OUT with LLDP | |
|to OF Switch | |
T1 |<---------------------------| |
| | |
| | <Record time of 1st |
| | PACKET_OUT with LLDP T1>|
Discussion:
The Network Topology Change Detection Time can be obtained by
finding the difference between the time the OpenFlow switch S1 sends
the PORT_STATUS message (T0) and the time that the OpenFlow
controller sends the first topology re-discovery message (T1) to
OpenFlow switches.
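A sketch of this computation from a captured control-channel event
log is shown below; the (timestamp, message type) log format is an
assumption of the sketch, not of this methodology.

   def topo_change_detection_time(events):
       # events: list of (timestamp, msg_type) pairs in capture order
       t0 = next(t for t, m in events if m == "PORT_STATUS")
       # First LLDP PACKET_OUT sent at or after the link-down report:
       t1 = next(t for t, m in events
                 if m == "PACKET_OUT_LLDP" and t >= t0)
       return t1 - t0

   log = [(5.000, "PORT_STATUS"), (5.120, "PACKET_OUT_LLDP")]
   print(topo_change_detection_time(log))   # ~0.12 s (illustrative)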
B.5. Scalability
B.5.1. Control Sessions Capacity
Procedure:
SDN Nodes OpenFlow
Controller
| |
| OFPT_HELLO Exchange for Switch 1 |
|<------------------------------------->|
| |
| OFPT_HELLO Exchange for Switch 2 |
|<------------------------------------->|
| . |
| . |
| . |
| OFPT_HELLO Exchange for Switch n |
|X<----------------------------------->X|
| |
Discussion:
The number of control sessions successfully established, i.e., up to
Switch n-1, provides the Control Sessions Capacity.
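An emulated-switch loop for this test could be sketched as below.
The 8-byte OFPT_HELLO encoding follows the standard OpenFlow 1.3
header (version 0x04, type 0); the controller address and the exact
failure handling are assumptions of the sketch.

   import socket
   import struct

   CONTROLLER = ("192.0.2.1", 6653)       # assumed controller address
   HELLO = struct.pack("!BBHI", 0x04, 0, 8, 1)  # OF1.3 OFPT_HELLO

   sessions = []
   while True:
       try:
           s = socket.create_connection(CONTROLLER, timeout=5)
           s.sendall(HELLO)
           if not s.recv(8):              # controller closed session n
               s.close()
               break
       except OSError:                    # session n rejected
           break
       sessions.append(s)                 # keep sessions 1..n-1 open

   print("Control Sessions Capacity:", len(sessions))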
B.5.2. Network Discovery Size
Procedure:
SDN Nodes OpenFlow SDN
Controller Application
| | |
| | <Deploy network with |
| |given no. of OF switches N>|
| | |
| OFPT_HELLO Exchange | |
|<-------------------------->| |
| | |
| PACKET_OUT with LLDP | |
| to all switches | |
|<---------------------------| |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-1| |
|--------------------------->| |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-2| |
|--------------------------->| |
| . | |
| . | |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-n| |
|--------------------------->| |
| | |
| | <Wait for the expiry |
| | of Test Duration (Td)>|
| | |
| | Query the controller for|
| | discovered n/w topo.(N1)|
| |<--------------------------|
| | |
| | <If N1==N repeat Step 1 |
| |with N+1 nodes until N1<N >|
| | |
| | <If N1<N repeat Step 1 |
| | with N=N1 nodes once and |
| | exit> |
| | |
Legend:
n/w topo: Network Topology
OF: OpenFlow
Discussion:
The value of N1 provides the Network Discovery Size. The test
duration can be set to the stipulated time within which the user
expects the controller to complete the discovery process.
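The iteration logic could be sketched as below; deploy_network() and
query_topology() stand in for hypothetical hooks into the network
emulator and the controller's northbound interface.

   import time

   def network_discovery_size(deploy_network, query_topology,
                              start_n, step, td):
       n = start_n
       while True:
           deploy_network(n)       # emulate a network of n OF switches
           time.sleep(td)          # wait for the test duration Td
           n1 = query_topology()   # switches the controller discovered
           if n1 == n:
               n += step           # fully discovered: grow and repeat
           else:
               deploy_network(n1)  # rerun once with N = N1 and exit
               time.sleep(td)
               return query_topology()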
B.5.3. Forwarding Table Capacity
Procedure:
Test Traffic SDN Nodes OpenFlow SDN
Generator TP1 Controller Application
| | | |
| | | |
|G-ARP (H1..Hn) | | |
|----------------->| | |
| | | |
| |PACKET_IN(D1..Dn) | |
| |------------------>| |
| | | |
| | |<Wait for 5 secs>|
| | | |
| | | <Query for FWD |
| | | entry> |(F1)
| | | |
| | |<Wait for 5 secs>|
| | | |
| | | <Query for FWD |
| | | entry> |(F2)
| | | |
| | |<Wait for 5 secs>|
| | | |
| | | <Query for FWD |
| | | entry> |(F3)
| | | |
| | | <Repeat Step 2 |
| | |until F1==F2==F3>|
| | | |
Legend:
G-ARP: Gratuitous ARP
H1..Hn: Host 1 .. Host n
FWD: Forwarding Table
Discussion:
Query the controller's forwarding table entries multiple times until
three consecutive queries return the same value. The last value
retrieved from the controller provides the Forwarding Table Capacity.
The query interval is user configurable; the 5-second interval shown
in this example is for representational purposes only.
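The convergence check could be sketched as below; query_fwd_entries
stands in for a hypothetical northbound hook that returns the
controller's current forwarding-entry count.

   import time

   def forwarding_table_capacity(query_fwd_entries, interval=5):
       samples = []
       while True:
           time.sleep(interval)    # user-configurable query interval
           samples.append(query_fwd_entries())
           # Stop once three consecutive queries agree (F1 == F2 == F3).
           if len(samples) >= 3 and len(set(samples[-3:])) == 1:
               return samples[-1]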
B.6. Security
B.6.1. Exception Handling
Procedure:
Test Traffic Test Traffic SDN Nodes OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1..Dn) | | |
| |------------------>| | |
| | | | |
| | |PACKET_IN(D1..Dn)| |
| | |---------------->| |
| | | | |
|Traffic (S1..Sn,D1..Dn) | | |
|----------------------------->| | |
| | | | |
| | |PACKET_IN(S1..Sa,| |
| | | D1..Da)| |
| | |---------------->| |
| | | | |
| | |PACKET_IN(Sa+1.. | |
| | |.Sn,Da+1..Dn) | |
| | |(1% incorrect OFP| |
| | | Match header)| |
| | |---------------->| |
| | | | |
| | | FLOW_MOD(D1..Dn)| |
| | |<----------------| |
| | | | |
| | | FLOW_MOD(S1..Sa)| |
| | | OFP headers| |
| | |<----------------| |
| | | | |
| |Traffic (S1..Sa, | | |
| | D1..Da)| | |
| |<------------------| | |
| | | | |
| | | | <Wait for |
| | | | Test |
| | | | Duration>|
| | | | |
| | | | <Record Rx|
| | | | frames at|
| | | | TP2 (Rn1)>|
| | | | |
| | | | <Repeat |
| | | | Step1 with |
| | | |2% incorrect|
| | | | PACKET_INs>|
| | | | |
| | | | <Record Rx|
| | | | frames at|
| | | | TP2 (Rn2)>|
| | | | |
Legend:
G-ARP: Gratuitous ARP
PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN messages with an
            incorrect OFP header (e.g., a wrong version number)
Rn1: Total number of frames received at Test Port 2 with
1% incorrect frames
Rn2: Total number of frames received at Test Port 2 with
2% incorrect frames
Discussion:
The traffic rate sent from Test Port 1 towards the OpenFlow switch
should be 1% higher than the Path Programming Rate. Rn1 will provide
the Path Provisioning Rate of the controller when handling 1%
incorrect frames, and Rn2 will provide the Path Provisioning Rate of
the controller when handling 2% incorrect frames.
The procedure defined above provides test steps to determine the
effect of handling error packets on the Path Programming Rate. The
same procedure can be adopted to determine the effects on the other
performance tests listed in this document.
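A test tool might corrupt a configurable fraction of the generated
OpenFlow messages as sketched below, here by overwriting the version
byte of the OFP header; the message framing and the chosen invalid
version are assumptions of the sketch.

   import random
   import struct

   def corrupt_fraction(messages, fraction=0.01, seed=1):
       # Overwrite the OFP version byte in `fraction` of messages.
       rng = random.Random(seed)   # deterministic for repeatability
       out = []
       for msg in messages:
           if rng.random() < fraction:
               # Byte 0 of the OpenFlow header is the version field;
               # 0x7f is not a valid OpenFlow version.
               msg = b"\x7f" + msg[1:]
           out.append(msg)
       return out

   hello = struct.pack("!BBHI", 0x04, 0, 8, 1)   # a valid OFP header
   stream = corrupt_fraction([hello] * 1000, fraction=0.01)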
B.6.2. Denial of Service Handling
Procedure:
Test Traffic Test Traffic SDN Nodes OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1..Dn) | | |
| |------------------>| | |
| | | | |
| | |PACKET_IN(D1..Dn)| |
| | |---------------->| |
| | | | |
|Traffic (S1..Sn,D1..Dn) | | |
|----------------------------->| | |
| | | | |
| | |PACKET_IN(S1..Sn,| |
| | | D1..Dn)| |
| | |---------------->| |
| | | | |
| | |TCP SYN Attack | |
| | |from a switch | |
| | |---------------->| |
| | | | |
| | |FLOW_MOD(D1..Dn) | |
| | |<----------------| |
| | | | |
| | | FLOW_MOD(S1..Sn)| |
| | | OFP headers| |
| | |<----------------| |
| | | | |
| |Traffic (S1..Sn, | | |
| | D1..Dn)| | |
| |<------------------| | |
| | | | |
| | | | <Wait for |
| | | | Test |
| | | | Duration>|
| | | | |
| | | | <Record Rx|
| | | | frames at|
| | | | TP2 (Rn1)>|
| | | | |
Legend:
G-ARP: Gratuitous ARP
Discussion:
The TCP SYN attack should be launched from one of the
emulated/simulated OpenFlow switches. Rn1 provides the Path
Programming Rate of the controller upon handling the denial of
service attack.
The procedure defined above provides test steps to determine the
effect of handling denial of service on the Path Programming Rate.
The same procedure can be adopted to determine the effects on the
other performance tests listed in this document.
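The effect can be quantified by comparing Rn1 against the baseline
Path Programming Rate, as in the sketch below (all values
illustrative):

   def dos_degradation(baseline_rate, rn1, test_duration):
       # Percentage drop in Path Programming Rate under the attack.
       rate_under_attack = rn1 / test_duration   # frames/s from Rn1
       return 100.0 * (baseline_rate - rate_under_attack) / baseline_rate

   print(dos_degradation(baseline_rate=9500.0, rn1=80000,
                         test_duration=10.0))    # ~15.8% degradation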
B.7. Reliability
B.7.1. Controller Failover Time
Procedure:
Test Traffic Test Traffic SDN Nodes OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1) | | |
| |------------>| | |
| | | | |
| | |PACKET_IN(D1) | |
| | |---------------->| |
| | | | |
|Traffic (S1..Sn,D1) | | |
|-------------------------->| | |
| | | | |
| | | | |
| | |PACKET_IN(S1,D1) | |
| | |---------------->| |
| | | | |
| | |FLOW_MOD(D1) | |
| | |<----------------| |
| | |FLOW_MOD(S1) | |
| | |<----------------| |
| | | | |
| |Traffic (S1,D1)| | |
| |<------------| | |
| | | | |
| | |PACKET_IN(S2,D1) | |
| | |---------------->| |
| | | | |
| | |FLOW_MOD(S2) | |
| | |<----------------| |
| | | | |
| | |PACKET_IN(Sn-1,D1)| |
| | |---------------->| |
| | | | |
| | |PACKET_IN(Sn,D1) | |
| | |---------------->| |
| | | . | |
| | | . |<Bring down the|
| | | . |active control-|
| | | | ler> |
| | | FLOW_MOD(Sn-1) | |
| | | <-X----------| |
| | | | |
| | |FLOW_MOD(Sn) | |
| | |<----------------| |
| | | | |
| |Traffic (Sn,D1)| | |
| |<------------| | |
| | | | |
| | | |<Stop the test |
| | | |after recv. |
| | | |traffic upon |
| | | | failure> |
Legend:
G-ARP: Gratuitous ARP.
Discussion:
The time difference between the last valid frame received before the
traffic loss and the first frame received after the traffic loss
will provide the controller failover time.
If there is no frame loss during the controller failover, the
failover time can be deemed negligible.
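One way to extract the failover time from the TP2 receive log is to
find the largest inter-arrival gap, as sketched below; the timestamp
list is assumed to come from the test tool.

   def failover_time(rx_timestamps):
       # Largest gap between consecutive received frames (seconds).
       gaps = [b - a for a, b in zip(rx_timestamps, rx_timestamps[1:])]
       return max(gaps) if gaps else 0.0

   # Illustrative: steady 1 ms arrivals with a 250 ms outage.
   ts = [0.001, 0.002, 0.003, 0.253, 0.254]
   print(failover_time(ts))   # 0.25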
B.7.2. Network Re-Provisioning Time
Procedure:
Test Traffic Test Traffic SDN Nodes OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1) | | |
| |-------------->| | |
| | | | |
| | |PACKET_IN(D1) | |
| | |---------------->| |
| G-ARP (S1) | | |
|---------------------------->| | |
| | | | |
| | |PACKET_IN(S1) | |
| | |---------------->| |
| | | | |
|Traffic (S1,D1,Seq.no (1..n))| | |
|---------------------------->| | |
| | | | |
| | |PACKET_IN(S1,D1) | |
| | |---------------->| |
| | | | |
| |Traffic (D1,S1,| | |
| | Seq.no (1..n))| | |
| |-------------->| | |
| | | | |
| | |PACKET_IN(D1,S1) | |
| | |---------------->| |
| | | | |
| | |FLOW_MOD(D1) | |
| | |<----------------| |
| | | | |
| | |FLOW_MOD(S1) | |
| | |<----------------| |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(1))| | |
| |<--------------| | |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(2))| | |
| |<--------------| | |
| | | | |
| | | | |
| Traffic (D1,S1,Seq.no(1))| | |
|<----------------------------| | |
| | | | |
| Traffic (D1,S1,Seq.no(2))| | |
|<----------------------------| | |
| | | | |
| Traffic (D1,S1,Seq.no(x))| | |
|<----------------------------| | |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(x))| | |
| |<--------------| | |
| | | | |
| | | | |
| | | | <Bring down |
| | | | the switch in|
| | | |active traffic|
| | | | path> |
| | | | |
| | |PORT_STATUS(Sa) | |
| | |---------------->| |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(n-1))| | |
| | X<-----------| | |
| | | | |
| Traffic (D1,S1,Seq.no(n-1))| | |
| X------------------------| | |
| | | | |
| | | | |
| | |FLOW_MOD(D1) | |
| | |<----------------| |
| | | | |
| | |FLOW_MOD(S1) | |
| | |<----------------| |
| | | | |
| Traffic (D1,S1,Seq.no(n))| | |
|<----------------------------| | |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(n))| | |
| |<--------------| | |
| | | | |
| | | |<Stop the test|
| | | | after recv. |
| | | | traffic upon|
| | | | failover> |
Legend:
G-ARP: Gratuitous ARP message.
Seq.no: Sequence number.
Sa: Neighbour switch of the switch that was brought down.
Discussion:
The time difference between the last valid frame received before the
traffic loss (packet with sequence number x) and the first frame
received after the traffic loss (packet with sequence number n)
provides the network path re-provisioning time.
Note that the test is valid only when the controller provisions the
alternate path upon network failure.
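The computation can be sketched from the receive log as below,
assuming each log entry is a (sequence number, timestamp) pair
exported by the test tool.

   def reprovisioning_time(rx_log):
       # rx_log: list of (seq_no, timestamp) pairs in arrival order.
       # Returns timestamp(seq n) - timestamp(seq x), i.e., the gap
       # between the last frame before the loss and the first after.
       for (s1, t1), (s2, t2) in zip(rx_log, rx_log[1:]):
           if s2 != s1 + 1:     # sequence gap marks the traffic loss
               return t2 - t1
       return 0.0               # no loss observed

   log = [(1, 0.10), (2, 0.11), (5, 0.38)]   # seq 3..4 lost
   print(reprovisioning_time(log))           # ~0.27 s (illustrative)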
Authors' Addresses
Bhuvaneswaran Vengainathan
Veryx Technologies Inc.
1 International Plaza, Suite 550
Philadelphia
PA 19113
Email: bhuvaneswaran.vengainathan@veryxtech.com
Anton Basil
Veryx Technologies Inc.
1 International Plaza, Suite 550
Philadelphia
PA 19113
Email: anton.basil@veryxtech.com
Mark Tassinari
Hewlett-Packard,
8000 Foothills Blvd,
Roseville, CA 95747
Email: mark.tassinari@hp.com
Vishwas Manral
Ionos Corp,
4100 Moorpark Ave,
San Jose, CA
Email: vishwas@ionosnetworks.com
Sarah Banks
VSS Monitoring
930 De Guigne Drive,
Sunnyvale, CA
Email: sbanks@encrypted.net