Benchmarking Methodology Working Group B. Balarajah
Internet-Draft EANTC AG
Intended status: Informational December 7, 2017
Expires: June 10, 2018
Benchmarking Methodology for Network Security Device Performance
draft-balarajah-bmwg-ngfw-performance-00
Abstract
This document provides benchmarking terminology and methodology for
next-generation network security devices including next-generation
firewalls (NGFW), intrusion detection and prevention solutions (IDS/
IPS) and unified threat management (UTM) implementations. The
document aims to significantly improve the applicability, reproducibility
and transparency of benchmarks and to align the test methodology with
today's increasingly complex application use cases. The main areas
covered in this document are test terminology, traffic profiles and
benchmarking methodology for NGFWs to start with.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on June 10, 2018.
Copyright Notice
Copyright (c) 2017 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction
2. Requirements
3. Scope
4. Test Setup
   4.1. Testbed Configuration
   4.2. DUT/SUT Configuration
   4.3. Test Equipment Configuration
        4.3.1. Client Configuration
        4.3.2. Backend Server Configuration
        4.3.3. Traffic Flow Definition
        4.3.4. Traffic Load Profile
5. Test Bed Considerations
6. Reporting
   6.1. Key Performance Indicators
7. Benchmarking Tests
   7.1. Throughput Performance
        7.1.1. Objective
        7.1.2. Test Setup
        7.1.3. Test Parameters
        7.1.4. Test Procedures and Expected Results
   7.2. TCP Concurrent Connection Capacity
   7.3. TCP Connection Setup Rate
   7.4. Application Transaction Rate
   7.5. SSL/TLS Handshake Rate
8. Formal Syntax
9. IANA Considerations
10. Security Considerations
11. Acknowledgements
12. Normative References
Appendix A. An Appendix
Author's Address
1. Introduction
TBD
2. Requirements
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
3. Scope
TBD.
4. Test Setup
The test setup defined in this document applies to all of the
benchmarking test cases described in Section 7.
4.1. Testbed Configuration
The testbed configuration MUST ensure that any performance
implications discovered during the benchmark testing are not due to
inherent physical network limitations, such as the number of physical
links and the forwarding performance capabilities (throughput and
latency) of the network devices in the testbed. For this reason, this
document recommends avoiding external devices such as switches and
routers in the testbed where possible.
In a typical deployment, the security devices (DUT/SUT) will not
have a large number of entries in their MAC or ARP tables, which
would impact the actual DUT/SUT performance due to MAC and ARP table
lookup processes. Therefore, depending on the number of IP addresses
used on the client and server side, it is recommended to connect
Layer 3 device(s) between the test equipment and the DUT/SUT as shown
in Figure 1.
If the test equipment is capable of emulating Layer 3 routing
functionality and there is no need for test equipment port
aggregation, it is recommended to configure the test setup as shown
in Figure 2.
+-------------------+ +-----------+ +--------------------+
|Aggregation Switch/| | | | Aggregation Switch/|
| Router +------+ DUT/SUT +------+ Router |
| | | | | |
+----------+--------+ +-----------+ +----------+---------+
| |
| |
+-----------+-----------+ +------------+----------+
| | | |
| +-------------------+ | | +-------------------+ |
| | Emulated Router(s)| | | | Emulated Router(s)| |
| | (Optional) | | | | (Optional) | |
| +-------------------+ | | +-------------------+ |
| +-------------------+ | | +-------------------+ |
| | Clients | | | | Servers | |
| +-------------------+ | | +-------------------+ |
| | | |
| Test Equipment | | Test Equipment |
+-----------------------+ +-----------------------+
Figure 1: Testbed Setup - Option 1
+-----------------------+ +-----------------------+
| +-------------------+ | +-----------+ | +-------------------+ |
| | Emulated Router(s)| | | | | | Emulated Router(s)| |
| | (Optional) | +----- DUT/SUT +-----+ (Optional) | |
| +-------------------+ | | | | +-------------------+ |
| +-------------------+ | +-----------+ | +-------------------+ |
| | Clients | | | | Servers | |
| +-------------------+ | | +-------------------+ |
| | | |
| Test Equipment | | Test Equipment |
+-----------------------+ +-----------------------+
Figure 2: Testbed Setup - Option 2
4.2. DUT/SUT Configuration
A unique DUT/SUT configuration MUST be used for all of the
benchmarking tests described in Section 7. Since each DUT/SUT will
have its own unique configuration, users SHOULD configure their
device with the same parameters that would be used in the actual
deployment of the device, or a typical deployment. It is also
mandatory to enable all the security features on the DUT/SUT in
order to achieve maximum security coverage for a specific deployment
scenario.
This document attempts to define the recommended security features
which SHOULD be consistently enabled for all test cases. The table
below describes the recommended set of features which SHOULD be
configured on the DUT/SUT. In order to improve repeatability, a
summary of the DUT configuration, including a description of all
enabled DUT/SUT features, MUST be published with the benchmarking
results.
+-----------------+------------------------------------------------+
|                 |                     Device                     |
|                 +----------------------------+-------------------+
|                 |            NGFW            | NGIPS / AD / WAF /|
|                 |                            | BPS / SSL Broker  |
|                 +-------+----------+---------+-------------------+
|  DUT Features   |Feature|Included  |Added to |   Future test     |
|                 |       |in initial|future   |   standards to    |
|                 |       |scope     |scope    |   be developed    |
+-----------------+-------+----------+---------+-------------------+
| SSL Inspection  |   x   |          |    x    |                   |
+-----------------+-------+----------+---------+-------------------+
| IDS/IPS         |   x   |    x     |         |                   |
+-----------------+-------+----------+---------+-------------------+
| Web Filtering   |   x   |          |    x    |                   |
+-----------------+-------+----------+---------+-------------------+
| Antivirus       |   x   |    x     |         |                   |
+-----------------+-------+----------+---------+-------------------+
| Anti Spyware    |   x   |    x     |         |                   |
+-----------------+-------+----------+---------+-------------------+
| Anti Botnet     |   x   |    x     |         |                   |
+-----------------+-------+----------+---------+-------------------+
| DLP             |   x   |          |    x    |                   |
+-----------------+-------+----------+---------+-------------------+
| DDoS            |   x   |          |    x    |                   |
+-----------------+-------+----------+---------+-------------------+
| SSL Certificate |   x   |          |    x    |                   |
| Validation      |       |          |         |                   |
+-----------------+-------+----------+---------+-------------------+
| Logging and     |   x   |    x     |         |                   |
| Reporting       |       |          |         |                   |
+-----------------+-------+----------+---------+-------------------+
| Application     |   x   |    x     |         |                   |
| Identification  |       |          |         |                   |
+-----------------+-------+----------+---------+-------------------+

Table 1: DUT/SUT Feature List
It is also recommended to configure a realistic number of access
policy rules on the DUT/SUT. This document attempts to determine the
number of access policy rules for three different classes of DUT/SUT.
The document classifies the DUT/SUT based on its performance
capability. The access rules defined in Table 2 MUST be configured
from top to bottom in the correct order. The configured access policy
rules MUST NOT block the test traffic used for the performance test.
+-----------+-----------+--------------------+------+------------------+
|           |           |                    |      |     DUT/SUT      |
|           |   Match   |                    |      |  Classification  |
|Rules Type |  Criteria |    Description     |Action|     # Rules      |
|           |           |                    |      +-----+------+-----+
|           |           |                    |      |Small|Medium|Large|
+-----------+-----------+--------------------+------+-----+------+-----+
|Application|Application|Any application     |block |  10 |  20  |  50 |
|layer      |           |traffic NOT included|      |     |      |     |
|           |           |in the test traffic |      |     |      |     |
+-----------+-----------+--------------------+------+-----+------+-----+
|Transport  |Src IP and |Any src IP used in  |block |  50 | 100  | 250 |
|layer      |TCP/UDP    |the test AND any dst|      |     |      |     |
|           |dst ports  |ports NOT used in   |      |     |      |     |
|           |           |the test traffic    |      |     |      |     |
+-----------+-----------+--------------------+------+-----+------+-----+
|IP layer   |Src/Dst IP |Any src/dst IP NOT  |block |  50 | 100  | 250 |
|           |           |used in the test    |      |     |      |     |
+-----------+-----------+--------------------+------+-----+------+-----+
|Application|Application|Applications        |allow |  10 |  10  |  10 |
|layer      |           |included in the test|      |     |      |     |
|           |           |traffic             |      |     |      |     |
+-----------+-----------+--------------------+------+-----+------+-----+
|Transport  |Src IP and |Half of the src IPs |allow |  1  |  1   |  1  |
|layer      |TCP/UDP    |used in the test AND|      |     |      |     |
|           |dst ports  |any dst ports used  |      |     |      |     |
|           |           |in the test traffic.|      |     |      |     |
|           |           |One rule per subnet |      |     |      |     |
+-----------+-----------+--------------------+------+-----+------+-----+
|IP layer   |Src IP     |The rest of the src |allow |  1  |  1   |  1  |
|           |           |IP subnet range used|      |     |      |     |
|           |           |in the test.        |      |     |      |     |
|           |           |One rule per subnet |      |     |      |     |
+-----------+-----------+--------------------+------+-----+------+-----+

Table 2: DUT/SUT Access List
4.3. Test Equipment Configuration
In general, test equipment allows configuring parameters at
different protocol levels. These parameters influence the traffic
flows that will be offered, and thereby impact the performance
measurements. This document attempts to explicitly specify which
test equipment parameters SHOULD be configurable; any such
parameter(s) MUST be noted in the test report.
4.3.1. Client Configuration
This section specifies which parameters SHOULD be considered while
configuring emulated clients using test equipment. It also specifies
the recommended values for certain parameters.
4.3.1.1. TCP Stack Attributes
The TCP stack SHOULD use a TCP Reno variant, which includes
congestion avoidance, back off and windowing, and retransmission and
recovery on every TCP connection between client and server endpoints.
The default IPv4 and IPv6 MSS values MUST be set to 1460 bytes and
1440 bytes respectively, with TX and RX receive windows of
32768 bytes. Delayed ACKs are permitted, but SHOULD be limited to
either a 200 msec delay timeout or 3000 bytes before a forced ACK.
Up to 3 retries SHOULD be allowed before a timeout event is declared.
All traffic MUST set the TCP PSH flag to high. The source port range
SHOULD be 1024 - 65535. Internal timeouts SHOULD be dynamically
scalable per RFC 793.
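The recommended values above can be collected into a single
client-side TCP profile. The following Python sketch is illustrative
only; the profile keys are hypothetical placeholders for whatever
configuration interface the test equipment actually exposes.

   # Hypothetical client TCP profile mirroring the values recommended
   # in Section 4.3.1.1; key names are illustrative only.
   CLIENT_TCP_PROFILE = {
       "variant": "reno",              # congestion avoidance, back
                                       # off, windowing, retransmit
       "mss_ipv4": 1460,               # bytes (MUST)
       "mss_ipv6": 1440,               # bytes (MUST)
       "tx_window": 32768,             # bytes
       "rx_window": 32768,             # bytes
       "delayed_ack_timeout_ms": 200,  # forced ACK after 200 msec...
       "delayed_ack_bytes": 3000,      # ...or 3000 bytes
       "max_retries": 3,               # before a timeout event
       "tcp_psh": True,                # PSH flag set on all traffic
       "src_port_range": (1024, 65535),
   }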
4.3.1.2. Client IP Address Space
The sum of the client IP space SHOULD contain the following
attributes. The traffic blocks SHOULD consist of multiple unique,
contiguous static address blocks. A default gateway is permitted.
The IPv4 ToS byte should be set to '00'.
The following equation can be used to determine the required total
number of client IP addresses:

Desired total number of client IPs = Target throughput [Mbit/s] /
Throughput per IP address [Mbit/s]

(Idea 1) 6-7 Mbps per IP = 1,400-1,700 IPs per 10 Gbit/s throughput

(Idea 2) 0.1-0.2 Mbps per IP = 50,000-100,000 IPs per 10 Gbit/s
throughput
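As a worked example, the following Python sketch applies the equation
to a 10 Gbit/s target using the per-IP throughput figures from the
two ideas above (the function name is illustrative only):

   import math

   def required_client_ips(target_mbps, mbps_per_ip):
       # Total client IPs = target throughput / throughput per IP.
       return math.ceil(target_mbps / mbps_per_ip)

   # Idea 1: 6-7 Mbps per IP at a 10 Gbit/s target
   print(required_client_ips(10_000, 6.5))   # 1539, in 1,400-1,700
   # Idea 2: 0.1-0.2 Mbps per IP at a 10 Gbit/s target
   print(required_client_ips(10_000, 0.15))  # 66667, in 50k-100k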
Based on the deployment and use case scenario, client IP addresses
SHOULD be distributed between IPv4 and IPv6 types. This document
recommends using the following ratio(s) between IPv4 and IPv6:
(Idea 1) 100 % IPv4, no IPv6
(Idea 2) 80 % IPv4, 20 % IPv6
(Idea 3) 50 % IPv4, 50 % IPv6
(Idea 4) 0 % IPv4, 100 % IPv6
4.3.1.3. Emulated Web Browser Attributes
The emulated web browser contains attributes that will materially
affect how traffic is loaded. The objective is to emulate modern,
typical browser attributes to improve the realism of the result set.
The emulated browser must negotiate HTTP 1.1 with persistence. The
browser will open up to 6 TCP connections per server endpoint IP at
any time, depending on how many sequential transactions need to be
processed. Multiple transactions can be processed within one TCP
connection if the emulated browser has available connections, for
example where transactions to the same server endpoint IP exceed 6 or
are non-sequential. The browser must advertise a User-Agent header.
Headers will be sent uncompressed. The browser should enforce
content length validation.
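These attributes can likewise be expressed as a single
emulated-browser profile. A minimal sketch, assuming a hypothetical
configuration schema:

   # Hypothetical emulated-browser profile for Section 4.3.1.3;
   # key names are illustrative only.
   BROWSER_PROFILE = {
       "http_version": "1.1",
       "persistent_connections": True,
       "max_connections_per_server_ip": 6,
       "user_agent": "Mozilla/5.0 (...)",  # any modern UA string
       "compress_headers": False,          # headers sent uncompressed
       "validate_content_length": True,
   }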
4.3.1.4. Client Emulated Web Browser SSL/TLS Layer Attributes
The test traffic shall be a realistic blend of encrypted and clear
traffic. For encrypted traffic, the following attributes shall
define the negotiated encryption parameters. The tests must use
TLSv1.2 or higher with a record size of 16,383 bytes, a commonly used
cipher suite, and a common key strength. Session reuse or ticket
resumption may be used for subsequent connections to the same server
endpoint IP. The client endpoint must send the TLS Server Name
Indication (SNI) extension when opening up a security tunnel. Server
certificate validation should be disabled.

If the DUT/SUT doesn't perform SSL inspection, the cipher suite and
certificate selection for the test is irrelevant. However, it is
recommended to use current, non-deprecated certificates in order
to mimic real-world traffic.
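For illustration, the client-side TLS attributes above map onto
Python's standard ssl module roughly as follows. This is a sketch
only; real test equipment will expose its own controls for cipher
suites, key strength, and session resumption, and the host and
address below are placeholders.

   import socket
   import ssl

   # Client TLS context per Section 4.3.1.4: TLSv1.2 or higher, SNI
   # sent, server certificate validation disabled.
   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.minimum_version = ssl.TLSVersion.TLSv1_2
   ctx.check_hostname = False          # no certificate validation...
   ctx.verify_mode = ssl.CERT_NONE     # ...as recommended above

   raw = socket.create_connection(("203.0.113.10", 443))
   # server_hostname carries the SNI value (an FQDN from the mix)
   tls = ctx.wrap_socket(raw, server_hostname="www.example.com")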
4.3.2. Backend Server Configuration
This document attempts to specify which parameters should be
considered while configuring emulated backend servers using test
equipment.
4.3.2.1. TCP Stack Attributes
The TCP stack SHOULD use a TCP Reno variant, which includes
congestion avoidance, back off and windowing, and retransmission and
recovery on every TCP connection between client and server endpoints.
The default IPv4 MSS value MUST be set to 1460 bytes, with TX and RX
receive windows of at least 32768 bytes. Delayed ACKs are permitted
but SHOULD be limited to either a 200 msec delay timeout or
3000 bytes before a forced ACK. Up to 2 retries SHOULD be allowed
before a timeout event is declared. All traffic must set the TCP PSH
flag to high. The source port range SHOULD be 1024 - 65535. Internal
timeouts should be dynamically scalable per RFC 793.
4.3.2.2. Server Endpoint IP Addressing
The server IP blocks should consist of unique, contiguous static
address blocks, with one IP per server FQDN endpoint per test port.
The IPv4 ToS byte should be set to '00'. The source MAC address of
the server endpoints shall be the same, emulating routed behavior.
Each server FQDN should have its own unique IP address. The server
IP addressing should be fixed to the same number of FQDN entries.
4.3.2.3. HTTP / HTTPS Server Pool Endpoint Attributes
The emulated server pool for HTTP should listen on TCP port 80 and
emulate HTTP version 1.1 with persistence. An HTTPS server pool must
have the same basic attributes as an HTTP server pool, plus
attributes for SSL/TLS. The server must advertise a server type.
For HTTPS servers, TLS 1.2 or higher must be used, with a record size
of 16,383 bytes and ticket resumption or Session ID reuse enabled.
The server must listen on TCP port 443. The server shall serve a
2048-bit server SSL certificate to the client. It is required that
the HTTPS server also check the Host SNI information against the
Fully Qualified Domain Name (FQDN). Client certificate validation
should be disabled.
If the DUT/SUT doesn't perform SSL inspection, the cipher suite and
certificate selection for the test is irrelevant. However, it is
recommended to use current, non-deprecated certificates in order
to mimic real-world traffic.
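For illustration, the server-side TLS attributes above can be
sketched with Python's standard ssl module; the certificate file
names and the FQDN set below are placeholders.

   import ssl

   VALID_FQDNS = {"www.example.com"}    # FQDNs served in the mix

   # Server TLS context per Section 4.3.2.3: TLS 1.2 or higher,
   # 2048-bit certificate, client certificate validation disabled.
   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
   ctx.minimum_version = ssl.TLSVersion.TLSv1_2
   ctx.verify_mode = ssl.CERT_NONE      # no client cert validation
   ctx.load_cert_chain("server_2048.crt", "server_2048.key")

   def check_sni(sock, server_name, context):
       # Reject handshakes whose SNI does not match a served FQDN.
       if server_name not in VALID_FQDNS:
           return ssl.ALERT_DESCRIPTION_UNRECOGNIZED_NAME
       return None                      # continue the handshake

   ctx.sni_callback = check_sni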
4.3.3. Traffic Flow Definition
This section describes the traffic pattern between the client and
server endpoints. At the beginning of the test, the server endpoints
initialize and reach a ready-to-accept-connection state, including
initialization of the TCP stack as well as bound HTTP and HTTPS
servers. When a client endpoint is needed, it initializes and is
given attributes such as a MAC and IP address. The behavior of the
client is to sweep through the given server IP space, sequentially
generating traffic recognizable as a service by the DUT. Thus, a
balanced mesh between client endpoints and server endpoints will be
generated in each client port / server port combination. Each client
endpoint performs the same actions as other endpoints, the
difference being the source IP of the client endpoint and the target
server IP pool. The client shall use Fully Qualified Domain Names
(FQDNs) in Host headers and for TLS 1.2 Server Name Indication (SNI).
4.3.3.1. Description of Intra-Client Behavior
Client endpoints are independent of other clients that are
concurrently executing. When a client endpoint initiates traffic, it
steps through the different services as described in this section.
Once initialized, the user should randomly hold (perform no
operation) for a few milliseconds to better randomize the start of
client traffic. The client will then either open a new TCP
connection or connect to a persistent TCP connection still open to
that specific server. At any point that the service profile requires
encryption, a TLS 1.2 encryption tunnel will form, presenting the URL
request to the server. The server will then perform an SNI name
check, comparing the proposed FQDN with the domain embedded in the
certificate. Only when these match will the server process the
object. The initial object to the server does not have a fixed size;
its size is based on, for example, the URL path length. Up to six
additional sub-URLs (objects on the service page) may be requested
simultaneously. These may or may not be to the same server IP as the
initial URL. Each sub-object will also use a canonical FQDN and URL
path, as observed in the traffic mix used. The traffic mix in the
appendix table is represented by the actions of each and every client
endpoint. Therefore the instantaneous percentages of the mix will
vary, but the overall mix through the duration of the test will be
fixed. This is based on the number of active users, the TCP recovery
mechanism, etc.
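The following Python sketch summarizes this per-client behavior using
the standard http.client module. It is illustrative only: it fetches
sub-URLs sequentially over a persistent connection, whereas real test
equipment may fetch up to six of them simultaneously, and the service
list shown is hypothetical.

   import http.client
   import random
   import time

   def run_client(services):
       # One emulated client endpoint (sketch of Section 4.3.3.1).
       # Each service is a (fqdn, use_tls, urls) tuple; the first URL
       # is the initial object, the rest are sub-URLs (at most six).
       time.sleep(random.uniform(0.001, 0.010))  # random hold
       for fqdn, use_tls, urls in services:
           if use_tls:
               conn = http.client.HTTPSConnection(fqdn)  # sends SNI
           else:
               conn = http.client.HTTPConnection(fqdn)
           for url in urls[:7]:      # initial object + up to 6 subs
               conn.request("GET", url,
                            headers={"User-Agent": "emulated-browser"})
               conn.getresponse().read()  # reuse persistent conn.
           conn.close()

   # Hypothetical service drawn from a traffic mix:
   # run_client([("www.example.com", True, ["/", "/a.png", "/b.css"])])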
4.3.4. Traffic Load Profile
This section describes the loading of traffic. A traffic load
profile has five distinct phases: init, ramp up, sustain, ramp
down/close, and collection.

Within the init phase, test bed devices including the client and
server endpoints should negotiate layer 2-3 connectivity such as MAC
learning and ARP. Only after successful MAC learning or ARP
resolution shall the test iteration move to the next phase. No
measurements are made in this phase. The minimum recommended time
for the init phase is 5 seconds. During this phase, the emulated
clients SHOULD NOT initiate any sessions with the DUT/SUT; in
contrast, the emulated servers should be ready to accept requests
from the DUT/SUT or from the emulated clients.
In the ramp up phase, the test equipment should start to generate the
test traffic. It should actively use a set, approximate number of
unique client IP addresses to generate traffic. The traffic should
ramp from zero to the desired target throughput objective. The
duration of the ramp up phase must be configured long enough that the
test equipment does not overwhelm the DUT/SUT's supported performance
metrics, namely: connection setup rate, concurrent connections, and
application transactions. The recommended duration for the ramp up
phase is 180-300 seconds. No measurements are made in this phase.
In the sustain phase, the test equipment should continue to generate
traffic at a constant rate for a constant number of active client
IPs. The recommended duration for the sustain phase is 600 seconds.
This is the phase where measurements occur.
In the ramp down/close phase, no new connections are established and
no measurements are made. The recommended duration of this phase is
180-300 seconds.
The last phase is administrative and will be when the tester merges
and collates the report data.
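Putting the recommended timings together, a traffic load profile can
be represented as a simple phase table. The following Python sketch
uses the recommended durations from this section; the structure
itself is illustrative, and the collection phase is given no duration
since it is administrative.

   from dataclasses import dataclass

   @dataclass
   class Phase:
       name: str
       min_seconds: int
       max_seconds: int
       measured: bool   # measurements are made only in sustain

   LOAD_PROFILE = [
       Phase("init",          5,   5, False),  # L2-3 negotiation
       Phase("ramp up",     180, 300, False),  # zero -> target load
       Phase("sustain",     600, 600, True),   # constant rate
       Phase("ramp down",   180, 300, False),  # no new connections
       Phase("collection",    0,   0, False),  # merge report data
   ]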
5. Test Bed Considerations
This section recommends steps to control the test environment and
test equipment, specifically focusing on virtualized environments and
virtualized test equipment.
1. Ensure that any ancillary switching or routing functions between
the system under test and the test equipment do not limit the
performance of the traffic generator. This is specifically
important for virtualized components (vSwitches, vRouters).
2. Verify that the performance of the test equipment matches and
reasonably exceeds the expected maximum performance of the system
under test.
3. Assert that the test bed characteristics are stable during the
whole test session. A number of factors might influence stability,
specifically for virtualized test beds, for example additional
workloads in a virtualized system, load balancing and movement of
virtual machines during the test, or simple issues such as
additional heat created by high workloads leading to an emergency
CPU performance reduction.
Test bed reference pre-tests help to ensure that the desired traffic
generator aspects, such as maximum throughput, and the network
performance metrics, such as maximum latency and maximum packet
loss, are met.
Once the desired maximum performance goals for the system under test
have been identified, a safety margin of 10% SHOULD be added to the
throughput and subtracted from the maximum latency and maximum
packet loss.

Test bed preparation may be performed either by configuring the DUT
in the most trivial setup (fast forwarding) or without the presence
of the DUT.
6. Reporting
This section describes how the final report should be formatted and
presented. The final test report may have two major sections: an
introduction section and a results section. The following attributes
should be present in the introduction section of the test report.
1. The name of the NetSecOPEN traffic mix must be prominent.
2. The time and date of the execution of the test must be prominent.
3. Summary of testbed software and hardware details
A. DUT Hardware/Virtual Configuration
+ This section should clearly identify the make and model of
the DUT
+ The port interfaces, including speed and link information,
must be documented.
+ If the DUT is a virtual VNF, interface acceleration such
as DPDK and SR-IOV must be documented, as well as the cores
used, RAM used, and the pinning / resource sharing
configuration. The hypervisor and its version must be
documented.
+ Any additional hardware relevant to the DUT such as
controllers must be documented
B. DUT Software
+ The operating system name must be documented
+ The version must be documented
+ The specific configuration must be documented
C. DUT Enabled Features
+ Specific features, such as logging, NGFW, DPI must be
documented
+ Attributes of those features must be documented
+ Any additional relevant information about features must be
documented
D. Test equipment hardware and software
+ Test equipment vendor name
+ Hardware details including model number, interface type
+ Test equipment firmware and test application software
version
4. Results Summary / Executive Summary
1. Results should resemble a pyramid in how they are reported,
with the introduction section documenting the summary of results
in a prominent, easy-to-read block.
2. In the result section of the test report, the following
attributes should be present for each test scenario.
a. KPIs must be documented separately for each test
scenario. The format of the KPI metrics should be
presented as described in Section 6.1.
b. The next level of detail should be graphs showing each
of these metrics over the duration (sustain phase) of the
test. This allows the user to see how the measured
performance stability changes over time.
6.1. Key Performance Indicators
This section lists KPIs for the overall benchmarking test scenarios.
All KPIs MUST be measured over the whole period of the sustain phase,
as described in Section 4.3.4. All KPIs MUST be measured from test
equipment statistics only.
o TCP Concurrent Connection Capacity
This key performance indicator will measure the average concurrent
open TCP connections in the sustaining period.
o TCP Connection Setup Rate
This key performance indicator will measure the average
established TCP connections per second in the sustaining period.
For the session setup rate benchmarking test scenario, the KPI
will measure the average established and terminated TCP
connections per second simultaneously.
o Application Transaction Rate
This key performance indicator will measure the average successful
transactions per second in the sustaining period.
o TLS Handshake Rate
This key performance indicator will measure the average TLS 1.2 or
higher session formation rate within the sustaining period.
o URL Response time / Time to Last Byte (TTLB)
This key performance indicator will measure the minimum, average
and maximum per URL response time in the sustaining period as well
as the average variance in the same period.
o Application Transaction Time
This key performance indicator will measure the minimum, average
and maximum amount of time needed to receive all objects from the
server.
o Time to First Byte (TTFB)
This key performance indicator will measure the minimum, average
and maximum time to first byte. TTFB is the elapsed time between
the client sending the SYN packet and receiving the first byte of
application data from the DUT/SUT. TTFB SHOULD be expressed in
milliseconds.
o TCP Connect Time
This key performance indicator will measure the minimum, average
and maximum TCP connect time. It is the time elapsed between the
client sending a SYN packet and receiving the SYN/ACK. TCP connect
time SHOULD be expressed in milliseconds.
7. Benchmarking Tests
7.1. Throughput Performance
7.1.1. Objective
To determine the average throughput performance of the DUT/SUT when
using the application traffic mix defined in Section 7.1.3.3.
7.1.2. Test Setup
The test bed setup MUST be configured as defined in Section 4. Any
test scenario specific test bed configuration changes must be
documented.
7.1.3. Test Parameters
In this section, test scenario specific parameters SHOULD be defined.
7.1.3.1. Test Equipment Configuration Parameters
Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3. The following parameters MUST
be noted for this test scenario:
Client IP address range
Server IP address range
Traffic distribution ratio between IPv4 and IPv6
Traffic load objective or specification type (e.g. throughput,
SimUsers, etc.)
Target throughput: This can be defined based on requirements.
Otherwise it represents the aggregated line rate of the interface(s)
used in the DUT/SUT
Initial throughput: Initial throughput can be up to 10% of the
"Target throughput"
7.1.3.2. DUT/SUT Configuration Parameters
DUT/SUT parameters MUST conform to the requirements defined in
Section 4.2. Any configuration changes for this specific test
scenario MUST be documented.
7.1.3.3. Traffic Profile
The test scenario MUST be run with a single application traffic mix
profile. The name of the NetSecOPEN traffic mix MUST be documented.
7.1.3.4. Test Results Acceptance Criteria
The following criteria are defined as the test results acceptance
criteria:

a. The number of failed application transactions MUST be less than
0.01%.

b. The number of TCP connections terminated due to unexpected TCP
RST sent by the DUT/SUT MUST be less than 0.01%.
c. The maximum deviation (max. dev) of the application transaction
time or TTLB (Time To Last Byte) MUST be less than X (e.g. 2,
TBD). A worked example is sketched at the end of this section.

The following equation MUST be used to calculate the deviation of
the application transaction time or TTLB:

max. dev = max((avg_latency - min_latency), (max_latency -
avg_latency)) / initial_latency

The initial latency is calculated using the following equation.
For this calculation, the latency values (min', avg' and max')
MUST be measured during test procedure step 1 as defined in
Section 7.1.4.1. The variable "latency" represents either the
application transaction time or the TTLB.

initial_latency := min((avg' latency - min' latency), (max'
latency - avg' latency))
d. The maximum value of the TCP connect time must be less than (TBD)
ms (beta tests are required to determine the value). The
definition of TCP connect time can be found in Section 6.1.
e. The maximum value of the Time to First Byte must be less than
2 x the TCP connect time.
The test acceptance criteria for this test scenario MUST be monitored
during the sustain phase of the traffic load profile only.
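For illustration, the deviation calculation in criterion "c" can be
expressed directly in code. A minimal Python sketch; the function
name, argument names, and the example latency values are illustrative
only.

   def max_deviation(min_lat, avg_lat, max_lat,
                     min_lat1, avg_lat1, max_lat1):
       # Criterion c: deviation of application transaction time/TTLB.
       # The primed values (min_lat1, avg_lat1, max_lat1) are measured
       # during test procedure step 1 (Section 7.1.4.1).
       initial = min(avg_lat1 - min_lat1, max_lat1 - avg_lat1)
       return max(avg_lat - min_lat, max_lat - avg_lat) / initial

   # Example: step-1 latencies 10/12/16 ms, target-run 10/14/20 ms:
   # initial = min(2, 4) = 2; max. dev = max(4, 6) / 2 = 3.0
   print(max_deviation(10, 14, 20, 10, 12, 16))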
7.1.3.5. Measurement
The following KPI metrics MUST be reported for this test scenario.

Mandatory KPIs: average throughput, maximum concurrent TCP
connections, TTLB/application transaction time (minimum, average and
maximum), and average application transaction rate.

Optional KPIs: average TCP connection setup rate, average TLS
handshake rate, TCP connect time, and TTFB.
7.1.4. Test Procedures and Expected Results
The test procedure is designed to measure the throughput performance
of the DUT/SUT during the sustain phase of the traffic load profile.
The test procedure consists of three major steps.
7.1.4.1. Step 1: Test Initialization and Qualification
Verify the link status of all connected physical interfaces. All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to generate
test traffic at the "initial throughput" rate, as described in the
parameters section. The DUT/SUT SHOULD reach the "initial
throughput" during the sustain phase. Measure all KPIs as defined in
Section 7.1.3.5. The KPIs measured during the sustain phase MUST
meet acceptance criteria "a" and "b" defined in Section 7.1.3.4.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT continue to step 2.
7.1.4.2. Step 2: Test Run with Target Objective
Configure the test equipment to generate traffic at the "target
throughput" rate defined in the parameter table. The test equipment
SHOULD follow the traffic load profile definition as described in
Section 4.3.4. The test equipment SHOULD start to measure and record
all specified KPIs. The frequency of KPI metric measurements MUST be
less than 5 seconds. Continue the test until all traffic profile
phases are completed.

The DUT/SUT is expected to reach the desired target throughput during
the sustain phase. In addition, the measured KPIs must meet all
acceptance criteria. Follow step 3 if the KPI metrics do not meet
the acceptance criteria.
7.1.4.3. Step 3: Test Iteration with Binary Search
Use a binary search algorithm to configure the desired traffic load
profile for each test iteration.

Determine the maximum and average achievable throughput within the
acceptance criteria.
7.1.4.3.1. Pseudocode for binary search algorithm
TBD. Resolution := 0.01 * Target throughput; Backoff := 50%.
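Since the pseudocode itself is left TBD, the following Python sketch
shows one possible shape of such a search, using the stated
resolution (1% of the target throughput) and backoff (50%). All
names are illustrative; run_trial() stands in for one full test
iteration returning whether all acceptance criteria in Section
7.1.3.4 were met.

   def binary_search_throughput(target, run_trial,
                                resolution_pct=0.01, backoff=0.50):
       # Find the highest load that still meets acceptance criteria.
       resolution = resolution_pct * target  # stop width: 1% of target
       low, high = 0.0, target
       best = 0.0
       load = target                         # first trial at target
       while high - low > resolution:
           if run_trial(load):               # criteria met at `load`
               best, low = load, load        # passed: search upward
           else:
               high = load                   # failed: back off
           load = low + (high - low) * backoff
       return best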
7.2. TCP Concurrent Connection Capacity
7.3. TCP Connection Setup Rate
7.4. Application Transaction Rate
7.5. SSL/TLS Handshake Rate
8. Formal Syntax
9. IANA Considerations
This document makes no request of IANA.
Note to RFC Editor: this section may be removed on publication as an
RFC.
10. Security Considerations
11. Acknowledgements
12. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119,
DOI 10.17487/RFC2119, March 1997,
<https://www.rfc-editor.org/info/rfc2119>.
Appendix A. An Appendix
tbd
Author's Address
Balamuhunthan Balarajah
EANTC AG
Salzufer 14
Berlin 10587
Germany
Email: balarajah@eantc.de