BMWG                                                        R. Rosa, Ed.
Internet-Draft                                             C. Rothenberg
Intended status: Informational                                   UNICAMP
Expires: September 3, 2018                                 March 2, 2018

                      VNF Benchmarking Methodology
                      draft-rosa-bmwg-vnfbench-01

Abstract

   This document describes a common methodology for benchmarking
   Virtualized Network Functions (VNFs) in general-purpose hardware.
   Specific cases of benchmarking methodologies for particular VNFs can
   be derived from this document.  An open source reference
   implementation called Gym is reported as a running code embodiment of
   the proposed methodology for VNFs.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 3, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Scope
   4.  Considerations
       4.1.  VNF Testing Methods
       4.2.  Generic VNF Benchmarking Setup
       4.3.  Deployment Scenarios
       4.4.  Influencing Aspects
   5.  Methodology
       5.1.  General Description
             5.1.1.  Configurations
             5.1.2.  Testing Procedures
       5.2.  Particular Cases
   6.  VNF Benchmark Report
   7.  Open Source Reference Implementation
   8.  Security Considerations
   9.  IANA Considerations
   10. Acknowledgement
   11. References
       11.1.  Normative References
       11.2.  Informative References
   Authors' Addresses

1.  Introduction

   The Benchmarking Methodology Working Group (BMWG) has initiated
   efforts, following the considerations in [RFC8172], to develop
   methodologies for benchmarking VNFs.  As described in [RFC8172],
   the motivations for VNF benchmarking include: (i) pre-deployment
   infrastructure dimensioning to realize the associated VNF
   performance profiles; (ii) comparison with physical network
   functions; and (iii) output results for analytical VNF development.

   Unlike the self-contained black boxes addressed by earlier BMWG
   benchmarking methodologies, a VNF has no strict and clear execution
   boundaries: its performance depends on the parameters of the
   underlying virtualized environment [ETS14a], which must be
   considered intrinsically in any performance analysis.  This document
   stands as a baseline methodology guide for VNF benchmarking.  It
   builds on the state-of-the-art publications and the current
   developments in related standardization efforts (e.g., [ETS14c] and
   [RFC8204]) towards benchmarking VNFs.

2.  Terminology

   Common benchmarking terminology contained in this document is derived
   from [RFC1242].  Also, the reader is assumed to be familiar with the
   terminology as defined in the European Telecommunications Standards
   Institute (ETSI) NFV document [ETS14b].  Some of these terms, and
   others commonly used in this document, are defined below.

   NFV:  Network Function Virtualization - The principle of separating
      network functions from the hardware they run on by using virtual
      hardware abstraction.

   NFVI PoP:  NFV Infrastructure Point of Presence - Any combination of
      virtualized compute, storage and network resources.

   NFVI:  NFV Infrastructure - Collection of NFVI PoPs under one
      orchestrator.

   VIM:  Virtualized Infrastructure Manager - functional block that is
      responsible for controlling and managing the NFVI compute, storage
      and network resources, usually within one operator's
      Infrastructure Domain (e.g., an NFVI-PoP).

   VNFM:  Virtualized Network Function Manager - functional block that
      is responsible for controlling and managing the VNF life-cycle.

   NFVO:  NFV Orchestrator - functional block that manages the Network
      Service (NS) life-cycle and coordinates the management of NS life-
      cycle, VNF life-cycle (supported by the VNFM) and NFVI resources
      (supported by the VIM) to ensure an optimized allocation of the
      necessary resources and connectivity.

   VNF:  Virtualized Network Function - a software-based network
      function.

   VNFD:  Virtualised Network Function Descriptor - configuration
      template that describes a VNF in terms of its deployment and
      operational behaviour, and is used in the process of VNF on-
      boarding and managing the life cycle of a VNF instance.

   VNF-FG:  Virtualized Network Function Forwarding Graph - an ordered
      list of VNFs creating a service chain.

3.  Scope

   This document treats VNFs as black boxes when defining VNF
   benchmarking methodologies.  White box approaches are considered and
   analysed as a particular case, under proper consideration of
   internal VNF instrumentation.

4.  Considerations

   VNF benchmarking considerations are defined in [RFC8172].
   Additionally, VNF pre-deployment testing considerations are well
   explored in [ETS14c].

4.1.  VNF Testing Methods

   Following the ETSI's model in [ETS14c], we distinguish three methods
   for VNF evaluation:

   Benchmarking:  Where parameters (e.g., CPU, memory, storage) are
      provided and the corresponding performance metrics (e.g., latency,
      throughput) are obtained.  Note that such a request might produce
      multiple reports, for example, one with minimal-latency and
      another with maximum-throughput results.

   Verification:  Both parameters and performance metrics are provided,
      and a stimulus verifies whether the given association is correct
      or not.

   Dimensioning:  Where performance metrics are provided and the
      corresponding parameters are obtained.  Note that multiple
      deployment iterations may be required or, if possible, the
      underlying allocated resources may need to be dynamically altered.

   Note: Verification and Dimensioning can be reduced to Benchmarking.
   Therefore, we detail Benchmarking in what follows.
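
   As a purely illustrative sketch of this reduction (in Python; the
   function run_benchmark and its inputs are assumptions, not part of
   any defined interface), Dimensioning can be expressed as a loop of
   Benchmarking runs over candidate parameter sets:

   # Hypothetical sketch: Dimensioning expressed as repeated
   # Benchmarking.  run_benchmark() stands in for any procedure that
   # deploys the VNF with the given parameters and returns the
   # measured performance metrics.

   def run_benchmark(parameters):
       """Placeholder: benchmark the VNF under 'parameters'."""
       raise NotImplementedError

   def dimension(candidate_parameters, targets):
       """Return the first parameter set whose benchmark satisfies
       every target predicate, e.g.
       {"throughput_mbps": lambda v: v >= 900,
        "latency_ms": lambda v: v <= 5}."""
       for parameters in candidate_parameters:
           metrics = run_benchmark(parameters)
           if all(f(metrics[m]) for m, f in targets.items()):
               return parameters, metrics
       return None, None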

4.2.  Generic VNF Benchmarking Setup

   A generic VNF benchmarking setup is shown in Figure 1, and its
   components are explained below.  Note that not all components are
   mandatory, and the VNF benchmarking scenarios explained further
   below may arrange these components in varied settings.

                              +---------------+
                              |    Manager    |
                Control       | (Coordinator) |
                Interface     +---+-------+---+
             +--------+-----------+       +-------------------+
             |        |                                       |
             |        |   +-------------------------+         |
             |        |   |    System Under Test    |         |
             |        |   |                         |         |
             |        |   |    +-----------------+  |         |
             |     +--+------- +                 |  |         |
             |     |           |       VNF       |  |         |
             |     |           |                 |  |         |
             |     |           +----.---------.--+  |         |
       +-----+---+ |  Monitor  |    :         :     |   +-----+----+
       | Agent   | |{listeners}|----^---------V--+  |   |  Agent   |
       |(Sender) | |           |    Execution    |  |   |(Receiver)|
       |         | |           |   Environment   |  |   |          |
       |{Probers}| +-----------|                 |  |   |{Probers} |
       +-----.---+        |    +----.---------.--+  |   +-----.----+
             :            +---------^---------V-----+         :
             V                      :         :               :
             :................>.....:         :............>..:
             Stimulus Traffic Flow

                 Figure 1: Generic VNF Benchmarking Setup

   Agent --  executes active stimuli using probers (benchmarking tools)
      to benchmark and collect network and system performance metrics.
      While a single Agent is capable of performing localized benchmarks
      (e.g., stress tests on CPU, memory, disk I/O), the interaction
      among distributed Agents enables the generation and collection of
      end-to-end metrics (e.g., frame loss rate, latency).  In a
      deployment scenario, one Agent can generate the benchmark stimuli
      while the other end is the VNF itself, where, for example, one-way
      latency is evaluated.  A prober is a software/hardware-based tool
      able to generate traffic specific to a VNF (e.g., sipp) or generic
      to multiple VNFs (e.g., pktgen).  An Agent can be defined by a
      physical or virtual network function.

   Monitor --  when possible, it is instantiated inside the target VNF
      or NFVI PoP (e.g., as a plug-in process in a virtualized
      environment) to perform passive monitoring, using listeners, for
      metrics collection based on benchmark tests evaluated according to
      the Agents' stimuli.  Different from the active approach of
      Agents, which can be seen as generic benchmarking VNFs, a Monitor
      observes particular properties according to NFVI PoP and VNF
      capabilities.  A listener defines one or more interfaces for the
      extraction of particular metrics monitored in a target VNF and/or
      execution environment.  Logically, a Monitor is defined as a
      virtual network function.

   Manager --  in a VNF benchmarking deployment scenario, is responsible
      for (i) the coordination and synchronization of the activities of
      Agents and Monitors, (ii) collecting and parsing all VNF
      benchmarking results, and (iii) aggregating the inputs and parsed
      benchmark outputs to construct a VNF performance profile, a report
      that correlates the VNF stimuli and the monitored metrics.  A
      Manager executes the main configuration, operation and management
      actions to deliver the VNF benchmarking results.  A Manager can be
      defined by a physical or virtual network function.

   Virtualized Network Function (VNF) --  consists of one or more
      software components adequate for performing a network function
      according to allocated virtual resources and satisfied
      requirements in an execution environment.  A VNF can demand
      particular configurations for benchmarking specifications,
      exhibiting variable performance profiles based on the available
      virtual resources/parameters and on configured enhancements
      targeting specific technologies.

   Execution Environment --  defines a virtualized and controlled
      composition of capabilities necessary for the execution of a VNF.
      An execution environment stands as a general purpose level of
      virtualization with abstracted resources available for one or more
      VNFs.  It can also provide specific technology enablement,
      resulting in viable settings for enhancing VNF performance
      profiles.
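
   As a purely illustrative sketch of how these components interact
   (written in Python; class and method names such as Agent.probe,
   Monitor.listen and Manager.run are assumptions and do not reflect
   any particular implementation, including Gym), a Manager could
   coordinate Agents and Monitors as follows:

   # Hypothetical sketch of the generic setup: a Manager triggers
   # Agents (active probers) and Monitors (passive listeners) and
   # merges their outputs for later report construction.

   class Agent:
       def probe(self, prober, parameters):
           """Run an active stimulus (e.g., a traffic generator)
           and return the measured metrics."""
           raise NotImplementedError

   class Monitor:
       def listen(self, listener, duration):
           """Passively collect metrics from the VNF or execution
           environment for 'duration' seconds."""
           raise NotImplementedError

   class Manager:
       def __init__(self, agents, monitors):
           self.agents = agents
           self.monitors = monitors

       def run(self, stimulus, duration):
           # Shown sequentially for brevity; in practice, passive
           # collection runs concurrently with the active stimuli.
           monitored = [m.listen("default", duration)
                        for m in self.monitors]
           measured = [a.probe(stimulus["prober"],
                               stimulus["parameters"])
                       for a in self.agents]
           return {"agents": measured, "monitors": monitored}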

4.3.  Deployment Scenarios

   A VNF benchmark deployment scenario establishes the physical and/or
   virtual instantiation of components defined in a VNF benchmarking
   setup.

   Based on a generic VNF benchmarking setup, the following
   considerations hold for deployment scenarios:

   o  Components can be composed in a single entity and defined as black
      or white boxes.  For instance, Manager and Agent could jointly
      define a software entity to perform a VNF benchmark and present
      results.

   o  Monitor is not a mandatory component and must be considered only
      when white box benchmarking approaches are performed for a VNF
      and/or its execution environment.

   o  Monitor can be defined by multiple instances of software
      components, each addressing a VNF or execution environment and
      their respective open interfaces for the extraction of metrics.

   o  Agents can be arranged in varied topology setups, including the
      possibility of each of a VNF's multiple input and output ports
      being directly connected to a distinct Agent.

   o  All benchmarking components defined in a deployment scenario must
      synchronize their clocks to an international time standard, as
      sketched below.
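
   A minimal sketch of such a clock-synchronization check (in Python,
   assuming the ntplib package is installed and an NTP server such as
   pool.ntp.org is reachable; the tolerance value is an arbitrary
   example) could be:

   # Hypothetical sketch: verify a component's clock offset against an
   # NTP reference before starting a benchmarking test.
   import ntplib

   MAX_OFFSET_SECONDS = 0.001  # example tolerance for ms-level metrics

   def clock_is_synchronized(server="pool.ntp.org"):
       response = ntplib.NTPClient().request(server, version=3)
       return abs(response.offset) <= MAX_OFFSET_SECONDS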

4.4.  Influencing Aspects

   In general, VNF benchmarks must capture relevant causes of
   performance variability.  Examples of VNF performance influencing
   aspects can be observed in:

   Deployment Scenario Topology:  The orchestrated disposition of
      components can define particular interconnections among them
      composing a specific case/method of VNF benchmarking.

   Execution Environment:  The availability of generic and specific
      capabilities satisfying VNF requirements define a skeleton of
      opportunities for the allocation of VNF resources.  In addition,
      particular cases can define multiple VNFs interacting in the same
      execution environment of a benchmarking setup.

   VNF:  A detailed description of the functionalities performed by a
      VNF establishes the possible traffic forwarding and processing
      operations it can perform on packets, in addition to its running
      requirements and specific configurations, all of which might
      affect and compose a benchmarking setup.

   Agent:  The toolset available for generating benchmarking stimuli
      for a VNF, and its characteristics of packet format, arrangement,
      and workload, can interfere with a benchmarking setup.  VNFs may
      support only specific traffic formats as stimuli.

   Monitor:  In a particular benchmarking setup where measurements of
      VNF and/or execution environment metrics are available for
      extraction, an important analysis consists in verifying whether
      the Monitor components impact the performance metrics of the VNF
      and of the underlying execution environment.

   Manager:  The overall composition of VNF benchmarking procedures can
      determine arrangements of internal states inside a VNF, which can
      interfere with the observed benchmark metrics.

5.  Methodology

   Portability, an intrinsic characteristic of VNFs, allows them to be
   deployed in multiple environments, enabling benchmarking procedures,
   even in parallel, in varied deployment scenarios.  A VNF
   benchmarking methodology must be described in a clear and objective
   manner in order to allow effective repeatability and comparability
   of the test results.

5.1.  General Description

   For the sake of clarity and generalization of VNF benchmarking tests,
   consider the following definitions.

   VNF Benchmarking Layout (VNF-BL) --  a setup that specifies a method
      of how to measure a VNF Performance Profile.  The specification
      includes structural and functional instructions, and variable
      parameters at different abstractions (e.g., topology of the
      deployment scenario, benchmarking target metrics, parameters of
      benchmarking components).  A VNF-BL may be specific to a VNF or
      applicable to several VNF types.  A VNF-BL can be used to
      elaborate a VNF benchmark deployment scenario aiming at the
      extraction of particular VNF performance metrics.

   VNF Performance Profile (VNF-PP) --  defines a mapping between the
      capabilities allocated to a VNF (e.g., CPU, memory) and the VNF
      performance metrics (e.g., throughput, latency between in/out
      ports) obtained in a benchmarking test elaborated based on a
      VNF-BL.  Logically, packet processing metrics are presented in a
      specific format that addresses statistical significance, in which
      a correspondence exists between the VNF parameters and the
      measured/qualified VNF performance delivered.
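
   As a purely illustrative data-structure sketch (in Python; all field
   names and values are assumptions, not a normative schema), a VNF-BL
   and the VNF-PP obtained from it could be captured as follows:

   # Hypothetical sketch of VNF-BL and VNF-PP records; keys and values
   # are illustrative placeholders only.
   vnf_bl = {
       "target_vnf": "example-firewall",
       "scenario": {"topology": "agent->vnf->agent",
                    "monitors": ["vnf", "execution-environment"]},
       "parameters": {"vcpus": [1, 2, 4], "memory_mb": [1024, 2048]},
       "metrics": ["throughput_mbps", "latency_ms", "frame_loss_ratio"],
       "trials_per_test": 10,
   }

   vnf_pp = {
       "vnf_bl": "example-firewall-bl",
       "results": [
           {"parameters": {"vcpus": 2, "memory_mb": 2048},
            "metrics": {
                "throughput_mbps": {"mean": 940.0, "stdev": 3.1},
                "latency_ms": {"mean": 0.8, "stdev": 0.05}}},
       ],
   }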

5.1.1.  Configurations

   In addition to a VNF-BL, all the items listed below, together with
   their associated settings (not limited to the ones mentioned), must
   be contained in annotations describing a VNF benchmark deployment
   scenario.  Ideally, any person in possession of such annotations and
   of the necessary/associated skeleton of hardware and software
   components should be able to reproduce the same deployment scenario
   and VNF benchmarking test.

   VNF:   type, model, version/release, allocated resources, specific
      parameters, technology requirements, software details.

   Execution Environment:   type, model, version/release, available
      resources, technology capabilities, software details.

   Agents:   toolset of available probers and related benchmarking
      metrics, workload, traffic formats, virtualization layer (if
      existent), hardware capabilities (if existent).

   Monitors:   toolset of available listeners and related monitoring
      metrics, monitoring target (VNF and/or execution environment),
      virtualization layer (if existent), hardware capabilities (if
      existent).

   Manager:   utilized procedures during the benchmark test, set of
      events and settings exchanged with Agents/Monitors, established
      sequence of possible states triggered in the target VNF.
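
   A purely illustrative sketch of such annotations (in Python; all
   keys and values are assumptions, not a defined format) could look
   like the following:

   # Hypothetical deployment-scenario annotations; keys and values are
   # illustrative placeholders only.
   scenario_annotations = {
       "vnf": {"type": "firewall", "version": "1.2.0",
               "resources": {"vcpus": 2, "memory_mb": 2048}},
       "execution_environment": {"type": "kvm", "version": "2.11",
                                 "capabilities": ["dpdk"]},
       "agents": [{"probers": ["pktgen"], "traffic": "64B UDP"}],
       "monitors": [{"listeners": ["cpu", "memory"], "target": "vnf"}],
       "manager": {"procedure": "throughput-ramp",
                   "events": ["start", "collect", "stop"]},
   }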

5.1.2.  Testing Procedures

   Consider the following definitions:

   Trial:   Consists of a single process or iteration to obtain VNF
      benchmarking metrics as a singular measurement.

   Test:   Defines strict parameters for benchmarking components to
      perform one or more Trials.

   Method:   Consists of a VNF-BL targeting one or more Tests to achieve
      VNF benchmarking measurements.  A Method makes explicit the ranges
      of parameter values for the configuration of benchmarking
      components realized in a Test.
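
   As a purely illustrative sketch of this hierarchy (in Python; the
   names expand_tests, run_method and run_trial are assumptions), a
   Method expands its parameter ranges into Tests, each of which
   repeats a number of Trials:

   # Hypothetical sketch: a Method's parameter ranges expand into
   # Tests; each Test repeats a number of Trials.
   import itertools

   def expand_tests(parameter_ranges):
       """E.g., {"vcpus": [1, 2], "memory_mb": [1024]} yields the
       Tests {"vcpus": 1, "memory_mb": 1024} and
       {"vcpus": 2, "memory_mb": 1024}."""
       names = sorted(parameter_ranges)
       for values in itertools.product(*(parameter_ranges[n]
                                         for n in names)):
           yield dict(zip(names, values))

   def run_method(parameter_ranges, trials_per_test, run_trial):
       """run_trial(parameters) returns one Trial's measurements."""
       return [{"parameters": test,
                "trials": [run_trial(test)
                           for _ in range(trials_per_test)]}
               for test in expand_tests(parameter_ranges)]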

   The following sequence of events composes the basic general
   procedure that must be performed for the execution of a VNF
   benchmarking test.

   1.   The sketch of a VNF benchmarking setup must be defined to later
      be translated into a deployment scenario.  Such a sketch must
      contain all the structural and functional settings composing a
      VNF-BL.  At the end of this step, the complete Method of
      benchmarking the target VNF is defined.

   2.   Via an automated orchestrator or in a manual process, all the
      components of the VNF benchmark setup must be allocated and
      interconnected.  The VNF and the execution environment must be
      configured to properly address the VNF benchmark stimuli.

   3.   The Manager, Agent(s) and Monitor(s) (if present) must be
      started and configured to execute the benchmark stimuli and
      retrieve the expected/target metrics captured during and at the
      end of the VNF benchmarking test.  One or more Trials realize the
      measurement of the VNF performance metrics.

   4.   Output results from each benchmarking test must be received by
      the Manager.  In an automated or manual process, the metrics to
      be extracted, as defined in the VNF-BL, must compose a VNF-PP,
      resulting in a VNF benchmark report.
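
   As a purely illustrative sketch of this sequence (in Python; the
   function names are assumptions and do not correspond to any
   particular orchestrator or implementation), the four steps could be
   chained as follows:

   # Hypothetical sketch of the general procedure: define, deploy,
   # execute, and report.
   def execute_benchmark(vnf_bl, deploy, run_test, build_report):
       """'deploy', 'run_test' and 'build_report' are placeholders
       supplied by the benchmarking platform."""
       # Step 1: the VNF-BL carries the complete Method definition.
       method = vnf_bl["method"]
       # Step 2: allocate and interconnect all setup components.
       scenario = deploy(vnf_bl["scenario"])
       # Step 3: run the configured Tests (each with its Trials).
       results = [run_test(scenario, t) for t in method["tests"]]
       # Step 4: compose the VNF-PP and the benchmark report.
       vnf_pp = {"vnf_bl": vnf_bl, "results": results}
       return build_report(vnf_pp)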

5.2.  Particular Cases

   Configurations and procedures concerning particular cases of VNF
   benchmarks address testing methodologies proposed in [RFC8172].  In
   addition to the general description previously defined, some details
   must be taken into consideration in the following VNF benchmarking
   cases.

   Noisy Neighbor:   An Agent can take on the role of a noisy neighbor,
      generating a particular workload in synchrony with a benchmarking
      procedure over a VNF (see the sketch after this list).
      Adjustments of the noisy workload stimulus type, frequency,
      virtualization level, among others, must be detailed in the
      VNF-BL.

   Representative Capacity:   An average value of workload must be
      specified as an Agent stimulus.  Considering a long-term analysis,
      the VNF must be configured to properly address a desired average
      behavior of performance in comparison with the value of the
      workload stimulus.

   Flexibility and Elasticity:   Since a VNF may be composed of multiple
      components, internal events of the VNF might trigger varied
      behaviors, activating functionalities associated with elasticity,
      such as load balancing.  In these terms, a detailed
      characterization of the VNF must be specified and contained in the
      VNF-PP and benchmarking report.

   On Failures:   Similarly to the previous case, VNF benchmarking
      setups must also capture the dynamics involved in the VNF
      behavior.  In case of failures, a VNF would restart itself and
      possibly result in an off-line period.  A VNF-PP and benchmarking
      report must clearly capture such variation of VNF states.

   White Box VNF:   A benchmarking setup must define deployment
      scenarios to be compared with and without Monitor components
      inside the VNF and/or the execution environment, in order to
      analyze whether the VNF performance is affected.  The VNF-PP and
      benchmarking report must contain such analysis of performance
      variability, together with all the targeted VNF performance
      metrics.
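
   As a purely illustrative fragment for the Noisy Neighbor case
   referenced above (in Python; keys, values and the stress-ng prober
   are assumptions), a VNF-BL could describe the extra Agent as
   follows:

   # Hypothetical VNF-BL fragment: a second Agent acts as the noisy
   # neighbor, generating background load on the same execution
   # environment in synchrony with the benchmarking stimulus.
   noisy_neighbor_agent = {
       "role": "noisy-neighbor",
       "prober": "stress-ng",      # assumed CPU/memory load generator
       "placement": "same-execution-environment",
       "workload": {"cpu_load_percent": 80, "pattern": "constant"},
       "synchronized_with": "benchmark-stimulus",
   }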

6.  VNF Benchmark Report

   When extracting VNF and execution environment performance metrics,
   various Trials must be performed so that the obtained benchmarking
   results are statistically significant.  Each Trial must be executed
   following a particular deployment scenario composed by a VNF-BL.

   A VNF Benchmarking Report correlates the structural and functional
   parameters of the VNF-BL with the targeted/extracted VNF
   benchmarking metrics of the obtained VNF-PP.

   A VNF performance profile must address the combined set of classified
   items in the 3x3 Matrix Coverage defined in [RFC8172].
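
   A minimal sketch of such a statistical summary over Trials (in
   Python; the function name and record layout are assumptions) could
   be:

   # Hypothetical sketch: aggregate per-Trial measurements into the
   # summary statistics reported in a VNF-PP.
   from statistics import mean, stdev

   def summarize_trials(trials):
       """'trials' is a list of dicts mapping metric name to value."""
       summary = {}
       for name in trials[0]:
           values = [trial[name] for trial in trials]
           summary[name] = {
               "mean": mean(values),
               "stdev": stdev(values) if len(values) > 1 else 0.0,
               "min": min(values),
               "max": max(values),
           }
       return summary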

7.  Open Source Reference Implementation

   The software, named Gym, is a framework for automated benchmarking
   of Virtualized Network Functions (VNFs).  It was coded following the
   initial ideas presented in a 2015 scientific paper entitled "VBaaS:
   VNF Benchmark-as-a-Service" [Rosa-a].  Later, the evolved design and
   prototyping ideas were presented at IETF/IRTF meetings seeking
   impact in the NFVRG and BMWG.

   Gym was built to receive high-level test descriptors and execute
   them to extract VNF profiles containing measurements of performance
   metrics, especially to associate resource allocation (e.g., vCPU)
   with packet processing metrics (e.g., throughput) of VNFs.
   Following the original research ideas [Rosa-a], such output profiles
   might be used by orchestrator functions to perform VNF lifecycle
   tasks (e.g., deployment, maintenance, tear-down).

   The guiding principles proposed to design and build Gym, elaborated
   in [Rosa-b], can be combined in multiple practical ways for multiple
   VNF testing purposes:

   o  Comparability: Output of tests shall be simple to understand and
      process, in a human-readable format, coherent, and easily reusable
      (e.g., as inputs for analytic applications).

   o  Repeatability: Test setup shall be comprehensively defined through
      a flexible design model that can be interpreted and executed by
      the testing platform repeatedly but supporting customization.

   o  Configurability: Open interfaces and extensible messaging models
      shall be available between components for flexible composition of
      test descriptors and platform configurations.

   o  Interoperability: Tests shall be portable to different
      environments using lightweight components.

   In [Rosa-b], Gym was utilized to benchmark a decomposed IP
   Multimedia Subsystem VNF, and in [Rosa-c], a virtual switch (Open
   vSwitch, OVS) was the target VNF of Gym for the analysis of VNF
   benchmarking automation.  These articles validated Gym as a
   prominent open source reference implementation for VNF benchmarking
   tests, and they contribute a discussion of the lessons learned and
   of the overall NFV performance testing landscape, including
   automation.

   Gym stands as the open source reference implementation that realizes
   the VNF benchmarking methodology presented in this document.  Gym is
   being released as open source at [Gym].  The code repository also
   includes VNF Benchmarking Layout (VNF-BL) examples for the vIMS and
   OVS targets described in [Rosa-b] and [Rosa-c].

8.  Security Considerations

   TBD

9.  IANA Considerations

   This document does not require any IANA actions.

10.  Acknowledgement

   The authors would like to thank the support of Ericsson Research,
   Brazil.

11.  References

11.1.  Normative References

   [ETS14a]   ETSI, "Architectural Framework - ETSI GS NFV 002 V1.2.1",
              Dec 2014, <http://www.etsi.org/deliver/etsi_gs/NFV/
              001_099/002/01.02.01_60/gs_NFV002v010201p.pdf>.

   [ETS14b]   ETSI, "Terminology for Main Concepts in NFV - ETSI GS NFV
              003 V1.2.1", Dec 2014,
              <http://www.etsi.org/deliver/etsi_gs/NFV/001_099/
              003/01.02.01_60/gs_NFV003v010201p.pdf>.

   [ETS14c]   ETSI, "NFV Pre-deployment Testing - ETSI GS NFV TST001
              V1.1.1", April 2016,
              <http://docbox.etsi.org/ISG/NFV/Open/DRAFTS/TST001_-_Pre-
              deployment_Validation/NFV-TST001v0015.zip>.

   [RFC1242]  S. Bradner, "Benchmarking Terminology for Network
              Interconnection Devices", July 1991,
              <https://www.rfc-editor.org/info/rfc1242>.

   [RFC8172]  A. Morton, "Considerations for Benchmarking Virtual
              Network Functions and Their Infrastructure", July 2017,
              <https://www.rfc-editor.org/info/rfc8172>.

   [RFC8204]  M. Tahhan, B. O'Mahony, A. Morton, "Benchmarking Virtual
              Switches in the Open Platform for NFV (OPNFV)", September
              2017, <https://www.rfc-editor.org/info/rfc8204>.

11.2.  Informative References

   [Gym]      "Gym Home Page", <https://github.com/intrig-unicamp/gym>.

   [Rosa-a]   R. V. Rosa, C. E. Rothenberg, R. Szabo, "VBaaS: VNF
              Benchmark-as-a-Service", Fourth European Workshop on
              Software Defined Networks , Sept 2015,
              <http://ieeexplore.ieee.org/document/7313620>.

   [Rosa-b]   R. Rosa, C. Bertoldo, C. Rothenberg, "Take your VNF to the
              Gym: A Testing Framework for Automated NFV Performance
              Benchmarking", IEEE Communications Magazine Testing
              Series , Sept 2017,
              <http://ieeexplore.ieee.org/document/8030496>.

   [Rosa-c]   R. V. Rosa, C. E. Rothenberg, "Taking Open vSwitch to the
              Gym: An Automated Benchmarking Approach", IV Workshop pre-
              IETF/IRTF, CSBC Brazil, July 2017,
              <https://intrig.dca.fee.unicamp.br/wp-
              content/plugins/papercite/pdf/rosa2017taking.pdf>.

Authors' Addresses

   Raphael Vicente Rosa (editor)
   University of Campinas
   Av. Albert Einstein, 400
   Campinas, Sao Paulo  13083-852
   Brazil

   Email: rvrosa@dca.fee.unicamp.br
   URI:   https://intrig.dca.fee.unicamp.br/raphaelvrosa/

   Christian Esteve Rothenberg
   University of Campinas
   Av. Albert Einstein, 400
   Campinas, Sao Paulo  13083-852
   Brazil

   Email: chesteve@dca.fee.unicamp.br
   URI:   http://www.dca.fee.unicamp.br/~chesteve/
