Network Working Group M. Bagnulo
Internet-Draft UC3M
Intended status: Best Current Practice B. Claise
Expires: January 7, 2016 Cisco Systems, Inc.
P. Eardley
BT
A. Morton
AT&T Labs
A. Akhter
Consultant
July 6, 2015
Registry for Performance Metrics
draft-ietf-ippm-metric-registry-03
Abstract
This document defines the IANA Registry for Performance Metrics.
This document also gives a set of guidelines for Registered
Performance Metric requesters and reviewers.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 7, 2016.
Copyright Notice
Copyright (c) 2015 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Open Issues . . . . . . . . . . . . . . . . . . . . . . . . . 3
2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4
3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5
4. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
5. Motivation for a Performance Metrics Registry . . . . . . . . 7
5.1. Interoperability . . . . . . . . . . . . . . . . . . . . 7
5.2. Single point of reference for Performance Metrics . . . . 8
5.3. Side benefits . . . . . . . . . . . . . . . . . . . . . . 8
6. Criteria for Performance Metrics Registration . . . . . . . . 8
7. Performance Metric Registry: Prior attempt . . . . . . . . . 9
7.1. Why this Attempt Will Succeed . . . . . . . . . . . . . . 10
8. Definition of the Performance Metric Registry . . . . . . . . 10
8.1. Summary Category . . . . . . . . . . . . . . . . . . . . 11
8.1.1. Identifier . . . . . . . . . . . . . . . . . . . . . 11
8.1.2. Name . . . . . . . . . . . . . . . . . . . . . . . . 12
8.1.3. URI . . . . . . . . . . . . . . . . . . . . . . . . . 13
8.1.4. Description . . . . . . . . . . . . . . . . . . . . . 13
8.2. Metric Definition Category . . . . . . . . . . . . . . . 13
8.2.1. Reference Definition . . . . . . . . . . . . . . . . 13
8.2.2. Fixed Parameters . . . . . . . . . . . . . . . . . . 13
8.3. Method of Measurement Category . . . . . . . . . . . . . 14
8.3.1. Reference Method . . . . . . . . . . . . . . . . . . 14
8.3.2. Packet Generation Stream . . . . . . . . . . . . . . 14
8.3.3. Traffic Filter . . . . . . . . . . . . . . . . . . . 15
8.3.4. Sampling Distribution . . . . . . . . . . . . . . . . 15
8.3.5. Run-time Parameters . . . . . . . . . . . . . . . . . 16
8.3.6. Role . . . . . . . . . . . . . . . . . . . . . . . . 16
8.4. Output Category . . . . . . . . . . . . . . . . . . . . . 16
8.4.1. Type . . . . . . . . . . . . . . . . . . . . . . . . 16
8.4.2. Reference Definition . . . . . . . . . . . . . . . . 17
8.4.3. Metric Units . . . . . . . . . . . . . . . . . . . . 17
8.5. Administrative information . . . . . . . . . . . . . . . 17
8.5.1. Status . . . . . . . . . . . . . . . . . . . . . . . 17
8.5.2. Requester . . . . . . . . . . . . . . . . . . . . . . 17
8.5.3. Revision . . . . . . . . . . . . . . . . . . . . . . 17
8.5.4. Revision Date . . . . . . . . . . . . . . . . . . . . 17
8.6. Comments and Remarks . . . . . . . . . . . . . . . . . . 17
9. The Life-Cycle of Registered Metrics . . . . . . . . . . . . 18
9.1. Adding new Performance Metrics to the Registry . . . . . 18
9.2. Revising Registered Performance Metrics . . . . . . . . . 19
9.3. Deprecating Registered Performance Metrics . . . . . . . 20
10. Security considerations . . . . . . . . . . . . . . . . . . . 21
11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21
12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 22
13. References . . . . . . . . . . . . . . . . . . . . . . . . . 22
13.1. Normative References . . . . . . . . . . . . . . . . . . 22
13.2. Informative References . . . . . . . . . . . . . . . . . 23
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 24
1. Open Issues
1. Define the Filter column subcolumns, i.e. how filters are
expressed.
2. Need to include an example for a passive metric.
3. Shall we remove the definitions of active and passive? If we
remove it, shall we keep all the related comments in the draft?
4. URL: should we include a URL link in each registry entry with a
URL specific to the entry that links to a different text page
that contains all the details of the registry entry as in
http://www.iana.org/assignments/xml-registry/xml-
registry.xhtml#ns
5. As discussed between Marcelo and Benoit, modify "defines" in the
Parameter definition. Reasoning: the distinction between a new
performance metric and a parameter is not clear. If it's defined
as a variable, is it a new perf metric? "All Parameters must be
known to measure using a metric": well, if it's a new perf
metric, we don't have the problem. And state what the parameter
is in the example.
6. As discussed between Marcelo and Benoit, can we find a Parameter
for passive monitoring? The sampling distribution is a fixed
Parameter, right? Because it's needed "to interpret the
results", as mentioned in the Parameter definition.
7. We are missing a new Parameter section that explains the link
between Parameters, Fixed Parameters, Run-time Parameters, and
potentially stream parameters. We must also add in this section
that "differences in the values of Fixed Parameters imply new
registry entries".
8. The double definitions are annoying: Registered Performance
Metric = Registered Metric, and Performance Metrics Registry =
Registry. I (Benoit) am in favor of keeping only a single
definition (the longest one) and being consistent.
2. Introduction
The IETF specifies and uses Performance Metrics of protocols and
applications transported over its protocols. Performance metrics are
such an important part of the operations of IETF protocols that
[RFC6390] specifies guidelines for their development.
The definition and use of Performance Metrics in the IETF happens in
various working groups (WG), most notably:
The "IP Performance Metrics" (IPPM) WG is the WG primarily
focusing on Performance Metrics definition at the IETF.
The "Metric Blocks for use with RTCP's Extended Report Framework"
(XRBLOCK) WG recently specified many Performance Metrics related
to "RTP Control Protocol Extended Reports (RTCP XR)" [RFC3611],
which establishes a framework to allow new information to be
conveyed in RTCP, supplementing the original report blocks defined
in "RTP: A Transport Protocol for Real-Time Applications",
[RFC3550].
The "Benchmarking Methodology" WG (BMWG) defined many Performance
Metrics for use in laboratory benchmarking of inter-networking
technologies.
The "IP Flow Information eXport" (IPFIX) concluded WG specified an
IANA process for new Information Elements. Some Performance
Metrics related Information Elements are proposed on regular
basis.
The "Performance Metrics for Other Layers" (PMOL) concluded WG,
defined some Performance Metrics related to Session Initiation
Protocol (SIP) voice quality [RFC6035].
It is expected that more Performance Metrics will be defined in the
future, not only IP-based metrics, but also metrics which are
protocol-specific and application-specific.
However, despite the importance of Performance Metrics, there are two
related problems for the industry. First, how to ensure that when
one party requests another party to measure (or report or in some way
act on) a particular Performance Metric, both parties have exactly
the same understanding of which Performance Metric is being referred
to. Second, how to discover which Performance Metrics have been
specified, so as to avoid developing a new Performance Metric that is
very similar, but not quite interoperable. These problems can be
addressed by creating a registry of Performance Metrics. The usual
way in which the IETF organizes namespaces is with Internet Assigned
Numbers Authority (IANA) registries, and there is currently no
Performance Metrics Registry maintained by IANA.
This document therefore creates an IANA-maintained Performance
Metrics Registry. It also provides best practices on how to specify
new entries in, and how to update existing entries in, the
Performance Metrics Registry.
3. Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
[RFC2119].
Performance Metric: A Performance Metric is a quantitative measure
of performance, targeted to an IETF-specified protocol or targeted
to an application transported over an IETF-specified protocol.
Examples of Performance Metrics are the FTP response time for a
complete file download, the DNS response time to resolve the IP
address, a database logging time, etc. This definition is
consistent with the definition of metric in [RFC2330] and broader
than the definition of performance metric in [RFC6390].
Registered Performance Metric: A Registered Performance Metric (or
Registered Metric) is a Performance Metric expressed as an entry
in the Performance Metric Registry, administered by IANA. Such a
Performance Metric has met all the registry review criteria
defined in this document in order to be included in the registry.
Performance Metrics Registry: The IANA registry containing
Registered Performance Metrics. In this document, it is also
called simply "Registry".
Proprietary Registry: A set of metrics that are registered in a
proprietary registry, as opposed to the Performance Metrics
Registry.
Performance Metrics Experts: The Performance Metrics Experts are a
group of designated experts [RFC5226] selected by the IESG to
validate the Performance Metrics before updating the Performance
Metrics Registry. The Performance Metrics Experts work closely
with IANA.
Parameter: An input factor defined as a variable in the definition
of a Performance Metric. A numerical or other specified factor
forming one of a set that defines a metric or sets the conditions
of its operation. All Parameters must be known to measure using a
metric and interpret the results. Although Parameters do not
change the fundamental nature of the Performance Metric's
definition, some have substantial influence on the network
property being assessed and interpretation of the results.
Note: Consider the case of packet loss in the following two
Active Measurement Method cases. The first case is packet loss
as background loss where the parameter set includes a very
sparse Poisson stream, and only characterizes the times when
packets were lost. Actual user streams likely see much higher
loss at these times, due to tail drop or radio errors. The
second case is packet loss as inverse of throughput where the
parameter set includes a very dense, bursty stream, and
characterizes the loss experienced by a stream that
approximates a user stream. These are both "loss metrics", but
the difference in interpretation of the results is highly
dependent on the Parameters (at least), to the extreme where we
are actually using loss to infer its complement: delivered
throughput.
Active Measurement Method: Methods of Measurement conducted on
traffic which serves only the purpose of measurement and is
generated for that reason alone, and whose traffic characteristics
are known a priori. Examples of Active Measurement Methods are
the measurement methods for the One-way Delay metric defined in
[RFC2679] and the one for Round-trip Delay defined in [RFC2681].
Passive Measurement Method: Methods of Measurement conducted on
network traffic, generated either from the end users or from
network elements. One characteristic of Passive Measurement
Methods is that sensitive information may be observed, and as a
consequence, stored in the measurement system.
4. Scope
This document is meant for two different audiences. For those
defining new Registered Performance Metrics, it provides
specifications and best practices to be used in deciding which
Registered Metrics are useful for a measurement study, instructions
for writing the text for each column of the Registered Metrics, and
information on the supporting documentation required for the new
Registry entry (up to and including the publication of one or more
RFCs or I-Ds describing it). For the appointed Performance Metrics
Experts and for IANA personnel administering the new IANA Performance
Metric Registry, it defines a set of acceptance criteria against
which these proposed Registered Performance Metrics should be
evaluated.
This Performance Metric Registry is applicable to Performance Metrics
issued from Active Measurement, Passive Measurement, and any other
form of Performance Metric. This registry is designed to encompass
Performance Metrics developed throughout the IETF and especially for
the technologies specified in the following working groups: IPPM,
XRBLOCK, IPFIX, and BMWG. This document analyzes a prior attempt to
set up a Performance Metric Registry, and the reasons why that design
was inadequate [RFC6248]. Finally, this document gives a set of
guidelines for requesters and expert reviewers of candidate
Registered Performance Metrics.
This document makes no attempt to populate the Registry with initial
entries. It does provide a few examples that are merely
illustrations and should not be included in the registry at this
point in time.
Based on [RFC5226] Section 4.3, this document is processed as Best
Current Practice (BCP) [RFC2026].
5. Motivation for a Performance Metrics Registry
In this section, we detail several motivations for the Performance
Metric Registry.
5.1. Interoperability
As with any IETF registry, the primary use of a registry is to manage
a namespace for its use within one or more protocols. In the
particular case of the Performance Metric Registry, there are two
types of protocols that will use the Performance Metrics in the
Registry during their operation (by referring to the Index values):
o Control protocol: this type of protocol is used to allow one
entity to request that another entity perform a measurement using
a specific metric defined by the Registry. One particular example
is the LMAP framework [I-D.ietf-lmap-framework]. Using the LMAP
terminology, the Registry is used in the LMAP Control protocol to
allow a Controller to request a measurement task from one or more
Measurement Agents. In order to enable this use case, the entries
of the Performance Metric Registry must be well enough defined to
allow a Measurement Agent implementation to trigger a specific
measurement task upon the reception of a control protocol message.
This requirement heavily constrains the type of entries that are
acceptable for the Performance Metric Registry.
o Report protocol: this type of protocol is used to allow an entity
to report measurement results to another entity. By referencing a
specific Performance Metric Registry entry, it is possible to
properly characterize the measurement result data being reported.
Using the LMAP terminology, the Registry is used in the Report
protocol to allow a Measurement Agent to report measurement
results to a Collector.
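As a purely illustrative sketch (and not a definition of any LMAP or
other protocol encoding), the following shows how a control request
and a report might reference a Registry entry by Identifier and URI.
The field names, the Identifier value, and the Python representation
are assumptions made for this example only.

   # Hypothetical, minimal sketch: a control request and a report that
   # refer to a Registered Performance Metric by Identifier and URI.
   # The field names and the Identifier value 1 are invented for this
   # illustration; they are not defined by this document or by LMAP.

   control_request = {
       "metric-id": 1,                                  # Registry Identifier
       "metric-uri": "urn:ietf:params:ippm:metric:"
                     "Act_UDP_Latency_Poisson_99mean",  # Registry URI
       "run-time-parameters": {                         # supplied at execution
           "source": "192.0.2.1",
           "destination": "198.51.100.2",
           "start-time": "2015-07-06T12:00:00Z",
       },
   }

   measurement_report = {
       "metric-id": control_request["metric-id"],       # same Registry entry
       "results": [0.042, 0.037, 0.051],                # units per Registry entry
   }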
5.2. Single point of reference for Performance Metrics
A Performance Metrics Registry serves as a single point of reference
for Performance Metrics defined in different working groups in the
IETF. As we mentioned earlier, there are several WGs that define
Performance Metrics in the IETF and it is hard to keep track of all
them. This results in multiple definitions of similar Performance
Metrics that attempt to measure the same phenomena but in slightly
different (and incompatible) ways. Having a Registry would allow
both the IETF community and external people to have a single list of
relevant Performance Metrics defined by the IETF (and others, where
appropriate). The single list is also an essential aspect of
communication about Performance Metrics, where different entities
that request measurements, execute measurements, and report the
results can benefit from a common understanding of the referenced
Performance Metric.
5.3. Side benefits
There are a couple of side benefits of having such a Registry.
First, the Registry could serve as an inventory of useful and used
Performance Metrics that are normally supported by different
implementations of measurement agents. Second, the results of
measurements using the Performance Metrics would be comparable even
if they are performed by different implementations and in different
networks, as the Performance Metric is properly defined. BCP 176
[RFC6576] examines whether the results produced by independent
implementations are equivalent in the context of evaluating the
completeness and clarity of metric specifications. This BCP defines
the standards track advancement testing for (active) IPPM metrics,
and the same process will likely suffice to determine whether
Registered Performance Metrics are sufficiently well specified to
result in comparable (or equivalent) results. Registered Performance
Metrics which have undergone such testing SHOULD be noted, with a
reference to the test results.
6. Criteria for Performance Metrics Registration
It is neither possible nor desirable to populate the Registry with
all combinations of Parameters of all Performance Metrics. The
Registered Performance Metrics should be:
1. interpretable by the user,
2. implementable by the software designer,
3. deployable by network operators,
4. accurate, for interoperability and deployment across vendors,
5. operationally useful, so that it has significant industry
interest and/or has seen deployment, and
6. sufficiently tightly defined, so that different values for the
Run-time Parameters do not change the fundamental nature of the
measurement, nor change the practicality of its implementation.
In essence, there needs to be evidence that a candidate Registered
Performance Metric has significant industry interest, or has seen
deployment, and there is agreement that the candidate Registered
Performance Metric serves its intended purpose.
7. Performance Metric Registry: Prior attempt
There was a previous attempt to define a metric registry, in RFC 4148
[RFC4148]. However, it was obsoleted by RFC 6248 [RFC6248] because
it was "found to be insufficiently detailed to uniquely identify IPPM
metrics... [there was too much] variability possible when
characterizing a metric exactly" which led to the RFC4148 registry
having "very few users, if any".
A couple of interesting additional quotes from RFC 6248 might help
understand the issues related to that registry.
1. "It is not believed to be feasible or even useful to register
every possible combination of Type P, metric parameters, and
Stream parameters using the current structure of the IPPM Metrics
Registry."
2. "The registry structure has been found to be insufficiently
detailed to uniquely identify IPPM metrics."
3. "Despite apparent efforts to find current or even future users,
no one responded to the call for interest in the RFC 4148
registry during the second half of 2010."
The current approach learns from this by tightly defining each
Registered Performance Metric with only a few variable (Run-time)
Parameters to be specified by the measurement designer, if any. The
idea is that entries in the Registry stem from different measurement
methods which require input (Run-time) parameters to set factors like
source and destination addresses (which do not change the fundamental
nature of the measurement). The downside of this approach is that it
could result in a large number of entries in the Registry. There is
agreement that less is more in this context - it is better to have a
reduced set of useful metrics rather than a large set of metrics,
some with questionable usefulness.
7.1. Why this Attempt Will Succeed
As mentioned in the previous section, one of the main issues with the
previous registry was that the metrics contained in the registry were
too generic to be useful. This document specifies stricter criteria
for Performance Metric registration (see Section 6), and establishes
a group of Performance Metrics Experts who will provide guidelines to
assess whether a Performance Metric is properly specified.
Another key difference between this attempt and the previous one is
that in this case there is at least one clear user for the Registry:
the LMAP framework and protocol. Because the LMAP protocol will use
the Registry values in its operation, this actually helps to
determine if a metric is properly defined. In particular, since we
expect that the LMAP control protocol will enable a controller to
request a measurement agent to perform a measurement using a given
metric by embedding the Performance Metric Registry value in the
protocol, a metric is properly specified if it is defined well enough
that it is possible (and practical) to implement the metric in the
measurement agent. This was the failure of the previous attempt: a
registry entry with an undefined Type-P (Section 13 of RFC 2330
[RFC2330]) allows implementations to be ambiguous.
8. Definition of the Performance Metric Registry
In this section we define the columns of the Performance Metric
Registry. This Performance Metric Registry is applicable to
Performance Metrics issued from Active Measurement, Passive
Measurement, and any other form of Performance Metric. Because of
that, it may be the case that some of the columns defined are not
applicable for a given type of metric. If this is the case, the
column(s) SHOULD be populated with the "NA" value (Not Applicable).
However, the "NA" value MUST NOT be used by any metric in the
following columns: Identifier, Name, URI, Status, Requester,
Revision, Revision Date, Description. In addition, it may be
possible that, in the future, a new type of metric requires
additional columns. Should that be the case, it is possible to add
new columns to the registry. The specification defining the new
column(s) must define how to populate the new column(s) for existing
entries.
The columns of the Performance Metric Registry are defined next. The
columns are grouped into "Categories" to facilitate the use of the
registry. Categories are described at the 8.x heading level, and
columns are at the 8.x.y heading level. The Figure below illustrates
this organization. An entry (row) therefore gives a complete
description of a Registered Metric.
Each column serves as a check-list item and helps to avoid omissions
during registration and expert review.
Registry Categories and Columns, shown as:

Category
------------------
Column | Column |

Summary
-------------------------------
Identifier | Name | URI | Description |

Metric Definition
-----------------------------------------
Reference Definition | Fixed Parameters |

Method of Measurement
---------------------------------------------------------------------
Reference | Packet     | Traffic | Sampling     | Run-time   | Role |
Method    | Generation | Filter  | Distribution | Parameters |      |
          | Stream     |         |              |            |      |

Output
-----------------------------
| Type | Reference  | Units |
|      | Definition |       |

Administrative Information
-------------------------------------
Status | Requester | Rev | Rev. Date |

Comments and Remarks
--------------------
8.1. Summary Category
8.1.1. Identifier
A numeric identifier for the Registered Performance Metric. This
identifier MUST be unique within the Performance Metric Registry.
The Registered Performance Metric unique identifier is a 16-bit
integer (range 0 to 65535). When adding newly Registered Performance
Metrics to the Performance Metric Registry, IANA should assign the
lowest available identifier to the next Registered Performance
Metric.
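The assignment rule above can be sketched as follows. This is an
illustration only; the function name, the in-memory set of assigned
identifiers, and the handling of the private-use range (Section 11)
are assumptions of the sketch rather than IANA procedure.

   # Illustrative sketch of the "lowest available identifier" rule for
   # new Registered Performance Metrics.  The helper name and the
   # in-memory set are hypothetical; IANA's actual tooling differs.

   PRIVATE_USE_START = 64512   # 64512-65535 reserved for private use

   def next_identifier(assigned_ids):
       """Return the lowest unassigned identifier below the private-use range."""
       for candidate in range(PRIVATE_USE_START):
           if candidate not in assigned_ids:
               return candidate
       raise RuntimeError("identifier space exhausted")

   # Example: with 0, 1 and 3 already assigned, the next entry gets 2.
   print(next_identifier({0, 1, 3}))  # -> 2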
8.1.2. Name
As the name of a Registered Performance Metric is the first thing a
potential implementor will use when determining whether it is
suitable for a given application, it is important to be as precise
and descriptive as possible.
New names of Registered Performance Metrics:
1. "MUST be chosen carefully to describe the Registered Performance
Metric and the context in which it will be used."
2. "MUST be unique within the Performance Metric Registry."
3. "MUST use capital letters for the first letter of each component.
All other letters MUST be lowercase, even for acronyms.
Exceptions are made for acronyms containing a mixture of
lowercase and capital letters, such as 'IPv4' and 'IPv6'."
4. MUST use '_' between each component of the Registered Performance
Metric name.
5. MUST start with prefix Act_ for active measurement Registered
Performance Metric.
6. MUST start with prefix Pas_ for passive monitoring Registered
Performance Metric.
7. Other types of Performance Metric should define a proper prefix
for identifying the type.
8. Some examples of names of passive metrics might be:
Pas_L3_L4_Octets (Layer 3 and 4 level accounting of bytes
observed), Pas_DNS_RTT (Round-Trip Time of a DNS query and
response in observed traffic), and Pas_L3_TCP_RTT (passively
observed round-trip time in the TCP handshake, organized by L3
addresses).
9. The remaining rules for naming are left for the Performance
Metric Experts to determine as they gather experience, so this is
an area of planned update by a future RFC.
An example is "Act_UDP_Latency_Poisson_99mean" for an active
monitoring UDP latency metric using a Poisson stream of packets and
producing the 99th percentile mean as output.
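A rough format check capturing some of the rules above might look
like the sketch below. It is illustrative only: the final naming
rules are left to the Performance Metric Experts, and the prefix list
and component pattern used here are assumptions.

   import re

   # Illustrative sketch of a format check for Registered Metric names.
   # The exact rules are left to the Performance Metric Experts; the
   # prefix list and component pattern here are assumptions only.

   KNOWN_PREFIXES = ("Act_", "Pas_")          # active / passive metrics
   COMPONENT = re.compile(r"^[A-Za-z0-9]+$")  # components separated by '_'

   def name_looks_valid(name):
       """Very loose check: known prefix plus '_'-separated components."""
       if not name.startswith(KNOWN_PREFIXES):
           return False
       parts = name.split("_")
       return len(parts) >= 2 and all(COMPONENT.match(p) for p in parts)

   print(name_looks_valid("Act_UDP_Latency_Poisson_99mean"))  # True
   print(name_looks_valid("udp-latency"))                     # False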
8.1.3. URI
The URI column MUST contain a URI [RFC3986] that uniquely identifies
the Registered Performance Metric. The URI is a URN [RFC2141]. The
URI is automatically generated by prepending the prefix
urn:ietf:params:ippm:metric: to the metric name. The resulting URI
is globally unique.
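For example (illustrative only), the URI can be derived from the
metric name as follows:

   # Illustrative sketch of URI (URN) generation for a Registry entry:
   # the URN is the fixed prefix followed by the metric name.

   URN_PREFIX = "urn:ietf:params:ippm:metric:"

   def metric_uri(metric_name):
       return URN_PREFIX + metric_name

   print(metric_uri("Act_UDP_Latency_Poisson_99mean"))
   # -> urn:ietf:params:ippm:metric:Act_UDP_Latency_Poisson_99mean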
8.1.4. Description
A Registered Performance Metric description is a written
representation of a particular Registry entry. It supplements the
Registered Performance Metric name to help Registry users select
relevant Registered Performance Metrics.
8.2. Metric Definition Category
This category includes columns to prompt all necessary details
related to the metric definition, including the RFC reference and
values of input factors, called Fixed Parameters, which are left open
in the RFC but have a particular value defined by the Registered
Performance Metric.
8.2.1. Reference Definition
This entry provides a reference (or references) to the relevant
section(s) of the document(s) that define the metric, as well as any
supplemental information needed to ensure an unambiguous definition
for implementations. The reference needs to be an immutable
document, such as an RFC; for other standards bodies, it is likely to
be necessary to reference a specific, dated version of a
specification.
8.2.2. Fixed Parameters
Fixed Parameters are Parameters whose values must be specified in the
Registry. The measurement system uses these values.
Where referenced metrics supply a list of Parameters as part of their
descriptive template, a sub-set of the Parameters will be designated
as Fixed Parameters. For example, for active metrics, Fixed
Parameters determine most or all of the IPPM Framework convention
"packets of Type-P" as described in [RFC2330], such as transport
protocol, payload length, TTL, etc. An example for passive metrics
is an RTP packet loss calculation that relies on the validation of a
packet as RTP, which is a multi-packet validation controlled by
MIN_SEQUENTIAL as defined by [RFC3550]. Varying MIN_SEQUENTIAL
values can alter the loss report, so this value could be set as a
Fixed Parameter.
A Parameter which is a Fixed Parameter for one Registry entry may be
designated as a Run-time Parameter for another Registry entry.
8.3. Method of Measurement Category
This category includes columns for references to relevant sections of
the RFC(s) and any supplemental information needed to ensure an
unambiguous method for implementations.
8.3.1. Reference Method
This entry provides references to relevant sections of the RFC(s)
describing the method of measurement, as well as any supplemental
information needed to ensure unambiguous interpretation for
implementations referring to the RFC text.
Specifically, this section should include pointers to pseudocode or
actual code that could be used for an unambiguous implementation.
8.3.2. Packet Generation Stream
This column applies to Performance Metrics that generate traffic as
part of their Measurement Method, including but not necessarily
limited to Active metrics. The generated traffic is referred to as a
stream, and this column describes its characteristics.
Each entry for this column contains the following information:
o Value: The name of the packet stream scheduling discipline
o Stream Parameters: The values and formats of input factors for
each type of stream. For example, the average packet rate and
distribution truncation value for streams with Poisson-distributed
inter-packet sending times.
o Reference: the specification where the stream is defined
The simplest example of stream specification is Singleton scheduling
(see [RFC2330]), where a single atomic measurement is conducted.
Each atomic measurement could consist of sending a single packet
(such as a DNS request) or sending several packets (for example, to
request a webpage). Other streams support a series of atomic
measurements in a "sample", with a schedule defining the timing
between each transmitted packet and subsequent measurement.
Principally, two different streams are used in IPPM metrics, Poisson
distributed as described in [RFC2330] and Periodic as described in
[RFC3432]. Both Poisson and Periodic have their own unique
parameters, and the relevant set of values is specified in this
column.
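As an illustration of how Stream Parameters drive packet generation,
the sketch below draws Poisson-distributed (exponential) inter-packet
sending times with a truncation value, in the spirit of [RFC2330].
The function and parameter names are assumptions of the sketch and
are not part of any Registry entry.

   import random

   # Illustrative sketch only: generate inter-packet sending times for
   # a Poisson-distributed stream (exponential inter-arrival times)
   # with a truncation value, two of the Stream Parameters mentioned
   # above.  Function and parameter names are assumptions.

   def poisson_send_times(avg_rate_pps, duration_s, truncation_s):
       """Yield send times (seconds) for a truncated Poisson stream."""
       t = 0.0
       while t < duration_s:
           gap = random.expovariate(avg_rate_pps)   # mean 1/avg_rate_pps
           gap = min(gap, truncation_s)             # distribution truncation
           t += gap
           if t < duration_s:
               yield t

   # Example: roughly 10 packets/s for 5 seconds, gaps capped at 1 second.
   times = list(poisson_send_times(avg_rate_pps=10, duration_s=5,
                                   truncation_s=1.0))
   print(len(times), times[:3])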
8.3.3. Traffic Filter
This column applies to Performance Metrics that observe packets
flowing through (the device with) the measurement agent, i.e.,
traffic that is not necessarily addressed to the measurement agent.
This includes but is not limited to Passive Metrics. The filter
specifies the traffic that is measured. This includes protocol field
values/ranges, such as address ranges, and flow or session
identifiers.
The traffic filter depends on the needs of the metric itself and on a
balance between the operator's measurement needs and the user's need
for privacy.
Mechanics for conveying the filter criteria might be the BPF (Berkeley
Packet Filter) or PSAMP [RFC5475] Property Match Filtering, which
reuses IPFIX [RFC7012]. An example BPF string for matching TCP/80
traffic to remote destination net 192.0.2.0/24 would be "dst net
192.0.2.0/24 and tcp dst port 80". More complex filter engines might
be supported by the implementation that might allow for matching
using Deep Packet Inspection (DPI) technology.
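As a purely illustrative sketch (not a BPF implementation), the
following applies the same match criteria as the example filter
string to already-parsed packet header fields; the field names are
assumptions.

   import ipaddress

   # Illustrative sketch only: apply the example filter
   # "dst net 192.0.2.0/24 and tcp dst port 80" to already-parsed
   # packet header fields.  This is not BPF; field names are assumed.

   FILTER_NET = ipaddress.ip_network("192.0.2.0/24")

   def matches_filter(pkt):
       """pkt: dict with 'dst_ip', 'protocol', and 'dst_port' fields."""
       return (pkt["protocol"] == "tcp"
               and pkt["dst_port"] == 80
               and ipaddress.ip_address(pkt["dst_ip"]) in FILTER_NET)

   print(matches_filter({"dst_ip": "192.0.2.10",
                         "protocol": "tcp", "dst_port": 80}))   # True
   print(matches_filter({"dst_ip": "198.51.100.1",
                         "protocol": "tcp", "dst_port": 80}))   # False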
8.3.4. Sampling Distribution
The sampling distribution defines which of the packets matching the
traffic filter are actually used for the measurement. One
possibility is "all", which implies that all packets matching the
Traffic Filter are considered, but there may be other sampling
strategies. It includes the following information:
Value: the name of the sampling distribution
Parameters: if any.
Reference definition: pointer to the specification where the
sampling distribution is properly defined.
Sampling and Filtering Techniques for IP Packet Selection are
documented in the PSAMP (Packet Sampling) document [RFC5475], while
the Framework for Packet Selection and Reporting [RFC5474] provides
more background information. The sampling distribution parameters
might
be expressed in terms of the Information Model for Packet Sampling
Exports, [RFC5477], and the Flow Selection Techniques, [RFC7014].
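For instance, a simple systematic count-based selection, in the
spirit of the techniques catalogued in [RFC5475], can be sketched as
follows; this is an illustration, not a normative sampling
definition.

   # Illustrative sketch of a systematic count-based sampling
   # distribution: out of the packets that matched the Traffic Filter,
   # select one packet out of every N.  Parameter names are assumed.

   def sample_one_in_n(filtered_packets, n):
       """Yield every n-th packet from the filtered packet sequence."""
       for index, packet in enumerate(filtered_packets):
           if index % n == 0:
               yield packet

   # Example: with N=10, packets 0, 10, 20, ... are selected.
   selected = list(sample_one_in_n(range(25), n=10))
   print(selected)  # [0, 10, 20]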
8.3.5. Run-time Parameters
Run-Time Parameters are Parameters that must be determined,
configured into the measurement system, and reported with the results
for the context to be complete. However, the values of these
parameters are not specified in the Registry (unlike the Fixed
Parameters); rather, these parameters are listed as an aid to the
measurement system implementer or user (they must be left as
variables, and supplied on execution).
Where metrics supply a list of Parameters as part of their
descriptive template, a sub-set of the Parameters will be designated
as Run-Time Parameters.
Examples of Run-time Parameters include IP addresses, measurement
point designations, start times and end times for measurement, and
other information essential to the method of measurement.
8.3.6. Role
In some methods of measurement, there may be several roles defined;
e.g., in a one-way packet delay active measurement, there is one
measurement agent that generates the packets and another that
receives the packets. This column contains the name of the role for
this particular entry. In the previous example, there would be two
entries in the registry, one for each role, so that when a
measurement agent is instructed to perform the one-way delay source
metric, it knows that it is supposed to generate packets. The values
for this field are defined in the reference method of measurement.
8.4. Output Category
For entries which involve a stream and many singleton measurements, a
statistic may be specified in this column to summarize the results to
a single value. If the complete set of measured singletons is
output, this will be specified here.
Some metrics embed one specific statistic in the reference metric
definition, while others allow several output types or statistics.
8.4.1. Type
This column contains the name of the output type. The output type
defines the type of result that the metric produces. It can be the
raw results or it can be some form of statistic. The specification
of the output type must define the format of the output. In some
systems, format specifications will simplify both measurement
implementation and collection/storage tasks. Note that if two
different statistics are required from a single measurement (for
example, both "Xth percentile mean" and "Raw"), then a new output
type must be defined ("Xth percentile mean AND Raw").
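To illustrate the difference between a "Raw" output type and a
statistic, the sketch below reduces a sample of singleton delay
results to a single percentile value. The nearest-rank percentile
method used here is an assumption of the sketch; a real Registry
entry would cite a precise statistical definition.

   import math

   # Illustrative sketch only: summarize raw singleton results with a
   # percentile statistic.  The nearest-rank method is an assumption;
   # a real Registry entry would cite a precise definition.

   def nearest_rank_percentile(raw_results, percentile):
       """Return the value at the given percentile (0 < percentile <= 100)."""
       ordered = sorted(raw_results)
       rank = math.ceil(percentile / 100.0 * len(ordered))
       return ordered[rank - 1]

   raw_delays_ms = [12.1, 15.4, 11.8, 40.2, 13.0, 14.7, 12.9, 90.5]
   print(nearest_rank_percentile(raw_delays_ms, 99))  # statistic output
   print(sorted(raw_delays_ms))                       # "Raw" output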
8.4.2. Reference Definition
This column contains a pointer to the specification where the output
type is defined.
8.4.3. Metric Units
The measured results must be expressed using some standard dimension
or units of measure. This column provides the units.
When a sample of singletons (see [RFC2330] for definitions of these
terms) is collected, this entry will specify the units for each
measured value.
8.5. Administrative information
8.5.1. Status
The status of the specification of this Registered Performance
Metric. Allowed values are 'current' and 'deprecated'. All newly
defined Registered Performance Metrics have 'current' status.
8.5.2. Requester
The requester for the Registered Performance Metric. The requester
MAY be a document, such as an RFC, or a person.
8.5.3. Revision
The revision number of a Registered Performance Metric, starting at 0
for Registered Performance Metrics at the time of definition and
incremented by one for each revision.
8.5.4. Revision Date
The date of acceptance or the most recent revision for the Registered
Performance Metric.
8.6. Comments and Remarks
Besides providing additional details which do not appear in other
categories, this open Category (single column) allows for unforeseen
issues to be addressed by simply updating this informational entry.
9. The Life-Cycle of Registered Metrics
Once a Performance Metric or set of Performance Metrics has been
identified for a given application, candidate Registry entry
specifications in accordance with Section 8 are submitted to IANA to
follow the process for review by the Performance Metric Experts, as
defined below. This process is also used for other changes to the
Performance Metric Registry, such as deprecation or revision, as
described later in this section.
It is also desirable that the author(s) of a candidate Registry entry
seek review in the relevant IETF working group, or offer the
opportunity for review on the WG mailing list.
9.1. Adding new Performance Metrics to the Registry
Requests to change Registered Metrics in the Performance Metric
Registry are submitted to IANA, which forwards the request to a
designated group of experts (Performance Metric Experts) appointed by
the IESG; these are the reviewers called for by the Expert Review
[RFC5226] policy defined for the Performance Metric Registry. The
Performance Metric Experts review the request for such things as
compliance with this document, compliance with other applicable
Performance Metric-related RFCs, and consistency with the currently
defined set of Registered Performance Metrics.
Authors are expected to review compliance with the specifications in
this document to check their submissions before sending them to IANA.
The Performance Metric Experts should endeavor to complete referred
reviews in a timely manner. If the request is acceptable, the
Performance Metric Experts signify their approval to IANA, which
updates the Performance Metric Registry. If the request is not
acceptable, the Performance Metric Experts can coordinate with the
requester to change the request to be compliant. The Performance
Metric Experts may also choose in exceptional circumstances to reject
clearly frivolous or inappropriate change requests outright.
This process should not in any way be construed as allowing the
Performance Metric Experts to overrule IETF consensus. Specifically,
any Registered Metrics that were added with IETF consensus require
IETF consensus for revision or deprecation.
Decisions by the Performance Metric Experts may be appealed as in
Section 7 of [RFC5226].
9.2. Revising Registered Performance Metrics
A request for Revision is only permissible when the changes maintain
backward-compatibility with implementations of the prior Registry
entry describing a Registered Metric (entries with lower revision
numbers, but the same Identifier and Name).
The purpose of the Status field in the Performance Metric Registry is
to indicate whether the entry for a Registered Metric is 'current' or
'deprecated'.
In addition, no policy is defined for revising IANA Performance
Metric entries or addressing errors therein. To be clear, changes
and deprecations within the Performance Metric Registry are not
encouraged, and should be avoided to the extent possible. However,
in recognition that change is inevitable, the provisions of this
section address the need for revisions.
Revisions are initiated by sending a candidate Registered Performance
Metric definition to IANA, as in Section 8, identifying the existing
Registry entry.
The primary requirement in the definition of a policy for managing
changes to existing Registered Performance Metrics is avoidance of
interoperability problems; Performance Metric Experts must work to
maintain interoperability above all else. Changes to Registered
Performance Metrics may only be done in an inter-operable way;
necessary changes that cannot be done in a way to allow
interoperability with unchanged implementations must result in the
creation of a new Registered Metric and possibly the deprecation of
the earlier metric.
A change to a Registered Performance Metric is held to be backward-
compatible only when:
1. "it involves the correction of an error that is obviously only
editorial; or"
2. "it corrects an ambiguity in the Registered Performance Metric's
definition, which itself leads to issues severe enough to prevent
the Registered Performance Metric's usage as originally defined;
or"
3. "it corrects missing information in the metric definition without
changing its meaning (e.g., the explicit definition of 'quantity'
semantics for numeric fields without a Data Type Semantics
value); or"
4. "it harmonizes with an external reference that was itself
corrected."
If a Performance Metric revision is deemed permissible by the
Performance Metric Experts, according to the rules in this document,
IANA makes the change in the Performance Metric Registry. The
requester of the change is appended to the Requester field of the
Registry entry.
Each Registered Performance Metric in the Registry has a revision
number, starting at zero. Each change to a Registered Performance
Metric following this process increments the revision number by one.
When a revised Registered Performance Metric is accepted into the
Performance Metric Registry, the date of acceptance of the most
recent revision is placed into the revision Date column of the
Registry for that Registered Performance Metric.
Where applicable, additions to Registered Performance Metrics in the
form of text Comments or Remarks should include the date, but such
additions may not constitute a revision according to this process.
Older version(s) of updated metric entries are kept in the registry
for archival purposes. The older entries are kept with all fields
unmodified (including revision number and revision date), except for
the Status field, which is changed to "Deprecated".
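The revision mechanics described above (increment the revision
number, update the revision date, append the requester, and archive
the prior entry with status "Deprecated") can be sketched as follows;
the entry layout and function name are assumptions made for
illustration only.

   import copy
   import datetime

   # Illustrative sketch of the revision life-cycle: the prior entry
   # is archived with Status "Deprecated", and the accepted revision
   # gets an incremented revision number and a new revision date.
   # The entry layout and function name are assumptions only.

   def revise_entry(entries, archive, identifier, changes, requester):
       old = entries[identifier]
       archived = copy.deepcopy(old)
       archived["Status"] = "Deprecated"         # archive prior version
       archive.append(archived)

       new = copy.deepcopy(old)
       new.update(changes)                       # backward-compatible changes only
       new["Revision"] = old["Revision"] + 1
       new["Revision Date"] = datetime.date.today().isoformat()
       new["Requester"] = old["Requester"] + ", " + requester
       entries[identifier] = new
       return new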
9.3. Deprecating Registered Performance Metrics
Changes that are not permissible under the above criteria for
revising a Registered Metric may only be handled by deprecation. A
Registered Performance Metric MAY be deprecated and replaced when:
1. "the Registered Performance Metric definition has an error or
shortcoming that cannot be permissibly changed as described in
Section 9.2 (Revising Registered Performance Metrics); or"
2. "the deprecation harmonizes with an external reference that was
itself deprecated through that reference's accepted deprecation
method."
A request for deprecation is sent to IANA, which passes it to the
Performance Metric Experts for review. When deprecating a
Performance Metric, the Performance Metric description in the
Performance Metric Registry must be updated to explain the
deprecation, as well as to refer to any new Performance Metrics
created to replace the deprecated Performance Metric.
The revision number of a Registered Performance Metric is incremented
upon deprecation, and the revision Date updated, as with any
revision.
The use of deprecated Registered Metrics should result in a log entry
or human-readable warning by the respective application.
The Names and Identifiers of deprecated Registered Metrics must not
be reused.
The deprecated entries are kept with all fields unmodified, except
the version, revision date, and the status field (changed to
"Deprecated").
10. Security considerations
This document does not introduce any new security considerations for
the Internet. However, the definition of Performance Metrics may
introduce some security concerns, and should be reviewed with
security in mind.
11. IANA Considerations
This document specifies the procedure for Performance Metrics
Registry setup. IANA is requested to create a new Registry for
Performance Metrics called "Registered Performance Metrics" with the
columns defined in Section 8.
New assignments for the Performance Metric Registry will be
administered by IANA through Expert Review [RFC5226], i.e., review by
one of a
group of experts, the Performance Metric Experts, appointed by the
IESG upon recommendation of the Transport Area Directors. The
experts can be initially drawn from the Working Group Chairs and
document editors of the Performance Metrics Directorate among other
sources of experts.
The Identifier values from 64512 to 65535 are reserved for private
use. Names starting with the prefix Priv- are reserved for private
use.
This document requests the allocation of the URI prefix
urn:ietf:params:ippm:metric for the purpose of generating URIs for
registered metrics.
12. Acknowledgments
Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading
some brainstorming sessions on this topic.
13. References
13.1. Normative References
[RFC2026] Bradner, S., "The Internet Standards Process -- Revision
3", BCP 9, RFC 2026, October 1996.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2141] Moats, R., "URN Syntax", RFC 2141, May 1997.
[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
"Framework for IP Performance Metrics", RFC 2330, May
1998.
[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
Resource Identifier (URI): Generic Syntax", STD 66, RFC
3986, January 2005.
[RFC4148] Stephan, E., "IP Performance Metrics (IPPM) Metrics
Registry", BCP 108, RFC 4148, August 2005.
[RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an
IANA Considerations Section in RFCs", BCP 26, RFC 5226,
May 2008.
[RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics
(IPPM) Registry of Metrics Are Obsolete", RFC 6248, April
2011.
[RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New
Performance Metric Development", BCP 170, RFC 6390,
October 2011.
[RFC6576] Geib, R., Morton, A., Fardid, R., and A. Steinmitz, "IP
Performance Metrics (IPPM) Standard Advancement Testing",
BCP 176, RFC 6576, March 2012.
13.2. Informative References
[RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
Delay Metric for IPPM", RFC 2679, September 1999.
[RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip
Delay Metric for IPPM", RFC 2681, September 1999.
[RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
Metric for IP Performance Metrics (IPPM)", RFC 3393,
November 2002.
[RFC3432] Raisanen, V., Grotefeld, G., and A. Morton, "Network
performance measurement with periodic streams", RFC 3432,
November 2002.
[RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V.
Jacobson, "RTP: A Transport Protocol for Real-Time
Applications", STD 64, RFC 3550, July 2003.
[RFC3611] Friedman, T., Caceres, R., and A. Clark, "RTP Control
Protocol Extended Reports (RTCP XR)", RFC 3611, November
2003.
[RFC4566] Handley, M., Jacobson, V., and C. Perkins, "SDP: Session
Description Protocol", RFC 4566, July 2006.
[RFC5474] Duffield, N., Chiou, D., Claise, B., Greenberg, A.,
Grossglauser, M., and J. Rexford, "A Framework for Packet
Selection and Reporting", RFC 5474, March 2009.
[RFC5475] Zseby, T., Molina, M., Duffield, N., Niccolini, S., and F.
Raspall, "Sampling and Filtering Techniques for IP Packet
Selection", RFC 5475, March 2009.
[RFC5477] Dietz, T., Claise, B., Aitken, P., Dressler, F., and G.
Carle, "Information Model for Packet Sampling Exports",
RFC 5477, March 2009.
[RFC5481] Morton, A. and B. Claise, "Packet Delay Variation
Applicability Statement", RFC 5481, March 2009.
[RFC5905] Mills, D., Martin, J., Burbank, J., and W. Kasch, "Network
Time Protocol Version 4: Protocol and Algorithms
Specification", RFC 5905, June 2010.
[RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich,
"Session Initiation Protocol Event Package for Voice
Quality Reporting", RFC 6035, November 2010.
[RFC6776] Clark, A. and Q. Wu, "Measurement Identity and Information
Reporting Using a Source Description (SDES) Item and an
RTCP Extended Report (XR) Block", RFC 6776, October 2012.
[RFC6792] Wu, Q., Hunt, G., and P. Arden, "Guidelines for Use of the
RTP Monitoring Framework", RFC 6792, November 2012.
[RFC7003] Clark, A., Huang, R., and Q. Wu, "RTP Control Protocol
(RTCP) Extended Report (XR) Block for Burst/Gap Discard
Metric Reporting", RFC 7003, September 2013.
[RFC7012] Claise, B. and B. Trammell, "Information Model for IP Flow
Information Export (IPFIX)", RFC 7012, September 2013.
[RFC7014] D'Antonio, S., Zseby, T., Henke, C., and L. Peluso, "Flow
Selection Techniques", RFC 7014, September 2013.
[I-D.ietf-lmap-framework]
Eardley, P., Morton, A., Bagnulo, M., Burbridge, T.,
Aitken, P., and A. Akhter, "A framework for Large-Scale
Measurement of Broadband Performance (LMAP)", draft-ietf-
lmap-framework-14 (work in progress), April 2015.
Authors' Addresses
Marcelo Bagnulo
Universidad Carlos III de Madrid
Av. Universidad 30
Leganes, Madrid 28911
SPAIN
Phone: 34 91 6249500
Email: marcelo@it.uc3m.es
URI: http://www.it.uc3m.es
Benoit Claise
Cisco Systems, Inc.
De Kleetlaan 6a b1
1831 Diegem
Belgium
Email: bclaise@cisco.com
Philip Eardley
BT
Adastral Park, Martlesham Heath
Ipswich
ENGLAND
Email: philip.eardley@bt.com
Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ
USA
Email: acmorton@att.com
Aamer Akhter
Consultant
118 Timber Hitch
Cary, NC
USA
Email: aakhter@gmail.com