IPFIX Working Group B. Trammell
Internet-Draft E. Boschi
Intended status: Standards Track ETH Zurich
Expires: December 31, 2011 A. Wagner
Consecom AG
B. Claise
Cisco Systems, Inc.
June 29, 2011
Exporting Aggregated Flow Data using the IP Flow Information Export
(IPFIX) Protocol
draft-trammell-ipfix-a9n-03.txt
Abstract
This document describes the export of aggregated Flow information
using IPFIX. An Aggregated Flow is essentially an IPFIX Flow
representing packets from multiple original Flows sharing some set of
common properties. The document describes Aggregated Flow export
within the framework of IPFIX Mediators and defines an interoperable,
implementation-independent method for Aggregated Flow export.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on December 31, 2011.
Copyright Notice
Copyright (c) 2011 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
Trammell, et al. Expires December 31, 2011 [Page 1]
Internet-Draft IPFIX Aggregation June 2011
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4
2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5
3. Use Cases for IPFIX Aggregation . . . . . . . . . . . . . . . 6
4. Architecture for Flow Aggregation . . . . . . . . . . . . . . 6
4.1. Aggregation within the IPFIX Architecture . . . . . . . . 7
4.2. Intermediate Aggregation Process Architecture . . . . . . 8
5. IP Flow Aggregation Operations . . . . . . . . . . . . . . . . 10
5.1. Temporal Aggregation through Interval Distribution . . . . 10
5.1.1. Distributing Values Across Intervals . . . . . . . . . 11
5.1.2. Time Composition . . . . . . . . . . . . . . . . . . . 13
5.2. Spatial Aggregation of Flow Keys . . . . . . . . . . . . . 13
5.2.1. Counting Distinct Key Values . . . . . . . . . . . . . 15
5.2.2. Counting Original Flows . . . . . . . . . . . . . . . 15
5.3. Spatial Aggregation of Non-Key Fields . . . . . . . . . . 16
5.3.1. Counter Statistics . . . . . . . . . . . . . . . . . . 16
5.4. Aggregation Combination . . . . . . . . . . . . . . . . . 17
6. Additional Considerations and Special Cases in Flow
Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . 17
6.1. Exact versus Approximate Counting during Aggregation . . . 17
6.2. Considerations for Aggregation of Sampled Flows . . . . . 17
7. Export of Aggregated IP Flows using IPFIX . . . . . . . . . . 17
7.1. Time Interval Export . . . . . . . . . . . . . . . . . . . 18
7.2. Flow Count Export . . . . . . . . . . . . . . . . . . . . 18
7.2.1. originalFlowsPresent . . . . . . . . . . . . . . . . . 18
7.2.2. originalFlowsInitiated . . . . . . . . . . . . . . . . 18
7.2.3. originalFlowsCompleted . . . . . . . . . . . . . . . . 19
7.2.4. originalFlows . . . . . . . . . . . . . . . . . . . . 19
7.3. Distinct Host Export . . . . . . . . . . . . . . . . . . . 19
7.3.1. distinctCountOfSourceIPv4Address . . . . . . . . . . . 19
7.3.2. distinctCountOfDestinationIPv4Address . . . . . . . . 20
7.3.3. distinctCountOfSourceIPv6Address . . . . . . . . . . . 20
7.3.4. distinctCountOfDestinationIPv6Address . . . . . . . . 20
7.4. Aggregate Counter Distribution Export . . . . . . . . . . 20
7.4.1. Aggregate Counter Distribution Options Template . . . 21
7.4.2. valueDistributionMethod Information Element . . . . . 21
8. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
8.1. Traffic Time-Series per Source . . . . . . . . . . . . . . 24
8.2. Core Traffic Matrix . . . . . . . . . . . . . . . . . . . 28
8.3. Distinct Source Count per Destination Endpoint . . . . . . 28
8.4. Traffic Time-Series per Source with Counter
Distribution . . . . . . . . . . . . . . . . . . . . . . . 29
9. Security Considerations . . . . . . . . . . . . . . . . . . . 29
10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 29
11. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 29
12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.1. Normative References . . . . . . . . . . . . . . . . . . . 30
12.2. Informative References . . . . . . . . . . . . . . . . . . 30
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 31
1. Introduction
The aggregation of packet data into Flows serves a variety of
different purposes, as noted in the requirements [RFC3917] and
applicability statement [RFC5472] for the IP Flow Information Export
(IPFIX) protocol [RFC5101]. Aggregation beyond the flow level, into
records representing multiple Flows, is a common analysis and data
reduction technique as well, with applicability to large-scale
network data analysis, archiving, and inter-organization exchange.
This applicability in large-scale situations, in particular, led to
the inclusion of aggregation as part of the IPFIX Mediators Problem
Statement [RFC5982], and the definition of an Intermediate
Aggregation Process in the Mediator framework
[I-D.ietf-ipfix-mediators-framework].
Aggregation is part of a wide variety of applications, including
traffic matrix calculation, generation of time series data for
visualizations or anomaly detection, or measurement data reduction.
Depending on the keys used for aggregation, it may additionally have
an anonymising effect on the data: for example, aggregation
operations which eliminate IP addresses make it impossible to later
identify nodes using those addresses.
Aggregation as defined and described in this document covers the
applications defined in [RFC5982], including 5.1 "Adjusting Flow
Granularity", 5.4 "Time Composition", and 5.5 "Spatial Composition".
However, this document specifies a more flexible architecture for an
Intermediate Aggregation Process in Section 4.2, which supports a
superset of these applications.
An Intermediate Aggregation Process may be applied to data collected
from multiple Observation Points, as aggregation is a natural data
reduction technique when concentrating measurement data. This
document specifically does not address the protocol issues that arise
when combining IPFIX data from multiple Observation Points and
exporting from a single Mediator, as these issues are general to
IPFIX Mediation; they are therefore treated in detail in the Mediator
Protocol [I-D.claise-ipfix-mediation-protocol] document.
Since Aggregated Flows as defined in the following section are
essentially Flows, the IPFIX protocol [RFC5101] can be used to
export, and the IPFIX File Format [RFC5655] can be used to store,
aggregated data "as-is"; there are no changes necessary to the
protocol. This document provides a common basis for the application
of IPFIX to the handling of aggregated data, through a detailed
terminology, Intermediate Aggregation Process architecture, and
methods for original Flow counting and counter distribution across
intervals.
2. Terminology
Terms used in this document that are defined in the Terminology
section of the IPFIX Protocol [RFC5101] document are to be
interpreted as defined there.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
In addition, this document defines the following terms:
Aggregated Flow: A Flow, as defined by [RFC5101], derived from a
set of zero or more original Flows within a defined Aggregation
Interval. The two primary differences between a Flow and an
Aggregated Flow are (1) that the time interval of a Flow is
generally derived from information about the timing of the packets
comprising the Flow, while the time interval of an Aggregated Flow
is generally externally imposed; and (2) that an Aggregated Flow
may represent zero packets (i.e., an assertion that no packets
were seen for a given Flow Key in a given time interval). Note
that an Aggregated Flow is defined within the context of an
Intermediate Aggregation Process only. Once an Aggregated Flow is
exported, it is essentially a Flow as in [RFC5101] and can be
treated as such.
Intermediate Aggregation Function: A mapping from a set of zero or
more original Flows into a set of Aggregated Flows across one or
more Aggregation Intervals. This function is hosted by an
Intermediate Aggregation Process, defined below.
Intermediate Aggregation Process: An Intermediate Process, as
defined in [I-D.ietf-ipfix-mediators-framework], that aggregates
records based upon a set of Flow Keys or functions applied to
fields from the record.
Aggregation Interval: A time interval imposed upon an Aggregated
Flow. Aggregation Functions may use a regular Aggregation
Interval (e.g. "every five minutes", "every calendar month"),
though regularity is not necessary. Aggregation intervals may
also be derived from the time intervals of the original Flows
being aggregated.
partially aggregated Flow: A Flow during processing within an
Intermediate Aggregation Process; refers to an intermediate data
structure during aggregation within the Intermediate Aggregation
Process architecture detailed in Section 4.2.
original Flow: A Flow given as input to an Aggregation Function in
order to generate Aggregated Flows.
contributing Flow: An original Flow that is partially or completely
represented within an Aggregated Flow. Each aggregated Flow is
made up of zero or more contributing Flows, and an original Flow
may contribute to zero or more Aggregated Flows.
3. Use Cases for IPFIX Aggregation
Aggregation, as a common data analysis method, has many applications.
When used with a regular Aggregation Interval, it generates time
series data from a collection of Flows with discrete intervals. Time
series data is itself useful for a wide variety of analysis tasks,
such as generating input for network anomaly detection systems, or
driving visualizations of volume per time for traffic with specific
characteristics. Traffic matrix calculation from flow data is
inherently an aggregation action, by aggregating the Flow Key down to
input or output interface, address prefix, or autonomous system.
Irregular or data-dependent Aggregation Intervals and key aggregation
operations can also be used to provide adaptive aggregation of
network flow data. Here, full Flow Records can be kept for Flows of
interest, while Flows deemed "less interesting" to a given
application can be aggregated. For example, in an IPFIX Mediator
equipped with traffic classification capabilities for security
purposes, potentially malicious Flows could be exported directly,
while known-good or probably-good Flows (e.g. normal web browsing)
could be exported simply as time series volumes per web server.
Note that an Intermediate Aggregation Function which removes
potentially sensitive information as identified in
[I-D.ietf-ipfix-anon] may tend to have an anonymising effect on the
Aggregated Flows, as well; however, any application of aggregation as
part of a data protection scheme should ensure that all the issues
raised in Section 4 of [I-D.ietf-ipfix-anon] are addressed.
4. Architecture for Flow Aggregation
This section specifies how an Intermediate Aggregation Process fits
into the IPFIX Architecture, and the architecture of the Intermediate
Aggregation Process itself.
4.1. Aggregation within the IPFIX Architecture
An Intermediate Aggregation Process may be deployed at three places
within the IPFIX Architecture. While aggregation applications are
most commonly deployed within a Mediator which collects original
Flows from an original Exporter and exports Aggregated Flows,
aggregation can also occur before initial export, or after final
collection, as shown in Figure 1.
+==========================================+
| Exporting Process |
+==========================================+
| |
| (Aggregated Flow Export) |
V |
+=============================+ |
| Mediator | |
+=============================+ |
| |
| (Aggregating Mediator) |
V V
+==========================================+
| Collecting Process |
+==========================================+
|
| (Aggregation for Storage)
V
+--------------------+
| IPFIX File Storage |
+--------------------+
Figure 1: Potential Aggregation Locations
The Mediator use case is further shown in Figures A and B in
[I-D.ietf-ipfix-mediators-framework].
Aggregation can be applied for either intermediate or final analytic
purposes. In certain circumstances, it may make sense to export
Aggregated Flows directly from an original Exporting Process, for
example, if the Exporting Process is applied to drive a time-series
visualization, or when flow data export bandwidth is restricted and
flow or packet sampling is not an option. Note that this case, where
the Aggregation Process is essentially integrated into the Metering
Process, is already covered by the IPFIX architecture [RFC5470]:
the Flow Keys used are simply a subset of those that would normally
be used. A Metering Process in this arrangement MAY choose to
simulate the generation of larger Flows in order to generate original
Flow counts, if the application calls for compatibility with an
Aggregation Process deployed in a separate location.
In the specific case that an Aggregation Process is employed for data
reduction for storage purposes, it can take original Flows from a
Collecting Process or File Reader and pass Aggregated Flows to a File
Writer for storage.
Deployment of an Intermediate Aggregation Process within a Mediator
[RFC5982] is a much more flexible arrangement. Here, the Mediator
consumes original Flows and produces aggregated Flows; this
arrangement is suited to any of the use cases detailed in Section 3.
In a Mediator, aggregation can also be applied to combine original
Flows from multiple sources into a single stream of Aggregated
Flows; the architectural specifics of this arrangement are
not addressed in this document, which is concerned only with the
aggregation operation itself; see
[I-D.claise-ipfix-mediation-protocol] for details.
The data paths into and out of an Intermediate Aggregation Process
are shown in Figure 2.
packets --+ +- IPFIX Messages -+
| | |
V V V
+==================+ +====================+ +=============+
| Metering Process | | Collecting Process | | File Reader |
| | +====================+ +=============+
| | | original Flows |
| | V V
+ - - - - - - - - -+======================================+
| Intermediate Aggregation Process (IAP) |
+=========================================================+
| Aggregated Aggregated |
| Flows Flows |
V V
+===================+ +=============+
| Exporting Process | | File Writer |
+===================+ +=============+
| |
+------------> IPFIX Messages <----------+
Figure 2: Data paths through the aggregation process
4.2. Intermediate Aggregation Process Architecture
Within this document, an Intermediate Aggregation Process can be seen
as hosting an Intermediate Aggregation Function composed of four
types of operations on the intermediate results of aggregation, which
are called partially aggregated Flows in this document, as
illustrated in Figure 3.
original Flows
|
V
+-----------------------+
| interval distribution |
+-->| (temporal) |<--+
| +-----------------------+ |
| | | | |
|(*) |(*) |(*) |(*) |(*)
| | | | |
| V | V |
+-------------------+ | +--------------------+
| key aggregation | | | value aggregation |
| (spatial) | | | (spatial) |
+-------------------+ | +--------------------+
^ | | | ^
| |(*) | |(*) |
+-------|-------|-------|-------+
V V V
+-------------------------+
| aggregate combination |
+-------------------------+
|
V
Aggregated Flows
(*) partially aggregated Flows
Figure 3: Conceptual model of aggregation operations
Interval distribution is a temporal aggregation operation which
imposes an Aggregation Interval on the partially aggregated Flow.
This Aggregation Interval may be regular, irregular, or derived
from the timing of the original Flows themselves. Interval
distribution is discussed in detail in Section 5.1.
Key aggregation is a spatial aggregation operation which results in
the addition, modification, or deletion of Flow Key fields in the
partially aggregated Flows. New Flow Key fields may be derived
from existing Flow Key fields (e.g., looking up an AS number for
an IP address), or "promoted" from non-Key fields (e.g., when
aggregating Flows by packet count per Flow). Key aggregation can
also add new non-Key fields derived from Key Fields that are
deleted during key aggregation; mainly counters of unique reduced
keys. Key aggregation is discussed in detail in Section 5.2.
Value aggregation is a spatial aggregation operation which results
in the addition, modification, or deletion of non-Key fields in
the partially aggregated Flows. These non-Key fields may be
"demoted" from existing Key fields, or derived from existing Key
or non-Key fields. Value aggregation is discussed in detail in
Section 5.3.
Aggregate combination combines multiple partially aggregated Flows
having undergone interval distribution, key aggregation, and value
aggregation which share Flow Keys and Aggregation Intervals into a
single aggregated Flow per Flow Key and Aggregation Interval.
Aggregate combination is discussed in detail in Section 5.4.
The first three of these operations may be carried out any number of
times in any order, either on original Flows or on the results of one
of these operations (called partially aggregated Flows), with one
caveat. Since Flows carry their own interval data, any spatial
aggregation operation implies a temporal aggregation operation, so at
least one interval distribution step, even if implicit, is required
by this architecture. This is shown as the first step for the sake
of simplicity in the diagram above. Once all aggregation operations
are complete, aggregate combination ensures that for a given
Aggregation Interval, Flow Key, and Observation Domain, only one Flow
is produced by the Intermediate Aggregation Process.
5. IP Flow Aggregation Operations
As stated in Section 2, an Aggregated Flow is simply an IPFIX Flow
generated from original Flows by an Aggregation Function. Here, we
detail the operations by which this is achieved within an
Intermediate Aggregation Process.
5.1. Temporal Aggregation through Interval Distribution
Interval distribution imposes a time interval on the resulting
Aggregated Flows. The selection of an interval is specific to the
given aggregation application. Intervals may be derived from the
original Flows themselves (e.g., an interval may be selected to cover
the entire interval containing the set of all Flows sharing a given
Key, as in Time Composition described in Section 5.1.2) or externally
imposed; in the latter case the externally imposed interval may be
regular (e.g., every five minutes) or irregular (e.g., to allow for
different time resolutions at different times of day, under different
network conditions, or indeed for different sets of original Flows).
The length of the imposed interval itself has tradeoffs. Shorter
intervals allow higher resolution aggregated data and, in streaming
applications, faster reaction time. Longer intervals lead to greater
data reduction and simplified counter distribution. Specifically,
counter distribution is greatly simplified by the choice of an
interval longer than the duration of the longest original Flow, itself
generally determined by the original Flow's Metering Process active
timeout; in this case an original Flow can contribute to at most two
Aggregated Flows, and the more complex value distribution methods
become inapplicable.
| | | |
| |<--Flow A-->| | | |
| |<--Flow B-->| | |
| |<-------------Flow C-------------->| |
| | | |
| interval 0 | interval 1 | interval 2 |
Figure 4: Illustration of interval distribution
In Figure 4, we illustrate three common possibilities for interval
distribution as applies with regular intervals to a set of three
original Flows. For Flow A, the start and end times lie within the
boundaries of a single interval 0; therefore, Flow A contributes to
only one Aggregated Flow. Flow B, by contrast, has the same duration
but crosses the boundary between intervals 0 and 1; therefore, it
will contribute to two Aggregated Flows, and its counters must be
distributed among these Flows; in the two-interval case this can be
simplified by accounting the counters to one of the two intervals,
or by distributing them proportionally between the two. Only Flows
like Flow A and Flow B will be produced when the interval is chosen
to be longer than the duration of the longest original Flow, as above.
More complicated is the case of Flow C, which contributes to more
than two Aggregated Flows, and must have its counters distributed
according to some policy as in Section 5.1.1.
[EDITOR'S NOTE: per Lothar: some implementation guidance here would
be good. specifically, advise that you need multiple rotating
intervals to do this right.]
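As a non-normative illustration of the interval membership shown in
Figure 4, the following sketch computes which regular Aggregation
Intervals an original Flow contributes to. The function name,
unit-based timestamps, and interval length are illustrative
assumptions, not drawn from the IPFIX information model.

```python
def contributing_intervals(flow_start, flow_end, interval_len):
    """Return the indices of the regular Aggregation Intervals
    that a Flow with the given start and end times overlaps."""
    first = int(flow_start // interval_len)
    last = int(flow_end // interval_len)
    return list(range(first, last + 1))

# Mirroring Figure 4 with intervals of length 10 time units:
print(contributing_intervals(2, 8, 10))   # Flow A -> [0]
print(contributing_intervals(6, 14, 10))  # Flow B -> [0, 1]
print(contributing_intervals(4, 27, 10))  # Flow C -> [0, 1, 2]
```

Note that a streaming implementation would keep several such
intervals open concurrently, since late-arriving original Flows may
still contribute to earlier intervals.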
5.1.1. Distributing Values Across Intervals
In general, counters in Aggregated Flows are treated the same as in
any Flow. Each counter is independently calculated as if it were
derived from the set of packets in the original Flow. For the most
part, when aggregating original Flows into Aggregated Flows, this is
simply done by summation.
When the Aggregation Interval is guaranteed to be longer than the
longest original Flow, a Flow can cross at most one Interval
boundary, and will therefore contribute to at most two Aggregated
Flows. Most common in this case is to arbitrarily but consistently
choose to account the original Flow's counters either to the first or
the last aggregated Flow to which it could contribute.
However, this becomes more complicated when the Aggregation Interval
is shorter than the longest original Flow in the source data. In
such cases, each original Flow can incompletely cover one or more
time intervals, and contribute to one or more Aggregated Flows; in this
case, the Aggregation Process must distribute the counters in the
original Flows across the multiple Aggregated Flows. There are
several methods for doing this, listed here in roughly increasing
order of complexity and accuracy; most of these are necessary only in
specialized cases.
End Interval: The counters for an original Flow are added to the
counters of the appropriate Aggregated Flow containing the end
time of the original Flow.
Start Interval: The counters for an original Flow are added to the
counters of the appropriate Aggregated Flow containing the start
time of the original Flow.
Mid Interval: The counters for an original Flow are added to the
counters of a single appropriate Aggregated Flow containing some
timestamp between start and end time of the original Flow.
Simple Uniform Distribution: Each counter for an original Flow is
divided by the number of time intervals the original Flow covers
(i.e., the number of appropriate Aggregated Flows sharing the same Flow Key),
and this number is added to each corresponding counter in each
Aggregated Flow.
Proportional Uniform Distribution: Each counter for an original
Flow is divided by the number of time _units_ the original Flow
covers, to derive a mean count rate. This mean count rate is then
multiplied by the number of time units in the intersection of the
duration of the original Flow and the time interval of each
Aggregated Flow. This is like simple uniform distribution, but
accounts for the fractional portions of a time interval covered by
an original Flow in the first and last time interval.
Simulated Process: Each counter of the original Flow is distributed
among the intervals of the Aggregated Flows according to some
function the Aggregation Process uses based upon properties of
Flows presumed to be like the original Flow. For example, Flow
Records representing bulk transfer might follow a more or less
proportional uniform distribution, while interactive processes are
far more bursty.
Direct: The Aggregation Process has access to the original packet
timings from the packets making up the original Flow, and uses
these to distribute or recalculate the counters.
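Of the methods above, Proportional Uniform Distribution can be
sketched concisely. The following is an illustrative,
implementation-independent example (names and unit-based timestamps
are assumptions); note that the returned shares are fractional, and
that their sum preserves the original count as required below.

```python
def distribute_proportional(count, flow_start, flow_end, interval_len):
    """Proportional Uniform Distribution: split a counter across the
    intervals a Flow covers, in proportion to the fraction of the
    Flow's duration falling within each interval.
    Returns a {interval_index: fractional_count} mapping."""
    duration = flow_end - flow_start
    first = int(flow_start // interval_len)
    last = int(flow_end // interval_len)
    shares = {}
    for i in range(first, last + 1):
        # Intersection of the Flow's duration with interval i.
        lo = max(flow_start, i * interval_len)
        hi = min(flow_end, (i + 1) * interval_len)
        shares[i] = count * (hi - lo) / duration
    return shares

# Flow C from Figure 4: 100 packets over [4, 27), intervals of 10.
shares = distribute_proportional(100, 4, 27, 10)
# The total count is preserved across the Aggregated Flows.
assert abs(sum(shares.values()) - 100) < 1e-9
```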
A method for exporting the distribution of counters across multiple
Aggregated Flows is detailed in Section 7.4. In any case, counters
MUST be distributed across the multiple Aggregated Flows in such a
way that the total count is preserved, within the limits of accuracy
of the implementation (e.g., inaccuracy introduced by the use of
floating-point numbers is tolerable). This property allows data to
be aggregated and re-aggregated without any loss of original count
information. To avoid confusion in interpretation of the aggregated
data, all the counters for a set of given original Flows SHOULD be
distributed via the same method.
5.1.2. Time Composition
Time Composition as in section 5.4 of [RFC5982] (or interval
combination) is a special case of aggregation, where interval
distribution imposes longer intervals on Flows with matching keys and
"chained" start and end times, without any key reduction, in order to
join long-lived Flows which may have been split (e.g., due to an
active timeout shorter than the Flow). Here, no Key aggregation is
applied, and the Aggregation Interval is chosen on a per-Flow basis
to cover the interval spanned by the set of aggregated Flows. This
may be applied alone in order to normalize split Flows, or in
combination with other aggregation functions in order to obtain more
accurate original Flow counts.
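A minimal sketch of Time Composition follows; the record layout,
field names, and gap tolerance are illustrative assumptions, not
part of the IPFIX information model.

```python
def compose_time(flows, max_gap=0.0):
    """Merge Flows with identical keys whose end and start times
    "chain" (gap <= max_gap) into a single longer Flow, summing
    counters and extending the time interval."""
    flows = sorted(flows, key=lambda f: (f["key"], f["start"]))
    merged = []
    for f in flows:
        last = merged[-1] if merged else None
        if (last and last["key"] == f["key"]
                and f["start"] - last["end"] <= max_gap):
            last["end"] = max(last["end"], f["end"])
            last["packets"] += f["packets"]
        else:
            merged.append(dict(f))
    return merged

# A long-lived Flow split in two by an active timeout:
split = [
    {"key": "a->b", "start": 0, "end": 60, "packets": 500},
    {"key": "a->b", "start": 60, "end": 95, "packets": 120},
]
print(compose_time(split))
# -> one Flow covering [0, 95] with 620 packets
```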
5.2. Spatial Aggregation of Flow Keys
Key aggregation generates a new Flow Key for the Aggregated Flows
from the original Flow Keys, non-Key fields in the original Flows, or
from correlation of the original Flow information with some external
source. There are two basic operations here. First, Aggregated Flow
Keys may be derived directly from original Flow Keys through
reduction, or the dropping of fields or precision in the original
Flow Keys. Second, an Aggregated Flow Key may be derived through
replacement, e.g. by removing one or more fields from the original
Flow and replacing them with fields derived from the removed
fields. Replacement may refer to external information (e.g., IP to
AS number mappings). Replacement need not replace only key fields.
For example, consider an application which aggregates flows by packet
count (i.e., generating an Aggregated Flow for all one-packet Flows,
one for all two-packet Flows, and so on). This application would
promote the packet count to a Flow Key field.
Key aggregation may also result in the addition of new non-Key fields
to the Aggregated Flows, namely original Flow counters and unique
reduced key counters; these are treated in more detail in
Section 5.2.2 and Section 5.2.1, respectively.
In any key aggregation operation, reduction and/or replacement may be
applied any number of times in any order. Which of these operations
are supported by a given implementation is implementation- and
application-dependent. Key aggregation may aggregate original Flows
with different sets of Flow Key fields; only the Flow Keys of the
resulting Aggregated Flows of any given Key aggregation operation
need contain the same set of fields.
Original Flow Key
+---------+---------+----------+----------+-------+-----+
| src ip4 | dst ip4 | src port | dst port | proto | tos |
+---------+---------+----------+----------+-------+-----+
| | | | | |
retain mask /24 X X X X
V V
+---------+-------------+
| src ip4 | dst ip4 /24 |
+---------+-------------+
Aggregated Flow Key (by source address and destination class-C)
Figure 5: Illustration of key aggregation by reduction
Figure 5 illustrates an example reduction operation, aggregation by
source address and destination class C network. Here, the port,
protocol, and type-of-service information is removed from the Flow
Key, the source address is retained, and the destination address is
masked by dropping the low 8 bits.
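The reduction of Figure 5 can be sketched as follows. This is an
illustrative example only; the record layout and function names are
assumptions, and a real Intermediate Aggregation Process would
operate on IPFIX Data Records rather than Python dictionaries.

```python
from collections import defaultdict
from ipaddress import ip_network

def reduce_key(flow):
    """Drop ports, protocol, and type-of-service; retain the source
    address; mask the destination address to its /24 network."""
    dst24 = ip_network(flow["dst"] + "/24", strict=False).network_address
    return (flow["src"], str(dst24))

def aggregate(flows):
    """Sum packet counters over Flows sharing a reduced key."""
    counters = defaultdict(int)
    for f in flows:
        counters[reduce_key(f)] += f["packets"]
    return dict(counters)

flows = [
    {"src": "192.0.2.1", "dst": "198.51.100.7", "sport": 1234,
     "dport": 80, "proto": 6, "tos": 0, "packets": 10},
    {"src": "192.0.2.1", "dst": "198.51.100.9", "sport": 5678,
     "dport": 443, "proto": 6, "tos": 0, "packets": 5},
]
# Both Flows share the reduced key and combine into one Aggregated Flow.
print(aggregate(flows))
# -> {('192.0.2.1', '198.51.100.0'): 15}
```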
Original Flow Key
+---------+---------+----------+----------+-------+-----+
| src ip4 | dst ip4 | src port | dst port | proto | tos |
+---------+---------+----------+----------+-------+-----+
| | | | | |
+-------------------+ X X X X
| ASN lookup table |
+-------------------+
V V
+---------+---------+
| src asn | dst asn |
+---------+---------+
Aggregated Flow Key (by source and dest ASN)
Figure 6: Illustration of key aggregation by reduction and
replacement
Figure 6 illustrates an example reduction and replacement operation,
aggregation by source and destination ASN without ASN information
available in the original Flow. Here, the port, protocol, and type-
of-service information is removed from the Flow Key, while the source
and destination addresses are run through an IP address to ASN lookup
table, and the Aggregated Flow Key is made up of the resulting source
and destination ASNs.
5.2.1. Counting Distinct Key Values
One common case in aggregation is counting distinct key values that
were reduced away during key aggregation. The most common use case
for this is counting distinct hosts per Flow Key; for example, in
host characterization or anomaly detection, distinct sources per
destination or distinct destinations per source are common metrics.
These new non-Key fields are added during key aggregation.
For such applications, Information Elements for distinct counts of
IPv4 and IPv6 addresses are defined in Section 7.3. These are named
distinctCountOf(KeyName). Additional such Information Elements
SHOULD be registered with IANA on an as-needed basis.
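As an illustrative sketch of this operation (the function name and
record layout are assumptions), the following counts distinct source
addresses reduced away when the key is aggregated down to the
destination address alone, in the spirit of a
distinctCountOfSourceIPv4Address value per destination:

```python
from collections import defaultdict

def distinct_sources_per_destination(flows):
    """For each destination address (the reduced Flow Key), count
    the distinct source addresses removed during key aggregation."""
    seen = defaultdict(set)
    for f in flows:
        seen[f["dst"]].add(f["src"])
    return {dst: len(srcs) for dst, srcs in seen.items()}

flows = [
    {"src": "192.0.2.1", "dst": "203.0.113.5"},
    {"src": "192.0.2.2", "dst": "203.0.113.5"},
    {"src": "192.0.2.1", "dst": "203.0.113.5"},  # repeated source
    {"src": "192.0.2.9", "dst": "203.0.113.6"},
]
print(distinct_sources_per_destination(flows))
# -> {'203.0.113.5': 2, '203.0.113.6': 1}
```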
5.2.2. Counting Original Flows
When aggregating multiple original Flows into an Aggregated Flow, it
is often useful to know how many original Flows are present in the
Aggregated Flow. This document introduces four new information
elements in Section 7.2 to export these counters.
There are two possible ways to count original Flows, which we call
here conservative and non-conservative. Conservative flow counting
has the property that each original Flow contributes exactly one to
the total flow count within a set of aggregated Flows. In other
words, conservative flow counters are distributed just as any other
counter during interval distribution, except each original Flow is
assumed to have a flow count of one. When a count for an original
Flow must be distributed across a set of Aggregated Flows, and a
distribution method is used which does not account for that original
Flow completely within a single Aggregated Flow, conservative flow
counting requires a fractional representation.
By contrast, non-conservative flow counting is used to count how many
contributing Flows are represented in an Aggregated Flow. Flow
counters are not distributed in this case. An original Flow which is
present within N Aggregated Flows would add N to the sum of non-
conservative flow counts, one to each Aggregated Flow. In other
words, the sum of conservative flow counts over a set of Aggregated
Flows is always equal to the number of original Flows, while the sum
of non-conservative flow counts is strictly greater than or equal to
the number of original Flows.
For example, consider Flows A, B, and C as illustrated in Figure 4.
Assume that the key aggregation step aggregates the keys of these
three Flows to the same aggregated Flow Key, and that start interval
counter distribution is in effect. The conservative flow count for
interval 0 is 3 (since Flows A, B, and C all begin in this interval),
and for the other two intervals is 0. The non-conservative flow
count for interval 0 is also 3 (due to the presence of Flows A, B,
and C), for interval 1 is 2 (Flows B and C), and for interval 2 is 1
(Flow C). The sum of the conservative counts is 3 + 0 + 0 = 3, the
number of original Flows, while the sum of the non-conservative
counts is 3 + 2 + 1 = 6.
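The example can be reproduced in a short non-normative sketch; the
interval spans of Flows A, B, and C are taken from the description
above (A covers interval 0, B intervals 0 and 1, C intervals 0
through 2), with start interval counter distribution in effect:

```python
# Conservative vs. non-conservative flow counting for Flows A, B, C.
flows = {"A": (0, 0), "B": (0, 1), "C": (0, 2)}  # (first, last) interval
n_intervals = 3

conservative = [0.0] * n_intervals      # float: may need fractions
non_conservative = [0] * n_intervals
for first, last in flows.values():
    conservative[first] += 1            # each Flow contributes exactly 1,
                                        # here to its start interval
    for i in range(first, last + 1):    # each covered interval counts it
        non_conservative[i] += 1

print(conservative)      # [3.0, 0.0, 0.0] -> sums to 3 original Flows
print(non_conservative)  # [3, 2, 1]       -> sums to 6 >= 3
```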
Note that the active and inactive timeouts used to generate original
Flows, as well as the cache policy used to generate those Flows, have
an effect on how meaningful either the conservative or non-
conservative flow count will be during aggregation. In general, all
the original Exporters producing original Flows to be aggregated
SHOULD use caches configured identically or
similarly. Original Exporters using the IPFIX Configuration Model
SHOULD be configured to export Flows with equal or similar
activeTimeout and inactiveTimeout configuration values, and the same
cacheMode, as defined in section 4.3 of
[I-D.ietf-ipfix-configuration-model].
5.3. Spatial Aggregation of Non-Key Fields
Aggregation operations may also lead to the addition of value fields
demoted from key fields, or derived from other value fields in the
original Flows. Specific cases of this are treated in the
subsections below.
5.3.1. Counter Statistics
Some applications of aggregation may benefit from computing different
statistics than those native to each non-key field (i.e., union for
flags, sum for counters). For example, minimum and maximum packet
counts per Flow, mean bytes per packet per aggregated Flow, and so
on. Certain Information Elements for these applications are already
provided in the IANA IPFIX Information Elements registry
(http://www.iana.org/assignments/ipfix/ipfix.html), e.g.,
minimumIpTotalLength.
A complete specification of additional aggregate counter statistics
is outside the scope of this document, and should be added in the
future to the IANA IPFIX Information Elements registry on a per-
application, as-needed basis.
5.4. Aggregation Combination
Interval distribution and key aggregation together may generate
multiple partially aggregated Flows covering the same time interval
with the same Flow Key. The process of combining these partially
aggregated Flows into a single Aggregated Flow is called aggregation
combination. In general, non-Key values from multiple contributing
Flows are combined using the same operation by which values are
combined from packets to form Flows for each Information Element.
Counters are summed, averages are averaged, flags are unioned, and so
on.
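A minimal sketch of aggregation combination follows; the set of
Information Elements and the combination operations mapped to them
are illustrative assumptions, not a normative mapping:

```python
# Combine two partially aggregated Flows sharing a Flow Key and
# interval, using a per-Information-Element combination operation.
COMBINE = {
    "octetDeltaCount": lambda a, b: a + b,   # counters: sum
    "tcpControlBits": lambda a, b: a | b,    # flags: union
    "flowStartMilliseconds": min,            # earliest start
    "flowEndMilliseconds": max,              # latest end
}

def combine(rec_a, rec_b):
    return {ie: COMBINE[ie](rec_a[ie], rec_b[ie]) for ie in rec_a}

a = {"octetDeltaCount": 119, "tcpControlBits": 0x02,
     "flowStartMilliseconds": 1000, "flowEndMilliseconds": 2000}
b = {"octetDeltaCount": 83, "tcpControlBits": 0x10,
     "flowStartMilliseconds": 1500, "flowEndMilliseconds": 2500}
print(combine(a, b))  # summed octets, unioned flags, widest time span
```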
6. Additional Considerations and Special Cases in Flow Aggregation
6.1. Exact versus Approximate Counting during Aggregation
In certain circumstances, particularly involving aggregation by
devices with limited resources, and in situations where exact
aggregated counts are less important than relative magnitudes (e.g.
driving graphical displays), counter distribution during key
aggregation may be performed by approximate counting means (e.g.
Bloom filters). The choice to use approximate counting is
implementation- and application-dependent.
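As one illustration of approximate counting, the following toy sketch
uses a Bloom filter for membership testing while counting distinct
values; the filter size, hash construction, and input addresses are
arbitrary choices for the example, not a recommended configuration:

```python
# Toy approximate distinct counting with a Bloom filter: an item is
# counted only if its filter bits were not all already set, so the
# count may be low due to false positives, but memory stays fixed.
import hashlib

M = 1024   # bits in the filter
K = 3      # hash functions derived from one SHA-256 digest

def hashes(item):
    digest = hashlib.sha256(item.encode()).digest()
    return [int.from_bytes(digest[4*i:4*i+4], "big") % M for i in range(K)]

bits = [False] * M
approx_count = 0
for addr in ["192.0.2.2", "192.0.2.3", "192.0.2.2", "192.0.2.4"]:
    h = hashes(addr)
    if not all(bits[i] for i in h):   # probably not seen before
        approx_count += 1
        for i in h:
            bits[i] = True

print(approx_count)  # 3 distinct values (may undercount on collision)
```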
6.2. Considerations for Aggregation of Sampled Flows
The accuracy of Aggregated Flows may also be affected by sampling of
the original Flows, or sampling of packets making up the original
Flows. The effect of sampling on flow aggregation is still an open
research question. However, to maximize the comparability of
Aggregated Flows, aggregation of sampled Flows SHOULD only use
original Flows sampled using the same sampling rate and sampling
algorithm, or Flows created from packets sampled using the same
sampling rate and sampling algorithm. For more on packet sampling
within IPFIX, see [RFC5476]. For more on Flow sampling within the
IPFIX Mediator Framework, see [I-D.ietf-ipfix-flow-selection-tech].
7. Export of Aggregated IP Flows using IPFIX
In general, Aggregated Flows are exported in IPFIX as any normal
Flow. However, certain aspects of Aggregated Flow export benefit
from additional guidelines, or new Information Elements to represent
aggregation metadata or information generated during aggregation.
These are detailed in the following subsections.
7.1. Time Interval Export
Since an Aggregated Flow is simply a Flow, the existing timestamp
Information Elements in the IPFIX Information Model (e.g.,
flowStartMilliseconds, flowEndNanoseconds) are sufficient to specify
the time interval for aggregation. Therefore, this document
specifies no new aggregation-specific Information Elements for
exporting time interval information.
Each Aggregated Flow SHOULD contain both an interval start and
interval end timestamp. If an exporter of Aggregated Flows omits the
interval end timestamp from each Aggregated Flow, the time interval
for Aggregated Flows within an Observation Domain and Transport
Session MUST be regular and constant. However, note that this
approach might lead to interoperability problems when exporting
Aggregated Flows to non-aggregation-aware Collecting Processes and
downstream analysis tasks; therefore, an Exporting Process capable of
exporting only interval start timestamps MUST provide a configuration
option to export interval end timestamps as well.
7.2. Flow Count Export
The following four Information Elements are defined to count original
Flows as discussed in Section 5.2.2.
7.2.1. originalFlowsPresent
Description: The non-conservative count of original Flows
contributing to this Aggregated Flow. Non-conservative counts
need not sum to the original count on re-aggregation.
Abstract Data Type: unsigned64
ElementId: TBD1
Status: Current
7.2.2. originalFlowsInitiated
Description: The conservative count of original Flows whose first
packet is represented within this Aggregated Flow. Conservative
counts must sum to the original count on re-aggregation.
Abstract Data Type: unsigned64
ElementId: TBD2
Status: Current
7.2.3. originalFlowsCompleted
Description: The conservative count of original Flows whose last
packet is represented within this Aggregated Flow. Conservative
counts must sum to the original count on re-aggregation.
Abstract Data Type: unsigned64
ElementId: TBD3
Status: Current
7.2.4. originalFlows
Description: The conservative count of original Flows contributing
to this Aggregated Flow; may be distributed via any of the methods
described in Section 5.1.1.
Abstract Data Type: float64
ElementId: 3
Status: Current
7.3. Distinct Host Export
The following four Information Elements represent the distinct counts
of source and destination addresses for IPv4 and IPv6, used to
export distinct host counts reduced away during key aggregation.
7.3.1. distinctCountOfSourceIPv4Address
Description: The count of distinct source IPv4 address values for
original Flows contributing to this Aggregated Flow.
Abstract Data Type: unsigned32
ElementId: TBD6
Status: Current
7.3.2. distinctCountOfDestinationIPv4Address
Description: The count of distinct destination IPv4 address values
for original Flows contributing to this Aggregated Flow.
Abstract Data Type: unsigned32
ElementId: TBD7
Status: Current
7.3.3. distinctCountOfSourceIPv6Address
Description: The count of distinct source IPv6 address values for
original Flows contributing to this Aggregated Flow.
Abstract Data Type: unsigned64
ElementId: TBD8
Status: Current
7.3.4. distinctCountOfDestinationIPv6Address
Description: The count of distinct destination IPv6 address values
for original Flows contributing to this Aggregated Flow.
Abstract Data Type: unsigned64
ElementId: TBD9
Status: Current
7.4. Aggregate Counter Distribution Export
When exporting counters distributed among Aggregated Flows, as
described in Section 5.1.1, the Exporting Process MAY export an
Aggregate Counter Distribution Record for each Template describing
Aggregated Flow records; this Options Template is described below.
It uses the valueDistributionMethod Information Element, also defined
below. Since in many cases distribution is simple, accounting the
counters from contributing Flows to the first interval to which they
contribute, this is the default situation, for which no Aggregate
Counter Distribution Record is necessary; Aggregate Counter
Distribution Records are only applicable in more exotic situations,
such as when the Aggregation Interval is smaller than the durations
of the original Flows.
7.4.1. Aggregate Counter Distribution Options Template
This Options Template defines the Aggregate Counter Distribution
Record, which allows the binding of a value distribution method to a
Template ID. This is used to signal to the Collecting Process how
the counters were distributed. The fields are as below:
+-------------------------+-----------------------------------------+
| IE | Description |
+-------------------------+-----------------------------------------+
| templateId [scope] | The Template ID of the Template |
| | defining the Aggregated Flows to which |
| | this distribution option applies. This |
| | Information Element MUST be defined as |
| | a Scope Field. |
| valueDistributionMethod | The method used to distribute the |
| | counters for the Aggregated Flows |
| | defined by the associated Template. |
+-------------------------+-----------------------------------------+
7.4.2. valueDistributionMethod Information Element
Description: A description of the method used to distribute the
counters from contributing Flows into the Aggregated Flow records
described by an associated Template. The method is deemed to
apply to all the non-key Information Elements in the referenced
Template for which value distribution is a valid operation; if the
originalFlowsInitiated and/or originalFlowsCompleted Information
Elements appear in the Template, they are not subject to this
distribution method, as they each imply their own distribution
method. The distribution methods are taken from Section 5.1.1 and
encoded as follows:
+-------+-----------------------------------------------------------+
| Value | Description |
+-------+-----------------------------------------------------------+
| 1 | Start Interval: The counters for an original Flow are |
| | added to the counters of the appropriate Aggregated Flow |
| | containing the start time of the original Flow. This |
| | should be assumed the default if value distribution |
| | information is not available at a Collecting Process for |
| | an Aggregated Flow. |
| 2 | End Interval: The counters for an original Flow are added |
| | to the counters of the appropriate Aggregated Flow |
| | containing the end time of the original Flow. |
| 3 | Mid Interval: The counters for an original Flow are added |
| | to the counters of a single appropriate Aggregated Flow |
| | containing some timestamp between start and end time of |
| | the original Flow. |
| 4 | Simple Uniform Distribution: Each counter for an original |
| | Flow is divided by the number of time intervals the |
| | original Flow covers (i.e., of appropriate Aggregated |
| | Flows sharing the same Flow Key), and this number is |
| | added to each corresponding counter in each Aggregated |
| | Flow. |
| 5 | Proportional Uniform Distribution: Each counter for an |
| | original Flow is divided by the number of time _units_ |
| | the original Flow covers, to derive a mean count rate. |
| | This mean count rate is then multiplied by the number of |
| | time units in the intersection of the duration of the |
| | original Flow and the time interval of each Aggregated |
| | Flow. This is like simple uniform distribution, but |
| | accounts for the fractional portions of a time interval |
| | covered by an original Flow in the first and last time |
| | interval. |
| 6 | Simulated Process: Each counter of the original Flow is |
| | distributed among the intervals of the Aggregated Flows |
| | according to some function the Aggregation Process uses |
| | based upon properties of Flows presumed to be like the |
| | original Flow. This is essentially an assertion that the |
| | Aggregation Process has no direct packet timing |
| | information but is nevertheless not using one of the |
| | other simpler distribution methods. The Aggregation |
| | Process specifically makes no assertion as to the |
| | correctness of the simulation. |
| 7 | Direct: The Aggregation Process has access to the |
| | original packet timings from the packets making up the |
| | original Flow, and uses these to distribute or |
| | recalculate the counters. |
+-------+-----------------------------------------------------------+
Abstract Data Type: unsigned8
ElementId: TBD5
Status: Current
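To make the distribution methods concrete, the following
non-normative sketch implements Start Interval (value 1) and
Proportional Uniform Distribution (value 5); the 300-second interval
width and the half-open [start, start + width) interval convention
are assumptions for the example:

```python
# Distribute an original Flow's counter among time intervals.
WIDTH = 300_000  # imposed interval width: 300 s, in milliseconds

def start_interval(flow_start, flow_end, count):
    """Method 1: whole count goes to the interval containing the start."""
    return {flow_start // WIDTH * WIDTH: count}

def proportional_uniform(flow_start, flow_end, count):
    """Method 5: count spread by the fraction of the Flow's duration
    falling within each covered interval."""
    duration = flow_end - flow_start
    if duration == 0:                      # point Flow: nothing to spread
        return start_interval(flow_start, flow_end, count)
    out = {}
    t = flow_start // WIDTH * WIDTH        # first covered interval
    while t < flow_end:
        overlap = min(flow_end, t + WIDTH) - max(flow_start, t)
        out[t] = count * overlap / duration
        t += WIDTH
    return out

# 600 octets over 100s..700s: 1/3 in interval 0, 1/2 in 300s, 1/6 in 600s
print(proportional_uniform(100_000, 700_000, 600))
```

Note that the fractional results motivate the float64 type of
originalFlows: Proportional Uniform Distribution conserves the total
count only if fractions are representable.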
8. Examples
In these examples, the same data, described by the same template,
will be aggregated in multiple different ways; this illustrates the
various functions that could be implemented by
Intermediate Aggregation Processes. Templates are shown in iespec
format as introduced in [I-D.trammell-ipfix-ie-doctors]. The source
data format is a simplified flow: timestamps, traditional 5-tuple,
and octet count. The template is shown in Figure 7.
flowStartMilliseconds
flowEndMilliseconds
sourceIPv4Address
destinationIPv4Address
sourceTransportPort
destinationTransportPort
protocolIdentifier
octetDeltaCount
Figure 7: Input template for examples
The data records given as input to the examples in this section are
shown below, in the format "flowStartMilliseconds-flowEndMilliseconds
sourceIPv4Address:sourceTransportPort -> destinationIPv4Address:
destinationTransportPort (protocolIdentifier) octetDeltaCount";
timestamps are given in H:MM:SS.sss format.
9:00:00.138-9:00:00.138 192.0.2.2:47113 -> 192.0.2.131:53 (17) 119
9:00:03.246-9:00:03.246 192.0.2.2:22153 -> 192.0.2.131:53 (17) 83
9:00:00.478-9:00:03.486 192.0.2.2:52420 -> 198.51.100.2:443 (6) 1637
9:00:07.172-9:00:07.172 192.0.2.3:56047 -> 192.0.2.131:53 (17) 111
9:00:07.309-9:00:14.861 192.0.2.3:41183 -> 198.51.100.67:80 (6) 16838
9:00:03.556-9:00:19.876 192.0.2.2:17606 -> 198.51.100.68:80 (6) 11538
9:00:25.210-9:00:25.210 192.0.2.3:47113 -> 192.0.2.131:53 (17) 119
9:00:26.358-9:00:30.198 192.0.2.3:48458 -> 198.51.100.133:80 (6) 2973
9:00:29.213-9:01:00.061 192.0.2.4:61295 -> 198.51.100.2:443 (6) 8350
9:04:00.207-9:04:04.431 203.0.113.3:41256 -> 198.51.100.133:80 (6) 778
9:03:59.624-9:04:06.984 203.0.113.3:51662 -> 198.51.100.3:80 (6) 883
9:06:56.813-9:06:59.821 203.0.113.3:52572 -> 198.51.100.2:443 (6) 1637
9:06:30.565-9:07:00.261 203.0.113.3:49914 -> 197.51.100.133:80 (6) 561
9:06:55.160-9:07:05.208 192.0.2.2:50824 -> 198.51.100.2:443 (6) 1899
9:06:49.322-9:07:05.322 192.0.2.3:34597 -> 198.51.100.3:80 (6) 1284
9:07:05.849-9:07:09.625 203.0.113.3:58907 -> 198.51.100.4:80 (6) 2670
9:10:45.161-9:10:45.161 192.0.2.4:22478 -> 192.0.2.131:53 (17) 75
9:10:45.209-9:11:01.465 192.0.2.4:49513 -> 198.51.100.68:80 (6) 3374
9:10:57.094-9:11:00.614 192.0.2.4:64832 -> 198.51.100.67:80 (6) 138
9:10:59.770-9:11:02.842 192.0.2.3:60833 -> 198.51.100.69:443 (6) 2325
9:13:53.933-9:14:06.605 192.0.2.2:19638 -> 198.51.100.3:80 (6) 2869
9:13:02.864-9:14:08.720 192.0.2.3:40429 -> 198.51.100.4:80 (6) 18289
Figure 8: Input data for examples
8.1. Traffic Time-Series per Source
Aggregating flows by source IP address in time series (i.e., with a
regular interval) can be used in subsequent heavy-hitter analysis and
as a source parameter for statistical anomaly detection techniques.
Here, the Intermediate Aggregation Process imposes an interval,
aggregates the key to remove all key fields other than the source IP
address, then combines the result into a stream of Aggregated Flows.
For simplicity, the imposed interval of 5 minutes is defined to be
larger than the maximum duration of the original Flows; counter
distribution will be added to this example below in Section 8.4.
In this example the partially aggregated Flows after each conceptual
operation in the Intermediate Aggregation Process are shown. These
are meant to be illustrative of the conceptual operations only, and
not to suggest an implementation (indeed, the example shown here
would not necessarily be the most efficient method for performing
these operations). Subsequent examples will omit the partially
aggregated Flows for brevity.
The input to this process could be any Flow Record containing a
source IP address and octet counter; consider for this example the
template (Figure 7) and data (Figure 8) given above. The Intermediate
Aggregation Process would then output records containing just
timestamps, source IP, and octetDeltaCount, as in Figure 9.
flowStartMilliseconds
flowEndMilliseconds
sourceIPv4Address
octetDeltaCount
Figure 9: Output template for time series per source
Assume the goal is to get 5-minute time series of octet counts per
source IP address. The aggregation operations would then be arranged
as in Figure 10.
original Flows
|
V
+-----------------------+
| interval distribution |
| * impose uniform |
| 300s time interval |
+-----------------------+
|
| partially aggregated Flows
V
+------------------------+
| key aggregation |
| * reduce key to only |
| sourceIPv4Address |
+------------------------+
|
| partially aggregated Flows
V
+-------------------------+
| aggregate combination |
| * sum octetDeltaCount |
+-------------------------+
|
V
Aggregated Flows
Figure 10: Aggregation operations for time series per source
After the interval distribution step, only the time intervals have
changed; the partially aggregated Flows are shown in Figure 11.
9:00:00.000-9:05:00.000 192.0.2.2:47113 -> 192.0.2.131:53 (17) 119
9:00:00.000-9:05:00.000 192.0.2.2:22153 -> 192.0.2.131:53 (17) 83
9:00:00.000-9:05:00.000 192.0.2.2:52420 -> 198.51.100.2:443 (6) 1637
9:00:00.000-9:05:00.000 192.0.2.3:56047 -> 192.0.2.131:53 (17) 111
9:00:00.000-9:05:00.000 192.0.2.3:41183 -> 198.51.100.67:80 (6) 16838
9:00:00.000-9:05:00.000 192.0.2.2:17606 -> 198.51.100.68:80 (6) 11538
9:00:00.000-9:05:00.000 192.0.2.3:47113 -> 192.0.2.131:53 (17) 119
9:00:00.000-9:05:00.000 192.0.2.3:48458 -> 198.51.100.133:80 (6) 2973
9:00:00.000-9:05:00.000 192.0.2.4:61295 -> 198.51.100.2:443 (6) 8350
9:00:00.000-9:05:00.000 203.0.113.3:41256 -> 198.51.100.133:80 (6) 778
9:00:00.000-9:05:00.000 203.0.113.3:51662 -> 198.51.100.3:80 (6) 883
9:05:00.000-9:10:00.000 203.0.113.3:52572 -> 198.51.100.2:443 (6) 1637
9:05:00.000-9:10:00.000 203.0.113.3:49914 -> 197.51.100.133:80 (6) 561
9:05:00.000-9:10:00.000 192.0.2.2:50824 -> 198.51.100.2:443 (6) 1899
9:05:00.000-9:10:00.000 192.0.2.3:34597 -> 198.51.100.3:80 (6) 1284
9:05:00.000-9:10:00.000 203.0.113.3:58907 -> 198.51.100.4:80 (6) 2670
9:10:00.000-9:15:00.000 192.0.2.4:22478 -> 192.0.2.131:53 (17) 75
9:10:00.000-9:15:00.000 192.0.2.4:49513 -> 198.51.100.68:80 (6) 3374
9:10:00.000-9:15:00.000 192.0.2.4:64832 -> 198.51.100.67:80 (6) 138
9:10:00.000-9:15:00.000 192.0.2.3:60833 -> 198.51.100.69:443 (6) 2325
9:10:00.000-9:15:00.000 192.0.2.2:19638 -> 198.51.100.3:80 (6) 2869
9:10:00.000-9:15:00.000 192.0.2.3:40429 -> 198.51.100.4:80 (6) 18289
Figure 11: Partially aggregated Flows: intervals imposed
After the key aggregation step, all the parts of the flow key except
the source IP address have been discarded, as shown in Figure 12.
This leaves duplicate partially aggregated Flows to be combined in
the final operation.
9:00:00.000-9:05:00.000 192.0.2.2 119
9:00:00.000-9:05:00.000 192.0.2.2 83
9:00:00.000-9:05:00.000 192.0.2.2 1637
9:00:00.000-9:05:00.000 192.0.2.3 111
9:00:00.000-9:05:00.000 192.0.2.3 16838
9:00:00.000-9:05:00.000 192.0.2.2 11538
9:00:00.000-9:05:00.000 192.0.2.3 119
9:00:00.000-9:05:00.000 192.0.2.3 2973
9:00:00.000-9:05:00.000 192.0.2.4 8350
9:00:00.000-9:05:00.000 203.0.113.3 778
9:00:00.000-9:05:00.000 203.0.113.3 883
9:05:00.000-9:10:00.000 203.0.113.3 1637
9:05:00.000-9:10:00.000 203.0.113.3 561
9:05:00.000-9:10:00.000 192.0.2.2 1899
9:05:00.000-9:10:00.000 192.0.2.3 1284
9:05:00.000-9:10:00.000 203.0.113.3 2670
9:10:00.000-9:15:00.000 192.0.2.4 75
9:10:00.000-9:15:00.000 192.0.2.4 3374
9:10:00.000-9:15:00.000 192.0.2.4 138
9:10:00.000-9:15:00.000 192.0.2.3 2325
9:10:00.000-9:15:00.000 192.0.2.2 2869
9:10:00.000-9:15:00.000 192.0.2.3 18289
Figure 12: Partially aggregated Flows: key aggregation
Aggregate combination sums the counters per key and interval; the
summations of the first two keys and intervals are shown in detail in
Figure 13.
9:00:00.000-9:05:00.000 192.0.2.2 119
9:00:00.000-9:05:00.000 192.0.2.2 83
9:00:00.000-9:05:00.000 192.0.2.2 1637
+ 9:00:00.000-9:05:00.000 192.0.2.2 11538
-----
= 9:00:00.000-9:05:00.000 192.0.2.2 13377
9:00:00.000-9:05:00.000 192.0.2.3 111
9:00:00.000-9:05:00.000 192.0.2.3 16838
9:00:00.000-9:05:00.000 192.0.2.3 119
+ 9:00:00.000-9:05:00.000 192.0.2.3 2973
-----
= 9:00:00.000-9:05:00.000 192.0.2.3 20041
Figure 13: Summation during aggregate combination
Applying this to each set of partially aggregated Flows produces the
final Aggregated Flows shown in Figure 14, to be exported using the
template in Figure 9.
9:00:00.000-9:05:00.000 192.0.2.2 13377
9:00:00.000-9:05:00.000 192.0.2.3 20041
9:00:00.000-9:05:00.000 192.0.2.4 8350
9:00:00.000-9:05:00.000 203.0.113.3 1661
9:05:00.000-9:10:00.000 192.0.2.2 1899
9:05:00.000-9:10:00.000 192.0.2.3 1284
9:05:00.000-9:10:00.000 203.0.113.3 4868
9:10:00.000-9:15:00.000 192.0.2.2 2869
9:10:00.000-9:15:00.000 192.0.2.3 20614
9:10:00.000-9:15:00.000 192.0.2.4 3587
Figure 14: Aggregated Flows
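The whole example can be condensed into a short non-normative sketch.
Start times are given as seconds after 9:00:00 for brevity, and since
every original Flow here starts and ends within one imposed interval,
assigning each Flow to the interval containing its start time
suffices:

```python
# End-to-end sketch: interval distribution, key aggregation, and
# aggregate combination for the time-series-per-source example.
from collections import defaultdict

# (start seconds, sourceIPv4Address, octetDeltaCount) from Figure 8
flows = [
    (0.138, "192.0.2.2", 119), (3.246, "192.0.2.2", 83),
    (0.478, "192.0.2.2", 1637), (7.172, "192.0.2.3", 111),
    (7.309, "192.0.2.3", 16838), (3.556, "192.0.2.2", 11538),
    (25.210, "192.0.2.3", 119), (26.358, "192.0.2.3", 2973),
    (29.213, "192.0.2.4", 8350), (240.207, "203.0.113.3", 778),
    (239.624, "203.0.113.3", 883), (416.813, "203.0.113.3", 1637),
    (390.565, "203.0.113.3", 561), (415.160, "192.0.2.2", 1899),
    (409.322, "192.0.2.3", 1284), (425.849, "203.0.113.3", 2670),
    (645.161, "192.0.2.4", 75), (645.209, "192.0.2.4", 3374),
    (657.094, "192.0.2.4", 138), (659.770, "192.0.2.3", 2325),
    (833.933, "192.0.2.2", 2869), (782.864, "192.0.2.3", 18289),
]

aggregated = defaultdict(int)
for start, src, octets in flows:
    interval = int(start // 300) * 300   # interval distribution (300 s)
    key = (interval, src)                # key aggregation: source only
    aggregated[key] += octets            # aggregate combination: sum

for (interval, src), octets in sorted(aggregated.items()):
    print(interval, src, octets)
```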
8.2. Core Traffic Matrix
Aggregating flows by source and destination autonomous system number
in time series is used to generate core traffic matrices. The core
traffic matrix provides a view of the state of the routes within a
network, and can be used for long-term planning of changes to network
design based on traffic demand. Here, imposed time intervals are
generally much longer than active flow timeouts. The traffic matrix
is reported in terms of octets, packets, and flows, as each of these
values may have a subtly different effect on capacity planning.
This example demonstrates key aggregation using derived keys and
Original Flow counting. While some original Flows may be generated
by Exporting Processes on forwarding devices, and therefore contain
the bgpSourceAsNumber and bgpDestinationAsNumber Information
Elements, original Flows from Exporting Processes on dedicated
measurement devices will contain only a destinationIPv[46]Address.
For these Flows, the Mediator must look up a next-hop AS in an
IP-to-AS table, replacing source and destination addresses with AS
numbers.
[TODO: complete example. show AS map, output templates, and
processing in IAP.]
8.3. Distinct Source Count per Destination Endpoint
Aggregating flows by destination address and port, and counting
distinct sources aggregated away, can be used as part of passive
service inventory and host characterization approaches. This example
shows aggregation as an analysis technique, performed on source data
stored in an IPFIX File. As the Transport Session in this File is
bounded, removal of all timestamp information allows summarization
of the entire time interval contained within the File. Removal of
timing information during interval imposition is equivalent to an
infinitely long imposed time interval. This demonstrates both how
infinite intervals work and how distinct counters work.
[TODO: complete example. show output templates and processing in
IAP.]
8.4. Traffic Time-Series per Source with Counter Distribution
Returning to the example in Section 8.1, consider a case in which
the maximum active timeout of the original Flows, here 30 minutes,
is longer than the imposed interval, here 5 minutes. In this case,
Flows longer than 5 minutes must have their counters distributed.
This example demonstrates counter distribution
metadata export.
[TODO: complete example. show output metadata and processing in IAP.]
9. Security Considerations
[TODO]
10. IANA Considerations
This document specifies the creation of twelve new IPFIX Information
Elements in the IPFIX Information Element registry located at
http://www.iana.org/assignments/ipfix, as defined in Section 7 above.
IANA has assigned Information Element numbers to these Information
Elements, and entered them into the registry.
[NOTE for IANA: The text TBDn should be replaced with the respective
assigned Information Element numbers where they appear in this
document. Note that the originalFlows Information Element has been
assigned the number 3, as it is compatible with the corresponding
existing (reserved) NetFlow v9 Information Element. Other
Information Element numbers should be assigned outside the NetFlow V9
compatibility range, as these Information Elements are not supported
by NetFlow V9.]
11. Acknowledgments
This work is materially supported by the European Union Seventh
Framework Programme under grant agreement 257315 (DEMONS).
12. References
12.1. Normative References
[RFC5101] Claise, B., "Specification of the IP Flow Information
Export (IPFIX) Protocol for the Exchange of IP Traffic
Flow Information", RFC 5101, January 2008.
[RFC5102] Quittek, J., Bryant, S., Claise, B., Aitken, P., and J.
Meyer, "Information Model for IP Flow Information Export",
RFC 5102, January 2008.
12.2. Informative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC3917] Quittek, J., Zseby, T., Claise, B., and S. Zander,
"Requirements for IP Flow Information Export (IPFIX)",
RFC 3917, October 2004.
[RFC5103] Trammell, B. and E. Boschi, "Bidirectional Flow Export
Using IP Flow Information Export (IPFIX)", RFC 5103,
January 2008.
[RFC5153] Boschi, E., Mark, L., Quittek, J., Stiemerling, M., and P.
Aitken, "IP Flow Information Export (IPFIX) Implementation
Guidelines", RFC 5153, April 2008.
[RFC5470] Sadasivan, G., Brownlee, N., Claise, B., and J. Quittek,
"Architecture for IP Flow Information Export", RFC 5470,
March 2009.
[RFC5472] Zseby, T., Boschi, E., Brownlee, N., and B. Claise, "IP
Flow Information Export (IPFIX) Applicability", RFC 5472,
March 2009.
[RFC5476] Claise, B., Johnson, A., and J. Quittek, "Packet Sampling
(PSAMP) Protocol Specifications", RFC 5476, March 2009.
[RFC5610] Boschi, E., Trammell, B., Mark, L., and T. Zseby,
"Exporting Type Information for IP Flow Information Export
(IPFIX) Information Elements", RFC 5610, July 2009.
[RFC5655] Trammell, B., Boschi, E., Mark, L., Zseby, T., and A.
Wagner, "Specification of the IP Flow Information Export
(IPFIX) File Format", RFC 5655, October 2009.
[RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric
Composition", RFC 5835, April 2010.
[RFC5982] Kobayashi, A. and B. Claise, "IP Flow Information Export
(IPFIX) Mediation: Problem Statement", RFC 5982,
August 2010.
[I-D.ietf-ipfix-anon]
Boschi, E. and B. Trammell, "IP Flow Anonymization
Support", draft-ietf-ipfix-anon-06 (work in progress),
January 2011.
[I-D.ietf-ipfix-mediators-framework]
Kobayashi, A., Claise, B., Muenz, G., and K. Ishibashi,
"IPFIX Mediation: Framework",
draft-ietf-ipfix-mediators-framework-09 (work in
progress), October 2010.
[I-D.claise-ipfix-mediation-protocol]
Claise, B., "Specification of the Protocol for IPFIX
Mediations", draft-claise-ipfix-mediation-protocol-03
(work in progress), February 2011.
[I-D.trammell-ipfix-ie-doctors]
Trammell, B. and B. Claise, "Guidelines for Authors and
Reviewers of IPFIX Information Elements",
draft-trammell-ipfix-ie-doctors-02 (work in progress),
June 2011.
[I-D.ietf-ipfix-configuration-model]
Muenz, G., Claise, B., and P. Aitken, "Configuration Data
Model for IPFIX and PSAMP",
draft-ietf-ipfix-configuration-model-09 (work in
progress), March 2011.
[I-D.ietf-ipfix-flow-selection-tech]
D'Antonio, S., Zseby, T., Henke, C., and L. Peluso, "Flow
Selection Techniques",
draft-ietf-ipfix-flow-selection-tech-06 (work in
progress), May 2011.
Authors' Addresses
Brian Trammell
Swiss Federal Institute of Technology Zurich
Gloriastrasse 35
8092 Zurich
Switzerland
Phone: +41 44 632 70 13
Email: trammell@tik.ee.ethz.ch
Elisa Boschi
Swiss Federal Institute of Technology Zurich
Gloriastrasse 35
8092 Zurich
Switzerland
Email: boschie@tik.ee.ethz.ch
Arno Wagner
Consecom AG
Bleicherweg 64a
8002 Zurich
Switzerland
Email: arno@wagner.name
Benoit Claise
Cisco Systems, Inc.
De Kleetlaan 6a b1
1831 Diegem
Belgium
Phone: +32 2 704 5622
Email: bclaise@cisco.com