Document Title: IP Flow Information Accounting and Export Benchmarking Methodology
Filename: draft-ietf-bmwg-ipflow-meth-07.txt
Intended Status: Informational
(1.a) Who is the Document Shepherd for this document? Has the
Document Shepherd personally reviewed this version of the
document and, in particular, does he or she believe this
version is ready for forwarding to the IESG for publication?
Al Morton has reviewed the memo; it is ready (+/- a few minor edits,
see the end of this write-up).
This will be a difficult read for those not familiar with at least
one of the areas covered here (IPFIX and Benchmarking), but the intended
audience of testers should be adequately served.
(1.b) Has the document had adequate review both from key WG members
and from key non-WG members? Does the Document Shepherd have
any concerns about the depth or breadth of the reviews that
have been performed?
The current draft reflects extensive feedback, beginning with its
first review at a session in Dublin. Throughout the process, non-WG
participants have been involved in reviews and WG Last Calls,
especially members of the IPFIX WG (obviously).
(1.c) Does the Document Shepherd have concerns that the document
needs more review from a particular or broader perspective,
e.g., security, operational complexity, someone familiar with
AAA, internationalization or XML?
No.
(1.d) Does the Document Shepherd have any specific concerns or
issues with this document that the Responsible Area Director
and/or the IESG should be aware of? For example, perhaps he
or she is uncomfortable with certain parts of the document, or
has concerns whether there really is a need for it. In any
event, if the WG has discussed those issues and has indicated
that it still wishes to advance the document, detail those
concerns here. Has an IPR disclosure related to this document
been filed? If so, please include a reference to the
disclosure and summarize the WG discussion and conclusion on
this issue.
No concerns and No IPR disclosures.
(1.e) How solid is the WG consensus behind this document? Does it
represent the strong concurrence of a few individuals, with
others being silent, or does the WG as a whole understand and
agree with it?
Quite a few bmwg and ipfix participants have given this a look
and now concur with the results. It took several WGLCs before this
version reached consensus (with a few minor editorial changes).
Examples of Test Implementation and Results were presented
during development, which is compelling evidence of practicality.
There were WGLCs yielding long lists of comments/issues to deal with,
and this was finally accomplished.
(1.f) Has anyone threatened an appeal or otherwise indicated extreme
discontent? If so, please summarise the areas of conflict in
separate email messages to the Responsible Area Director. (It
should be in a separate email because this questionnaire is
entered into the ID Tracker.)
No Appeals Threatened.
(1.g) Has the Document Shepherd personally verified that the
document satisfies all ID nits? (See the Internet-Drafts Checklist
and http://tools.ietf.org/tools/idnits/). Boilerplate checks are
not enough; this check needs to be thorough. Has the document
met all formal review criteria it needs to, such as the MIB
Doctor, media type and URI type reviews?
All nits appear to be satisfied, with a few minor editorial changes needed.
(1.h) Has the document split its references into normative and
informative? Are there normative references to documents that
are not ready for advancement or are otherwise in an unclear
state? If such normative references exist, what is the
strategy for their completion? Are there normative references
that are downward references, as described in [RFC3967]? If
so, list these downward references to support the Area
Director in the Last Call procedure for them [RFC3967].
The Refs are split and the Normative Refs are stable.
No DownRefs.
(1.i) Has the Document Shepherd verified that the document IANA
consideration section exists and is consistent with the body
of the document? If the document specifies protocol
extensions, are reservations requested in appropriate IANA
registries? Are the IANA registries clearly identified? If
the document creates a new registry, does it define the
proposed initial contents of the registry and an allocation
procedure for future registrations? Does it suggest a
reasonable name for the new registry? See [RFC5226]. If the
document describes an Expert Review process has Shepherd
conferred with the Responsible Area Director so that the IESG
can appoint the needed Expert during the IESG Evaluation?
IANA section exists, making no IANA requests, as is usual in bmwg.
(1.j) Has the Document Shepherd verified that sections of the
document that are written in a formal language, such as XML
code, BNF rules, MIB definitions, etc., validate correctly in
an automated checker?
NA
(1.k) The IESG approval announcement includes a Document
Announcement Write-Up.
Technical Summary
For internetworking devices that perform routing or switching as
their primary function, the likely reduction in traffic-handling
capacity when traffic monitoring is active continues to be a relevant
question many years after it was first asked ("What happens when you
turn on Netflow?").
This document provides a methodology and framework for quantifying
the performance impact of monitoring IP flows on a network device
and exporting this information to a collector. It identifies the rate
at which IP flows are created, expired, and successfully exported
as a new performance metric, in combination with traditional
throughput. The metric is applicable only to devices compliant
with the Architecture for IP Flow Information Export [RFC5470].
The methods are applicable to both internetworking devices that
forward traffic and other devices that simply monitor traffic with
non-intrusive access to transmission facilities.
The Forwarding Plane and Monitoring Plane represent two separate
functional blocks, each with its own performance capability. The
Forwarding Plane handles user data packets and is fully characterised
by the metrics defined by [RFC2544].
The Monitoring Plane handles Flows which reflect the analysed
traffic. The metric for Monitoring Plane performance is Flow Export
Rate, and the benchmark is the Flow Monitoring Throughput.
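To make the relationship between the two terms concrete, here is a minimal
sketch of how a tester might derive the Flow Export Rate metric and the
Flow Monitoring Throughput benchmark from trial results. The function names,
data layout, and trial values are hypothetical illustrations, not taken from
the draft or from any IPFIX implementation; the pass/fail search mirrors the
RFC 2544 style of finding the highest loss-free rate.

```python
# Hypothetical sketch (names and data are illustrative only):
# deriving the Flow Export Rate metric and the Flow Monitoring
# Throughput benchmark from per-trial counts.

def flow_export_rate(exported_records, interval_seconds):
    """Flows successfully exported per second over the interval."""
    if interval_seconds <= 0:
        raise ValueError("interval must be positive")
    return exported_records / interval_seconds

def flow_monitoring_throughput(trials):
    """Highest offered flow rate at which every offered Flow was
    created, expired, and exported without loss (RFC 2544 style).

    `trials` maps offered flow rate (flows/s) to a tuple of
    (offered_flows, exported_flows) for that trial."""
    passing = [rate for rate, (offered, exported) in trials.items()
               if exported == offered]
    return max(passing, default=0)

# Example: three trials at increasing offered flow rates over 60 s.
trials = {
    1000: (60000, 60000),    # all Flows exported: pass
    2000: (120000, 120000),  # pass
    4000: (240000, 231000),  # export loss: fail
}
print(flow_export_rate(120000, 60))        # 2000.0 flows/s
print(flow_monitoring_throughput(trials))  # 2000
```

The point of the sketch is only the separation the draft makes: the
per-trial rate is a metric, while the benchmark is the highest rate
that passes the loss-free criterion.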
Working Group Summary
Quite a few bmwg participants and ipfix participants have given this a look
and now concur with the results.
Examples of Test Implementation and Results were presented
during development, which is compelling evidence of practicality.
There were WGLCs yielding long lists of comments/issues to deal with,
and this was finally accomplished. It took several WGLCs before this version
reached consensus (with a few minor editorial changes).
Document Quality
All would agree that Paul Aitken provided very careful and complete
reviews throughout the development process; he left no stone unturned.
-=-=-=-=-=-=-=-=-=-=-=-
Minor Editorial Points:
Sec3.3
s/each with it's own performance/each with its own performance/
Sec3.4.2, last para
s/The details how/The details of how/
Sec4
s/4. Measurement Set Up/4. Measurement Set-up/
<at least one other header like this, e.g., Sec 4.2>
Sec4.2, last para
OLD
It is therefore possible to run both laboratory and
real deployment configurations, ...
NEW
It is therefore possible to run both non-production and
real deployment configurations in the laboratory,...
Spacing around Section 4.3.5: text is indented too many spaces
(should be 3).
Sec4.8
s/The MTU MUST be recorded/The Flow Export MTU MUST be recorded/
Sec4.9.2 Last para
s/same 100 of Flows twice./same number of Flows twice (100)./
Sec 5.6
<extra blank line in paragraph 2>
Section 6.4 and 6.5
>>> since these are options of 6.3, it makes more sense if
they are numbered 6.3.1 and 6.3.2:
s/6.4/6.3.1/
s/6.5/6.3.2/ everywhere on pages 24 and 25
Sec 7
s/c. all the possible Flow Record fields values/c. all the possible Flow Record field values/
Sec 8
s/Packet per flow/Packets per flow/
and
s/Be required to process/be required to process/