Document Title: Methodology for benchmarking MPLS protection mechanisms
Filename: draft-ietf-bmwg-protection-meth-09.txt
Intended Status: Informational
(1.a) Who is the Document Shepherd for this document? Has the
Document Shepherd personally reviewed this version of the
document and, in particular, does he or she believe this
version is ready for forwarding to the IESG for publication?
Al Morton, who has reviewed the memo. It is ready (+/- a few minor edits,
see the end of this write-up).
(1.b) Has the document had adequate review both from key WG members
and from key non-WG members? Does the Document Shepherd have
any concerns about the depth or breadth of the reviews that
have been performed?
This memo has been developed over a period of 8 years, beginning with the
merger of two independent efforts, then narrowing of the scope to become a
working group draft, followed by many reviews and WGLCs. No concerns
about the current degree of review and feedback.
(1.c) Does the Document Shepherd have concerns that the document
needs more review from a particular or broader perspective,
e.g., security, operational complexity, someone familiar with
AAA, internationalization or XML?
No.
(1.d) Does the Document Shepherd have any specific concerns or
issues with this document that the Responsible Area Director
and/or the IESG should be aware of? For example, perhaps he
or she is uncomfortable with certain parts of the document, or
has concerns whether there really is a need for it. In any
event, if the WG has discussed those issues and has indicated
that it still wishes to advance the document, detail those
concerns here. Has an IPR disclosure related to this document
been filed? If so, please include a reference to the
disclosure and summarize the WG discussion and conclusion on
this issue.
This memo follows the referenced Terminology RFC as expected,
although some of the terms agreed by the WG still feel awkward to me
(e.g., use of "Failover Event" in cases where "Failure" would do).
No IPR disclosures have been filed.
(1.e) How solid is the WG consensus behind this document? Does it
represent the strong concurrence of a few individuals, with
others being silent, or does the WG as a whole understand and
agree with it?
The WG reached consensus on this memo years ago, but it was agreed to
hold this draft until the IGP-Dataplane Benchmarking was completed.
Another WGLC was held in December 2011, and there were no additional
comments.
(1.f) Has anyone threatened an appeal or otherwise indicated extreme
discontent? If so, please summarise the areas of conflict in
separate email messages to the Responsible Area Director. (It
should be in a separate email because this questionnaire is
entered into the ID Tracker.)
No.
(1.g) Has the Document Shepherd personally verified that the
document satisfies all ID nits? (See the Internet-Drafts Checklist
and http://tools.ietf.org/tools/idnits/). Boilerplate checks are
not enough; this check needs to be thorough. Has the document
met all formal review criteria it needs to, such as the MIB
Doctor, media type and URI type reviews?
The nits check calls out some of the same reference issues mentioned in
the Editorial comments, attached below.
(1.h) Has the document split its references into normative and
informative? Are there normative references to documents that
are not ready for advancement or are otherwise in an unclear
state? If such normative references exist, what is the
strategy for their completion? Are there normative references
that are downward references, as described in [RFC3967]? If
so, list these downward references to support the Area
Director in the Last Call procedure for them [RFC3967].
The references are split, and there are no DownRefs.
Many of the Informative References provide essential terminology definitions,
and should therefore be made Normative.
(1.i) Has the Document Shepherd verified that the document IANA
consideration section exists and is consistent with the body
of the document? If the document specifies protocol
extensions, are reservations requested in appropriate IANA
registries? Are the IANA registries clearly identified? If
the document creates a new registry, does it define the
proposed initial contents of the registry and an allocation
procedure for future registrations? Does it suggest a
reasonable name for the new registry? See [RFC5226]. If the
document describes an Expert Review process has Shepherd
conferred with the Responsible Area Director so that the IESG
can appoint the needed Expert during the IESG Evaluation?
IANA section exists, making no IANA requests, as is usual in bmwg.
(1.j) Has the Document Shepherd verified that sections of the
document that are written in a formal language, such as XML
code, BNF rules, MIB definitions, etc., validate correctly in
an automated checker?
NA
(1.k) The IESG approval announcement includes a Document
Announcement Write-Up.
Technical Summary
Service providers and testing organizations need to benchmark the
performance of network protection mechanisms. The BMWG took up this work
and defined a general terminology and set of benchmarks in RFC 6414.
This memo describes the methodology for benchmarking MPLS Protection
mechanisms for link and node protection. It provides test methodologies
and test setups for measuring failover times while considering all
dependencies that might impact the recovery of real-time applications
carried in MPLS-based traffic engineered tunnels.
The test procedures in this document cover both local and remote failure
scenarios, providing comprehensive benchmarking and evaluating failover
performance independently of the failure detection technique.
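The failover time benchmark described above is commonly derived from packet
loss observed at the offered load, per the RFC 6414 definitions. A minimal
sketch of that calculation follows; the function name and the example values
are illustrative assumptions, not taken from the draft:

```python
# Sketch of the packet-loss-based failover time calculation assumed
# by this methodology (per the RFC 6414 benchmark definitions).
# Function name and example numbers are illustrative only.

def failover_time_seconds(packets_lost: int, offered_load_pps: float) -> float:
    """Failover time = packets lost during the event / offered load (pps)."""
    if offered_load_pps <= 0:
        raise ValueError("offered load must be positive")
    return packets_lost / offered_load_pps

# Example: 5000 packets lost at an offered load of 100,000 pps
# corresponds to a 50 ms failover time.
print(failover_time_seconds(5000, 100_000))  # 0.05
```

This is why the methodology insists on a known, constant offered load
(Throughput, see Sec 7.1): the loss-to-time conversion is only valid if
the per-second rate is stable across the failover event.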
Working Group Summary
This memo has been developed over a period of 8 years, beginning with the
merger of two independent efforts, then narrowing of the scope to become a
working group draft, followed by many reviews and WGLCs.
The WG reached consensus on this memo years ago, but it was agreed to
hold this draft until the IGP-Dataplane Benchmarking was completed.
Another WGLC was held in December 2011, and there were no additional
comments.
Document Quality
The reviews and suggestions of Jean-Philippe Vasseur, Curtis Villamizar, and
Bhavani Parise are acknowledged.
-=-=-=-=-=-=-=-=-=-=-=- MINOR EDITORIAL COMMENTS -=-=-=-=-=-=-=-=-=-=-=-
Abstract and everywhere else:
s/draft/document/
s/[TERM-ID]/[RFC6414]/
Sec 1, 5th para
s/assurance/continuity/
Sec 1, 7th para
s/General Model/General Model (step 3b below)/
Sec 2, 3rd para (BFD mentioned in many places, but out of scope)
s/this document/the test procedures, but mentioned in discussion sections/
Sec 3,
These may not be Primary References, should be replaced with Primary RFC:
Out-of-order Packet [Ref.[Po06], section 3.3.2]
Duplicate Packet [Ref.[Po06], section 3.3.3]
Sec 4
s/stack is dependent of/stack is dependent on/
Spell out PLR (Point of Local Repair) at first use, which is here:
(2) # of remaining hops of the primary tunnel from the PLR
Sec 5.7, 2nd para (very redundantly worded)
Suggest rewording along these lines:
... At least 16 flows should be used, and more if possible. Prefix-
dependency behaviors are key in IP, and tests with route-specific
flows spread across the routing table will reveal this dependency.
Generating traffic to all of the prefixes reachable by the protected
tunnel in a Round-Robin fashion (traffic destined to all the
prefixes, but one prefix at a time in a cyclic manner) is not
recommended: if many prefixes are reachable through the LSP, the
interval between two packets destined to the same prefix may be
large and comparable to the failover time being measured, which
prevents an accurate failover measurement.
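The arithmetic behind this caution can be made concrete. With N prefixes
served one at a time in Round-Robin order at a fixed packet rate, each
prefix sees a packet only every N/rate seconds. The sketch below uses
hypothetical numbers to show how quickly that interval grows to the same
order as the failover time being measured:

```python
# Illustrative arithmetic only: per-prefix inter-packet interval under
# Round-Robin generation across many prefixes. Numbers are hypothetical.

def per_prefix_interval_s(num_prefixes: int, rate_pps: float) -> float:
    """Seconds between two packets to the same prefix under Round-Robin."""
    if rate_pps <= 0:
        raise ValueError("rate must be positive")
    return num_prefixes / rate_pps

# 10,000 prefixes at 100,000 pps -> 100 ms between packets per prefix,
# larger than a typical sub-50 ms FRR failover target, so the loss
# sample cannot resolve the failover event for any single prefix.
print(per_prefix_interval_s(10_000, 100_000))  # 0.1
```

Spreading concurrent route-specific flows across the table, as the
suggested wording above recommends, keeps the per-prefix sampling interval
well below the failover time.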
Sec 7.1
spell out "pps" at first use.
Sec 7.1.1 (and 7.1.2 and 7.1.3)
modify steps as follows:
6. Send the required MPLS traffic load over the primary LSP
to achieve the Throughput supported by the DUT (determined using
section X of RFC 2544).
...
9. Verify that the offered load gets mapped to the backup tunnel
and measure the Additive Backup Delay (as defined in section YY
of RFC 6414).
...
12. Record the final Throughput, which corresponds to the offered load that
will be used for the Headend PLR failover test cases.
Sec 7.1.2 only
s/of the 9 from section 6/of the 8 from section 6/
Sec 7.4 and 7.5, Procedure
4. Verify Fast Reroute protection. << is enabled?
Sec 8
s/recommended/RECOMMENDED/
s/Offered Load/Offered Load (Throughput)/
References:
Because these refs provide essential terms and definitions,
they should be normative:
[IGP-METH]
This is now RFC 6412; the reference should be updated accordingly:
Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology
for Benchmarking Link-State IGP Data Plane Route
Convergence", draft-ietf-bmwg-igp-dataplane-conv-term-23
(work in progress), February 2011.
[Br91] Bradner, S., "Benchmarking terminology for network
interconnection devices", RFC 1242, July 1991.
[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices", RFC 2544, March 1999.
[MPLS-FRR-EXT]
Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
May 2005.
[MPLS-FWD] Akhter, A., Asati, R., and C. Pignataro, "MPLS Forwarding
Benchmarking Methodology for IP Flows", RFC 5695,
November 2009.
[RFC6414] Papneja, R., Poretsky, S., Vapiwala, S., and J. Karthik,
"Benchmarking Terminology for Protection Performance",
RFC 6414, October 2011.