Methodology for Benchmarking IPsec Devices
draft-ietf-bmwg-ipsec-meth-05
Document history
Date | Rev. | By | Action |
---|---|---|---|
2015-10-14 | 05 | (System) | Notify list changed from bmwg-chairs@ietf.org, draft-ietf-bmwg-ipsec-meth@ietf.org to (None) |
2010-07-08 | 05 | (System) | State Changes to Dead from AD is watching by system |
2010-07-08 | 05 | (System) | Document has expired |
2010-07-07 | 05 | Ron Bonica | State Changes to AD is watching from IESG Evaluation::Revised ID Needed by Ron Bonica |
2010-04-08 | 05 | Sean Turner | [Ballot discuss] I am picking up Pasi's DISCUSS on this document. I have reviewed draft-ietf-bmwg-ipsec-meth-05, and have a couple of concerns/questions that I'd like to discuss before recommending approval of the document: - Section 5/9.1/10.1/11.1: These sections suggest that the scope of this document is limited to IPsec devices that also work as ordinary routers, and these benchmarks can't be used with e.g. remote access IPsec VPN gateways (that would not usually support "without IPsec" mode at all). Is this the intent? (If it is, a short explanation in Section 5 would be in order.) - Section 7.6.1: This section requires testing transport mode; would this mean IPsec devices that are specifically intended for gateway use (and thus may not support transport mode at all) cannot be benchmarked by this methodology? - Section 8.2: This text doesn't seem to distinguish between "maximum number of IPsec SAs on a device" and "maximum number of IPsec SAs per IKE_SA/user". The methodology is clearly measuring the latter, which could be very different from the former? - Sections 9.3/9.4/11.2/11.3/11.4: these test methodologies talk about counting the frames to detect packet loss -- but if fragmentation occurs somewhere, the number of frames sent in and the number of frames coming out would be different even without packet loss? - Section 10.3: Since the DUT will encrypt the frames, how would the tester see the tags? - Section 12.1: It seems this test is measuring the average duration of one tunnel setup, but you can't calculate the tunnel setup rate from this value? (it seems with this methodology, the DUT would be mostly sitting idle, and nowhere near its maximum SAs-per-second limit...) (Also applies to 12.2/12.3) - Finally, any changes to address my comments about bmwg-ipsec-term probably require changes in this document, too. |
2010-04-08 | 05 | Sean Turner | [Ballot Position Update] New position, Discuss, has been recorded by Sean Turner |
2009-11-19 | 05 | Cindy Morgan | State Changes to IESG Evaluation::Revised ID Needed from IESG Evaluation by Cindy Morgan |
2009-11-19 | 05 | Cullen Jennings | [Ballot comment] |
2009-11-19 | 05 | Cullen Jennings | [Ballot discuss] In talking to Ron about this - it sounds like in practice only a subset of these tests would be run on a given device. That resolves the concern I had, and I would like to see if we can update the wording in the draft to reflect that. |
2009-11-19 | 05 | Cullen Jennings | [Ballot Position Update] Position for Cullen Jennings has been changed to Discuss from Abstain by Cullen Jennings |
2009-11-19 | 05 | Magnus Westerlund | [Ballot comment] I would say that including a NAT in benchmarking is a problematic issue. First, due to the non-standardized behavior there can be differences in behavior. Secondly, I think it is very important that one is very careful not to hit any NAT-induced performance issues. I think testing baselines for the NAT is very important, and the text should probably be more explicit about the issues. |
2009-11-19 | 05 | Magnus Westerlund | [Ballot Position Update] New position, No Objection, has been recorded by Magnus Westerlund |
2009-11-19 | 05 | Dan Romascanu | [Ballot Position Update] New position, No Objection, has been recorded by Dan Romascanu |
2009-11-19 | 05 | Tim Polk | [Ballot comment] I support Cullen's discuss - the number of recommended combinations seems a serious impediment to anyone actually performing the tests as specified. How many wg participants have actually completed these benchmarks? The wg and technical summaries on the ballot are silent on this point... |
2009-11-19 | 05 | Tim Polk | [Ballot comment] I support Cullen's discuss - the number of recommended combinations seems a serious impediment to anyone actually performing the tests as specified. How many wg participants have actually completed these benchmarks? |
2009-11-19 | 05 | Tim Polk | [Ballot Position Update] New position, No Objection, has been recorded by Tim Polk |
2009-11-18 | 05 | Ross Callon | [Ballot Position Update] New position, No Objection, has been recorded by Ross Callon |
2009-11-18 | 05 | Cullen Jennings | [Ballot comment] I believe that publishing this document will cause more harm than good. By my count it recommends over 4000 configurations that should be tested for each test. And it misses many important configurations such as single DES. Consider a test like 11.1. This is just not practical. If someone sends me the test results for some IPsec devices that meet all the suggestions of this draft, I might change my mind, but currently I don't believe that anyone would reasonably do all these tests or that this set of tests would provide the most reasonable characterization of an IPsec device. For example, the idea of low rates being defined with 0 packet loss would give a very unrealistic view of the performance of many devices. Publishing this will cause RFPs that ask vendors to produce this data. |
2009-11-18 | 05 | Cullen Jennings | [Ballot comment] I believe that publishing this document will cause more harm than good. By my count it recommends over 4000 configurations that should be tested for each test. And it misses many important configurations such as single DES. Consider a test like 11.1. This is just not practical. If someone sends me the test results for some IPsec devices that meet all the suggestions of this draft, I might change my mind, but currently I don't believe that anyone would reasonably do all these tests or that this set of tests would provide the most reasonable characterization of an IPsec device. For example, the idea of low rates being defined with 0 packet loss would give a very unrealistic view of the performance of many devices. Publishing this will cause RFPs that ask vendors to produce this data. |
2009-11-18 | 05 | Cullen Jennings | [Ballot Position Update] New position, Abstain, has been recorded by Cullen Jennings |
2009-11-16 | 05 | Russ Housley | [Ballot comment] The Gen-ART Review by Sean Turner on 17-Oct-2009 suggests some editorial changes: Sec 4: s/in RFC 2119. RFC 2119/in [RFC2119]. [RFC2119] Secs 8.1-11.3: s:/Topology /Topology: Sec 8.1: s/If all packet are/If all packets are Sec 8.1: s/format should reflect/format SHOULD reflect Sec 10.1: s/Reporting Format/Reporting Format: Sec 13.1: s/(timestamp_B).The/(timestamp_B). The |
2009-11-16 | 05 | Russ Housley | [Ballot Position Update] New position, No Objection, has been recorded by Russ Housley |
2009-11-16 | 05 | Amy Vezza | State Changes to IESG Evaluation from IESG Evaluation - Defer by Amy Vezza |
2009-10-23 | 05 | (System) | Removed from agenda for telechat - 2009-10-22 |
2009-10-22 | 05 | Samuel Weiler | Request for Telechat review by SECDIR Completed. Reviewer: Joseph Salowey. |
2009-10-22 | 05 | Cullen Jennings | State Changes to IESG Evaluation - Defer from IESG Evaluation by Cullen Jennings |
2009-10-22 | 05 | Adrian Farrel | [Ballot discuss] idnits seems to not like the way you have handled references. Although the RFC Editor can sort this out, I think you should have a go first. You'll also need a null IANA section. I don't see any IANA email about this I-D. |
2009-10-22 | 05 | Adrian Farrel | [Ballot Position Update] New position, Discuss, has been recorded by Adrian Farrel |
2009-10-22 | 05 | Pasi Eronen | [Ballot discuss] I have reviewed draft-ietf-bmwg-ipsec-meth-05, and have a couple of concerns/questions that I'd like to discuss before recommending approval of the document: - Section 5/9.1/10.1/11.1: These sections suggest that the scope of this document is limited to IPsec devices that also work as ordinary routers, and these benchmarks can't be used with e.g. remote access IPsec VPN gateways (that would not usually support "without IPsec" mode at all). Is this the intent? (If it is, a short explanation in Section 5 would be in order.) - Section 7.6.1: This section requires testing transport mode; would this mean IPsec devices that are specifically intended for gateway use (and thus may not support transport mode at all) cannot be benchmarked by this methodology? - Section 8.2: This text doesn't seem to distinguish between "maximum number of IPsec SAs on a device" and "maximum number of IPsec SAs per IKE_SA/user". The methodology is clearly measuring the latter, which could be very different from the former? - Sections 9.3/9.4/11.2/11.3/11.4: these test methodologies talk about counting the frames to detect packet loss -- but if fragmentation occurs somewhere, the number of frames sent in and the number of frames coming out would be different even without packet loss? - Section 10.3: Since the DUT will encrypt the frames, how would the tester see the tags? - Section 12.1: It seems this test is measuring the average duration of one tunnel setup, but you can't calculate the tunnel setup rate from this value? (it seems with this methodology, the DUT would be mostly sitting idle, and nowhere near its maximum SAs-per-second limit...) (Also applies to 12.2/12.3) - Finally, any changes to address my comments about bmwg-ipsec-term probably require changes in this document, too. |
2009-10-22 | 05 | Pasi Eronen | [Ballot Position Update] New position, Discuss, has been recorded by Pasi Eronen |
2009-10-20 | 05 | Ralph Droms | [Ballot Position Update] New position, No Objection, has been recorded by Ralph Droms |
2009-10-19 | 05 | Lars Eggert | [Ballot comment] Section 7.1.3., paragraph 1: > It is OPTIONAL to perform the tests with TCP as the L4 protocol but > in case this is considered, the TCP traffic is RECOMMENDED to be > stateful. What does "the TCP traffic is RECOMMENDED to be stateful" mean? Section 7.1.4., paragraph 1: > It is RECOMMENDED to test the scenario where IPsec protected traffic > must traverse network address translation (NAT) gateways. This is > commonly referred to as Nat-Traversal and requires UDP encapsulation. Is the idea here to have a pass/fail test or a performance - throughput/latency/etc. - test? (Because NATs vary widely in their behavior, the latter is going to be much more problematic than the former.) |
2009-10-19 | 05 | Lars Eggert | [Ballot Position Update] New position, No Objection, has been recorded by Lars Eggert |
2009-10-16 | 05 | Samuel Weiler | Request for Telechat review by SECDIR is assigned to Joseph Salowey |
2009-10-16 | 05 | Samuel Weiler | Request for Telechat review by SECDIR is assigned to Joseph Salowey |
2009-10-15 | 05 | Ron Bonica | [Ballot Position Update] New position, Yes, has been recorded for Ronald Bonica |
2009-10-15 | 05 | Ron Bonica | Ballot has been issued by Ron Bonica |
2009-10-15 | 05 | Ron Bonica | Created "Approve" ballot |
2009-10-15 | 05 | (System) | Ballot writeup text was added |
2009-10-15 | 05 | (System) | Last call text was added |
2009-10-15 | 05 | (System) | Ballot approval text was added |
2009-10-15 | 05 | Ron Bonica | State Changes to IESG Evaluation from Waiting for Writeup by Ron Bonica |
2009-10-15 | 05 | Ron Bonica | Placed on agenda for telechat - 2009-10-22 by Ron Bonica |
2009-10-14 | 05 | Ron Bonica | State Changes to Waiting for Writeup from AD Evaluation::External Party by Ron Bonica |
2009-10-13 | 05 | Ron Bonica | State Changes to AD Evaluation::External Party from Publication Requested by Ron Bonica |
2009-07-30 | 05 | Cindy Morgan | (1.a) Who is the Document Shepherd for this document? Has the Document Shepherd personally reviewed this version of the document and, in particular, does he or she believe this version is ready for forwarding to the IESG for publication? Al Morton, chair of BMWG, has personally reviewed the documents and will be the document shepherd. Editorial and nit-compliance changes requested by Al were implemented in these versions. The documents are ready for publication (several minor nits have been fixed in these versions). (1.b) Has the document had adequate review both from key WG members and from key non-WG members? Does the Document Shepherd have any concerns about the depth or breadth of the reviews that have been performed? Yes, these documents have developed more or less continuously over the last 6 years, with good working group and external reviewer comments offered and addressed. The recent WGLC went quietly, indicating that the BMWG is now satisfied with the documents (term-11 and meth-04). (1.c) Does the Document Shepherd have concerns that the document needs more review from a particular or broader perspective, e.g., security, operational complexity, someone familiar with AAA, internationalization or XML? No, except that IESG review will be a good final check, of course. As a side note, XML formatting problems held up progress on these drafts for a while, but there are no specifications involving XML. (1.d) Does the Document Shepherd have any specific concerns or issues with this document that the Responsible Area Director and/or the IESG should be aware of? For example, perhaps he or she is uncomfortable with certain parts of the document, or has concerns whether there really is a need for it. In any event, if the WG has discussed those issues and has indicated that it still wishes to advance the document, detail those concerns here. Has an IPR disclosure related to this document been filed? If so, please include a reference to the disclosure and summarize the WG discussion and conclusion on this issue. One of the more controversial issues was whether these drafts should be expanded to cover IKEv2. The WG reached consensus to retain the IKEv1 scope, and a follow-up effort would address IKEv2. This decision was supported by events - it was reported at the IETF-74 BMWG session that all known testing of implementations involves IKEv1 only. (1.e) How solid is the WG consensus behind this document? Does it represent the strong concurrence of a few individuals, with others being silent, or does the WG as a whole understand and agree with it? It's fair to say that WG consensus involves key individuals rather than the entire working group, but that's a byproduct of BMWG membership make-up with diverse work areas. I believe that the entire WG has contributed to discussion on aspects of these drafts, such as the exclusion of "IMIX" traffic patterns (because of limited relevance to benchmarks, and lack of consensus on the specifics - despite widespread use of the concept). (1.f) Has anyone threatened an appeal or otherwise indicated extreme discontent? If so, please summarise the areas of conflict in separate email messages to the Responsible Area Director. (It should be in a separate email because this questionnaire is entered into the ID Tracker.) No. (1.g) Has the Document Shepherd personally verified that the document satisfies all ID nits? (See http://www.ietf.org/ID-Checklist.html and http://tools.ietf.org/tools/idnits/). Boilerplate checks are not enough; this check needs to be thorough. Has the document met all formal review criteria it needs to, such as the MIB Doctor, media type and URI type reviews? Yes and Yes (but see below). (1.h) Has the document split its references into normative and informative? Are there normative references to documents that are not ready for advancement or are otherwise in an unclear state? If such normative references exist, what is the strategy for their completion? Are there normative references that are downward references, as described in [RFC3967]? If so, list these downward references to support the Area Director in the Last Call procedure for them [RFC3967]. The reference sections are split in both drafts, and these are Informational Status drafts - so down-refs are not possible. The terminology has normative references to several Obsolete RFCs: ** Obsolete normative reference: RFC 2393 (Obsoleted by RFC 3173) ** Obsolete normative reference: RFC 2401 (Obsoleted by RFC 4301) ** Obsolete normative reference: RFC 2402 (Obsoleted by RFC 4302, RFC 4305) ** Obsolete normative reference: RFC 2406 (Obsoleted by RFC 4303, RFC 4305) ** Obsolete normative reference: RFC 2407 (Obsoleted by RFC 4306) ** Obsolete normative reference: RFC 2408 (Obsoleted by RFC 4306) ** Obsolete normative reference: RFC 2409 (Obsoleted by RFC 4306) ** Obsolete normative reference: RFC 2547 (Obsoleted by RFC 4364) The Methodology has a normative reference to: [I-D.ietf-ipsec-properties] Krywaniuk, A., "Security Properties of the IPsec Protocol Suite", draft-ietf-ipsec-properties-02 (work in progress), July 2002. This needs to be dealt with, possibly by making it informative, or by deletion. (1.i) Has the Document Shepherd verified that the document IANA consideration section exists and is consistent with the body of the document? If the document specifies protocol extensions, are reservations requested in appropriate IANA registries? Are the IANA registries clearly identified? If the document creates a new registry, does it define the proposed initial contents of the registry and an allocation procedure for future registrations? Does it suggest a reasonable name for the new registry? See [RFC5226]. If the document describes an Expert Review process, has the Shepherd conferred with the Responsible Area Director so that the IESG can appoint the needed Expert during the IESG Evaluation? Neither draft has an IANA section. However, this should be simple to add, since neither document makes a request of IANA (as is typical of BMWG memos; the request for IPv6 address space last year is an exception). (1.j) Has the Document Shepherd verified that sections of the document that are written in a formal language, such as XML code, BNF rules, MIB definitions, etc., validate correctly in an automated checker? N/A (1.k) The IESG approval announcement includes a Document Announcement Write-Up. Please provide such a Document Announcement Write-Up. Recent examples can be found in the "Action" announcements for approved documents. The approval announcement contains the following sections: Technical Summary The BMWG produces two major classes of documents: Benchmarking Terminology documents and Methodology documents. The Terminology documents present the benchmarks and other related terms. The Methodology documents define the procedures required to collect the benchmarks cited in the corresponding Terminology documents. The purpose of these documents is to define terms and methods for benchmarking the performance of IPsec devices. It builds upon the tenets set forth in [RFC1242], [RFC2544], [RFC2285] and other IETF Benchmarking Methodology Working Group (BMWG) documents (used for benchmarking routers and switches). This document seeks to extend these efforts specific to the IPsec paradigm. Working Group Summary Although there were some controversial points during development, all were resolved, and these drafts represent the consensus of the BMWG. Document Quality The Acknowledgements sections list many of the experts who reviewed these drafts. |
2009-07-30 | 05 | Cindy Morgan | Draft Added by Cindy Morgan in state Publication Requested |
2009-07-30 | 05 | Cindy Morgan | [Note]: 'Al Morton (acmorton@att.com) is the document shepherd.' added by Cindy Morgan |
2009-07-28 | 05 | (System) | New version available: draft-ietf-bmwg-ipsec-meth-05.txt |
2009-04-03 | 04 | (System) | New version available: draft-ietf-bmwg-ipsec-meth-04.txt |
2008-02-25 | 03 | (System) | New version available: draft-ietf-bmwg-ipsec-meth-03.txt |
2007-07-11 | 02 | (System) | New version available: draft-ietf-bmwg-ipsec-meth-02.txt |
2006-03-06 | 01 | (System) | New version available: draft-ietf-bmwg-ipsec-meth-01.txt |
2005-10-18 | 00 | (System) | New version available: draft-ietf-bmwg-ipsec-meth-00.txt |