Methodology for Benchmarking IPsec Devices
draft-ietf-bmwg-ipsec-meth-05
Discuss:
Yes: (Ron Bonica)
No Objection: (Dan Romascanu), (Ralph Droms), (Ross Callon), (Tim Polk)
No Record: Andy Newton, Deb Cooley, Erik Kline, Gorry Fairhurst, Gunter Van de Velde, Jim Guichard, Ketan Talaulikar, Mahesh Jethanandani, Mike Bishop, Mohamed Boucadair, Orie Steele, Paul Wouters, Roman Danyliw, Éric Vyncke
Summary: Needs a YES.
Adrian Farrel Former IESG member
Discuss
Discuss
[Treat as non-blocking comment]
(2009-10-22)
Unknown
idnits does not seem to like the way you have handled references. Although the RFC Editor can sort this out, I think you should have a go first. You'll also need a null IANA section. I don't see any IANA email about this I-D.
Cullen Jennings Former IESG member
(was Abstain)
Discuss
Discuss
[Treat as non-blocking comment]
(2009-11-19)
Unknown
In talking to Ron about this, it sounds like in practice only a subset of these tests would be run on a given device. That resolves the concern I had, and I would like to see if we can update the wording in the draft to reflect that.
Pasi Eronen Former IESG member
Discuss
Discuss
[Treat as non-blocking comment]
(2009-10-22)
Unknown
I have reviewed draft-ietf-bmwg-ipsec-meth-05, and have a couple of concerns/questions that I'd like to discuss before recommending approval of the document:
- Section 5/9.1/10.1/11.1: These sections suggest that the scope of this document is limited to IPsec devices that also work as ordinary routers, and that these benchmarks can't be used with e.g. remote access IPsec VPN gateways (which would not usually support a "without IPsec" mode at all). Is this the intent? (If it is, a short explanation in Section 5 would be in order.)
- Section 7.6.1: This section requires testing transport mode; would this mean IPsec devices that are specifically intended for gateway use (and thus may not support transport mode at all) cannot be benchmarked by this methodology?
- Section 8.2: This text doesn't seem to distinguish between "maximum number of IPsec SAs on a device" and "maximum number of IPsec SAs per IKE_SA/user". The methodology is clearly measuring the latter, which could be very different from the former.
- Sections 9.3/9.4/11.2/11.3/11.4: These test methodologies talk about counting the frames to detect packet loss, but if fragmentation occurs somewhere, the number of frames sent in and the number of frames coming out would differ even without packet loss.
- Section 10.3: Since the DUT will encrypt the frames, how would the tester see the tags?
- Section 12.1: It seems this test is measuring the average duration of one tunnel setup, but you can't calculate the tunnel setup rate from this value. (With this methodology, the DUT would be mostly sitting idle, nowhere near its maximum SAs-per-second limit.) This also applies to Sections 12.2/12.3.
- Finally, any changes to address my comments about bmwg-ipsec-term will probably require changes in this document, too.
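Pasi's fragmentation point can be illustrated with a back-of-the-envelope sketch (the helper name and all sizes below are illustrative assumptions, not from the draft): once ESP tunnel-mode overhead pushes a frame past the path MTU, the DUT emits more frames than it received even with zero loss, so raw frame counting overstates loss.

```python
import math

def expected_output_frames(n_frames, payload_size, esp_overhead, mtu):
    """Frames emitted per input frame after ESP encapsulation, assuming
    IPv4 fragmentation with a 20-byte header and an 8-byte-aligned
    fragment payload (1480 bytes for a 1500-byte MTU, already aligned)."""
    encapsulated = payload_size + esp_overhead
    if encapsulated <= mtu:
        return n_frames  # no fragmentation: counts match
    frags_per_frame = math.ceil((encapsulated - 20) / (mtu - 20))
    return n_frames * frags_per_frame

# 1000 input frames of 1450 bytes with ~73 bytes of tunnel-mode ESP
# overhead exceed a 1500-byte MTU, so each frame leaves as two fragments:
print(expected_output_frames(1000, 1450, 73, 1500))  # -> 2000
```

With 1400-byte frames the same setup produces no fragmentation and the counts match, which is exactly why a frame-count-based loss measure is MTU-sensitive.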
Sean Turner Former IESG member
Discuss
Discuss
[Treat as non-blocking comment]
(2010-04-08)
Unknown
I am picking up Pasi's DISCUSS on this document. I have reviewed draft-ietf-bmwg-ipsec-meth-05, and have a couple of concerns/questions that I'd like to discuss before recommending approval of the document:
- Section 5/9.1/10.1/11.1: These sections suggest that the scope of this document is limited to IPsec devices that also work as ordinary routers, and that these benchmarks can't be used with e.g. remote access IPsec VPN gateways (which would not usually support a "without IPsec" mode at all). Is this the intent? (If it is, a short explanation in Section 5 would be in order.)
- Section 7.6.1: This section requires testing transport mode; would this mean IPsec devices that are specifically intended for gateway use (and thus may not support transport mode at all) cannot be benchmarked by this methodology?
- Section 8.2: This text doesn't seem to distinguish between "maximum number of IPsec SAs on a device" and "maximum number of IPsec SAs per IKE_SA/user". The methodology is clearly measuring the latter, which could be very different from the former.
- Sections 9.3/9.4/11.2/11.3/11.4: These test methodologies talk about counting the frames to detect packet loss, but if fragmentation occurs somewhere, the number of frames sent in and the number of frames coming out would differ even without packet loss.
- Section 10.3: Since the DUT will encrypt the frames, how would the tester see the tags?
- Section 12.1: It seems this test is measuring the average duration of one tunnel setup, but you can't calculate the tunnel setup rate from this value. (With this methodology, the DUT would be mostly sitting idle, nowhere near its maximum SAs-per-second limit.) This also applies to Sections 12.2/12.3.
- Finally, any changes to address my comments about bmwg-ipsec-term will probably require changes in this document, too.
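The Section 12.1 objection (average setup duration does not give setup rate) can be sketched numerically. The functions and figures below are illustrative assumptions, not measurements from the draft: serial one-at-a-time measurement leaves the DUT idle between setups, so its reciprocal understates what the device could sustain with concurrent IKE exchanges.

```python
def serial_rate(avg_setup_seconds):
    """Apparent tunnels/second when tunnels are set up strictly one at a time."""
    return 1.0 / avg_setup_seconds

def concurrent_rate(avg_setup_seconds, parallel_exchanges):
    """Rough upper bound with N IKE exchanges in flight (ignores contention
    and crypto-engine saturation, which a real benchmark must find)."""
    return parallel_exchanges / avg_setup_seconds

# With an illustrative 50 ms per setup, serial measurement suggests about
# 20 tunnels/s, yet with 10 concurrent exchanges the same DUT could
# approach ten times that before hitting its real SAs-per-second limit.
print(serial_rate(0.05), concurrent_rate(0.05, 10))
```

The point is that 1/average-latency and maximum setup rate are different benchmarks, and only the latter exercises the DUT near its limit.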
Ron Bonica Former IESG member
Yes
Yes
()
Unknown
Dan Romascanu Former IESG member
No Objection
No Objection
()
Unknown
Lars Eggert Former IESG member
No Objection
No Objection
(2009-10-19)
Unknown
Section 7.1.3., paragraph 1:
> It is OPTIONAL to perform the tests with TCP as the L4 protocol but
> in case this is considered, the TCP traffic is RECOMMENDED to be
> stateful.

What does "the TCP traffic is RECOMMENDED to be stateful" mean?

Section 7.1.4., paragraph 1:
> It is RECOMMENDED to test the scenario where IPsec protected traffic
> must traverse network address translation (NAT) gateways. This is
> commonly referred to as Nat-Traversal and requires UDP encapsulation.

Is the idea here to have a pass/fail test or a performance (throughput/latency/etc.) test? (Because NATs vary widely in their behavior, the latter is going to be much more problematic than the former.)
Magnus Westerlund Former IESG member
No Objection
No Objection
(2009-11-19)
Unknown
I would say that including a NAT in benchmarking is problematic. First, due to the non-standardized behavior, NATs can differ from one another. Secondly, I think it is very important to be careful not to hit any NAT-induced performance issues. I think testing baselines for the NAT is very important, and the text should probably be more explicit about these issues.
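The baselining Magnus suggests can be sketched as simple bookkeeping (the function name and throughput figures are illustrative assumptions): measure throughput with no NAT, with the NAT alone, and with NAT plus IPsec, then attribute only the residual degradation to IPsec processing.

```python
def split_throughput_loss(clear_pps, nat_only_pps, nat_plus_ipsec_pps):
    """Split the observed throughput loss (packets/second) into the part
    induced by the NAT itself and the part attributable to IPsec."""
    nat_loss = clear_pps - nat_only_pps
    ipsec_loss = nat_only_pps - nat_plus_ipsec_pps
    return nat_loss, ipsec_loss

# Illustrative figures: 100k pps clear, 95k pps through the NAT alone,
# 60k pps with NAT-T IPsec. Without the NAT-only baseline, all 40k pps
# of loss would wrongly be charged to the IPsec implementation.
print(split_throughput_loss(100_000, 95_000, 60_000))  # -> (5000, 35000)
```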
Ralph Droms Former IESG member
No Objection
No Objection
()
Unknown
Ross Callon Former IESG member
No Objection
No Objection
()
Unknown
Russ Housley Former IESG member
No Objection
No Objection
(2009-11-16)
Unknown
The Gen-ART Review by Sean Turner on 17-Oct-2009 suggests some editorial changes:
- Sec 4: s/in RFC 2119. RFC 2119/in [RFC2119]. [RFC2119]
- Secs 8.1-11.3: s/Topology /Topology:
- Sec 8.1: s/If all packet are/If all packets are
- Sec 8.1: s/format should reflect/format SHOULD reflect
- Sec 10.1: s/Reporting Format/Reporting Format:
- Sec 13.1: s/(timestamp_B).The/(timestamp_B). The
Tim Polk Former IESG member
No Objection
No Objection
(2009-11-19)
Unknown
I support Cullen's discuss - the number of recommended combinations seems a serious impediment to anyone actually performing the tests as specified. How many wg participants have actually completed these benchmarks? The wg and technical summaries on the ballot are silent on this point...