Last Call Review of draft-ietf-bmwg-sdn-controller-benchmark-meth-07

Request Review of draft-ietf-bmwg-sdn-controller-benchmark-meth
Requested rev. no specific revision (document currently at 09)
Type Last Call Review
Team General Area Review Team (Gen-ART) (genart)
Deadline 2018-02-02
Requested 2018-01-19
Authors Bhuvaneswaran Vengainathan, Anton Basil, Mark Tassinari, Vishwas Manral, Sarah Banks
Draft last updated 2018-01-30
Completed reviews Rtgdir Last Call review of -07 by Henning Rogge (diff)
Opsdir Last Call review of -07 by Scott Bradner (diff)
Genart Last Call review of -07 by Stewart Bryant (diff)
Secdir Last Call review of -07 by Russ Housley (diff)
Genart Telechat review of -08 by Stewart Bryant (diff)
Assignment Reviewer Stewart Bryant
State Completed
Review review-ietf-bmwg-sdn-controller-benchmark-meth-07-genart-lc-bryant-2018-01-30
Reviewed rev. 07 (document currently at 09)
Review result Ready with Nits
Review completed: 2018-01-30


I am the assigned Gen-ART reviewer for this draft. The General Area
Review Team (Gen-ART) reviews all IETF documents being processed
by the IESG for the IETF Chair.  Please treat these comments just
like any other last call comments.

For more information, please see the FAQ at


Document: draft-ietf-bmwg-sdn-controller-benchmark-meth-07
Reviewer: Stewart Bryant
Review Date: 2018-01-30
IETF LC End Date: 2018-02-02
IESG Telechat date: Not scheduled for a telechat


This is a well written comprehensive test set for SDN controllers. It could be published as is, but some thought about how to address the issues below might be helpful to the user of this technology.
Major issues: None

Minor issues:

I find the large amount of text on OpenFlow that appears out of the blue in the appendix somewhat strange. The test suite is controller-protocol agnostic, so I wonder why so much text is devoted to this specific SDN control protocol. If the appendices are there by way of illustrative examples of packet exchanges, it would be useful to the reader to point to them from the measurement text.

Something I am slightly surprised by is the lack of statistical sophistication. The average is a very crude metric that gives no information on the distribution of the results.
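To illustrate the point, the sketch below (a hypothetical example; the sample values are invented and not taken from the draft) shows two sets of trial results with identical averages but very different distributions, so a report carrying only the mean would make them look equivalent:

```python
# Two invented latency sample sets (ms) with the same mean but
# different distributions: reporting only the average hides the
# difference that the median and 95th percentile reveal.
from statistics import mean, median, quantiles

stable = [102, 98, 101, 99, 100, 100, 100, 101, 99, 100]
bursty = [60, 60, 60, 60, 60, 60, 60, 60, 260, 260]

for name, samples in (("stable", stable), ("bursty", bursty)):
    p95 = quantiles(samples, n=20)[-1]  # 19th of 19 cut points = 95th pct
    print(f"{name}: mean={mean(samples):.1f} "
          f"median={median(samples):.1f} p95={p95:.1f}")
```

Both sets print a mean of 100.0, while the medians and 95th percentiles diverge sharply, which is the information a mean-only report discards.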

I imagine that specifying graphs and tables is now ingrained in this aspect of the industry, but I would have expected the results to be specified in some machine-readable format, such as XML, for input to a database, rather than in the human-readable format that is hard-coded into this specification.
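As a sketch of what such a machine-readable record might look like (the element and attribute names below are invented for illustration and are not defined by the draft):

```python
# Hypothetical XML result record for one benchmark test; the schema
# (element/attribute names) is an assumption made for this sketch.
import xml.etree.ElementTree as ET

result = ET.Element("benchmark-result", controller="example-controller")
test = ET.SubElement(result, "test", name="topology-discovery-time")
ET.SubElement(test, "unit").text = "milliseconds"
ET.SubElement(test, "trial", number="1").text = "120"
ET.SubElement(test, "trial", number="2").text = "118"

xml_text = ET.tostring(result, encoding="unicode")
print(xml_text)
```

A record of this shape could be loaded straight into a database or comparison tool, while the graphs and tables the draft mandates could still be generated from it for human consumption.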

Nits/editorial comments: 


   This document defines the methodologies for benchmarking control
   plane performance of SDN controllers. Terminology related to
   benchmarking SDN controllers is described in the companion
   terminology document. 

SB> It would be convenient for the reader to provide the reference to, or name of,
SB> the companion document - the twin of the comment in the other review.

SB> It would also be useful to include such a reference early in the main text.

4. Test Considerations

4.1. Network Topology

   The test cases SHOULD use Leaf-Spine topology with at least 1
   Network Device in the topology for benchmarking. 
SB> Leaf-Spine could use a reference. In Fig. 2 I am not sure this is a Spine-Leaf
SB> topology rather than a linear sequence of nodes. There is a better Spine-Leaf
SB> diagram later in the document, and it would be useful to the reader to
SB> forward-reference it.


   The test traffic
   generators TP1 and TP2 SHOULD be connected to the first and the last
   leaf Network Device.

SB> I am fairly sure I know what "first" and "last" mean here, but the meaning should be called out.



   5. Stop the trial when the discovered topology information matches
     the deployed network topology, or when the discovered topology
     information return the same details for 3 consecutive queries.

SB> What do you report in the latter case?