Last Call Review of draft-ietf-slim-negotiating-human-language-08

Request Review of draft-ietf-slim-negotiating-human-language
Requested rev. no specific revision (document currently at 24)
Type Last Call Review
Team Ops Directorate (opsdir)
Deadline 2017-02-20
Requested 2017-02-06
Authors Randall Gellens
Draft last updated 2017-03-06
Completed reviews Opsdir Last Call review of -08 by Mahesh Jethanandani (diff)
Secdir Last Call review of -22 by Taylor Yu (diff)
Genart Last Call review of -06 by Dale Worley (diff)
Genart Last Call review of -19 by Dale Worley (diff)
Assignment Reviewer Mahesh Jethanandani 
State Completed
Review review-ietf-slim-negotiating-human-language-08-opsdir-lc-jethanandani-2017-03-06
Reviewed rev. 08 (document currently at 24)
Review result Has Nits
Review completed: 2017-03-06


I have reviewed this document as part of the Operational directorate’s ongoing effort to review all IETF documents being processed by the IESG.  These comments were written with the intent of improving the operational aspects of the IETF drafts. Comments that are not addressed in last call may be
included in AD reviews during the IESG review.  Document editors and WG chairs should treat these comments just like any other last call comments.

Document reviewed:  draft-ietf-slim-negotiating-human-language-08


Ready with comments.


This document adds new SDP media-level attributes so that when establishing interactive communication sessions ("calls"), it is possible to negotiate (communicate and match) the caller's language and media needs with the capabilities of the called party.
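To illustrate, a session offer might carry per-media language attributes along these lines (a sketch only; the exact attribute names and syntax are those I recall from this revision of the draft and may differ in later revisions):

```
m=audio 49170 RTP/AVP 0
a=humintlang-send:en
a=humintlang-recv:en
m=video 51372 RTP/AVP 31
a=humintlang-send:ase
a=humintlang-recv:ase
```

Here the caller indicates spoken English on the audio stream and American Sign Language on the video stream, and the answerer matches or rejects per media section.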

The document is short and easy to read, and it appears to have considered many aspects of negotiating a common human language or capability. This review looks at the document more from an operator or management perspective.

Operational considerations:

From an operations perspective, there may be a need to troubleshoot the interface that sets up the negotiated human language. Identifying a consistent set of information that should be counted by both parties will go a long way toward debugging a problem. For example, it would be helpful to start by collecting how many requests were made, how many found a language or medium in common, and how many were rejected because no common match was found.
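The counters suggested above could be kept symmetrically by both endpoints. A minimal sketch (the counter names and structure are illustrative, not taken from the draft):

```python
from dataclasses import dataclass

@dataclass
class NegotiationCounters:
    """Hypothetical per-endpoint counters for language negotiation."""
    requests: int = 0   # offers that carried language attributes
    matched: int = 0    # a common language/medium was found
    rejected: int = 0   # no common match was found

    def record(self, matched: bool) -> None:
        """Count one negotiation attempt and its outcome."""
        self.requests += 1
        if matched:
            self.matched += 1
        else:
            self.rejected += 1

counters = NegotiationCounters()
counters.record(matched=True)
counters.record(matched=False)
print(counters.requests, counters.matched, counters.rejected)  # prints: 2 1 1
```

Comparing such counters on both sides of the interface would quickly localize whether a failure occurred in sending, receiving, or matching.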

Management considerations:

The old adage says, “Anything that can be configured can also be misconfigured,” unless misconfiguration is made less likely by providing default values, modes, or parameters. These could be defined using a YANG data model.

I assume that the default behavior on receiving an SDP attribute that one does not support is to discard that particular attribute, not the whole message, when it is combined with other attributes. Is this documented somewhere? If not, what does the deployment scenario look like, particularly with existing solutions?
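The usual SDP convention is that an unrecognized attribute line is simply ignored rather than causing the whole body to be rejected. A minimal sketch of that receiver behavior (the supported-attribute set and sample body are illustrative assumptions):

```python
# Sketch: drop only unrecognized "a=" lines, keep the rest of the SDP body.
SUPPORTED = {"rtpmap", "humintlang-send", "humintlang-recv"}  # hypothetical set

def filter_attributes(sdp: str) -> str:
    """Return the SDP body with unsupported attribute lines discarded."""
    kept = []
    for line in sdp.splitlines():
        if line.startswith("a="):
            name = line[2:].split(":", 1)[0]
            if name not in SUPPORTED:
                continue  # discard just this attribute, not the message
        kept.append(line)
    return "\n".join(kept)

body = "m=audio 49170 RTP/AVP 0\na=humintlang-send:en\na=unknown-attr:x"
print(filter_attributes(body))
```

Under this behavior, legacy endpoints that predate the new attributes would silently skip them, which is the interoperability question the deployment text should confirm.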

What is the impact on network operations if, for example, the translator or relay agent fails? How would that affect the negotiation?

Also, what are the tests, both active and passive, for correct operation? Are counters maintained for both successful and failed negotiations? This goes back to the question of which counters are being maintained. Such counters should include values that enable isolation of faults: for example, if negotiation fails, which more specific counters identify what part of the negotiation failed?

Fault Management:

In addition to collecting information on how the negotiation is working, it is important to be able to propagate both fault and health indicators to a management application. Such information needs to be documented.

Accounting Management:

Finally, it is always helpful to collect utilization information from a capacity-planning, trend-analysis, cost-allocation, auditing, and billing perspective.


A run of idnits came out clean.