IESG Narrative Minutes
Narrative Minutes of the IESG Teleconference on 2015-02-19. These are not an official record of the meeting.
Narrative scribe: John Leslie and Susan Hares (The scribe was sometimes uncertain who was speaking.)
Corrections from: John
1 Administrivia
2. Protocol Actions
2.1 WG Submissions
2.1.1 New Items
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
2.1.2 Returning Items
Telechat:
2.2 Individual Submissions
2.2.1 New Items
2.2.2 Returning Items
2.3 Status Changes
2.3.1 New Items
2.3.2 Returning Items
3. Document Actions
3.1 WG Submissions
3.1.1 New Items
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
3.1.2 Returning Items
Telechat:
3.2 Individual Submissions Via AD
3.2.1 New Items
3.2.2 Returning Items
3.3 Status Changes
3.3.1 New Items
3.3.2 Returning Items
3.4 IRTF and Independent Submission Stream Documents
3.4.1 New Items
Telechat:
3.4.2 Returning Items
1225 EST break
1230 EST back
4 Working Group Actions
4.1 WG Creation
4.1.1 Proposed for IETF Review
Telechat::
Telechat::
4.1.2 Proposed for Approval
4.2 WG Rechartering
4.2.1 Under evaluation for IETF Review
4.2.2 Proposed for Approval
5. IAB News We can use
6. Management Issues
Telechat::
Telechat::
Telechat::
Telechat::
Telechat::
7. Agenda Working Group News
1246 EST Adjourned
(at 2015-02-19 07:31:59 PST)
draft-ietf-mpls-seamless-mcast
Apologies, I didn't really have time to read and understand this, but I got triggered by the mention of "thousands of PEs" in the intro and wonder if we think the existing security mechanisms referred to in the security considerations would scale appropriately for such cases. Am I worrying over nothing or is there maybe a bit more work (or text) needed to secure such a setup?
For the Global Administrator field, it would be really nice to specify that the IP address needs to be routeable and within what scope (area, AS, inter-AS). I think the answer is just AS. This comes up because a recommendation is to use a router's loopback address - and frequently those are deliberately not routeable outside the AS.
draft-ietf-ccamp-general-constraint-encode
Some editorial points, as flagged by Jouni in his OPS-DIR review Editorials (I use the idnits line numbering on version 17): * line 124: WSON is never expanded. It might be obvious for the authors but expanding the acronym on the first use would be nice. * Line 1157 [Switch] G. Bernstein, Y. Lee, A. Gavler, J. Martensson, " Modeling ^^^ * Line 303: The "Switching Cap" gets used as a short name for "Switching Capability" but that is not described anywhere. One the other hand for "Connectivity" a short name "Conn" is described. * Line 355: The "RstType" gets introduced as a short name for "RestrictionType". * Line 344: RestrictType ^^^ * Line 708: "Note that that.." ^^^^^^^^^^ * Line 742: "..Num Label bits" ^^^ 's' missing
Thank you for addressing the editorial nits raised in the SecDir review. https://www.ietf.org/mail-archive/web/secdir/current/msg05398.html
draft-ietf-drinks-spp-framework
When Martin and I chatted about this draft, he was leaning toward a Discuss on the use of the phrase "transport protocol" in this draft. I would have supported that, but wanted to offer two other data points (in the great tradition of TSV, we argue with people even when we agree with them). We are seeing above-layer-four protocols referred to as "transport protocols" in many places. A much-previous IESG used the word "substrate" in http://tools.ietf.org/html/bcp56, "On the use of HTTP as a Substrate". If you wanted to switch terms to "substrate", I'd be fine with that, but I'm not sure that's a commonly understood term of art these days. So, a concrete suggestion - you get all the way through Section 4 before you uncloak this text: 4.11. Mandatory Transport At the time of this writing, a choice of transport protocol has been provided in SPP Protocol over SOAP document. To encourage interoperability, the SPPF server MUST provide support for this transport protocol. With time, it is possible that other transport layer choices may surface that agree with the requirements discussed above. Perhaps you could move this to the front of the line early in Section 4, and add a few words like this: None of the existing transport protocols carried directly over IP, appearing as "Protocol" in IPv4 headers or "Next Header" in IPv6 headers, meet the requirements for a "transport" listed in this section. One other quibble about basic terminology. I apologize for spending the last three days talking about IRRs at NANOG 63, but I'd think "Registry" with no qualifier meant something like an IRR in common usage. Would it be possible for you to characterize "registries" with an adjective on first use, in Section 1? I'm not asking for a wholesale terminology swap, of course. 1. Introduction Service providers and enterprises use routing databases known as registries to make session routing decisions for Voice over IP, SMS and MMS traffic exchanges. 
This document is narrowly focused on the provisioning framework for these registries. This framework prescribes a way for an entity to provision session-related data into a Registry. The data being provisioned can be optionally shared with other participating peering entities. The requirements and use cases driving this framework have been documented in [RFC6461].
- Figure 2: What is "rant" here? I don't see that explained. I guess registrant but had to wait for 5.1 to see that. - 6.2, p20, para 1: s/Identity/Identifier/ here? - 9.5: That's a surprise and I bet isn't met by any reasonable protocol.
I have a number of comments and one big near DISCUSS point: The definition of your meaning of " transport protocols" is stated just in Section 4.11 and you mean for instance SOAP. However, SOAP is not a transport protocol in the sense as the rest of the world AFAIK is using the term transport protocol. A transport protocol is a layer 4 protocol and not something that is running on top. Can you please change your terminology? Otherwise, all my points below become a DISCUSS, as your requirements basically rule out transport protocols to run over. - Section "4.4. Authentication" authenticated SPP Client is a Registrar. Therefore, the SPPF transport protocol MUST provide means for an SPPF server to authenticate an SPPF Client. This MUST requirement basically lets you without any transport protocol choice left. None to me known transport protocol is supporting the authentication between client and server. Unless you will wait for TCPINC. Perhaps you mean this: "Therefore, SPPF MUST leverage appropriate mechanisms provided by underlying protocol layers for an SPPF server to authenticate an SPPF Client". This will allow to use TLS which is not a transport protocol, but running on top of it. In case you have a different defintion of transport protocol, it would be good to state this. - Section "4.6. Confidentiality and Integrity" Therefore, the transport protocol MUST provide means for data integrity protection. Similar discuss to the point above: None of the IETF transport protocols is providing means for data integrity protection. So you won't ge too far. - Section "4.9. Request and Response Correlation": Same as the ones before: A transport protocol suitable for SPPF MUST allow responses to be correlated with requests. TCP, UDP and SCTP will not offer this. In Section "4.2. Request and Response Model" Therefore, a transport protocol for SPPF MUST follow the request- response model by allowing a response to be sent to the request initiator. 
The last part is worded a bit strange: "allowing a response to be sent..". How about saying "my ensuring a response to be sent to the..."? In Section "4.3. Connection Lifetime": What is in a quantity short and long-lived? This sentence does not make any sense, unless it is state what a short time period for such a protocol and what a long time period is. In Section "Near Real Time" I am not sure how good or bad one can determine if any protocol is reacting in near real-time. And what is realtime anyhow? Measured in nano seconds, milliseconds, etc?
I would be interested in hearing an answer at least with regards to the following items raised in Peter Yee's Gen-ART review. In both cases I too was left wondering what the text actually meant. Section 7.2: Is the "Delete" operation meant to be atomic? Should that be specified in that section? Section 9.7: this section discusses how the "transport protocol" provides connection protection services and then says that therefore a man-in-the-middle attack is possible. If that's the case, then the "transport protocol" is not (adequately) providing connection protection. And without connection protection, a man-in-the-middle attack would of course be possible, so saying that because there is connection protection, a man-in-the-middle attack is therefore possible seems misleading.
3.2: s/is not approved for use/MUST NOT be used 3.3: s/MUST/need to s/SHOULD/is expected to 4.1/4.2: s/MUST/will (These are both definitional, not requirements; how could you possibly do otherwise?) 4.5: Refer to the Security Considerations section for further guidance. Please use an xref in here in order to refer to the section number. There are several of these named references throughout the document. Please fix also 5.2.2, 6.1 (two occurrences), 6.3 (two occurrences), 6.4, 6.5 (two occurrences), 6.6 (two occurrences), 7.1, 7.2, 7.4 (two occurrences), 7.5 (two occurrences), 7.6, 9.1 (two occurrences) 4.11: As written, this needs a (normative) reference to -spp-protocol-over-soap. You can't have a MUST requirement without a normative reference. 5.1: I think it's really awful practice to include protocol requirements and syntax definitions inside IANA Considerations. IANA Considerations are for *IANA*, not for the implementer and not for the folks entering items in the registry. I strongly suggest moving the syntax requirements and the ABNF from 11.2 into 5.1 and simply reverse the pointer so that 11.2 points to 5.1. 5.2: (I'm still trying to figure out how to non-normatively define something. :-) ) Can name attributes really be non-ASCII? Aren't these all protocol elements, not user-interface items? I am icked-out by having to use toCasefold, and having to have a reference to specific Unicode version. 5.2.1/5.2.2/5.3: I always find this construction bizzarre: "Any conforming specification MUST define...". They're all MUSTs (save a few MAYs in 5.3), and those MUSTs seem pretty unnecessary. For 5.3, you should simply make the opening paragraph: The following table contains the list of response types that a transport protocol specification needs to provide. An SPPF server MUST implement all of the following at minimum. And then strike "Any conforming specification MUST define a response to indicate that" from all of the entries. 
Move the MAY bits out of the table, as those aren't part of the description of each of those response types. It'll shorten things up significantly. 5.3: o The value for Attribute Value MUST be the value of the data element to which the preceding Attribute Name refers. o Response type "Attribute value invalid" MUST be used whenever an element value does not adhere to data validation rules. What other choice could an implementation make? In other words, if I were to violate the first MUST, what do you think I'm going to put in to the attribute value that I need to be instructed that I MUST NOT do? 6.4: hostName: Root-relative host name of the name server. The additional term "root-relative" confused me. Are you somehow trying to say that these names MUST NOT have a terminating "." (i.e., they must be relative domain names)? If that's the point, then you should probably say that. Otherwise, I would strike "root-relative". An absolute name (with a terminating ".") should be OK in this context, yes? 10: OLD Where human-readable languages are used in the protocol, those messages SHOULD be tagged according to [RFC5646]... I think you mean that human-readable *messages* that might be displayed to the user are to get language tags, but I don't see anywhere in the spec where you produce human-readable messages. Can you point me to an example. If so, you should probably say: NEW Where human-readable messages that are presented to an end user are used in the protocol, those messages SHOULD be tagged with their language according to [RFC5646]... Also: If tags are absent, the language of the message defaults to "en" (English). That seems like a bad plan. If all of the characters are out of the Arabic script, I'm pretty darn sure that an implicit default language tag of "en" is unlikely to be helpful to an implementation. I would strike that sentence. 11.2: See comment on section 5.1 above.
Thanks for addressing the SecDir review from 2 years ago. I see that you have added text to 9.1 to say integrity protection and confidentiality protections are to be supported by the transport protocol. This and the other considerations look good. http://www.ietf.org/mail-archive/web/secdir/current/msg03495.html In section 8, would it be appropriate to require that the XML is well formed and validated to prevent application and security issues? I think a simple statement to that effect would be helpful in this document. Barry says this isn't needed for apps and is assumed. This surfaced as a possible concern for me as a result of it being in the INCH/MILE schema related drafts, so it may have been an apps request at the time or could have been that the WGs were aware of a possible issue since they involve incident responders. In case there is an issue, I put a question out to someone that can help, but suspect it may be a result of additional processing requirements that we had on the schema in addition to general conformance that could result in an issue. I didn't see any that are out of the ordinary in the subsequent draft, so this may not be needed. Hopefully I'll have a response later, but would say there is nothing to do unless that comes in with a reason good enough. Text in subsequent documents that tells you how to handle non-conformance to the schema or other issues that might result in a validation problem (if restrictions for this go beyond XML conformance) would be needed of that were the case, not here. This was a request for a simple statement, that may not be needed.
-- Section 2 -- This document reuses terms from [RFC3261], [RFC5486], use cases and requirements documented in [RFC6461] and the ENUM Validation Architecture [RFC4725]. These are all listed as informative references. If you use terminology defined elsewhere, those references (3261 and 5486) need to be normative (they're required in order to understand the terms used in this doument). -- Section 4.11 -- At the time of this writing, a choice of transport protocol has been provided in SPP Protocol over SOAP document. This would be a good place for a reference to that draft. I think the reference is important, as you've made it MTI; I think it's a normative one. I don't think "At the time of this writing" is necessary, though if you really like it I don't object. It's also missing a "the" and some quotes, as thus: NEW One choice of transport protocol has been provided in the document "SPP Protocol over SOAP" [reference]. END -- Section 11.2 -- Why does the policy need to be RFC Required? Why not Expert Review? For that matter, why not FCFS? You can either point me at mailing list archives where this was discussed, or explain the necessity in response to this comment. While we're talking about OrgIdType, I don't think the document makes it clear what this is, and why new ones would be registered in the first place. Why would we ever need an OrgIdType Namespace other than "iana-en"? Shouldn't the document say something about that?
-- Section 1 -- 1. A resolution system returns a Look-Up Function (LUF) that comprises the target domain to assist in call routing (as described in [RFC5486]). I don't know that it means for a LUF to "comprise the target domain"; perhaps its a meaning of "comprise" with which I'm unfamiliar. (Similarly for bullet 2.) Also, where in 5486 is this described? Is it Section 4.3.3? It'd be helpful to include that. -- Section 2 -- In addition, this document specifies the following additional terms: You can get rid of "In addition," (my preference) or "additional"; you don't need both. (I would also use "defines" rather than "specifies".) Server: In the context of SPPF, this is an application that receives a provisioning request and responds accordingly. It is sometimes referred to as a Registry. Registry: The Registry operates a master database of Session Establishment Data for one or more Registrants. The latter sentence in the first definition seems to say that "Server" and "Registry" are synonymous. How does it, then, make sense to have separate definitions that are different? And if they're not synonymous, perhaps it's unwise to sometimes refer to a Server as a Registry. In the definition of Registrant: Within the confines of a Registry, a Registrant is uniquely identified by a well-known ID. What is a "well-known ID"? What is well known about it? I ask because the term isn't otherwise used in this document. -- Section 4 subsections -- These subsections are inconsistent in how they refer to the transport protocol (and see Martin's comments about that). Some of those differences don't matter, but I think some do, and I think we'd be better off making the terminology consistent. 4.1, 4.2, 4.10: "a transport protocol for SPPF" 4.3: "a protocol suitable for SPPF" [is the word "suitable" significant here?] 4.4: "the SPPF transport protocol" 4.6: "the transport protocol" [doesn't mention SPPF] 4.7: "a DRINKS transport protocol" [DRINKS, as opposed to SPPF?] 
4.8: "a suitable transport protocol for SPPF" 4.9: "a transport protocol suitable for SPPF" You're in a maze of little twisting passages, all different. I suggest picking one phrasing and using it in all nine subsections. -- Section 5.2 -- "Name" attributes that are used as components of object key types MUST be treated case insensitive, more specifically, comparison operations MUST use the toCasefold() function, as specified in Section 3.13 of [Unicode6.1]. It's a small point, but I think it would be better to lead with the more specific requirement, which makes the other unnecessary except by way of explanation: NEW "Name" attributes that are used as components of object key types MUST be compared using the toCasefold() function, as specified in Section 3.13 of [Unicode6.1]. That function performs case-insensitive comparisons. END -- Section 11.2 -- The ABNF allows an OrgIdType Namespace identifier to end with "-"; is that intentional?
I have one point to discuss that should be easy to resolve. = Section 6 = A bunch of places in this section reference the Create and Modify operations, neither of which are defined in Section 7. I think these are both meant to be Add operations? Or if not, a Modify operation needs to be defined and made distinct from Add.
= Section 3.3 = What does "RFC level document" mean? RFC? Or perhaps you want to use the "permanent and readily available" standard from RFC 5226? = Section 5.2.1 = s/SPPF object that/SPPF object/ = Section 5.2.2 = s/Refer the "Framework Data Model Objects"/Refer to the "Framework Data Model Objects"/ = Section 6 = s/refer the "Framework Operations"/refer to the "Framework Operations"/
draft-ietf-drinks-spp-protocol-over-soap
(resending after changing the notification to @tools.ietf.org) In this text: This document RECOMMENDS SOAP 1.2 [SOAPREF] or higher, and WSDL 1.1 [WSDLREF] or higher. I'm not sure why these are RECOMMENDS, but more to the point, am I reading this that there's no mandatory-to-implement version of SOAP or WSDL for SPP over SOAP? I note that you have HTTP/1.1 or higher as a "MUST use" in Section 4.
I have no objection to the publication of this document, but SOAP, really? I thought we had moved on.
I just want to check one thing... Section 5: why is there a MUST for Digest auth? What'd be wrong with TLS client auth here? I do wish the WG had considered some alternative to passwords, which don't make so much sense in this use-case. (BTW: You could chose HOBA here I guess, but that's still in the RFC editor queue and not supported by libraries so perhaps doesn't suit. But it'd work. I'm an author of the HOBA spec though, so I'm biased:-) Anyway - can you tell me if the WG considered dropping passwords entirely and mandating TLS client auth be implemented? If the WG seriously considered TLS client auth already, I'll just clear.
- General: why would one want to ever run this protocol without TLS? Did the WG consider saying that TLS MUST be used? Again, if you tell me you thought about it, I'll just clear. - 7.1.2: The framework uses "Identifier" but here you use "Identity" - it'd be better to be consistent I think and "Identifier" is a lot better. - section 11 is weaker than the corresponding section in the framework draft. Two things: 1) why not point back to the framework here? 2) shouldn't you say which of the vulns/mitigations called out in the framework are relevant or mitigated here?
Roni Even's Gen-ART review raised some questions that should be answered, I think.
All editorial. Note that I did not review sections 9 & 10. 1 or 2: Might be nice to define "SPPPoS" for "SPP Protocol over SOAP". Would save a lot of space and make things easier to read. 3: OLD This document RECOMMENDS SOAP 1.2 [SOAPREF] or higher, and WSDL 1.1 [WSDLREF] or higher. NEW SOAP 1.2 [SOAPREF] or higher, and WSDL 1.1 [WSDLREF] or higher are RECOMMENDED by this document. END 4: s/compliant with this document/of this protocol 5ff: I don't see how the word "conforming" adds anything to this document. Instead of "conforming SPPPoS clients/servers MUST do X", why not say "SPPPoS clients/servers MUST do X"? 7: Title: s/SPP Protocol SOAP Data Structures/SPP Protocol over SOAP Data Structures
Thanks for your work on this draft. I have some comments and suggestions that I'd like to be considered: Section 4: Instead of HTTP(S), I'd prefer to see HTTP/TLS. Would that cause any heartburn? It would make the text consistent with the next section. Please change SSL to TLS in this section as well. Section 5: OK, I see you have TLS listed here, along with a minimum version by reference to the RFC for TLS 1.2. All good, thanks. A pointer to the BCP from UTA: https://datatracker.ietf.org/doc/draft-ietf-uta-tls-bcp/ following the last sentence, might be helpful. It's in IETF last call now, so it shouldn't hold up this draft and could even be done as an informational reference so it won't matter that it's not published yet. Alternatively, this reference could be in section 11.1. Section 7.3 Is a response code needed when the XML does not validate to the schema or other requirements that may exist in addition to schema conformance? Or does this happen somewhere else or perhaps this should be stated as part of one of the existing response codes?
-- Section 4 -- Implementations compliant with this document MUST use HTTP 1.1 [RFC2616] or higher. Also, implementations SHOULD use persistent connections. You could remove "compliant with this document". But more importantly, the "SHOULD" is not an interoperability requirement. I'd rather see "implementations should use persistent connections for the performance reasons specified above." But this is non-blocking, and there's no need to discuss it. Also, RFC 2616 is obsolete. The current reference for HTTP 1.1 is RFC 7230, and this reference needs to be changed to that. -- Section 5 -- I support Stephen's DISCUSS here. Further on what he says in his comment, this MUST requirement locks you into Digest for all time, regardless of what other authentication mechanisms might be defined and deployed later. That doesn't seem wise. If the real point here is that there are two mechanisms (Basic and Digest), and you want to use Digest because you don't want Basic, then maybe that's how you should say it: ban Basic rather than requiring Digest.
draft-ietf-aqm-recommendation
I'm quite surprised that the introduction doesn't mention the problems that high or unpredictable latency can cause with flows that are attempting to do congestion control at the ends (e.g., TCP). If I were reading this without already knowing about that, I would assume that the goal of this document is to reduce latency for the benefit of applications that require low latency, like VoIP and gaming. It would be nice if the introduction made mention of the issue of high latency as it affects TCP flows. The document also talks about congestion collapse as a future risk to be prevented, but I think that this isn't telling the whole story: users of the Internet see localized congestion collapse quite frequently, and have done for quite some time. It's essentially normal network behavior in hotels, cafes and on airplanes: anywhere where available bandwidth is substantially short of demand. I don't think this is a problem with technical accuracy, but I think someone reading this document who isn't an expert on congestion control might not realize that this document is talking about that specific sort of failure mode as well as failures deep in the network. I'm really happy to see this document being published. The above comments are just suggestions based on my particular concerns about congestion, and do not reflect any degree of expertise, so if they seem exceptionally clueless you should just ignore them.
I appreciate very much the work on this document. I'm a Yes, with some niggling. In this text: Abstract The note largely repeats the recommendations of RFC 2309, and replaces these after fifteen years of experience and new research. I'm thinking that doesn't match the "replaces" language in section 1.4, which I think is about right. Perhaps something like The note replaces the recommendations of RFC 2309 based on fifteen years of experience and new research. In this text: 1.1. Congestion Collapse The original fix for Internet meltdown was provided by Van Jacobsen. Beginning in 1986, Jacobsen developed the congestion avoidance mechanisms [Jacobson88] that are now required for implementations of the Transport Control Protocol (TCP) [RFC0768] [RFC1122]. I'm wondering if RFC 7414 would be a helpful reference here, and elsewhere in the document. I'm bemused by the use of RFC 793 as the reference for TCP later in this document, since RFC 793 TCP behaves nothing like TCP as characterized here. I know I'm confused by the reference to RFC 768 - that's UDP, as cited correctly elsewhere in the document. In this text: 2. Non-Responsive Flows The User Datagram Protocol (UDP) [RFC0768] provides a minimal, best-effort transport to applications and upper-layer protocols (both simply called "applications" in the remainder of this document) and does not itself provide mechanisms to prevent congestion collapse and establish a degree of fairness [RFC5405]. I'm not entirely comfortable with the idea that non-responsive flows use UDP transport (especially in our tunneled world). If you guys think this is OK, I'll hold my nose, but if you wanted to say anything about "other flows that are as non-responsive as UDP transport", I'd think that would be helpful. It certainly fits at least as well as the "large number of short-lived TCP flows that are much less responsive" paragraph that you end this section with, which probably fits better under the following list item, 3. 
Transport Flows that are less responsive than TCP In this text: It is essential that all Internet hosts respond to loss [RFC5681], [RFC5405][RFC4960][RFC4340]. Packet dropping by network devices that are under load has two effects: It protects the network, which is the primary reason that network devices drop packets. The detection of loss also provides a signal to a reliable transport (e.g. TCP, SCTP) that there is potential congestion using a pragmatic heuristic; "when the network discards a message in flight, it may imply the presence of faulty equipment or media in a path, and it may imply the presence of congestion. To be conservative, a transport must assume it may be the latter." Unreliable transports (e.g. using UDP) need to ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ similarly react to loss [RFC5405] ^^^^^^^^^^^^^^^^^^^^^^^ would it be more correct to say "Applications using unreliable transports (e.g. UDP)"?
Very readable. Thanks.
The authors have acknowledged Elwyn's GEN-ART review [1] and they will integrate the comments in an updated version after the IESG review. [1] https://mailarchive.ietf.org/arch/msg/ietf/gHzWBxmv64q6PkbQ0AFW5q6TvpU
This document does not have the pre-5378 boilerplate. Have all of the authors of 2309 actually signed the appropriate things, or does this document need the pre-5378 boilerplate?
Hopefully an easy DISCUSS. 3. The algorithms that the IETF recommends SHOULD NOT require operational (especially manual) configuration or tuning. This sentence above could be understood in different ways. For example, that any configuration is wrong. The ability to activate AQM is a good thing IMO. The section 4.3 title is closer to what you intend to say: "AQM algorithms deployed SHOULD NOT require operational tuning" The issue is that you only define what you mean by "operational configuration" in section 4.3 Proposal: OLD: 3. The algorithms that the IETF recommends SHOULD NOT require operational (especially manual) configuration or tuning. NEW: 3. AQM algorithm deployment SHOULD NOT require tuning of initial or configuration parameters. OLD: 4.3 AQM algorithms deployed SHOULD NOT require operational tuning NEW: 4.3 AQM algorithm deployment SHOULD NOT require tuning
- RFC 2309 introduced the concept of "Active Queue Management" (AQM), a > class of technologies that, by signaling to common congestion- controlled transports such as TCP, manages the size of queues that Remove > - Network devices SHOULD use an AQM algorithm to measure local local congestion local local
Thanks for your work on this draft, it looks good. There are some tiny nits that the SecDir reviewer found that you might want to consider: https://www.ietf.org/mail-archive/web/secdir/current/msg05357.html
Thanks for the hard work on this document. I have a few comments below. -- General: I think it would be useful to define "network devices" up front, and in particular to clarify whether endpoint devices are subsumed in this category. Are the recommendations in this document meant to apply to queues in tablet/smartphone/laptop OSes as well as in routers, switches, etc.? -- Sec 1.2: "instead it provides recommendations on how to select appropriate algorithms and recommends that algorithms should be used that a recommended algorithm is able to automate any required tuning for common deployment scenarios." Seems like there are some extra words here. -- Sec 3: "There is a growing set of UDP-based applications whose congestion avoidance algorithms are inadequate or nonexistent (i.e, a flow that does not throttle its sending rate when it experiences congestion). Examples include some UDP streaming applications for packet voice and video, and some multicast bulk data transport. If no action is taken, such unresponsive flows could lead to a new congestion collapse. Some applications can even increase their traffic volume in response to congestion (e.g. by adding forward error correction when loss is experienced), with the possibility that they contribute to congestion collapse." Would be nice to have a citation or two in this paragraph (though I can see why you might not want to). "Lastly, some applications (e.g. current web browsers) open a large numbers of short TCP flows for a single session. This can lead to each individual flow spending the majority of time in the exponential TCP slow start phase, rather than in TCP congestion avoidance. The resulting traffic aggregate can therefore be much less responsive than a single standard TCP flow." I note that HTTP/2 is on its way to publication and there are a large number of existing implementations, so the characterization of "current web browsers" seems a bit off. I would suggest something like "(e.g. 
web browsers primarily supporting HTTP 1.1)." -- Sec 7: I think there's actually a really important privacy aspect that should be called out here, which is that by virtue of recommending that AQM algorithms not be dependent on specific transport or application behaviors, network devices need not gain insight into upper layer protocol information for the purpose of supporting AQM. That is, the document's explicit recommendation for algorithms to be able to operate in a transport- and application-agnostic fashion is a privacy-enhancing feature. -- Sec 9: I'm a little surprised that almost all of the research referenced in this document is from the 1990s, given recent attention that has been paid to this topic.
draft-ietf-tls-downgrade-scsv
Thank you for working through Last Call comments on this one. I did have one niggle. In this text: 6. Security Considerations However, it is strongly recommended to send TLS_FALLBACK_SCSV when downgrading to SSL 3.0 as the CBC cipher suites in SSL 3.0 have weaknesses that cannot be addressed by implementation workarounds like the remaining weaknesses in later (TLS) protocol versions. I'm wondering whether "recommended" is intended as an RFC 2119 RECOMMENDED, but wondering more why someone wouldn't do this (why is it not required/REQUIRED?).
Russ' Gen-ART observation needs to result in a document change. Hopefully that can be implemented as part of the approval process (or a new draft if otherwise required).
This is the second SCSV value going into the registry. I noted that the value proposed in Section 7 is not adjacent to the other SCSV value (0x00,0xFF TLS_EMPTY_RENEGOTIATION_INFO_SCSV), and in fact lies in the middle of a broad swath of unallocated code points. Would it be worthwhile to allocate a range of ciphersuite values to be used for these sorts of things (say 0x00,0xD0-0xFF), or at least make the code point assigned for this document adjacent to the other one so that if there are others, they can be managed in a range-like fashion?
The Abstract should mention DTLS also, and the two DTLS RFCs that are updated.
Glad folks came to consensus on this even if it was tough. The header of the document and the intro say that it updates RFCs 2246, 4346, 4347, 5246, and 6347, but the abstract only lists three of those. The IANA considerations section is a little weird, since the actual allocations are listed in the part that is to be removed and then there is a sentence that claims the allocations are already done. Why not do the usual "This document registers the following values in XYZ registry ..." and keep the registrations themselves in there?
draft-ietf-mpls-oam-ipv6-rao
Thanks for working with me to clear my Discuss, which was: This Discuss ballot is probably more accurately a "Please clue in a TSV AD who is trying to pattern match and failing" ballot, and likely quick to resolve. But, I'm looking at this text: 4. Updates to RFC 4379 [RFC4379] specifies the use of the Router Alert Option in the IP header. Sections 4.3 and 4.5 of [RFC4379] are updated as follows: for every time in which the "Router Alert IP option" is used, the following text is appended: In case of an IPv4 header, the generic IPv4 Router Alert Option value 0x0 [RFC2113] SHOULD be used. In case of an IPv6 header, ^^^^^^ the IPv6 Router Alert Option value TBD1 allocated through this document for MPLS OAM MUST be used. When I click over to Section 4.3 of [RFC4379], I see this text: 4.3. Sending an MPLS Echo Request An MPLS echo request is a UDP packet. The IP header is set as follows: the source IP address is a routable address of the sender; the destination IP address is a (randomly chosen) IPv4 address from the range 127/8 or IPv6 address from the range 0:0:0:0:0:FFFF:127/104. The IP TTL is set to 1. The source UDP port is chosen by the sender; the destination UDP port is set to 3503 (assigned by IANA for MPLS echo requests). The Router Alert option MUST be set in the IP header. ^^^^ Could you help me understand whether this is really a MUST in Section 4.3 of [RFC4379] that is morphing into a SHOULD for IPv4 and remaining a MUST in IPv6? I have the same confusion in Section 4.5, but I'm betting the same answer applies.
Thanks for addressing the non-security nits found by the SecDir reviewer. https://www.ietf.org/mail-archive/web/secdir/current/msg05420.html
draft-ietf-uta-tls-bcp
Thank you _very_ much for doing this work!
This is great. Thanks for putting it together. Just for my own edification, why would o Implementations MUST support, and SHOULD prefer to negotiate, cipher suites offering forward secrecy, such as those in the Ephemeral Diffie-Hellman and Elliptic Curve Ephemeral Diffie- Hellman ("DHE" and "ECDHE") families. not also be "MUST prefer to negotiate"? I found it strange that there's no hint of 5.2. Unauthenticated TLS and Opportunistic Security In summary: this document does not apply to unauthenticated TLS use cases. until about halfway through page 15. If it's important to say this, maybe it's better to say it earlier in the document?
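The "SHOULD prefer to negotiate" behavior discussed above can be illustrated with a small sketch (hypothetical, not from the draft): a server reorders the client's offered cipher suites so that the forward-secret DHE/ECDHE families come first, regardless of the client's ordering.

```python
# Hypothetical sketch of server-side preference for forward-secret
# cipher suites: ECDHE/DHE suites are moved to the front of the
# client's offer before the server picks the first acceptable one.

FS_PREFIXES = ("TLS_ECDHE_", "TLS_DHE_")

def order_by_forward_secrecy(offered):
    """Stable reorder: forward-secret suites first, others after."""
    fs = [s for s in offered if s.startswith(FS_PREFIXES)]
    rest = [s for s in offered if not s.startswith(FS_PREFIXES)]
    return fs + rest

offered = ["TLS_RSA_WITH_AES_128_GCM_SHA256",
           "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"]
# Even though the non-FS suite was offered first, the FS suite wins:
assert order_by_forward_secrecy(offered)[0] == \
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
```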
One simple editorial thing: In the last paragraph of 7.5, I suggest changing "The foregoing considerations" to "The considerations in this section".
I've a bunch of nits below. The only non-nit is whether or not this has recently been compared to bettercrypto.org. Doing so again would be a fine thing if not. - abstract & intro: nit: maybe s/and modes of operation/and their modes of operation/ might be better, as modes are defined by ciphersuites - intro: maybe s/are/have been/ when you say CBC and RC4 are most common - that's changing fairly quickly - intro: maybe s/will have/should have/ fewer vulns. when deploying TLS1.3 - we can't control code quality - 3.2: SSL stripping could do with a reference maybe - 3.3: If it is true that compression attacks require the attacker to control the traffic, then saying so would be good, but only if there's an easily understood way to phrase that, and I can't think of one right now;-) - 3.6: add a reference to where SNI is defined (that's RFC 6066, section 3 I think?) - section 4: would a reference to bettercrypto.org be good here - they have specific configs one can use to implement these recommendations (or at least I hope they do!) - 4.1: I forget if the WG discussed adding a SHOULD NOT for RSA key transport. I think that'd be a fine addition, along with a statement that the justification is the lack of PFS. - 4.2.1: nitty, nit, nit: the MTI acronym should be defined on 1st use, not 2nd:-) - 4.4: "negotiated parameters" reads somewhat ambiguously as it could be read to mean chinese menu, and I don't think that's what you want - 7.1: did anyone compare this text to the "most dangerous code" paper? [1] [1] http://dl.acm.org/citation.cfm?id=2382204 - 7.3: "aka PKIX certificates" isn't correct, I'd delete the phrase (but leave the ref to 5280) both times
I really can't abide by the abdication in Section 5.2. Getting a cert is hard. Running reasonably recent software and configuring it properly is not. The possibility that a connection will not be authenticated is no excuse for using bad versions of TLS or using insecure ciphersuites. I appreciate that normally deference to WG consensus is appropriate, but this is a recommendation that could be actively harmful to the Internet by encouraging the continued use of broken code.
These COMMENTs are right on the edge of being DISCUSS points, because I think there are some pretty critical references missing. Please consider this a COMMENT of Unusual Strength. Section 1. "which together are the most widely deployed ciphers" Actually, at least in the web context, this isn't totally true. According to Firefox telemetry, AES-GCM has been the most widely deployed cipher since at least 3Q14, and is currently used in the majority of TLS handshakes that Firefox does (52%) [1]. Section 3.1.1. Implementations MUST NOT negotiate SSL version 3 A reference to draft-ietf-tls-sslv3-diediedie seems in order here. Section 3.1. It would be good for this section to mention that servers MUST implement TLS version negotiation. That is, they MUST NOT abort the handshake if the version offered by the client is higher than the version the server supports. This is, after all, the root cause of fallback. Section 3.1.3. I'm surprised that there's not even a SHOULD CONSIDER [RFC6919] for SCSV here. Did the WG discuss having any requirement for SCSV? Also, if you want a cite for the 3% number, it's in the proceedings of IETF 91 [2]. Section 3.3. You might point out HPACK [3] as an example of compression that is sensitive to things like CRIME. Section 3.5. Shouldn't this refer more specifically to draft-ietf-tls-session-hash [4]? As it is, the recommendations in this section are kind of vacuous; e.g., TLS without session-hash provides no way to "bind the master secret to the full handshake". Section 4.4. "Modular vs. Elliptic Curve" I think that "finite field" or "modp" are more common than "modular". [1] http://mzl.la/1AmwXsm [2] http://www.ietf.org/proceedings/91/slides/slides-91-saag-3.pdf [3] https://tools.ietf.org/html/draft-ietf-httpbis-header-compression [4] https://tools.ietf.org/html/draft-ietf-tls-session-hash
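The version-negotiation point above ("servers MUST NOT abort the handshake if the version offered by the client is higher than the version the server supports") can be sketched as follows; this is an illustrative sketch, not text from any draft, using (major, minor) tuples for protocol versions.

```python
# Illustrative sketch of tolerant TLS version negotiation. A server
# that only knows TLS 1.2 must not abort when a client offers a higher
# version; it negotiates the highest version both sides support.

TLS_1_0, TLS_1_1, TLS_1_2 = (3, 1), (3, 2), (3, 3)

def negotiate(client_max, server_max):
    """Return the version to use: the minimum of the two maxima."""
    return min(client_max, server_max)

# A TLS 1.2-only server receiving a (hypothetical) higher client
# version still completes negotiation at TLS 1.2 instead of aborting,
# removing the need for client-side fallback:
assert negotiate((3, 4), TLS_1_2) == TLS_1_2
assert negotiate(TLS_1_0, TLS_1_2) == TLS_1_0
```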
Thanks for your work on this very helpful draft! I just have a few comments/questions. Section 5. Applicability statement: Should this include application authors (mentioned in section 7.1) and Developers who can set the defaults for implementations of TLS to help operators that are mentioned in this applicability statement? I see the sentence is phrased for 'deployment recommendations', but maybe this should also have a sentence or two on development recommendations. Not for this draft, but this one raised a question for me. Section 7.3: If you look at the following text: Unfortunately, many TLS/DTLS cipher suites were defined that do not feature forward secrecy, e.g., TLS_RSA_WITH_AES_256_CBC_SHA256. This document therefore advocates strict use of forward-secrecy-only ciphers. Should we be thinking about updates to the TLS registry to reflect this recommendation? That's probably not this draft, but a follow on to provide the needed 'specification required'. I'm sure a lot more thought might be needed for that and maybe support for features like PFS is added in a table if older recommendations that don't meet this are not removed. http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml HTTPbis went to the trouble of creating a blacklist of cipher suites that includes ones in the TLS registry. They did take the MTI recommendation that is in this draft, which is good. See section 9.2 and appendix A. https://datatracker.ietf.org/doc/draft-ietf-httpbis-http2/
One very simple point: -- Section 2 -- A number of security-related terms in this document are used in the sense defined in [RFC4949]. Terminology definitions need to be in normative references; 4949 should be normative.
I don't want to make these a DISCUSS, but I would appreciate a discussion: -- Section 3.1.1 -- On the SHOULD NOTs here: Is there any reason one might violate them *other than* that the other side doesn't support TLS 1.2 ? If not, is it worth saying that explicitly? Is it even worth changing "SHOULD NOT negotiate TLS version [x]" to "MUST NOT negotiate TLS version [x] unless no higher version is available in the negotiation" ? How can I evaluate each of the following "SHOULD"s? Why might I have a good reason not to comply with each of them?: -- Section 3.2 -- o When applicable, Web servers SHOULD use HSTS to indicate that they are willing to accept TLS-only clients. -- Section 3.3 -- Implementations and deployments SHOULD disable TLS-level compression ([RFC5246], Section 6.2.2). -- Section 3.5 -- TLS clients SHOULD apply the same validation policy for all certificates received over a connection, bind the master secret to the full handshake, and bind the abbreviated session resumption handshake to the original full handshake. -- Section 4.2.1 -- Servers SHOULD prefer this cipher suite over weaker cipher suites whenever it is proposed, even if it is not the first proposal. For the above set of "SHOULD" questions, I'm looking for something in the document that can help readers understand why these are not "MUST", and when and why they might make an informed decision not to abide by them. --- One other non-blocking comment; no discussion needed: -- Section 3.1.2 -- Nit: "correlates to Version 1.2 of TLS 1.2" -- take out one of the "1.2" ?
Thanks for all your work on this. I have a quick question about how we expect this document to be used within the IETF. I note that the bulk of the requirements/recommendations are directed at implementers, not protocol designers/specs. And Section 4.2.1 also says: "This document does not change the mandatory-to-implement TLS cipher suite(s) prescribed by TLS or application protocols using TLS. ... Implementers should consider the interoperability gain against the loss in security when deploying that cipher suite. Other application protocols specify other cipher suites as mandatory to implement (MTI)." So my question is whether we should consider this document effectively silent about the choice of cipher suites to be used when we standardize a new application protocol in the IETF, or an update to an existing protocol. That is the impression that I get from the text right now, and it doesn't quite match the way we've been using/citing the document in some recent discussions of other drafts. On the other hand, if we're expecting new or updated application protocol specs to conform to or take into account the recommendations in this document, I think that should be made more clear.
-- Sec 4.1: 128-bit ciphers are expected to remain secure for at least several years, and 256-bit ciphers "until the next fundamental technology breakthrough". Is the quoted text quoting something? If not, why is it in quotes? -- Sec 5: Although the list here is non-exhaustive, it seems odd to me that no DTLS examples are listed.
draft-ietf-tram-turn-third-party-authz
Let's talk about Section 6.2 and custom crypto. (1) You have tried to invent your own authenticated encryption, and fallen into the trap of Encrypt-Then-MAC [0]. (EDIT: Actually, it's MAC-then-Encrypt that's bad. See why you should just use AEAD?) Please use a real AEAD mode, such as AES-GCM [1]. That will also remove the need for padding, which is fraught with peril as well [2]. (2) It's a bad idea to hard-wire cryptographic algorithms into protocols, because they inevitably go bad [3]. (STUN itself is an anti-pattern here.) Please add an algorithm indicator to the top of your token structure. You don't need to create a registry now, since you've only got one value. That gives you something like the following, much simpler structure: struct { uint8_t algorithm; uint16_t length; opaque encrypted_block[length]; } struct { uint16_t key_length; opaque mac_key[key_length]; uint64_t timestamp; uint32_t lifetime; } It also means that you can simplify the key management routines in Section 4.1, since you only need one key. (3) Section 5 should be more clear about how this mechanism changes STUN processing. Namely, it adds a third parallel method of computing the message integrity value, which the server MUST use if an ACCESS-TOKEN attribute is present. [0] https://eprint.iacr.org/2001/045 [1] http://tools.ietf.org/html/rfc5116 [2] http://en.wikipedia.org/wiki/Padding_oracle_attack [3] https://tools.ietf.org/html/draft-housley-crypto-alg-agility-00
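The simpler token layout proposed above can be sketched in a few lines; this is a hypothetical illustration only, assuming network (big-endian) byte order, and the actual encryption is elided (encrypted_block would be the AEAD output over the inner structure).

```python
import struct

# Hypothetical sketch of the proposed two-level token layout:
#   outer: uint8 algorithm, uint16 length, opaque encrypted_block
#   inner: uint16 key_length, opaque mac_key, uint64 timestamp,
#          uint32 lifetime
# Network byte order is assumed; AEAD encryption of the inner
# structure is out of scope for this sketch.

def pack_inner(mac_key: bytes, timestamp: int, lifetime: int) -> bytes:
    return (struct.pack("!H", len(mac_key)) + mac_key
            + struct.pack("!QI", timestamp, lifetime))

def pack_token(algorithm: int, encrypted_block: bytes) -> bytes:
    return struct.pack("!BH", algorithm, len(encrypted_block)) + encrypted_block

inner = pack_inner(b"\x00" * 32, 1424304000, 3600)
token = pack_token(1, inner)          # algorithm=1 is a placeholder value
assert len(inner) == 2 + 32 + 8 + 4   # key_length + mac_key + timestamp + lifetime
assert token[0] == 1 and len(token) == 3 + len(inner)
```

Keeping the algorithm octet at a fixed offset in front of the variable-length block is what makes later algorithm agility possible without reparsing heuristics.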
3: OLD The value of the scope parameter explained in section 3.3 of [RFC6749] MUST be string 'stun'. NEW The string 'stun' is defined by this specification for use as the OAuth scope parameter (see section 3.3 of [RFC6749]) for the OAuth token. Are these things not in some IANA registry? How do we avoid scope parameter collisions? 4: s/MUST/needs to
Thanks for your work on this draft and addressing the SecDir review: https://www.ietf.org/mail-archive/web/secdir/current/msg05425.html At the end of the new text on DTLS and TLS, you may want to add a reference to https://datatracker.ietf.org/doc/draft-ietf-uta-tls-bcp, which is also close to publication. The cipher suite recommendations from RFC7350 appear to be in agreement with the BCP, and the BCP provides other best practices for TLS and DTLS that may be helpful to developers and implementors.
= Section 4 = Is it assumed that once a particular STUN server indicates support for third party authorization, the client should include an OAuth token in all future requests to that server? Or is the client expected to check for support again at some point in the future by sending a request without authorization? Just wondering if the case where a server enables and later disables support for third party authz (for some operational reason) is covered. = Section 6.2 = "the client MUST NOT examine the ticket" I think you meant token, not ticket.
draft-ietf-httpauth-basicauth-update
I support Pete's No Objection, and have found the responses unconvincing. I would support this being raised as a DISCUSS rather than a comment, but I'll leave that to Pete.
Nice job on a specification that is better than the technology it describes (echoing Stephen's ballot)!
The current text on the use of TLS is an OK start, but I would prefer if it were refactored so that the recommendation against Basic were general to HTTP and HTTPS. Suggested: "Because Basic authentication involves the cleartext transmission of passwords it SHOULD NOT be used except over a secure channel such as HTTPS [RFC2818]. Likewise, due to the risk of compromise, Basic authentication SHOULD NOT be used to protect sensitive or valuable information." Likewise, it would be good to comment in the Security Considerations on the risk of leakage caused by sending an Authorization or Proxy-Authorization preemptively. Something like: "As discussed in Section [TODO] above, it is possible for a client to preemptively send a Basic authentication value in an Authorization or Proxy-Authorization header without first having received a challenge. In such cases, the client does not know whether the resource to which it is sending the Basic authentication value is part of the realm that should receive that value, or even whether the resource requires authentication at all. This mismatch can cause leakage of client passwords to unauthorized parties, so it is RECOMMENDED that preemptive transmission of Basic authentication values be disabled by default."
This is a pretty crappy auth scheme, but this is a pretty good update and fills a need, thanks for the latter:-) - section 2: is it worth saying somewhere that you can't really have >1 proxy-auth happening even if you transit >1 proxy? - section 2, last para: I assume this is because client and/or server behaviour varies for this? If so, maybe it'd be good to give some guidance or add a reference (if a good one exists). If there's some other reason, it'd be good to say too. - section 4: would it be worth adding some guidance that re-use of e.g. enterprise login/SSO passwords for proxy-auth is particularly dodgy as it is not protected via TLS?
2: I'd at least like to hear an explanation about why this is unreasonable (if it is): OLD Furthermore, a user-id containing a colon character is invalid, as recipients will split the user-pass at the first occurrence of a colon character. Note that many user agents however will accept a colon in user-id, thereby producing a user-pass string that recipients will likely treat in a way not intended by the user. NEW Furthermore, a user-id MUST NOT contain a colon character, as recipients will split the user-pass at the first occurrence of a colon character. Many user agents will accept a colon in user-id, but this produces a user-pass string that recipients will likely treat in a way not intended by the user. END MUST NOT means that not using a colon is required for interoperation. Which is true. So I don't see why you don't come out and say that.
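The colon-splitting behavior discussed above is easy to demonstrate; this is an illustrative sketch (not draft text) of why a colon in the user-id is misparsed by any recipient that splits the user-pass string at the first colon.

```python
import base64

# Illustrative sketch: Basic authentication encodes "user-id:password"
# in base64, and recipients split the decoded string at the FIRST
# colon. Any colon inside the user-id therefore ends up in the wrong
# field on the receiving side.

def encode_basic(user_id: str, password: str) -> str:
    return base64.b64encode(f"{user_id}:{password}".encode()).decode()

def decode_basic(value: str):
    user_pass = base64.b64decode(value).decode()
    return user_pass.split(":", 1)   # split at the first colon only

# Well-formed credentials round-trip cleanly:
assert decode_basic(encode_basic("alice", "s3cret")) == ["alice", "s3cret"]

# A colon in the user-id is silently misparsed by the recipient:
assert decode_basic(encode_basic("alice:smith", "s3cret")) == \
    ["alice", "smith:s3cret"]
```

That second assertion is exactly the interoperability failure the proposed MUST NOT would name: both sides "work", but they disagree on where the user-id ends.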
-- Section 1.1.1 -- This specification uses the Augmented Backus-Naur Form (ABNF) notation of [RFC5234]. Where? You do use 5234 as a reference to define CTL characters, so you need the reference. But that sentence can go. -- Section 5 -- The entry for the "Basic" Authentication Scheme shall be updated with a pointer to this specification. IANA might think this means that they should add this spec to the existing reference. It'd be clearer to say it this way, and less likely to result in an error by IANA: NEW The entry for the "Basic" Authentication Scheme shall be updated by replacing the reference with a pointer to this specification. END
draft-ietf-precis-framework
A nit: b. Comparing two output strings to determine if they equivalent, ^are typically through octet-for-octet matching to test for "bit- string identity" (e.g., to make an access decision for purposes of authentication or authorization as further described in [RFC6943]).
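The "bit-string identity" comparison mentioned in that quoted text can be sketched as follows; this is an illustrative example only (NFC normalization is assumed here purely for demonstration), showing octet-for-octet matching of the encoded forms, with a timing-safe comparison since the result may gate an authentication decision.

```python
import hmac
import unicodedata

# Illustrative sketch of octet-for-octet "bit-string identity"
# matching: compare the UTF-8 encodings of two strings using a
# constant-time comparison (hmac.compare_digest) to avoid timing
# leaks in authentication/authorization decisions.

def bitstring_identical(a: str, b: str) -> bool:
    return hmac.compare_digest(a.encode("utf-8"), b.encode("utf-8"))

# Visually identical but differently composed strings are NOT
# bit-string identical until both are normalized first:
a, b = "\u00e9", "e\u0301"            # 'e-acute' precomposed vs decomposed
assert not bitstring_identical(a, b)
assert bitstring_identical(unicodedata.normalize("NFC", a),
                           unicodedata.normalize("NFC", b))
```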
Thanks for your work on this draft, it reads very well! Thanks for addressing the prior SecDir review comments: https://www.ietf.org/mail-archive/web/secdir/current/msg04732.html
draft-ietf-tcpm-accecn-reqs
This text: 5.2. Using Other Header Bits Any proposal to use such bits would need to check the likelihood that some middleboxes might discard or 'normalize' the currently unused flag bits or a non-zero Urgent Pointer when the Urgent Flag is cleared. Assignment of any of these bits would then require an IETF standards action. doesn't read quite right to me. Just reversing the logic, I'm getting "no IETF standards action is required unless middleboxes are twiddling the bits you're using for your proposal". Is that what you mean? Or is this just while experimenting?
In the ack section, there is a statement that says, "The views expressed here are solely those of the authors." I know this is stated to ensure it is not necessarily the views of the sponsoring project. If this will be listed as having consensus (not done yet), should this statement be reworded to avoid conflict with the consensus statement? This is just a non-blocking comment for the AD to consider.
draft-ietf-opsawg-coman-probstate-reqs
No objection to the publication of this draft, but of course a number of comments about Section 3.10 on Transport protocols: - Req-ID: 10.001: Not sure if this is really a requirement for a transport protocol. I would read this as a requirement for the implementation of a transport protocol. - Req-ID: 10.002 says Description: Diverse applications need a reliable transport of messages. The reliability might be achieved based on a transport protocol such as TCP or can be supported based on message repetition if an acknowledgment is missing. Repetition without any limitation on the number of repetitions, etc. is not a feature of a reliable transport protocol. I would remove "or can be supported based on message repetition if an acknowledgment is missing". Otherwise the text will blow up when you try to specify what features a reliable transport protocol should have. - Req-ID: 10.003: Multicast is not a feature of the transport layer.
I support Alissa's DISCUSS, but since she's already there, I'm heading straight to ABSTAIN.
I'm putting this in as a DISCUSS in the event that the authors/WG want to discuss it or that I'm just missing some context, but I will happily move to ABSTAIN if there is no appetite for such discussion -- I see no need to block the document from advancing on the basis of my comments. It's really hard to tell how the "requirements" listed in this document are intended to be used. In fact, it seems incorrect to call them "requirements" at all -- in the sense of somehow being "required" -- given the following: This document provides a problem statement and lists potential requirements for the management of a network with constrained devices. ... Depending on the concrete circumstances, an implementer may decide to address a certain relevant subset of the requirements. ... This document in general does not recommend the realization of any subset of the described requirements. As such this document avoids selecting any of the requirements as mandatory to implement. A device might be able to provide only a particular selected set of requirements and might not be capable to provide all requirements in this document. On the other hand a device vendor might select a specific relevant subset of the requirements to implement. It's hard to see how the approach described above will contribute towards useful standardization. The "requirements" seem more like a laundry list of all the properties that a management architecture, management protocols, networks of constrained devices, and/or individual implementations might find desirable. This also makes me wonder how the WG intends for these "requirements" to be used. What is the next step as far as standardization goes? To design the "management architecture" that is mentioned? Or the "management protocols" that are mentioned -- one or more, working together or separately? 
Or to consider how existing management protocols can be repurposed for constrained networks (which is sort of hinted at in section 2, but not stated explicitly), to meet some undefined subset of the listed "requirements"? I think publishing a laundry list of desirable properties is ok if people find value in it, but I'm having trouble seeing how this document specifies either a problem statement or requirements that will somehow contribute to standardization efforts in the future.
draft-ietf-mif-mpvd-arch
In this text: 5.2.3.1.2. Connectionless APIs For connectionless APIs, the host should provide an API that PvD- aware applications can use to query the PvD associated with the packet. For outgoing traffic on this transport API object, the OS should use the selected outgoing PvDs, determined as described above. does "above" mean "in section 5.2.2"? Whatever it means, perhaps a cross reference would be helpful.
I have No Objection to the publication of this document. Here are some comments that you can take or leave in discussion with your AD. Some of these Comments and nits come from a "training review" by Alvaro. --- I think you correctly avoid the use of 2119 language. You can delete the boilerplate and reference. --- I think you are talking about dual homed devices rather than dual homed networks. More precisely, the "node" that is dual homed is not a router but is a host. Possibly, you are extending to a dual homed home gateway. But I think (I hope) you are not intending to cover dual homed ASBRs or ABRs. And you are not (I hope) covering dual-homed CPEs such as might provide access to a substantial enterprise network. I think that this would benefit from more explanation of scope in the document. In practice, you are discussing making connectivity choices rather than routing choices. If I have this wrong, please tell me and I can worry about whether this should have been a Discuss :-) [BTW Section 4 is great, but it addresses my specific concerns by example rather than statement.] --- The document uses "policy" a bit like a unicorn. Of course, there is a fine tradition of saying "the node will apply locally configured policy" but you have an opportunity to be much more specific and so far more helpful for protocol developers and for implementers. Policies are easy to write in pseudocode, and I think you know the core set of policies you expect to see supported. So you could supply some guidance. --- I also think there is a problem with how policy is expected to be configured. The "nodes" you are talking to are (I think) end-system hosts rather than routers (see my previous) and many of these will be relatively dumb devices and/or have relatively dumb users. These users will not be capable of making more than very basic policy decisions and their choices will need to be presented in different terms to the choices that the device itself makes.
This would benefit from discussion because the policy model will need more work. --- Some references to other parts of the document are missing. 2.1 discusses the possibility of using DHCP to carry information about the PvD, but there's no reference to the later section that talks about the same topic. 2.3 talks a little about authentication, but no reference to the trust section later. --- Section 2.1 Link-specific and / or vendor-proprietary mechanisms for the discovery of PvD information (differing from IETF-defined mechanisms) can be used by nodes either separate from, or in conjunction with, IETF-defined mechanisms; providing they allow the discovery of the necessary elements of the PvD(s). In all cases, nodes must by default ensure that the lifetime of all dynamically discovered PvD configuration is appropriately limited by relevant events. For example, if an interface media state change is indicated, previously discovered information relevant to that interface may no longer be valid and so need to be confirmed or re-discovered. The first paragraph seems to be superfluous to me (of course I can use proprietary mechanisms!), but then the second has (what should be) a normative directive: "must ensure appropriate lifetime". But, I tend to see "appropriate" and "relevant" as red flags! Why are you not able to give firmer directives? --- Section 2.4 PvD ID is a value that is, or has a high probability of being globally unique. If it's capable of not being unique, you have to handle conflict. If you have to handle conflict, it becomes less important that the probability of being unique is high. Maybe... If two PvDs have the same ID, this conflict must be detected and resolved. Using a mechanism that selects values that are more likely to be unique has the benefit of more rapid convergence and no need to execute the conflict resolution mechanism. And please be careful with "globally unique". Is "global" really that or is it constrained?
--- Section 3.3 has references to what seems to be possible solutions or just other work. I think that just saying that "any new mechanisms should consider co-existence with deployed mechanisms" is enough. --- Section 5.2.3 introduces "PvD-aware applications". This is not clearly defined. Maybe this is just another example of a policy that is not defined in the document.
A number of comments were submitted by Francis Dupont in his Gen-ART review. Hopefully the authors will be able to see if those comments result in some changes to the text.
Thanks for your work on this draft. I'm fine with it, but have one minor text suggestion resulting from considering the SecDir review. http://www.ietf.org/mail-archive/web/secdir/current/msg05457.html In 5.1, you may need a clause at the end of this sentence to make sure the same PvD is used to prevent such issues (it is implied, but may be better stated explicitly - and is explicitly stated elsewhere). From: As an example, a node administrator could inject a DNS server which is not ISP-specific into PvDs for use on any of the networks that the node could attach to. Such creation / augmentation of PvD(s) could be static or dynamic. To: As an example, a node administrator could inject a DNS server which is not ISP-specific into PvDs for use on any of the networks that the node could attach to via the same PvD. Such creation / augmentation of PvD(s) could be static or dynamic.
-- Section 1.1 -- As far as I can see, there are no 2119 key words in this document (and I'm glad about that). You should remove this section and the reference to RFC 2119.
draft-ietf-roll-admin-local-policy
Thank you for the additions in text that resulted from the SecDir review and subsequent discussion. I found the discussion helpful to better understand the draft and security concerns. The current text looks good, but I did get additional context from the discussion that is not in the draft. The 4 possibilities listed in the security considerations look good and I don't have any recommendations, as reading it again after the SecDir discussion made more sense. https://www.ietf.org/mail-archive/web/secdir/current/msg05435.html
One question: In this text:

4.1. Legal multicast messages

Multicast messages can be created within the node by an application or can arrive at an interface. A multicast message created at a source (MPL seed) is legal when it conforms to the properties described in section 9.1 of [I-D.ietf-roll-trickle-mcast]. A multicast message received at a given interface is legal when:

o The message carries an MPL option (MPL message) and the incoming MPL interface is subscribed to the destination multicast address.

o The message does not carry an MPL option, the multicast address is unequal to ALL_MPL_FORWARDERS scope 4 or scope 3, and the interface has expressed interest to receive messages with the specified multicast address via MLD [RFC3810] or via IGMP [RFC3376]. The message was sent on according to PIM-DM [RFC3973] or according to PIM-SM [RFC4601].

Illegal multicast messages are discarded.

4.2. Forwarding legal packets

A legal multicast message received at a given interface is assigned the network identifier of the interface of the incoming link. A message that is created within the node is assigned the network identifier "any". Two types of legal multicast messages are considered: (1) MPL messages, and (2) multicast messages which do not carry the MPL option.

Is "legal/illegal" the right terminology for this?
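As an editorial aside, the two acceptance cases quoted above can be sketched as a predicate. The data model and names below are invented purely for illustration and are not taken from the draft:

```python
# Hypothetical sketch of the "legal multicast message" checks quoted
# above. Addresses, dict keys, and the scope handling are all
# simplifications invented for this example.

ALL_MPL_FORWARDERS_SCOPES = {3, 4}

def is_legal(msg, iface):
    """Return True if a received multicast message is legal per the
    two cases quoted from section 4.1 (simplified)."""
    if msg.get("has_mpl_option"):
        # Case 1: MPL message on an interface subscribed to the
        # destination multicast address.
        return msg["dst"] in iface["mpl_subscriptions"]
    # Case 2: no MPL option, address is not ALL_MPL_FORWARDERS at
    # scope 3 or 4, and interest was expressed via MLD/IGMP.
    if msg["dst"] == "ALL_MPL_FORWARDERS" and msg["scope"] in ALL_MPL_FORWARDERS_SCOPES:
        return False
    return msg["dst"] in iface["mld_igmp_interest"]

iface = {"mpl_subscriptions": {"ff03::fc"}, "mld_igmp_interest": {"ff05::1:3"}}
```

Anything failing both cases would be discarded as "illegal" in the draft's terminology.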
draft-ietf-opsawg-coman-use-cases
I was surprised to see no mention of the specific security requirements of the various use cases described here. E.g., the medical use case makes no mention at all of security. While in general security is required in all cases, I think there are differences in the level of security that is required for the various use cases described here, and I wonder if the authors considered this, and if so, why it wasn't mentioned. I don't necessarily want to delay the document's publication pending a resolution to this issue, but I'd like to have a quick discussion about it.
The write-up says something about additional reviews. Does this include reviews by a party outside of the IETF, for instance an entity that operates building automation systems?
draft-ietf-kitten-gss-loop
Thanks for your work on this draft. I can see that this is just grouping text from previous RFCs to put it all in one place, so the security practices in play may have been fine at the time. Was there any discussion about fixing the following from the Security Considerations section, so at least an error could be triggered? This seems like a bigger issue with the GSS-API than one specific to this draft, so this is just a question to understand where this is at. The GSS-API uses a request-and-check model for features. An application using the GSS-API requests certain features (confidentiality protection for messages, or anonymity), but such a request does not require the GSS implementation to provide that feature. The application must check the returned flags to verify whether a requested feature is present; if the feature was non-optional for the application, the application must generate an error. Phrased differently, the GSS-API will not generate an error if it is unable to satisfy the features requested by the application.
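The request-and-check model described in that quoted text can be sketched generically. This is not the real GSS-API binding; all names and flag values below are hypothetical:

```python
# Toy illustration of the GSS-API "request-and-check" model: the
# caller requests flags, the implementation grants only what it can,
# and the caller must compare and raise its own error. All names and
# values here are hypothetical, not the real GSS-API interface.

CONF_FLAG = 0x01   # confidentiality protection
ANON_FLAG = 0x02   # anonymity

def init_sec_context(requested_flags, supported_flags):
    # The implementation silently grants only what it supports;
    # it does NOT fail when a requested flag is unavailable.
    return requested_flags & supported_flags

def establish(required_flags, supported_flags):
    granted = init_sec_context(required_flags, supported_flags)
    # The burden is on the application: check the returned flags and
    # generate the error itself if a required feature is missing.
    missing = required_flags & ~granted
    if missing:
        raise RuntimeError(f"required features not granted: {missing:#x}")
    return granted
```

The point of the quoted Security Considerations text is that the error in `establish` is the application's responsibility; the API layer itself stays silent.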
draft-ietf-sfc-problem-statement
<preamble> I'm going to have limited opportunities to participate in conversations about this draft before Thursday's telechat, and I'm a no-objection, so don't worry about resolving these comments before the telechat if you need to ask me questions. Your responsible AD will make sure the right thing happens, even if the draft is approved on the call. </preamble> I look forward to seeing how Adrian's Comments and Stephen's Discuss and Comments are resolved. I wish I was more comfortable with the idea that service function traversal might not be strictly ordered. I'm not the right guy to ask for more explanation about that, but it does seem a possible source of additional security problems. Adrian is concerned (at Comment level) about the idea that required service functions might be inadvertently bypassed in an overlay topology; I'm not thinking that having service functions being applied in flexible order in an overlay topology would make the network *more* secure. I could be confused, but I'm thinking you could get different answers depending on the order that service functions are applied (to guess at a possibly bogus example, if a path has a NAT and a firewall that's looking at source/destination IP addresses, a packet that's been NATed might be more or less acceptable than the same packet that would be NATed after the firewall looks at it). A simple "yes, you're confused", or a "that's not a problem in practice" could be a fine response to this comment. I'll defer to the SEC ADs to decide whether that's a problem, of course. I'm also wondering if everything you're running through the service function chain has a hop count. If not, is there any concern that one might end up with a loop because a service function transforms a packet in a way that would cause it to be sent back to a service function that's already processed the packet? 
We've had fabulous loops with SIP proxies forwarding the same request back and forth, and resetting Max-Forwards to a default value each time. A simple "yes, everything has hop counts", or a "no, that's not a concern" could be a fine response to this comment. In 4. Related IETF Work 4. [ALTO]: The Application Layer Traffic Optimization Working Group is chartered to provide topological information at a higher abstraction layer, which can be based upon network policy, and with application-relevant service functions located in it. The mechanism for ALTO obtaining the topology can vary and policy can apply to what is provided or abstracted. This work could be leveraged and extended to address the need for services discovery. This is probably OK for inclusion in the problem statement, but my impression after discussions in the ALTO session in Honolulu is that the topology ALTO is looking at is ONLY a topology of ALTO servers, at a sufficiently abstract level that it's hard to imagine ALTO lookups being part of a service function chain. You do point out that ALTO is working at a higher abstraction layer in your text, and this question is still open in ALTO, so probably no need to change the text - just don't get anyone's hopes up!
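The hop-count question raised in the comment above can be illustrated with a toy loop guard. No such mechanism is specified in the draft; the names below are invented for illustration only:

```python
# Toy illustration of why a hop count bounds service-function loops:
# without one, a function that redirects a packet back up the chain
# could forward it forever (compare the SIP Max-Forwards-reset loops
# mentioned above). All names here are hypothetical.

MAX_HOPS = 8

def traverse(packet, chain, start=0):
    """Walk a chain of service functions; each function may redirect
    the packet to another index. The hop count caps traversals."""
    i = start
    while i is not None:
        if packet["hops"] >= MAX_HOPS:
            raise RuntimeError("hop limit exceeded: probable loop")
        packet["hops"] += 1
        i = chain[i](packet)   # returns next index, or None to exit
    return packet

# A pathological pair of functions that bounce the packet forever:
bouncing = [lambda p: 1, lambda p: 0]
```

With the hop count, the bouncing pair trips the limit instead of looping indefinitely; a well-behaved two-function chain terminates normally.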
I'm balloting No Objection on this document although I did not find it a satisfying or detailed read. I think it contains text that could be left out and would improve the document. I think it omits material (describing the deployment and operation of service function chains today, and discussing security) that should be included. None of these issues quite makes it to the level of a Discuss for me, but it was a close thing. Perhaps the authors and working group would like to look at the document more closely. --- Section 1 Furthermore there is a cascading effect: service changes affect other services. This is not clear (to me). Perhaps you intend s/affect/may affect/ Perhaps you intend s/service changes/service function changes/ And maybe you meant that the introduction of a new service function onto a path in order to change one service, causes that same service function to be applied to all traffic on that path thereby changing other services. --- Section 1.1 Thank you for the reference to OSI layers. It's been a long time and they have been sorely missed. In a document where you are hot on overlays (which imply layer inversions) the mention of OSI layers is certainly "interesting". --- Notwithstanding your definition in section 1.1, the term "Service Function" remains ambiguous. Under your definition, a packet forwarder is a service function. What about a packet classifier? --- Section 1.1 Service Function Chain I stumbled over "The implied order may not be a linear progression". I think you need s/implied order/ordering constraints/ --- I suspect 2.2 needs to say "physical topology" since the point of this work is to introduce overlays that make changes to the service topology possible with simple configuration. --- Section 2.3 Flip the order of the paragraphs so that the current first paragraph has context for its statements. However, I think I contest the scope of your statements. 
They are true when the failure is the failure of a service function but there is continued ability to forward traffic. They are not true when the failure is of connectivity (such as a link) or of forwarding (such as a service function node). In those cases "in the same topology" might be better phrased as "in parallel paths through the same topology." --- Section 2.4 is something of a marketing statement which is a shame. Anyway, it conflicts with two things when it says: Service function chains today are most typically built through manual configuration processes. These are slow and error prone. With the advent of newer service deployment models Firstly, the prior text gives the impression that the service function chain is most typically built through physical deployment of service function nodes along traffic paths and their subsequent configuration. Secondly, there is little (if any) difference between a manual configuration process and a "newer service deployment model". That is, automation of configuration is identical in effect to manual configuration. Surely the distinction you want to draw out is the change from physical placement of service function nodes and the consequent constraints on ordering with the proposed virtualisation of topology through the overlay that allows service function nodes to be located anywhere and chained in arbitrary orders. --- I think the concept of "transport" in section 2.6 will (or should) run into the classic problem of the two meanings of "transport". Can you make the text clearer that you are not discussing whether UDP, RTP, SCTP etc. are in use. --- Does 2.7 mean "flexible" instead of "elastic"? Would it be good to have the current state of the art described here for the first time? Maybe an early section of the document could spend some (more) time describing how SFC is done today.
--- Shouldn't 2.8 say "...unless packets are reclassified and classification behaviors are configured at each service function node" ? The point being that a more flexible and granular SFC mechanism (such as the WG is producing) effectively performs fine-grained classification at the head of the chain and then "marks" each packet with the result of that classification through a chain identifier, through a composite chain, or in metadata. Where an overlay topology is used, you are not actually changing the behavior you describe in this section (the mapping of traffic on a segment into a service function is still coarse), but you are changing the granularity of the topology. --- Section 2.10 Is "may not" "might not", "must not", or "cannot"? --- Why doesn't section 3 mention encapsulation? Isn't this a large part of the work and solution? --- I should really be happier were Section 4 to be removed. I don't believe it adds anything, it is (by its own admission) incomplete and leaves one to wonder about the significance of omissions, and it is out of date even before it is published. Actually, I have this particular Comment almost at the level of a Discuss: this section is harmful to the work of the IETF and detracts from the value of this document. --- Section 5 (rightly) notes the content present in Section 3. Why doesn't the Abstract also mention what will be in Section 3? Why doesn't the Introduction mention the content of Section 3? What value does Section 5 add to the document? --- Section 7 is deficient, IMHO. The problem statement should describe the problem of security of configuration and construction of service chains today. It should also observe that some service functions are specifically security functions: placing such functions on the physical path ensures that they are executed, while allowing them to be by-passed in the overlay network or left out of a chain is a considerable risk. 
However, I will leave it for the Security ADs to decide whether this point needs to be Discussed. --- Dave Mcdysan needs a capital D
Thanks for handling my discuss via the additional security considerations text. I look forward to seeing the SFC architecture and subsequent documents and how they handle the security and privacy issues that will need to be tackled.
I agree with my esteemed co-AD.
- A little bit disappointed that there are not many operational aspects in this problem statement. I guess this is fine as the charter contains: 5. Manageability: Work on the management and configuration of SFC components related to the support of Service Function Chaining will certainly be needed, but first needs to be better understood and scoped. However, the goal for SFC is to reduce the OPEX. For this to happen, the operational aspects (Troubleshooting and OAM come to mind) cannot be an afterthought. - I can't parse: "supports the movement of service functions and application workloads in the existing network, all the while retaining the network and service policies and the ability to easily bind service policy to granular information such as per-subscriber state." - OLD: Service Function Chain (SFC): A service function chain defines an ordered or partially ordered set of abstract service functions (SFs) NEW: Service Function Chain (SFC): A service function chain defines an ordered or partially ordered set of abstract service functions OLD: Service Function: A function that is responsible for specific NEW: Service Function (SF): A function that is responsible for specific - In the Service Function Chain definition, I'm not sure how the sentence "An example of an abstract service function is "a firewall"" helps the SFC definition. Anyway, this is covered in the Service Function definition.
Thanks for addressing my question on multi-tenancy and adding in text to describe how that could be handled. I agree with Barry, Alissa, and maybe others that a wiki may be a better option, but I won't stand in the way of publication. I do think the problem SFC is working on is important and the work will be worthwhile, but this draft isn't ready or may not need to be published. There are still numerous security considerations to be included as pointed out in the SecDir review and in Stephen's DISCUSS points that I support. The draft mentions ordering for service functions, and it would be good to see some concrete examples of how security may be an issue with different options for ordering. Since SFC's scope is a single administrative domain, the service chaining could result in session decryption at various points in the chain that could result in security and privacy exposures within that domain (typically considered a manageable risk). Functionality may be limited for some of the service functions if the decryption does not happen prior to that point and risk prioritization will be necessary (exposure of data, session interception, corruption of data, etc. could result from this exposure) since this is likely to be used in hosted environments with multiple tenants. The SecDir review did mention crossover between management/control and data planes, but tenant isolation may also need to be mentioned. Some nits (I don't think others mentioned these, but sorry if they were already addressed): Section 2.1: 3rd paragraph: Is this intended to mean after a new service function is added? I can't imagine that this would happen on the fly, so I think that's the case and adding a word or two may help: from: As more service functions are required - often with strict ordering - topology changes are needed before and after each service function is added resulting in complex network changes and device configuration.
4th paragraph: I'm having trouble reading this paragraph as I think it contradicts itself, but the example in the following paragraph is helpful. If topology dictates placement, how could using topology not be viable? Maybe rewording it would help: The topological coupling limits placement and selection of service functions: service functions are "fixed" in place by topology and therefore placement and service function selection taking into account network topology information is not viable. Furthermore, altering the services traversed, or their order, based on flow direction is not possible. Thanks, Kathleen
It seems to me that this document would best serve its purpose as something in the sfc working group wiki, not as a published RFC. That said, I will not object to its publication. But note that the L3VPN working group, which is mentioned in the document, no longer exists. I agree with the comments that there are things described herein that introduce security considerations that should be explored here, broadly, and that should not wait for the protocol documents. If this document will set the stage for protocol development, setting out the security considerations early is important.
The WG wiki seems like a more logical place to publish the content of this document. It doesn't seem to really refine the scope of the WG much beyond what is in the charter; is sufficiently high-level to be describing a generic technology problem; and lists a number of existing problems without indicating whether or how the work of the WG is expected to address them. I will not stand in the way of publication, but we need not spin up the IETF machinery for documents like this.
2.3. Constrained High Availability An effect of topological dependency is constrained service function high availability. Worse, when modified, inadvertent non-high availability or downtime can result. This seems to say, "when you break it, it's broken" which as a tautology I agree with. I don't see any particular reason a set of elements in an sfc should be lower availability than if they were all static physical objects and were assembled to support the same application.
conflict-review-irtf-cfrg-chacha20-poly1305
We need a ballot position that says, "Of course this is OK, the TLS WG asked for it!"