IESG Narrative Minutes
Narrative Minutes of the IESG Teleconference on 2010-03-04. These are not an official record of the meeting.
Narrative scribe: John Leslie (The scribe was sometimes uncertain who was speaking.)
Corrections from: Adrian, Robert, Dan, Magnus, Russ
1. Administrivia
2. Protocol Actions
2.1 WG Submissions
2.1.1 New Items
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
2.1.2 Returning Items
Telechat:
2.2 Individual Submissions
2.2.1 New Items
Telechat:
Telechat:
2.2.2 Returning Items
3. Document Actions
3.1 WG Submissions
3.1.1 New Items
Telechat:
Telechat:
Telechat:
3.1.2 Returning Items
3.2 Individual Submissions Via AD
3.2.1 New Items
Telechat:
Telechat:
3.2.2 Returning Items
3.3 Independent Submissions Via RFC Editor
3.3.1 New Items
3.3.2 Returning Items
Telechat:
???? EST break
???? EST back
4. Working Group Actions
4.1 WG Creation
4.1.1 Proposed for IETF Review
4.1.2 Proposed for Approval
Telechat:
Telechat:
Telechat:
4.2 WG Rechartering
4.2.1 Under evaluation for IETF Review
4.2.2 Proposed for Approval
Telechat:
5. IAB News We can use
6. Management Issues
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
7. Agenda Working Group News
1347 EST Adjourned
(at 2010-03-04 07:31:55 PST)
draft-ietf-mpls-ldp-typed-wildcard
Please spell out LDP and FEC in the title.
Shouldn't this update 5036? The last sentence of the Introduction implies that it does...
I am in favor of approving this document, but I would like to raise a clarification question originated by the OPS-DIR review by Menachem Dodge. It is not a show-stopper, but it points to a possible inconsistency between this document and RFC 5036.

Section 8 ("IANA Considerations") states:

   "The 'Typed Wildcard FEC' Capability requires a code point from the
   TLV Type name space. [RFC5036] partitions the TLV TYPE name space
   into 3 regions: IETF Consensus region, First Come First Served
   region, and Private Use region. The authors recommend that a code
   point from the IETF Consensus range be assigned to the 'Typed
   Wildcard FEC' Capability."

When checking RFC 5036 Section 4.2 ("TLV Type Name Space") I find the following text:

   "LDP divides the name space for TLV types into three ranges. The
   following are the guidelines for managing these ranges:

   - TLV Types 0x0000 - 0x3DFF. TLV types in this range are part of the
     LDP base protocol. Following the policies outlined in [IANA], TLV
     types in this range are allocated through an IETF Consensus
     action.

   - TLV Types 0x3E00 - 0x3EFF. TLV types in this range are reserved
     for Vendor-Private extensions and are the responsibility of the
     individual vendors (see Section "LDP Vendor-Private TLVs"). IANA
     management of this range of the TLV Type Name Space is
     unnecessary.

   - TLV Types 0x3F00 - 0x3FFF. TLV types in this range are reserved
     for Experimental extensions and are the responsibility of the
     individual experimenters (see Sections "LDP Experimental
     Extensions" and "Experiment ID Name Space"). IANA management of
     this range of the TLV Name Space is unnecessary; however, IANA is
     responsible for managing part of the Experiment ID Name Space (see
     below)."

The TLV Type name space is divided into 3 regions, but they appear to be not as stated (IETF Consensus region, First Come First Served region, and Private Use region). Rather, the division appears to be as follows:

1. LDP base protocol - allocated through an IETF Consensus action.
2. Vendor-Private extensions - IANA management is unnecessary.
3. Experimental - IANA management is unnecessary.
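For concreteness, the three ranges quoted above from RFC 5036 Section 4.2 can be expressed as a small lookup. This is an illustrative sketch only; the function name is mine, and the boundaries are taken directly from the quoted text:

```python
def ldp_tlv_range(tlv_type: int) -> str:
    """Classify an LDP TLV type per the ranges quoted from RFC 5036, Section 4.2."""
    if 0x0000 <= tlv_type <= 0x3DFF:
        return "LDP base protocol (IETF Consensus action)"
    if 0x3E00 <= tlv_type <= 0x3EFF:
        return "Vendor-Private extensions (no IANA management)"
    if 0x3F00 <= tlv_type <= 0x3FFF:
        return "Experimental extensions (no IANA management)"
    raise ValueError("TLV type outside the LDP TLV type space")
```

Note that none of these three labels matches the "First Come First Served" or "Private Use" regions named in the draft's Section 8.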
draft-ietf-tsvwg-port-randomization
This is a good and much needed document, thanks for writing it. I did have one issue, however. Perhaps I'm missing something, but the document first says:

   Port numbers that are currently in use by a TCP in the LISTEN state
   should not be allowed for use as ephemeral ports.

but then later the algorithms say:

   if(resulting five-tuple is unique)
       return next_ephemeral;

This does not appear to be sufficient to prevent the use of a port in the LISTEN state. The if statement simply checks whether there's an open connection between this host and some other specific host. It does NOT check whether there could in the future be a connection between this host and the specific host. If we are opening a connection for application X between hosts A and B, you cannot choose a port on which another application Y is already listening on host A, even if A and B at the moment do not have an open connection for application Y between them.
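The distinction being drawn can be sketched in Python; `open_connections` and `listening_ports` are hypothetical stand-ins for kernel state, and the point is simply that both checks are needed:

```python
def port_is_usable(port, local_ip, remote_ip, remote_port,
                   open_connections, listening_ports):
    """Return True if `port` may be used as an ephemeral port.

    Checking five-tuple uniqueness alone (the draft's `if` statement)
    misses ports on which some local application is in LISTEN state.
    """
    five_tuple = ("tcp", local_ip, port, remote_ip, remote_port)
    if five_tuple in open_connections:
        return False   # the draft's check: this exact connection already exists
    if port in listening_ports:
        return False   # the additional check this review asks for
    return True
```

For example, a port in `listening_ports` is rejected even when no five-tuple involving it exists yet.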
The document says:

   As mentioned in Section 2.1, the dynamic ports consist of the range
   49152-65535. However, ephemeral port selection algorithms should use
   the whole range 1024-49151. Since this range includes ports numbers
   assigned by IANA, this may not always be possible, though. A
   possible workaround for this potential problem would be to maintain
   a local list of the port numbers that should not be allocated as
   ephemeral ports. Thus, before allocating a port number, the
   ephemeral port selection function would check this list, avoiding
   the allocation of ports that may be needed for specific
   applications. Ephemeral port selection algorithms SHOULD use the
   largest possible port range, since this improves obfuscation.

First, what does the document actually recommend as the default policy? The use of the entire range, or the entire range minus a locally configured list? Second, I think the comment about IANA above isn't quite right. The issue is not the IANA allocation; it's the possibility that some application would be running on a port. You already discussed avoiding ports that are in the LISTEN state, so this appears to leave only the case where the application is not yet running but will later run and want to use its well-known port. Please be more specific about what the problem actually is.
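The "entire range minus a locally configured list" policy described above might look like the following sketch; the list contents, names, and retry bound are illustrative assumptions, not taken from the draft:

```python
import random

# Hypothetical local list of ports reserved for specific applications.
EXCLUDED_PORTS = {1433, 3306, 5060}

def pick_ephemeral(min_port=1024, max_port=65535, tries=64):
    """Pick an ephemeral port from the largest range, skipping the local
    exclusion list before returning a candidate."""
    for _ in range(tries):
        port = random.randint(min_port, max_port)
        if port not in EXCLUDED_PORTS:
            return port
    raise RuntimeError("could not find a free ephemeral port")
```

The open question in the review remains: the sketch only works if someone can actually populate and maintain `EXCLUDED_PORTS`.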
I support Tim's discuss regarding port range selection.
I think there needs to be some text between this text in Section 2.1:

   The dynamic port range defined by IANA consists of the 49152-65535
   range, and is meant for the selection of ephemeral ports.

and this text in Section 3.1:

   It is important to note that a number of applications rely on
   binding specific port numbers that may be within the ephemeral ports
   range. If such an application was run while the corresponding port
   number was in use, the application would fail. Therefore, ephemeral
   port selection algorithms avoid using those port numbers.

that explains the (as far as I can tell) unstated assumption that ephemeral ports could be selected from the IANA "registered" port range, 1024-49151. Reading on, it seems the issue is addressed here:

   3.2. Ephemeral port number range

   As mentioned in Section 2.1, the dynamic ports consist of the range
   49152-65535. However, ephemeral port selection algorithms should use
   the whole range 1024-49151.

I suggest clarifying to:

   3.2. Ephemeral port number range

   As mentioned in Section 2.1, the dynamic ports consist of the range
   49152-65535. However, ephemeral port selection algorithms should
   also use available ports in the range of registered ports,
   1024-49151. Therefore, the port selection algorithm should be
   applied to the whole range 1024-65535.
I have reviewed draft-ietf-tsvwg-port-randomization-06, and have a couple of small concerns that I'd like to discuss before recommending approval of the document:

- Section 3.3.1 says '"random()" is a function that returns a pseudo-random unsigned integer number in the range 0-65535'. The document is not very clear on exactly what the requirements for this function are. If I recall right, the output of typical implementations of POSIX random() may look random to simple statistical tests, but it is not unpredictable (seeing a couple of values allows you to fully predict future outputs). While this use probably doesn't need a cryptographically strong random number generator, it looks like some degree of unpredictability would be needed?

- Section 3.4 suggests use of a 32-bit key, which has exploitable security problems; to make the sequence unpredictable (even after seeing a couple of values), more is needed (and since bits here are cheap, there's no real reason to use less than 128).
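The predictability distinction can be illustrated in Python (a rough analogy, not the draft's algorithm): the `random` module is a Mersenne Twister, which, like typical `random()` implementations, looks statistically random but is fully reconstructible after observing enough outputs, whereas the `secrets` module draws from the OS CSPRNG:

```python
import random
import secrets

def predictable_value():
    # Mersenne Twister output: statistically random, but an observer of
    # enough past values can predict all future ones. Unsuitable when
    # unpredictability is a requirement.
    return random.randrange(0, 65536)

def unpredictable_value():
    # OS-backed CSPRNG: appropriate when an attacker may observe outputs.
    return secrets.randbelow(65536)
```

The draft's requirement on "random()" would be clearer if it said which of these two classes of generator is intended.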
Charlie Kaufman's SecDir review identified a number of minor clarifications/editorial nits that should be addressed; it seems the authors are already addressing those.
It is interesting that Algorithms 1, 3, and 4 statistically favor port numbers one greater than allocated port numbers. But probably not worth noting.
Since we are trying to replace the TCP MD5 signature option [RFC2385] with TCP-AO, it seems like a bad idea to reference it in the document as a security solution. As pointed out in the Gen-ART Review by Avshalom Houri on 2010-03-03, the first portion of the document (up to and including Section 3.2) is lengthy and repetitive while it is lacking some background. When and how are the techniques described in the document to be used? Is this expected to be used in every transport protocol implementation in every environment? The Gen-ART Review by Avshalom Houri also makes many suggestions for improving the document. Please consider them.
This is a very long DISCUSS, and many of the points in it are purely asking whether certain attacks were considered. It's perfectly reasonable to resolve these with a "Yes" and a pointer to the list.

RFC 3605 is not commonly implemented for RTP and I find it very concerning that this would break RTP. The recommendations here violate the recommendation in Req-4 of BCP 127. It would be very easy to define the algorithms here such that they preserved port parity. Why not do that? If one does not, some RTP receivers, when told to send RTP to port x, will decide the parity is wrong and actually send it to x-1. This is not good. Breaking port+1 continuity breaks RTCP, but that has not turned out to be as critical as breaking RTP. However, it would be nice to see this draft support that unless there was a reason it was not possible. Regardless of how we resolve this, I believe this draft needs to be changed so it is consistent with BCP 127, or we need to change BCP 127 before this can be published. We should not be publishing a draft that violates an existing BCP.

I only find two normative statements. The first:

   Ephemeral port selection algorithms SHOULD use the largest possible
   port range, since this improves obfuscation.

This relies on the suggestion that somehow one would maintain a local list of port numbers that should not be allocated as ephemeral ports. How is an OS such as Linux supposed to actually implement this? Is there a list IANA is providing with real-time updates? When IANA allocates a new port to a protocol, how long before it could reliably be used across existing computers? I don't find this to be implementable. I would like the draft updated such that its advice makes clear to implementers what they need to do.
I worry the current advice will result in ports such as 5060 being allocated, and then servers trying to run on that port being unable to get it, for no reason that is apparent to the end user, who will see it as an intermittent problem that goes away when they reboot. That is not a design I would consider good for a BCP.

The second normative statement is that one SHOULD obfuscate the allocation of ephemeral ports. It then goes on to describe a series of possible algorithms to do this, all of which seem to lack any crypto analysis. The problem of having an algorithm that generates a number that is hard to predict by an attacker who has seen the previous sequence of numbers the algorithm produced is pretty well understood, so I expected to see pointers to concrete analysis here.

Alg 1: If the attacker knows that port x is in use, they know that port x+1 is twice as likely to be chosen as the next port as, say, x-1. I'm not a crypto person, but this sort of property always makes me pretty uncomfortable about deciding what the security properties of this are. Did crypto people look at it? Can we describe the security properties of this?

Alg 2: You have count = num_ephemeral, but I have a hard time imagining anyone would set it this high. It still won't guarantee 100% port usage, as you point out in the note. It seems from the text below Figure 3 that you are saying count = 2 would be fine. Same issue in some of the other Algs.

Alg 3: I'm fairly skeptical of the advice on choosing the key sizes. I'm not a crypto person, but I'd love to see some analysis of this. Let's consider some different key sizes (yes, I realize the draft recommends 32 or 64; I'll get to that). If the key size was 16, and the attacker could see what port was used for a single connection, they could brute-force the key space and have the key, or at least a small number of possible entries for the key if there were collisions in the MD5 space. Now let's consider a 32-bit key.
Again, if the attacker could see two connections, they could brute-force the space (the machine I am on right now looks like it would do that in about 5 seconds) on the first connection, which would get them to about 2^16 possible keys; but on the second connection, they could filter these keys and get down to a very small number. Now, I realize the draft says to use 64 bits if attackers can probe for ports, but do we have any crypto analysis of any of this? Is 64 bits enough? 32 bits clearly is not. If I could probe for several ports and had an FPGA card in my computer, could I easily figure out 64 bits? Is there discussion on this you can point me at? I suspect I would be much more comfortable with something that had been looked at lots, like AES counter mode. Again, I'm not even qualified to suggest anything here, but I'm looking for evidence the crypto stuff was seriously looked at.

Alg 5: The idea that an end user should configure N does not seem practical. How would they figure out what to do? Most of the implementors I know would choose 500 for the default because it was in the RFC and the RFC was golden and you MUST do exactly what it says, unless of course it is a SHOULD, in which case they don't even bother to read it, much less do it; but I digress. The 500 is going to wrap around after a mere 256 ports while at the same time providing only 8 bits of security, which seems like it would be inadequate for many cases. This is harmful in that it passes an impossibly hard problem, choosing a good value of N, to the end user, and it will not provide security while providing the illusion of being more useful than it probably is. If this algorithm is not a good choice, remove it from the BCP. If it is only a good choice for certain cases, make it clear in which cases it should be used instead of the other ones.

Section 3.5 provides some ideas about pros and cons of the various algorithms but no real advice on which ones to use and when.
Would it be possible to pick one algorithm and just recommend that? If the view is that we need to develop experience to find out which one of these is best for a general OS, then this should be Experimental, not BCP. I find this far from what I would expect in a BCP on such an important topic. I think it could be vastly improved by having the security folks define an algorithm, working with the Apps, RAI, and BEHAVE folks to make sure that it does no more harm than necessary to existing applications, and overall making it a tight specification where it is clear what an implementation MUST do to be compliant. A draft where vendors can have very poor implementations and still claim to be compliant is not good.
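On the port-parity point raised above: a parity-preserving variant of the selection step is easy to sketch. This is illustrative only (my own construction, not one of the draft's algorithms); RTP conventionally uses an even port, with RTCP on the odd port one above:

```python
import random

def pick_port_with_parity(want_even, min_port=1024, max_port=65535):
    """Pick a random ephemeral port with the requested parity.

    Preserving parity keeps the RTP convention (even port for RTP,
    odd port+1 for RTCP) intact while still randomizing the choice.
    """
    port = random.randint(min_port, max_port)
    if (port % 2 == 0) != want_even:
        # Nudge to the requested parity, staying inside the range.
        port += 1 if port < max_port else -1
    return port
```

Constraining parity costs exactly one bit of the search space, which seems a small price for not breaking deployed RTP receivers.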
For the IESG more than the authors... My current understanding of BCP would imply this should be a PS that updates TCP, UDP, DCCP, and SCTP, but I don't really understand why it is a BCP. Though "ephemeral pot" sounds of some relevance to my discuss, I suspect you want a s/ephemeral pot/ephemeral port/. Having made a nearly infinite number of typos in my life, I did like this one.
There are a couple of issues I would like to discuss before moving to No Objection for this document...

(1) Section 3.2 states that "ephemeral port selection algorithms should use the whole range 1024-49151." [As noted in the comment section, I believe that 49151 should be 65535.] I get the concept of using the largest possible range, but this seems to violate the spirit of RFC 4340, among others. The following paragraph notes that this range includes assigned ports so "this may not be possible". Upon review, a very significant number of ports in the range 1024-49151 have been assigned. I would like to understand how to determine which of the set of IANA-registered ports should be made available for ephemeral port selection.

(2) There are issues with the computation of next_ephemeral in Algorithm #1 which will skew the selected ports. While the impact of this issue in isolation is relatively minor, the fix is very straightforward. (See follow-up email.)

(3) Depending upon which IANA-registered ports are available for ephemeral port selection, issues 1 and 2 in combination with Algorithm #1 can create a situation where certain ports in the range 1024-49151 are significantly more likely to be selected. (See follow-up email.)
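The kind of skew at issue (a port immediately above an in-use port inherits that port's selection probability) can be seen in a quick Monte Carlo sketch. The simplified collision-and-fall-through loop here is my reading of the increment-on-collision behavior, not the draft's exact Algorithm #1:

```python
import random
from collections import Counter

def simulate(trials=200_000, lo=10, hi=19, in_use=frozenset({14})):
    """Count how often each port is selected when a collision with an
    in-use port falls through to the next port (wrapping at the top)."""
    counts = Counter()
    for _ in range(trials):
        p = random.randint(lo, hi)
        while p in in_use:            # collision: try the next port
            p = lo if p == hi else p + 1
        counts[p] += 1
    return counts

# Port 15 sits just above the in-use port 14, so it absorbs 14's
# probability mass and is chosen about twice as often as other ports.
```

A selection that re-draws randomly on collision, instead of incrementing, avoids this bias.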
Section 3.1, final paragraph: s/DCCP is not affected is not affected/DCCP is not affected/

Section 3.2 states:

   As mentioned in Section 2.1, the dynamic ports consist of the range
   49152-65535. However, ephemeral port selection algorithms should use
   the whole range 1024-49151.

Shouldn't the whole range be "1024-65535"?
I support the issues raised by Pasi, Robert and Tim in their DISCUSSes
Before suggesting that NATs follow the recommendations in this document, there should be more discussion of the impact of the recommendations on deployed systems using symmetric RTP/RTCP that expect sequential binding.
The text should acknowledge that applications using RTP are really at the mercy of what their underlying UDP implementation (for the current majority of RTP users anyway) chooses to do with this recommendation.
Section 4: If this document is supposed to make recommendations on NAT behavior, I think it needs to discuss when that makes sense in the context of the terminology of the NAT behavior documents, like RFC 4787 and RFC 5382, which do discuss port assignment in the NAT. As I see it, NAT behavior around port obfuscation depends on at least three things: the NAT's port assignment rule (whether it is port-preserving); what it does when it fails to preserve; and whether it has port parity preservation. So I find the text underspecified, yet it still gives recommendations that go against the current BCPs. Thus we must also consider whether this document actually updates these BCPs.
draft-ietf-dnsext-dnssec-gost
I support Tim's and Russ's DISCUSS.
Section 6.1 says:

   DNSSEC aware implementations SHOULD be able to support RRSIG and
   DNSKEY resource records created with the GOST algorithms as defined
   in this document.

Yet, the IANA Considerations in Section 8 say that support for this algorithm is OPTIONAL. These seem to be in conflict. The 'SHOULD' needs to be removed, and the sentence reworded to clearly state that support for this algorithm is OPTIONAL.
Please consider the comments from the Gen-ART Review by Vijay Gurbani on 19-Feb-2010: 1) In the Abstract, the draft has references of the form "[DRAFT1, DRAFT2, DRAFT3]". I would humbly suggest that these be removed from the Abstract and placed in the body of the document. In their current form, these references appear, well ... temporary, given their names (DRAFT1, etc.) I have come across services that index IETF RFCs and also include the abstract in the index. In that context, having an Abstract include references to seemingly impermanent placeholders appears disconcerting. Note that references to RFC numbers themselves -- as the Abstract also shows -- is okay since RFC numbers denote some sort of permanence. 2) The references "DRAFT1" etc. seem to best fit in Section 1, paragraph 4.
When I read that it was chosen to send these blobs on the wire "as is" without transformation of endianness: do I understand correctly that the byte order is not specified? Needless to say, that does not sound interoperable to me.
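To illustrate the interoperability concern (a generic sketch, not specific to the GOST structures): the same 32-bit word serializes to different octet sequences depending on host byte order, so "as is" is ambiguous on the wire unless the order is pinned down:

```python
import struct

word = 0x01020304

big = struct.pack(">I", word)     # network (big-endian) byte order
little = struct.pack("<I", word)  # what "as is" means on a little-endian host

assert big == b"\x01\x02\x03\x04"
assert little == b"\x04\x03\x02\x01"
# Two conforming implementations on different hosts would disagree on
# the wire format unless the specification fixes the byte order.
```

This is why DNS wire formats normally mandate network byte order explicitly.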
I generally have no objections to this work, but I have a couple of issues that need to be discussed before I can recommend approval of this document:

1) In Section 6.2:

   Any DNSSEC-GOST implementation is required to have either NSEC or
   NSEC3 support.

(COMMENT) I think this should use RFC 2119 language. But more importantly, I think this is missing a Normative Reference to RFC 5155. If that is the case, then you should also register the new hashing algorithm in the following IANA registry: <http://www.iana.org/assignments/dnssec-nsec3-parameters/dnssec-nsec3-parameters.xhtml>

2) In Section 8:

   This document updates the RFC 4034 Digest Types assignment (Section
   A.2) by adding the value and status for the GOST R 34.11-94
   algorithm:

      Value   Algorithm        Status
      {TBA2}  GOST R 34.11-94  OPTIONAL

I think you meant the following IANA registry: <http://www.iana.org/assignments/ds-rr-types/ds-rr-types.xhtml>? Can you please confirm?
6.1. Support for GOST signatures

   DNSSEC aware implementations SHOULD be able to support RRSIG and
   DNSKEY resource records created with the GOST algorithms as defined
   in this document.

Use of this SHOULD was debated in detail on the SecDir mailing list. People have suggested that this should be a MAY. I don't think the choice of SHOULD versus MAY actually matters in this case, because this document doesn't say that it "Updates" RFC 4034 (and I think a good case can be made that it shouldn't include "Updates: RFC 4034"), and because the document clearly states that the newly registered algorithms are OPTIONAL to support. Note that I also generally agree with concerns about introducing additional signature/hashing algorithms for use in DNSSEC; however, I think that any DNSSEC policy on this is out of scope for the document. And the document is currently silent on this anyway.
6.1. Support for GOST signatures

   DNSSEC aware implementations SHOULD be able to support RRSIG and
   DNSKEY resource records created with the GOST algorithms as defined
   in this document.

There has been extensive discussion of this topic on the ietf and secdir lists. IMHO, this document has demonstrated community consensus, but with a "MAY support" rather than a "MUST support".
draft-ietf-ccamp-gmpls-mln-extensions
draft-ietf-pkix-tamp
An extensive document, so it is not surprising that I am able to find some small concerns. I don't think that any is a major show-stopper, but they do all need attention.

---

Slightly puzzled by:

   This specification is intended to satisfy the protocol-related
   requirements expressed in Trust Anchor Management Requirements
   [I-D.draft-ietf-pkix-ta-mgmt-reqs] and uses vocabulary from that
   document.

Since that document is work in progress and not yet finalized, how can this document know whether it does/will satisfy the requirements? It sounds as though this document should wait for the other to at least complete WG last call, if not be approved.

---

I had some trouble working out what an implementation is to do in the event of some error conditions. For example:

1. In Section 2.2.3.1:

   A content-type attribute MUST contain the same object identifier as
   the content type contained in the EncapsulatedContentInfo.

2. Two places in the document say that in the event of specific errors, messages MUST be rejected as malformed. However, Section 5 does not list such a status code. It is probable that both of the malformation cases are actually covered by more specific status codes and the text needs to be updated.

3. Section 4.3:

   Attempts to change a trust anchor added as a TBSCertificate using a
   TrustAnchorChangeInfo MUST fail. Attempts to change a trust anchor
   added as a TrustAnchorInfo using a TBSCertificateChangeInfo MUST
   fail.

I think a careful pass through the document to examine the use of "MUST" will reveal a number of cases where the protocol action in the event of a breach is not clear.

---

Section 4.1:

   If the digital signature on the TAMP Status Query message is valid,
   sequence number checking is successful, the signer is authorized,
   and the trust anchor store is an intended recipient of the TAMP
   message, then a TAMP Status Response message SHOULD be returned. If
   a TAMP Status Response message is not returned, then a TAMP Error
   message SHOULD be returned.
Can you say why the final SHOULD is not a MUST? That is, under what circumstances may an implementation that decides not to return a Status Response simply swallow the Status Query?
Question for the Apps ADs: does the HTTP usage need to say anything about caches?
Updated; issues starting from #14 are new (and COMMENT #8 is new as well):

1) 1.3. Architectural Elements

   A globally unique algorithm identifier MUST be assigned for each
   one-way hash function, digital signature generation/validation
   algorithm, and symmetric key unwrapping algorithm that is
   implemented. To support CMS, an object identifier (OID) is assigned
   to name a one-way hash function, and another OID is assigned to name
   each combination of a one-way hash function when used with a digital
   signature algorithm. Similarly, certificates associate OIDs assigned
   to public key algorithms with subject public keys, and certificates
   make use of an OID that names both the one-way hash function and the
   digital signature algorithm for the certificate issuer digital
   signature.

Is there any particular IANA registry to choose OIDs from? This might affect interoperability.

3) 1.3.2. Trust Anchor Store

   o The trust anchor store SHOULD support the use of an apex trust
     anchor. If apex support is provided, the trust anchor store MUST
     support the secure storage of exactly one apex trust anchor. The
     trust anchor store SHOULD support the secure storage of at least
     one additional trust anchor. Each trust anchor MUST contain a
     unique public key. A public key MAY appear at most one time in a
     trust anchor store.

I think use of the last MAY is wrong; it looks like you are trying to say "MUST NOT appear more than once".

4) 2.2.3.3. Content-Hints Attribute

   o contentDescription is OPTIONAL. The TAMP message signer MAY
     provide a brief description of the purpose of the TAMP message.
     The text is intended for human consumption, not machine
     processing. The text is encoded in UTF-8 [RFC3629], which
     accommodates most of the world's writing systems.

<<This requires a language tag.>>

Russ: Section 2.9 of RFC 2634 defines the content-hints attribute, and it does not include a language tag. It is being used here in the same manner that it is used in S/MIME. It seems too late to bring on new requirements. That said, I'm pleased to work with you to define a new attribute that includes a language tag that can be used in all of the places that content-hints is used today. I do not think we should block this document for it.

   The implementation MUST provide the capability to constrain the
   character set.

How can this MUST be specified? UTF-8 is already a character set, so it is not clear what you mean here. Maybe you meant a script?

5) I think the following Informative reference should be Normative:

   [RFC4049] Housley, R., "BinaryTime: An Alternate Format for
             Representing Date and Time in ASN.1", RFC 4049, April
             2005.

due to the following text in Section 2.2.3.4:

   The TAMP message originator MAY include a binary-signing-time
   attribute, specifying the time at which the digital signature was
   applied to the TAMP message. The binary-signing-time attribute is
   defined in [RFC4049].

6) 4. Trust Anchor Management Protocol Messages

   o The TAMP Error message SHOULD be signed. It uses the following
     object identifier: { id-tamp 9 }.

   Support for Trust Anchor Update messages is REQUIRED. Support for
   all other message formats is RECOMMENDED.

Even support for the TAMP Error message is not a MUST? I also think you need to say which end can generate which type of message. For example, can a Trust Anchor Store generate a TAMP Status Query message?

7) 4. Trust Anchor Management Protocol Messages

   Each TAMP query and update message include an indication of the type
   of response that is desired. The response can either be terse or
   verbose. All trust anchor stores SHOULD support both the terse and
   verbose responses and SHOULD generate a response of the type
   indicated in the corresponding request.

What would happen if the first SHOULD is violated? Does this mean that the recipient MUST support both versions (as the sender might not send the version requested)?

8) 4.1. TAMP Status Query

   If the digital signature on the TAMP Status Query message is valid,
   sequence number checking is successful, the signer is authorized,
   and the trust anchor store is an intended recipient of the TAMP
   message, then a TAMP Status Response message SHOULD be returned. If
   a TAMP Status Response message is not returned, then a TAMP Error
   message SHOULD be returned.

It looks like the two SHOULDs allow for no message to be returned. I think this would affect interoperability. (The same issue applies to all other commands.)

9) 4.1. TAMP Status Query

   The uri field can be used to identify a target, i.e., a trust anchor
   store, using a Uniform Resource Identifier.

I think you need a normative reference to the URI spec (RFC 3986) here.

10) 4.1. TAMP Status Query

   TargetIdentifier ::= CHOICE {
     hwModules   [1] HardwareModuleIdentifierList,
     communities [2] CommunityIdentifierList,
     allModules  [3] NULL,
     uri         [4] IA5String,
     otherName   [5] AnotherName }

Are any of the choices mandatory to implement?

11) 4.2. TAMP Status Query Response

   TrustAnchorChoiceList ::= SEQUENCE SIZE (1..MAX) OF
     TrustAnchorChoice }

This doesn't look like valid ASN.1: extra "}"?

12) 4.2. TAMP Status Query Response

   o tampSeqNumbers is OPTIONAL. When present, it is used to indicate
     the currently held sequence number for each trust anchor
     authorized to sign TAMP messages. The keyId field identifies the
     trust anchor and the seqNumber field provides the current sequence
     number associated with the trust anchor.

I am confused here. How can this be converted to

   TAMPMsgRef ::= SEQUENCE {
     target TargetIdentifier,
     seqNum SeqNumber }

?

13) 4.3.1. Trust Anchor List

   [I-D.ietf-pkix-ta-format] defines the TrustAnchorList structure to
   convey a list of trust anchors. TAMP implementations MAY process
   TrustAnchorList objects as TAMPUpdate objects with terse set to
   terse, msgRef set to allModules (with a suitable sequence number)
   and all elements within the list contained within the add field.

You lost me; I can't find where in the document the TrustAnchorList is used. Can you please clarify?

14) In Section 5: Is the list of error codes extensible?

15) In Section 5:

   o badUnsignedAttrs is used to indicate that the unsignedAttrs within
     SignerInfo contains an attribute other than the
     contingency-public-key-decrypt-key unsigned attribute, which is
     the only unsigned attribute supported by this specification.

But Section 2.2.4 says:

   The TAMP message originator SHOULD NOT include other unsigned
   attributes, and any unrecognized unsigned attributes MUST be
   ignored.

Which means that this error code can never be returned by a compliant implementation.

16) In Section 5:

   o notAuthorized is used to indicate one of two possible error
     situations. In one case the sid within SignerInfo leads to an
     installed trust anchor, but that trust anchor is not an authorized
     signer for the received TAMP message content type. Identity trust
     anchors are not authorized signers for any of the TAMP message
     content types.

Is there any way in TAMP to discover or manage who is authorized to perform an action? (Is this partially covered by Section 7?)

17) 4.11. TAMP Error

   TAMPError ::= SEQUENCE {
     version [0] TAMPVersion DEFAULT v2,
     msgType OBJECT IDENTIFIER,

What happens if the msgType couldn't be parsed? No TAMP Error message can be generated? Maybe it would be better to make this field OPTIONAL.

18) Were MIME media types submitted for review to the ietf-types@ mailing list?

19) C.2. TAMP Status Response Message

   An HTTP-based TAMP Status Response message is composed of the
   appropriate HTTP headers, followed by the binary value of the DER
   encoding of the TAMPStatusResponse, wrapped in a CMS body as
   described in Section 2.

Here and in other sections describing "response messages": Is this an HTTP response?

20) The HTTP mapping seems to be underspecified. For example, it needs to discuss cache control behavior for responses.
1) 1.3.1. Cryptographic Module If only one one-way hash function is present, it MUST be consistent with the digital signature validation and digital signature generation algorithms. If only one digital signature validation algorithm is present, it must be consistent with the apex trust anchor operational public key. If only one digital signature generation algorithm is present, it must be consistent with the cryptographic module digital signature private key. Change a couple of "must"s to "MUST"s? 2) 1.3.3. TAMP Processing Dependencies TAMP processing MUST include the following capabilities: o TAMP processing MUST have a means of locating an appropriate trust anchor. Two mechanisms are available. The first mechanism is based on the public key identifier for digital signature verification, and the second mechanism is based on the trust anchor X.500 distinguished name and other X.509 certification path controls for certificate path discovery and validation. The first mechanism MUST be supported, but the second mechanism can also be used. Does this mean that the second mechanism is OPTIONAL? I don't think the text is very clear. 3). Also in 1.3.3: o TAMP processing MUST have read and write access to secure storage for trust anchors in order to update them. Update operations include adding trust anchors, removing trust anchors, and modifying trust anchors. Application-specific access controls MUST be securely stored with each management trust anchor as described in Section 1.3.4. I am not sure. Does 1.3.4 cover this? 4) 1.3.4. Application-Specific Protocol Processing The application-specific protocol processing MUST be provided the following services: It looks like a preposition is missing here. 5) 2.2.3.3. Content-Hints Attribute o contentType is mandatory. This field indicates the content type that will be discovered when CMS protection content types are removed. 
I think it would be good to add here that if the content-type discovered after removing encapsulation doesn't match this value, then the message MUST be discarded. 6) 4. Trust Anchor Management Protocol Messages TAMP specifies eleven message types. The following provides the content type identifier for each TAMP message type, and it indicates whether a digital signature is REQUIRED. This doesn't look like the right use for an RFC 2119 keyword. 7) In Section 4.3: o change is used to update the information associated with an existing management or identity trust anchor in the trust anchor store. [...] Attempts to change a trust anchor added as a TBSCertificate using a TrustAnchorChangeInfo MUST fail. Attempts to change a trust anchor added as a TrustAnchorInfo using a TBSCertificateChangeInfo MUST fail. As you already listed appropriate error codes for some other failures, it would be a good idea to list the corresponding error code(s) here as well. 8). Security consideration fields for registered MIME media types are not well written. For example, section B.1 says: Security considerations: Carries a request for status information. So what? What needs to be protected? What are the risks from using this media type to applications? Etc. -------------------------- The following [former] DISCUSSes are listed here with some explanation of why they are not going to be addressed in this document: 2) 1.3.1. Cryptographic Module o The cryptographic module MUST support at least one one-way hash function, one digital signature validation algorithm, one digital signature generation algorithm, and, if contingency keys are supported, one symmetric key unwrapping algorithm. Is there a mandatory to implement algorithm? Russ: PKIX has never specified mandatory to implement algorithms. The reason is that other protocols make use of certificates, and these other protocols often dictate algorithm requirements. 
For example, S/MIME does have mandatory to implement algorithms, and S/MIME depends on PKIX certificate specifications. The same convention is being followed here.
draft-ietf-ipfix-export-per-sctp-stream
I agree with the point raised by Lars' Discuss. With respect to Alexey's Discuss, I think the current SCTP-RESET reference and text is just the right way to handle reference to future work that would otherwise block this RFC from proceeding.
The RFC Editor is bound to ask you to move the Introduction to be Section 1. - - - - I agree with the question about Informational status.
The Gen-ART Review by Ben Campbell on 1 March 2010 raised a major issue. Ben said: > -- section 4.5, general: > > I am confused as to how the collector determines the > exporter supports this extension. If I understand correctly > (and it's probable that I do not, since this is my first real > exposure to IPFix), the collector basically has to infer > exporter support from the behavior of the exporter. But then > the second paragraph after the numbered list (i.e. 2 > paragraphs after item 4) says: > > "In the case where the Exporting Process does not support the > per-SCTP-stream extension, then the first Data Record received > by the Collecting Process will disable the extension for the > specific Exporter on the Collecting side." > > This seems to conflict. Why would the collector need to worry > about items 1-4 if it can categorically determine exporter > support from the first data record? > > In general, though, I think that having the collector infer > support is not the right way to do this. It would be far > better to explicitly signal support, if that is at all > possible in IPFix. Otherwise, it seems like the collector has > to watch every record for violations of 1-4, and make fairly > complex decisions on a per-record basis. Are heuristics the best that can be done to determine whether the exporter supports the per-SCTP-stream extension?
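To make the concern in Ben's review concrete, here is a hedged sketch (class, field, and rule names are invented for illustration, not taken from the draft) of the per-exporter state a Collecting Process would have to maintain if exporter support for the per-SCTP-stream extension is inferred rather than explicitly signalled:

```python
# Hypothetical sketch of per-exporter state at the Collecting Process when
# support for the per-SCTP-stream extension must be inferred from behavior.
# All names here are invented for illustration.

class ExporterState:
    def __init__(self):
        # Optimistically assume support until evidence says otherwise.
        self.per_stream_supported = True

    def on_data_record(self, record):
        # Every record must be watched for violations of the draft's
        # rules 1-4; a single violation disables the extension for this
        # exporter on the Collecting side.
        if self.per_stream_supported and self._violates_rules(record):
            self.per_stream_supported = False
        return self.per_stream_supported

    def _violates_rules(self, record):
        # Placeholder for the draft's rules 1-4 (e.g. that templates and
        # the data records using them arrive on the same SCTP stream).
        return record.get("template_stream") != record.get("data_stream")
```

The sketch shows why heuristic detection is fragile: the collector must evaluate every record and flip state on the first violation, rather than learning support once from an explicit capability signal.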
The Gen-ART Review by Ben Campbell on 1 March 2010 includes some minor issues in addition to the major one that prompted my DISCUSS position. Please consider them.
draft-lha-gssapi-delegate-policy
Hilarie Orman's SecDir review has some editorial suggestions and nits that should be considered: http://www.ietf.org/mail-archive/web/secdir/current/msg01474.html
draft-singh-geopriv-pidf-lo-dynamic
It wouldn't hurt to add a definition or pointer to a definition of "presentity" to the Terminology section.
I'm not entering a Discuss, but I have a number of fairly strong Comments that I hope you will feel able to debate in email and make updates to the draft accordingly.

---

Your Abstract says...

This document defines PIDF-LO extensions that are intended to convey information about moving objects.

Perhaps you could be a little more affirmative? Such as:

This document defines PIDF-LO extensions to convey information about moving objects.

---

Why is the directional component of acceleration not supplied?

---

In Section 3.1

The <orientation> and <heading> establish a direction.

Aren't they both directions in their own right? And can't they be different? <orientation> establishes a "direction of facing" while <heading> establishes a "direction of travel".

---

In Section 3.1

Angular measures are expressed in degrees and values MAY be negative.

Are you sure that this is an RFC 2119 "MAY"? Wouldn't "may" be perfectly adequate?

---

In Section 3.1

The first measure specifies the horizontal direction from the current position of the presentity to a point that it either pointing towards or travelling towards.

You (I hope) don't mean "either". Hopefully there is a little more predictability! I think you mean:

The first measure specifies the horizontal direction from the current position of the presentity to a point that it is pointing towards (for <orientation>) or travelling towards (for <heading>).

---

In Section 3.1

The second measure, if present, specifies the vertical component of this angle. This angle is the elevation from the local horizontal plane. If the second angle value is omitted, the vertical component is unknown and the speed measure MAY be assumed to only contain the horizontal component of speed.

Well, surely it is only if the second angle value of <heading> is omitted that you can make that assumption. If the second angle of <orientation> is absent, it says nothing about speed.
Additionally, when you say "MAY" in this case, it implies that the normal case is something else that you have not stated. --- Section 5 At the very least, you are introducing additional information that may be distributed. Knowledge of that information makes a presentity more vulnerable, therefore the definition of additional Presence Information puts further weight behind the need to use security mechanisms.
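To illustrate the point about the second <heading> angle (a sketch only, not text from the draft): if the heading carries both a horizontal bearing and an elevation angle above the local horizontal plane, a scalar speed decomposes into horizontal and vertical components by simple trigonometry; if the elevation is omitted, the vertical component is simply unknown.

```python
import math

def speed_components(speed, elevation_deg=None):
    """Split a scalar speed into (horizontal, vertical) components using
    the optional second <heading> angle, the elevation above the local
    horizontal plane in degrees. If the elevation is omitted, the
    vertical component is unknown (None) and the speed can at best be
    assumed to be horizontal."""
    if elevation_deg is None:
        return speed, None  # vertical component unknown
    e = math.radians(elevation_deg)
    return speed * math.cos(e), speed * math.sin(e)
```

For example, a speed of 10 m/s with elevation 0 is purely horizontal, while elevation 90 would make it purely vertical; without the angle only the scalar is known.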
Waiting to see if any LC comments are received. LC ends Mar 24.
I am agreeing with a comment from Adrian/Tim on velocity. 3.1. Angular Measures and Coordinate Reference Systems [RFC5491] constrains the coordinate reference system (CRS) used in PIDF-LO to World Geodetic System 1984 (WGS 84) using either the two- dimensional (latitude, longitude) CRS identified by "urn:ogc:def:crs:EPSG::4326" or the two-dimensional (latitude, s/two-dimensional/three-dimensional longitude, altitude) CRS identified by "urn:ogc:def:crs:EPSG::4979".
I would also like to see some discussion of Adrian's comments. I am particularly interested in the decision to adopt the commonly used scalar definition of acceleration instead of the vector definition of acceleration. Was this considered?
I would like to have a short discussion and the confirmation of the security experts that the following issue does not constitute a problem. The Security Considerations section is remarkably short, saying just:

> This document defines additional location elements carried by PIDF-LO. No additional security considerations beyond those described in RFC 4119 [RFC4119] are applicable to this document.

RFC 4119 points back to RFC 3694 and RFC 3693 (section 7.4) to describe the threat model and the security requirements imposed on geopriv as a result of the threat model. However, in my reading of these two documents they seem to take into consideration only the threats related to the current location information, while this draft introduces dynamic information that may be used by attackers to anticipate the future location of a host. To my understanding the security considerations referred to in RFC 4119 may still be enough to cover the new scenarios, but I would like to have this confirmed by the security reviewers.
draft-ietf-sipping-update-pai
Please consider the comments from the Gen-ART Review by Francis Dupont on 2009-05-06: - Behaviour -> Behavior (i.e., American spelling) - ToC page 2: Acknowledgements -> Acknowledgments - 1 page 3: the right place to introduce common abbrevs: UAC, UAS, URI... - 2 page 3: UAC and URI abbrevs should be introduced - 2 page 4: same for UAS - 2 page 4: standardised -> standardized - 3.1 page 4: same for PSTN (I suggest in "o PSTN gateways;") - 3.2 page 6: poor wording: "with methods that are not provided for in RFC 3325 or any other RFC." - 6 page 10: standardised -> standardized - 7 page 10 (title): Acknowledgements -> Acknowledgments
draft-ietf-pkix-attr-cert-mime-type
A quick question: should this document point to RFC 5755 (which obsoletes 3281), or is the reference to 3281 intentional?
The RFC Editor notes appear to have been implemented in the latest revision.
draft-ietf-ipsecme-esp-null-heuristics
It's good that we have this document. However, I had a few issues that we should at least discuss before shipping the document to the RFC Editor.

First, the document says:

In both IPv4 and IPv6 the heuristics can also check the IP addresses either to be in the known range (for example check that both IPv6 source and destination have same prefix etc), or checking addresses across more than one packet.

I do not understand what you mean by the same prefix here. Obviously in IPv6 the source and destination do not have to have the same prefix.

Second, the document says:

One way to enforce deep inspection for all traffic, is to forbid encrypted ESP completely, in which case ESP-NULL detection is easier, as all packets must be ESP-NULL based on the policy, and further restrictions can eliminate ambiguities in ICV and IV sizes.

Is this a circular argument? If per policy everything is ESP-NULL, why do you have to check to begin with? And if there's someone who might be sending encrypted ESP, then you don't appear to be able to assume anything special about the packets. Perhaps you wanted to say that if there are some servers that are in control of the network's owner, they can require ESP-NULL and hence everyone who talks with those servers is forced to do ESP-NULL. But I'm not sure what it has to do with the topic of this document.

Finally, I think the document should be clearer about the failure modes and implications of those failures. Even if the probability of misclassification is small, if it causes the packet to be dropped per policy this can have significant negative effects. It's not even clear that retransmission would always help. (Or would it, if there was a new IV and the packet would look randomly different?) Does the document recommend dropping all encrypted packets as one mode of operation? Please describe what the effect of classification failures is in this case (false positive for detecting encrypted packet).
Does the document recommend dropping packets that match a certain pattern? Please describe what the effect of failures is in this case (false negative for detecting encrypted packet).
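For readers unfamiliar with the kind of heuristic under discussion, here is an illustrative sketch (not code from the draft): one common ESP-NULL check is to assume an ICV length, strip it, and test whether the ESP trailer looks sane, i.e. a plausible Next Header value and the RFC 4303 self-describing padding sequence 1, 2, 3, ... The false positives and false negatives the Discuss asks about arise exactly because an encrypted payload can occasionally pass these checks, and a genuine ESP-NULL packet can fail them under a wrong ICV-length guess.

```python
# Illustrative ESP-NULL heuristic sketch; constants and thresholds are
# assumptions for the example, not requirements from the draft.

PLAUSIBLE_NEXT_HEADERS = {4, 6, 17, 41, 58}  # IPv4, TCP, UDP, IPv6, ICMPv6

def looks_like_esp_null(esp_body, icv_len):
    """Heuristically test whether an ESP body (the bytes after SPI and
    sequence number) could be ESP-NULL, assuming an ICV of icv_len
    bytes. Misclassification in both directions is possible."""
    if len(esp_body) < icv_len + 2:
        return False
    trailer = esp_body[:len(esp_body) - icv_len]
    pad_len = trailer[-2]
    next_header = trailer[-1]
    if next_header not in PLAUSIBLE_NEXT_HEADERS:
        return False
    if pad_len > len(trailer) - 2:
        return False
    padding = trailer[len(trailer) - 2 - pad_len:len(trailer) - 2]
    # RFC 4303 default padding is the monotonic sequence 1, 2, 3, ...
    return all(b == i + 1 for i, b in enumerate(padding))
```

A random (encrypted) trailer byte pair passes the next-header test with probability roughly 5/256, which is the kind of residual false-positive rate that motivates the request for a clearer failure-mode discussion.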
A thorough piece of work. Thanks. I think the Abstract may be a little terse. to quickly decide whether given packet flow is interesting or not This phrase doesn't make anything clear. I would prefer you say what you are attempting to determine and why.
The heuristics seem too weak to recommend for UDP. The misclassification of UDP such as RTP as IPSEC seems like it will do more harm than good. DPI devices will misclassify then fail to apply the right policy. It will be extremely hard to debug in the network as it will only happen to some of the RTP stream.
draft-westerlund-mmusic-3gpp-sdp-rtsp
The Gen-ART Review by Wassim Haddad on 2010-03-01 included some editorial improvements. Please consider them.
draft-krawczyk-hkdf
The Gen-ART Review by Avshalom Houri on 2010-03-02 raised an editorial comment. Please consider it.

> Section 2.2 Step 1: Extract
>
> It is not easy to understand the way that the functions are defined.
> For example what is the relationship between:
>
> PRK = HKDF-Extract(salt, IKM)
> and
> PRK = HMAC-Hash(salt, IKM)
>
> Why Hash is an option?
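For readers puzzled by the same point: in the HKDF draft the two quoted lines define the same value, because HKDF-Extract is simply HMAC keyed with the salt, instantiated with whatever hash function the application chooses (that is why "Hash" is a parameter). A minimal self-contained sketch of both HKDF steps, written against the draft's definitions:

```python
import hashlib
import hmac

def hkdf_extract(salt, ikm, hash_name="sha256"):
    # HKDF-Extract(salt, IKM) is literally HMAC-Hash(salt, IKM):
    # the salt is the HMAC key, the input keying material the message.
    if not salt:
        # Per the draft, an absent salt defaults to HashLen zero bytes.
        salt = b"\x00" * hashlib.new(hash_name).digest_size
    return hmac.new(salt, ikm, hash_name).digest()

def hkdf_expand(prk, info, length, hash_name="sha256"):
    # HKDF-Expand stretches the PRK into `length` bytes of output
    # keying material by chaining HMAC blocks with a one-byte counter.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hash_name).digest()
        okm += block
        counter += 1
    return okm[:length]
```

So `PRK = HKDF-Extract(salt, IKM)` and `PRK = HMAC-Hash(salt, IKM)` are the definition and its instantiation, which is the relationship the reviewer asked about.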
draft-nottingham-http-stale-controls
Just a little Discuss that I hope to clear on the telechat after discussion with other ADs. I don't believe that the use of RFC 2119 language and the reference to that document are consistent with this being an Informational RFC. On the other hand, I don't understand why this is not Standards Track.