
Update to the Cryptographic Message Syntax (CMS) for Algorithm Identifier Protection
draft-ietf-lamps-cms-update-alg-id-protect-05

Yes: Roman Danyliw, Warren Kumari

No Objection: Murray Kucherawy, Éric Vyncke, (Alissa Cooper), (Alvaro Retana),
(Barry Leiba), (Deborah Brungard), (Martin Duke), (Martin Vigoureux),
(Robert Wilton)

Note: This ballot was opened for revision 03 and is now closed.

Roman Danyliw
Yes
Warren Kumari
Yes
Erik Kline
No Objection
Comment (2020-08-19 for -03) Sent
[ section 1 ]

* "the associate parameters" -> "the associated parameters" perhaps

[ section 3.5 ]

* "the TSA MUST use same digest" -> "the TSA MUST use the same digest"
  I think
Murray Kucherawy
No Objection
Éric Vyncke
No Objection
Alissa Cooper Former IESG member
No Objection
No Objection (for -03) Not sent

                            
Alvaro Retana Former IESG member
No Objection
No Objection (for -03) Not sent

                            
Barry Leiba Former IESG member
No Objection
No Objection (for -03) Not sent

                            
Benjamin Kaduk Former IESG member
No Objection
No Objection (2020-08-25 for -03) Sent
Thanks for this; it's perhaps a bit overdue.  Section-by-section comments below.

Section 1

   In an algorithm substitution attack, the attacker looks for a
   different algorithm that produces the same result as the algorithm
   used by the originator.  As an example, if the signer of a message
   used SHA-256 [SHS] as the digest algorithm to hash the message
   content, then the attacker looks for a weaker hash algorithm that
   produces a result that is of the same length.  The attacker's goal is
   to find a different message that results in the same hash value,
   which is commonly called a collision.  [...]

The described scenario seems to be a cross-algorithm collision, which is
not, in my experience, the most common usage of the unqualified term
"collision".  In some sense, it seems that the task for an attacker is
to find (within the context of the "weak" algorithm) a first preimage
for the digest value that is computed by the honest participant (and is
likely to be using a "strong" algorithm).

   Further, when a digest algorithm produces a larger result than is
   needed by a digital signature algorithm, the digest value is reduced
   to the size needed by the signature algorithm.  This can be done both
   by truncation and modulo operations, with the simplest being
   straightforward truncation.  [...]

(But which of truncation and modulo is to be used is fixed by the
algorithm ID, right?  Perhaps a slight rewording to avoid indicating
that the attacker has a free choice is in order.)
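
(As a purely illustrative aside, not drawn from the draft: the reduction
being described is the familiar "keep the leftmost bits" step that some
signature algorithms apply when the digest is longer than they need.  A
minimal Python sketch, with a hypothetical 32-byte target size:)

    import hashlib

    # Illustration only: a digest longer than the signature algorithm needs
    # is reduced by keeping the leftmost bytes (straightforward truncation).
    message = b"example content"
    full_digest = hashlib.sha512(message).digest()   # 64 bytes

    SIG_INPUT_LEN = 32   # hypothetical size expected by the signature algorithm
    reduced = full_digest[:SIG_INPUT_LEN]            # truncation, not a modulo reduction

    print(len(full_digest), len(reduced))            # -> 64 32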

   This document makes two updates to CMS to provide similar protection
   for the algorithm identifier.  [...]

nit: the discussion of how X.509 protects the algorithm
identifier/parameters was four paragraphs ago already; I'd suggest a
bit more exposition about what we are providing "similar protection" to.

Section 3.1

The preexisting text allows implementations to fail to validate
signatures in some cases (when using a digest algorithm not included in
the SignedData digestAlgorithms set); do we want to say anything about
allowing (or requiring?) implementations to fail to validate signatures
if the two digest algorithms are different?

Section 3.2

      When the signedAttrs field is present, the same digest algorithm
      MUST be used to compute the digest of the encapContentInfo
      eContent OCTET STRING, which is carried in the message-digest
      attribute, and the collection of attributes that are signed.

nit: there may be a grammar nit here, relating to the parallelism of
"compute the digest of" -- I think "the collection of attributes that
are signed" should also have an "of" or "digest of" prefix.
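
(Not a further comment on the text, just to make the quoted rule concrete:
on the verification side the single SignerInfo digestAlgorithm ends up
being used for both digests, which is how the "same digest algorithm"
requirement is enforced.  A minimal Python sketch, assuming the ASN.1
structures were already decoded into plain values elsewhere:)

    import hashlib

    # Sketch only; all inputs are hypothetical, already-decoded values.
    DIGESTS = {"2.16.840.1.101.3.4.2.1": hashlib.sha256}   # id-sha256 only, for brevity

    def verify_digests(digest_alg_oid, econtent, message_digest_attr, der_signed_attrs):
        # The one digestAlgorithm from the SignerInfo is used both for the
        # eContent digest (compared against the message-digest attribute) and
        # for the digest over the DER-encoded signed attributes.
        h = DIGESTS[digest_alg_oid]
        if h(econtent).digest() != message_digest_attr:
            raise ValueError("message-digest attribute does not match eContent")
        return h(der_signed_attrs).digest()   # the value the signature must cover

    econtent = b"example eContent"
    md_attr = hashlib.sha256(econtent).digest()
    signed_attrs_der = b"\x31\x00"            # placeholder bytes, not real DER
    print(verify_digests("2.16.840.1.101.3.4.2.1", econtent, md_attr, signed_attrs_der).hex())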

Section 3.5

   When producing the TimeStampToken, the TSA MUST use same digest
   algorithm to compute the digest of the encapContentInfo eContent,
   which is an OCTET STRING that contains the TSTInfo, and the message-
   digest attribute within the SignerInfo.

(There's an implicit "in order to comply with the requirement introduced
above" in here, right?)

   To ensure that TimeStampToken values that were generated before this
   update remain valid, no requirement is placed on a TSA to ensure that
   the digest algorithm for the TimeStampToken matches the digest
   algorithm for the MessageImprint embedded within the TSTTokenInfo.

I assume that "TSTTokenInfo" is a typo for "TSTInfo"?
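
(Illustration of the backward-compatibility allowance above, with
hypothetical decoded values: the MessageImprint hash algorithm and the
digest algorithm in the token's own SignerInfo live in different places
and, for tokens issued before this update, are not required to be equal.)

    # Hypothetical decoded values; not taken from a real token.
    message_imprint_alg = "1.3.14.3.2.26"                # id-sha1, in TSTInfo.messageImprint
    token_signer_digest_alg = "2.16.840.1.101.3.4.2.1"   # id-sha256, in the token's SignerInfo

    # Per the quoted text, a checker of a pre-update token validates each value
    # in its own context instead of requiring the two to match.
    if message_imprint_alg != token_signer_digest_alg:
        print("algorithms differ; acceptable for tokens issued before this update")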

Section 4

I like this quote from RFC 6211:

%                     There is a convention, undocumented as far as I
% can tell, that the same hash algorithm should be used for both the
% content digest and the signature digest.  [...]

It seems we are now documenting this as more than just convention :)

   This section updates [RFC5652] to recommend that the originator
   include the CMSAlgorithmProtection attribute [RFC6211] whenever
   signed attributes or authenticated attributes are present.

Why is the recommendation scoped to only the case when protected
attributes are already present?  Is the recommendation not generically
applicable even when this would be the only protected attribute?
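
(To make the recommendation concrete rather than to question it: the
RFC 6211 attribute carries the digest and signature (or MAC) algorithm
identifiers under the signature, and a recipient simply cross-checks them
against the corresponding unprotected SignerInfo fields.  A minimal Python
sketch over hypothetical decoded values:)

    # Hypothetical decoded values; OIDs are id-sha256 and sha256WithRSAEncryption.
    signer_info = {
        "digestAlgorithm": "2.16.840.1.101.3.4.2.1",
        "signatureAlgorithm": "1.2.840.113549.1.1.11",
    }
    alg_protection_attr = {                  # signed CMSAlgorithmProtection attribute
        "digestAlgorithm": "2.16.840.1.101.3.4.2.1",
        "signatureAlgorithm": "1.2.840.113549.1.1.11",
    }

    def check_algorithm_protection(si, attr):
        # Reject if the protected identifiers disagree with the unprotected ones.
        if attr["digestAlgorithm"] != si["digestAlgorithm"]:
            raise ValueError("protected digest algorithm does not match SignerInfo")
        if attr["signatureAlgorithm"] != si["signatureAlgorithm"]:
            raise ValueError("protected signature algorithm does not match SignerInfo")

    check_algorithm_protection(signer_info, alg_protection_attr)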

Section 6

   The security properties of the CMS [RFC5652] signed-data and
   authenticated-data content types are updated to ensure that algorithm
   identifiers are adequately protected, which makes algorithm
   substitution attacks significantly more difficult.

Is "ensure" the right word when we only recommend (not require) the use
of the CMSAlgorithmProtection attribute?

   Therefore, it remains important that a signer have a way to signal to
   a recipient which digest algorithms are allowed to be used in
   conjunction with the verification of an overall signature.  This
   signalling can be done as part of the specification of the signature
   algorithm in an X.509v3 certificate extension [RFC5280], or some

I'm not entirely sure I'm picturing what is intended by "part of the
specification of the signature algorithm in an X.509v3 certificate
extension" -- how is the signature algorithm relying on an X.509v3
extension?

   The CMSAlgorithmProtection attribute [RFC6211] offers protection for
   the algorithm identifiers used in the signed-data and authenticated-
   data content types.  However, no protection is provided for the
   algorithm identifiers in the enveloped-data, digested-data, or
   encrypted-data content types.  Likewise, The CMSAlgorithmProtection
   attribute provides no protection for the algorithm identifiers used
   in the authenticated-enveloped-data content type defined in
   [RFC5083].

I feel like we should say something about why we do not provide
protection for those content types (e.g., why it is believed to be safe
to not have such protection).

Section 8.2

The reference to RFC 3161 in Section 3.5 is facially adding a new
MUST-level requirement for processing of the structures from RFC 3161,
which would qualify as a normative reference in my interpretation of
https://www.ietf.org/about/groups/iesg/statements/normative-informative-references/
.  (However, I believe that the "MUST" in that section is just repeating
the requirement from a previous section in the more-specific context, so
could safely be rewritten to not have the appearance of a new normative
requirement, in which case RFC 3161 could properly remain as an
informative reference.)
Deborah Brungard Former IESG member
No Objection
No Objection (for -03) Not sent

                            
Martin Duke Former IESG member
No Objection
No Objection (for -03) Not sent

                            
Martin Vigoureux Former IESG member
No Objection
No Objection (for -03) Not sent

                            
Robert Wilton Former IESG member
No Objection
No Objection (for -03) Not sent