Last Call Review of draft-baeuerle-netnews-cancel-lock-05
review-baeuerle-netnews-cancel-lock-05-secdir-lc-mandelberg-2017-06-15-00

Request Review of draft-baeuerle-netnews-cancel-lock
Requested revision No specific revision (document currently at 09)
Type Last Call Review
Team Security Area Directorate (secdir)
Deadline 2017-06-28
Requested 2017-05-31
Authors Michael Bäuerle
I-D last updated 2017-06-15
Completed reviews Genart Last Call review of -05 by Paul Kyzivat (diff)
Secdir Last Call review of -05 by David Mandelberg (diff)
Genart Telechat review of -06 by Paul Kyzivat (diff)
Secdir Telechat review of -06 by David Mandelberg (diff)
Opsdir Telechat review of -06 by Joel Jaeggli (diff)
Assignment Reviewer David Mandelberg
State Completed
Request Last Call review on draft-baeuerle-netnews-cancel-lock by Security Area Directorate Assigned
Reviewed revision 05 (document currently at 09)
Result Not ready
Completed 2017-06-15
I have reviewed this document as part of the security directorate's
ongoing effort to review all IETF documents being processed by the IESG.
These comments were written primarily for the benefit of the security
area directors. Document editors and WG chairs should treat these
comments just like any other last call comments.

The summary of the review is Not ready.

The authentication in this document is single-use per article. That is,
once a single supersede or cancel has been issued for an article, anybody
can "forge" further valid supersedes or cancels for the same article. I
assume that a cancel followed by a forged cancel is unimportant, but what
about cancel->supersede, supersede->cancel, or supersede->supersede?
Also, what about the race condition in which an attacker propagates the
forgery faster than the original propagates?

This document recommends calculating a single key K for each article
(section 4), then publishing base64(hash(base64(K))) values for multiple
different hash algorithms. This means that the preimage resistance of
the weakest hash algorithm places an upper bound on the security of the
authentication, even if the receiver ignores weaker algorithms. (An
attacker who can calculate K from the weak hash can generate valid keys
for the stronger hashes.) Additionally, while plenty of research goes
into preimage security of individual hash algorithms, I don't think as
much research goes into preimage security of multiple algorithms used in
parallel on the same input. While I don't know of any non-brute-force
attacks that can find X given sha256(X) and sha512(X), I see no reason
it couldn't be easier than the easier of the two individual preimage
attacks. (I am not an expert, though; there might be something I'm
missing.)
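For concreteness, the construction described above, and the resulting
weakest-link bound, can be sketched as follows. This is a hypothetical
illustration in Python: the key derivation, header syntax, and algorithm
list are assumptions for the sketch, not taken from the draft.

```python
import base64
import hashlib
import hmac

def cancel_key(secret: bytes, mid: bytes) -> bytes:
    # Hypothetical per-article key K, derived from a local secret and
    # the article's Message-ID (derivation details assumed here).
    k = hmac.new(secret, mid, hashlib.sha256).digest()
    return base64.b64encode(k)

def cancel_locks(key_b64: bytes, algs=("sha256", "sha512")) -> list[str]:
    # One K is hashed with several algorithms. An attacker who recovers
    # base64(K) by breaking preimage resistance of the *weakest*
    # algorithm can recompute every other lock value from it.
    return [
        alg + ":" + base64.b64encode(hashlib.new(alg, key_b64).digest()).decode()
        for alg in algs
    ]
```

Note that `cancel_locks` takes only `base64(K)` as input, which is the
point of the review comment: once that value is known, all of the
published lock values, for every algorithm, follow from it.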

Section 4: Is it ever possible for two different (uid, mid) pairs to
have the same concatenated value? E.g., alice@example.co + mfoo and
alice@example.com + foo. If that ever happened and one of the two
articles was canceled, an attacker would be able to cancel the other
article.
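The ambiguity behind this question can be demonstrated directly. The
length-prefixed encoding below is only one hypothetical fix, not
something the draft specifies:

```python
# Plain concatenation of (uid, mid) is ambiguous: two distinct pairs
# can produce the same input string (values are illustrative).
pair_a = ("alice@example.co", "mfoo")
pair_b = ("alice@example.com", "foo")
assert pair_a != pair_b
assert pair_a[0] + pair_a[1] == pair_b[0] + pair_b[1]

# An unambiguous encoding, e.g. length-prefixing each field,
# removes the collision:
def encode(uid: str, mid: str) -> bytes:
    u, m = uid.encode(), mid.encode()
    return len(u).to_bytes(4, "big") + u + len(m).to_bytes(4, "big") + m

assert encode(*pair_a) != encode(*pair_b)
```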

Section 4: I don't understand Q1. Are you asking me if the existing
implementations are doing something insecure? (It's not specified well
enough for me to tell.)

Section 4: I think you should run any user-supplied password through a
key derivation function before using it as a MAC key.
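A minimal sketch of that suggestion, assuming PBKDF2 as the key
derivation function. The iteration count, salt handling, and use of
HMAC-SHA256 are illustrative choices, not from the draft:

```python
import hashlib
import hmac
import os

def mac_key_from_password(password: str, salt: bytes) -> bytes:
    # Stretch the user-supplied password with a KDF before using the
    # result as a MAC key; the parameters here are illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)  # would need to be stored for later verification
key = mac_key_from_password("correct horse battery staple", salt)
tag = hmac.new(key, b"<article@example.invalid>", hashlib.sha256).digest()
```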

Section 7: As I understand the terms, you care about preimage
resistance, but not second-preimage resistance. (I believe preimage
resistance covers finding any input that produces the specified output,
not only the input that originally generated it. But I might be
misunderstanding the terms.)

Section 7: I don't know where the minimum key sizes come from, but they
seem a bit low to me. And for Q2, I don't know, sorry.

-- 
David Eric Mandelberg / dseomn
http://david.mandelberg.org/