Last Call Review of draft-ietf-httpbis-message-signatures-16
review-ietf-httpbis-message-signatures-16-artart-lc-alvestrand-2023-03-06-00

Request Review of draft-ietf-httpbis-message-signatures
Requested revision No specific revision (document currently at 19)
Type Last Call Review
Team ART Area Review Team (artart)
Deadline 2023-02-20
Requested 2023-02-06
Authors Annabelle Backman, Justin Richer, Manu Sporny
I-D last updated 2023-03-06
Completed reviews Secdir Telechat review of -17 by Daniel Migault (diff)
Artart Telechat review of -17 by Harald T. Alvestrand (diff)
Secdir Early review of -05 by Daniel Migault (diff)
Artart Last Call review of -16 by Harald T. Alvestrand (diff)
Opsdir Last Call review of -16 by Bo Wu (diff)
Secdir Last Call review of -16 by Daniel Migault (diff)
Genart Last Call review of -16 by Dan Romascanu (diff)
Assignment Reviewer Harald T. Alvestrand
State Completed
Request Last Call review on draft-ietf-httpbis-message-signatures by ART Area Review Team Assigned
Posted at https://mailarchive.ietf.org/arch/msg/art/xA1XHD5SxX6p1uNtL3YTxPv1mIc
Reviewed revision 16 (document currently at 19)
Result Not ready
Completed 2023-03-06
Overall opinion: This approach is wrong.

There are two basic problems with this document.

One: This document does not describe a security function.
Instead, it is a toolbox of security components that can be applied in an
application in various combinations depending on the application’s security
needs and tolerance for risk.

This means that it’s impossible to evaluate, based on this document alone,
whether it is fit for purpose or not.

Two: The approach taken - that of assuming that a bit exact canonical form can
be regenerated from a message transferred via any combination of HTTP
functional units - is a very tall order. This approach resembles DKIM, which
has caused widespread havoc in the email ecosystem by its intolerance of common
mailing list behaviors. The complexity of even trying this task is shown by the
fact that ¼ of this 108-page spec is devoted just to the canonicalization
mechanisms - even when several complex topics are handled by referencing other
specifications.

As such, I would not recommend this going on the standards track at this time.

IF it is possible to:
- Describe 2 or more “applications” (in the document’s terminology) that serve
a useful function in securing some part of the ecosystem against some attack
- Implement these functions in a way that exercises a fairly comprehensive
subset of the behaviors mandated in this document
- Run the resulting application in a real environment for some significant
period of time, and observe that the number of canonicalization errors
resulting in validation failure is insignificant to zero

THEN it seems to me reasonable to place this on the standards track.

Until then, I think this best belongs as an experimental protocol that people
can implement to gather experience with, not something that the IETF should
publish as a consensus standards-track protocol.

The rest of this review concerns smaller issues.

Larger issues
==============
Versioning of the protocol is not defined. For example, in 2.1.1, the
serialization of structured fields says that the signer MAY include the sf
parameter, and that the serialization MUST follow STRUCTURED-FIELDS and its
“extensions and updates”. There is no mechanism to indicate which version of
STRUCTURED-FIELDS the signer uses; how can one be sure that we always get a
version that the verifier can reconstruct?
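
To make the concern concrete, a small Python sketch (the field name, values,
and the two serializers are invented for illustration): two parties that
follow different “extensions and updates” will re-serialize the same field
differently, and nothing in the signature base records which rules were in
force.

    def sf_serialize_v1(value):
        # Stand-in for one reading of the STRUCTURED-FIELDS dictionary rules.
        return ", ".join(p.strip() for p in value.split(","))

    def sf_serialize_v2(value):
        # Stand-in for a hypothetical "update" that serializes slightly differently.
        return ",".join(p.strip().replace(" ", "") for p in value.split(","))

    raw = " a=1 ,  b=2 "
    signer_line   = '"example-dict";sf: ' + sf_serialize_v1(raw)
    verifier_line = '"example-dict";sf: ' + sf_serialize_v2(raw)
    assert signer_line != verifier_line  # verification fails; nothing signals which rules applied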

This can be handwaved away by saying “this must be specified by the
application” - but since we have no description of what an application spec
would look like - neither in examples nor in rules - we can’t know if this will
be handled at that level.

The Accept-Signature: field seems more dangerous than described in the spec. In
particular, if the attacker knows the value of some field set, the attacker can
use Accept-Signature: as an oracle: it can obtain a valid signature over that
field set, under the signer’s signing key, by specifying an Accept-Signature:
that includes that field only (plus overhead). This can then be used in a
replay attack together with unsigned components against other entities that
trust the signer.
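
A sketch of the scenario I have in mind, in Python for concreteness (the field
and key names are invented, and the Accept-Signature value only approximates
the spec’s syntax):

    # The attacker already knows the value of some field, say "X-Account: 42",
    # and asks the signer to sign (essentially) only that field:
    probe_headers = {
        "X-Account": "42",
        "Accept-Signature": 'probe=("x-account");keyid="signer-key"',
    }
    # If the signer honours this, the attacker now holds a signature, under the
    # signer's key, covering only "x-account": 42.  That signature can be
    # replayed on a different message - with other, unsigned components altered -
    # toward any verifier that trusts the signer's key.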

Smaller issues
==============
These are more at the level of nits - worth fixing or making the meaning more
obvious, but they are not show-stoppers by any means. These are listed by
section, sequentially.

1.1 - Definitions: “Unix time” is not defined. “Key identifier” is used but not
defined.

2 - Canonicalization. The text is not explicit that case differences in field
names do not matter; it just implies it (by lowercasing everything).
Cache-Control: and cache-control: are the same header, and if both occur, they
must be merged. Be explicit.

The spec assumes that case is not being changed in any field value over which
signatures are computed. This should be called out.
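
For concreteness, the behaviour I believe the text intends, sketched in Python
(field names and values are invented):

    def canonical_field_line(name, field_lines):
        # Lowercase the field name; merge all field lines whose names match
        # case-insensitively, in order, separated by ", ".
        values = [v.strip() for (n, v) in field_lines if n.lower() == name.lower()]
        return '"' + name.lower() + '": ' + ", ".join(values)

    field_lines = [("Cache-Control", "max-age=60"), ("cache-control", "must-revalidate")]
    print(canonical_field_line("Cache-Control", field_lines))
    # -> "cache-control": max-age=60, must-revalidate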

2.1.1 - use of ;bs - the term “known by the application to cause problems with
canonicalization” is handwaving. Step 3 of this algorithm seems to assume that
all field values have a unique ASCII representation; is this assumption
warranted?
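
(For reference, my understanding of what ;bs does, sketched in Python with an
invented field name; if this understanding is wrong, that rather proves the
point about clarity:)

    import base64

    def bs_component_value(field_line_values):
        # Each field line's raw bytes become a Byte Sequence (base64 in colons);
        # multiple lines are joined with ", ".
        return ", ".join(
            ":" + base64.b64encode(v.encode()).decode() + ":" for v in field_line_values
        )

    print('"example-header";bs: ' + bs_component_value(["value, with, commas"]))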

2.2 - the term “printable character” is undefined. Are we dealing with
0x20-0x7E (ASCII) or some subset thereof, or do Unicode characters occur here?

All of section 2.2 seems to assume that we’re dealing only with HTTP URLs. This
assumption should be made explicit.

For @authority, the reference for normalization does NOT specify lowercasing;
it says “The scheme and host are case-insensitive and normally provided in
lowercase”. Please be explicit that lowercasing MUST be done when computing the
signature base.

(Since I’m working with IDNs, I have to ask whether the @authority consists of
A-labels or U-labels; I suspect that the answer is “obviously A-labels”, but
offhand, I can’t find the sentence that states this.)
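
What I would expect, but cannot find stated, sketched in Python (the host is an
invented example, and Python’s built-in idna codec stands in for whatever IDNA
profile is actually intended):

    host = "Bücher.Example"
    authority = host.encode("idna").decode()   # -> "xn--bcher-kva.example" (A-labels, lowercased)
    print('"@authority": ' + authority)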

Section 2.3 assumes that “Unix time” is an integer number of seconds. This
depends on what definition of “Unix time” is used (Unix “man 2 time” gives you
the integer representation; other representations include fractional seconds.)
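
That is, is the created parameter the integer or the fractional form? In Python
terms (outputs are illustrative):

    import time
    print(int(time.time()))   # e.g. 1677682725          (integer seconds)
    print(time.time())        # e.g. 1677682725.493201   (fractional seconds)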

In section 2.4, when multiple modifiers are used, is there a convention for
their order, or do you depend on the verifier using the signer’s order when
reconstructing the signature base?
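
A concrete form of the question (the component identifier and parameter
combination are invented for illustration): must these two spellings be treated
as the same covered component, or must the verifier reproduce the signer’s
byte-for-byte order?

    a = '"example-dict";req;key="member"'
    b = '"example-dict";key="member";req'
    print(a == b)   # False - so the bases differ unless the order is pinned down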

In section 2.5, the signature base is computed with LF as a line ending, as
opposed to the CRLF line ending conventionally used in HTTP/1.1. This should be
called out, justified, or changed.
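
In other words, as I read 2.5, the base is assembled along these lines (values
borrowed loosely from the style of the document’s examples):

    lines = [
        '"@method": GET',
        '"@authority": example.com',
        '"@signature-params": ("@method" "@authority");created=1618884473',
    ]
    signature_base = "\n".join(lines)   # bare LF between lines, no trailing newline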

Section 4.3, discussing a proxy re-signing a message where it knowingly damages
the message so that its original signature can’t be verified, is confusing. The
text seems to be saying that the original (now failing) signature will be
forwarded, so the final verifier will probably try to verify both signatures,
have one fail and one succeed, and have to take the proxy’s word that the
original signature was OK. This means, of course, that the proxy can carry out
any attack it desires.

More worrisome is that the text does not call out explicitly that this is what
is expected: that an assertion from a trusted signer that another signature is
to be believed, even when it verifies as bad, should cause the final verifier
to suspend disbelief. Being very explicit here would be good.

Section 5.2 uses the undefined notion of “fail the processing” for an
Accept-Signature. What is supposed to happen to the request in that case? A 500
error, or just ignoring the Accept-Signature request?

In reading section 7, there seem to be a number of things that are punted in
the direction of “the application”. This calls out again that there is no
guidance in the document about what an application needs to look like.

Section 7.5.6 details the difficulties in signing the Set-Cookie header (a
major attack target). If the mechanism can’t handle this, is it worth doing?

Section 7.5.7 assumes that all header values can be validated. This seems like
a tall order, since the concept of “validation” isn’t well defined. (You can’t
validate an x-undefined: header.)