Last Call Review of draft-ietf-tcpm-tcp-lcd

Request Review of draft-ietf-tcpm-tcp-lcd
Requested rev. no specific revision (document currently at 03)
Type Last Call Review
Team Security Area Directorate (secdir)
Deadline 2010-08-24
Requested 2010-07-30
Authors Arnd Hannemann, Alexander Zimmermann
Draft last updated 2010-08-25
Completed reviews Secdir Last Call review of -?? by Catherine Meadows
Assignment Reviewer Catherine Meadows 
State Completed
Review review-ietf-tcpm-tcp-lcd-secdir-lc-meadows-2010-08-25
Review completed: 2010-08-25


I have reviewed this document as part of the security directorate's ongoing effort to review all IETF documents being processed by the IESG. These comments were written primarily for the benefit of the security area directors. Document editors and WG chairs should treat these comments just like any other last call comments.

This document proposes an algorithm to make TCP more robust to long connectivity disruptions. Currently, TCP has no way of distinguishing disruptions due to connectivity loss from disruptions due to congestion. Thus, TCP will back off when faced with connectivity loss, which leads to further delays. The proposed algorithm uses ICMP destination unreachable messages as indications of a connectivity disruption, and alters the behavior of TCP accordingly.
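For readers unfamiliar with the scheme, the core reaction to a retransmission timeout can be sketched roughly as follows. This is an illustrative simplification of my reading of the draft, not its normative pseudocode; the class and field names are my own invention:

```python
class Connection:
    """Minimal stand-in for per-connection TCP sender state (illustrative only)."""
    def __init__(self, rto=1.0, rto_max=60.0):
        self.rto = rto
        self.rto_max = rto_max
        # Set when an ICMP destination unreachable message matching an
        # outstanding segment of this connection has been received.
        self.icmp_unreachable_pending = False
        self.retransmissions = 0

    def retransmit_earliest_outstanding(self):
        self.retransmissions += 1


def on_retransmission_timeout(conn):
    """React to an RTO expiry, in the spirit of the draft (simplified)."""
    if conn.icmp_unreachable_pending:
        # Connectivity disruption suspected: retransmit but keep the RTO
        # constant, so the sender keeps probing at the same rate and
        # detects restored connectivity quickly.
        conn.icmp_unreachable_pending = False
    else:
        # Standard TCP behavior: assume congestion and back off exponentially.
        conn.rto = min(conn.rto * 2, conn.rto_max)
    conn.retransmit_earliest_outstanding()
```

The security-relevant point is visible in the branch: whoever can make `icmp_unreachable_pending` true controls whether the sender backs off.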

My impression from reading this draft is that the behavior and utility of this algorithm will depend on further research and experimentation. There are a number of situations in which congestion and long connectivity disruptions can still be confused, and these may need further exploration. The authors of the document do a good job of pointing these out, but I would have liked to see more evidence that the recommended solutions are optimal, and under what circumstances. This is especially the case for the security issues, although it is not limited to those. For example, in the discussion of probing frequency in Section 5.4, the authors claim that in their belief their algorithm's approach is preferable to others that would give a higher probing frequency, but they need to provide more evidence to back this up.

The security considerations section itself is rather sketchy, and does not support the authors' assertion that the algorithm is "considered to be secure." The greatest security threat posed by this algorithm is that an attacker could exploit it to persuade a TCP sender that communication problems due to congestion are actually due to a connectivity problem, leading the sender to contribute further to the congestion. However, the authors mention only one possible attack: forging ICMP destination unreachable messages, which they present only as an "example" of an attack. I would recommend a more complete discussion, considering each of the potential ambiguity cases discussed in the document, and describing how an attacker could exploit them and how such exploitation could be prevented or mitigated. You might also want to discuss the opposite problem: how an attacker could convince a sender that a connectivity problem is a congestion problem. This is less serious, at least for the moment, since in the current situation that is exactly what happens, but it could become more of a threat further down the line if people come to rely more on this ability to disambiguate.
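With respect to forged ICMP messages, the natural mitigation (as I understand the draft's intent) is to accept an ICMP destination unreachable message only if the TCP segment quoted in its payload matches data the connection actually has outstanding, so that an off-path attacker must guess a valid sequence number. A rough sketch, with my own illustrative names and ignoring sequence-number wraparound:

```python
from dataclasses import dataclass

@dataclass
class ConnState:
    """Illustrative subset of sender state needed for the check."""
    local_port: int
    remote_port: int
    snd_una: int   # oldest unacknowledged sequence number
    snd_nxt: int   # next sequence number to be sent


def icmp_plausibly_genuine(conn, quoted_seq, quoted_src_port, quoted_dst_port):
    """Accept an ICMP destination unreachable message only if the TCP
    header quoted in its payload corresponds to this connection and to
    data that is actually outstanding (simplified; real TCP would use
    modular sequence-number comparison)."""
    if (quoted_src_port, quoted_dst_port) != (conn.local_port, conn.remote_port):
        return False
    # The quoted sequence number must fall within the unacknowledged
    # window; an off-path attacker has to guess it.
    return conn.snd_una <= quoted_seq < conn.snd_nxt
```

A fuller security considerations section could state explicitly how much guessing work this check imposes on an off-path attacker, and which of the ambiguity cases it does not cover.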

Catherine Meadows
Naval Research Laboratory
Code 5543
4555 Overlook Ave., S.W.
Washington, DC 20375
phone: 202-767-3490
fax: 202-404-7942
catherine.meadows at