Internet Engineering Task Force                               S. Dawkins
INTERNET DRAFT                                             G. Montenegro
                                                                 M. Kojo
                                                               V. Magret
                                                               N. Vaidya

                                                        October 21, 1999

        End-to-end Performance Implications of Links with Errors

                      draft-ietf-pilc-error-02.txt

Status of This Memo

   This document is an Internet-Draft and is in full conformance
   with all provisions of Section 10 of RFC2026.

   Comments should be submitted to the PILC mailing list at
   pilc@grc.nasa.gov.

   Distribution of this memo is unlimited.

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as ``work in
   progress.''

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.


Abstract

   The rapidly-growing World Wide Web is being accessed by an
   increasingly wide range of devices over an increasingly wide
   variety of links. At least some of these links do not provide the
   reliability that hosts expect, and this expansion into unreliable
   links causes some Internet protocols, especially TCP [RFC793], to
   perform poorly.

   Specifically, TCP congestion avoidance procedures [RFC2581], while
   appropriate for connections that lose traffic primarily because of
   congestion and buffer exhaustion, interact badly with connections
   that traverse links with high uncorrected error rates. Senders may
   spend an excessive amount of time waiting for acknowledgements that
   are not coming, whether the losses occur in the forward data path
   or in the acknowledgement return path. Then, even though these
   losses are not due to congestion-related buffer exhaustion, the
   sending TCP transmits at substantially reduced traffic levels as it
   probes the network to determine "safe" traffic levels.

   This document discusses the specific TCP mechanisms that are
   problematic in these environments, and discusses what can be done
   to mitigate the problems without introducing intermediate devices
   into the connection.

   Applications use UDP for a number of reasons, so there may not be
   a single recommendation appropriate for all uses of UDP over high
   error-rate links.

Changes since last draft:

   Document title change.

   Totally re-write Abstract section to focus on technology instead
   of history.

   Included pointer to "Appropriate Byte Counting" experimental
   proposal [ALL99].

   Split section on explicit corruption notification and explicit
   congestion notification, and rewrite to more clearly distinguish
   between the two types of proposals.

   Add a section on "HTTP and the dark side of the force".

   Rewrite section on why TCP windows stay small in the presence of
   uncorrected errors.

   Lots of editorial changes.

   Remove text on SNOOP from this draft (it should be in [PILC-PEP]).

Table of Contents

1.0 Introduction
   1.1 Relationship of this recommendation and [PILC-PEP]
   1.2 Relationship of this recommendation and [PILC-LINK]
2.0 Errors and Interactions with TCP Mechanisms
   2.1 Slow Start and Congestion Avoidance [RFC2581]
   2.2 Fast Retransmit and Fast Recovery [RFC2581]
   2.3 Selective Acknowledgements [RFC2018]
   2.4 Delayed Duplicate Acknowledgements [MV97, VMPM99]
   2.5 Detecting Corruption Loss With Explicit Notifications
      2.5.1 Why we need Explicit Corruption Notification
   2.6 Appropriate Byte Counting [ALL99] (Experimental)
3.0 Summary of Recommendations
   3.1 HTTP and the dark side of the force
4.0 Acknowledgements
5.0 References
Authors' addresses

1.0 Introduction

   It has been axiomatic that most losses on the Internet are due to
   congestion, as routers run out of buffers and discard incoming
   traffic. This observation is the basis for current TCP
   congestion avoidance strategies - if losses are due to congestion,
   there is no need for an explicit "congestion encountered"
   notification to the sender.

   Quoting Van Jacobson in 1988: "If packet loss is (almost) always
   due to congestion and if a timeout is (almost) always due to a
   lost packet, we have a good candidate for the `network is congested'
   signal." [VJ-DCAC]

   This axiom has served the Internet community well, because it
   allowed the deployment of TCPs that have let the Internet
   accommodate explosive growth in link speeds and traffic levels.

   This same explosive growth has attracted users of networking
   technologies that DON'T have low uncorrected error rates -
   including many satellite-connected users, and many wireless Wide
   Area Network-connected users. Users connected to these networks may
   not be able to transmit and receive at anything like available
   bandwidth because their TCP connections are spending time in
   congestion avoidance procedures, or even slow-start procedures, that
   were triggered by corruption losses in the absence of congestion.

   This document makes recommendations about what the participants
   in connections that traverse high error-rate links may wish
   to consider doing to improve utilization of available bandwidth
   in ways that do not threaten the stability of the Internet.

1.1 Relationship of this recommendation and [PILC-PEP]

   This document discusses end-to-end mechanisms that do not require
   TCP-level awareness by intermediate nodes. This places severe
   limitations on what the end nodes can know about the nature of
   losses that are occurring between the end nodes. Attempts to
   apply heuristics to distinguish between congestion and corruption
   losses have not been successful [BV97, BV98, BV98a]. A companion
   PILC document on Performance-Enhancing Proxies, [PILC-PEP],
   relaxes this restriction; because PEPs can be placed on boundaries
   where network characteristics change dramatically, PEPs have an
   additional opportunity to improve performance over links with
   uncorrected errors.

1.2 Relationship of this recommendation and [PILC-LINK]

   This recommendation is for use with TCP over subnetwork technologies
   that have already been deployed. A companion PILC recommendation,
   [PILC-LINK], is aimed at designers of subnetworks that are intended
   to carry Internet protocols but have not yet been completely
   specified, so that those designers have the opportunity to reduce
   the number of uncorrected errors TCP will encounter.

2.0 Errors and Interactions with TCP Mechanisms

   A TCP sender adapts its use of bandwidth based on feedback from
   the receiver. When TCP is not able to distinguish between losses
   due to congestion and losses due to uncorrected errors, it is
   not able to determine available bandwidth.

   Some TCP mechanisms, targeting recovery from losses due to
   congestion, coincidentally assist in recovery from losses due to
   uncorrected errors as well.

2.1 Slow Start and Congestion Avoidance [RFC2581]

   Slow Start and Congestion Avoidance [RFC2581] are essential to
   the Internet's stability. These mechanisms were designed to
   accommodate networks that didn't provide explicit congestion
   notification. Although experimental mechanisms like [RFC2481]
   are moving in the direction of explicit notification, the effect
   of ECN on ECN-aware TCPs is the same as the effect of implicit
   congestion notification through congestion-related loss.

   TCP connections experiencing high error rates interact badly with
   Slow Start and with Congestion Avoidance, because high error rates
   make the interpretation of losses ambiguous - the sender cannot
   tell whether detected losses are due to congestion or to data
   corruption. TCP makes the "safe" choice and assumes that the losses
   are due to congestion.

      - Whenever TCP's retransmission timer expires, the sender
        assumes that the network is congested and invokes slow start.

      - During slow start, the sender increases its window in
        units of segments. This is why it is important to use an
        appropriately sized MTU - and less reliable link layers
        often use smaller MTUs.

   Recommendation: Slow Start and Congestion Avoidance are MUSTS in
   [RFC1122], itself a full Internet Standard. Recommendations in this
   document will not interfere with these mechanisms.
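
   The window arithmetic described in this section can be made concrete
   with a short sketch. The following Python fragment is illustrative
   only - it is not part of any standard, and the SMSS and initial
   values are arbitrary examples - but it shows how cwnd grows once per
   ACK in slow start, roughly once per round trip in congestion
   avoidance, and collapses back to one segment when the retransmission
   timer expires.

      # Illustrative sketch of RFC 2581 window growth; not a complete
      # TCP implementation.  All numeric values are examples only.

      SMSS = 1460                  # sender maximum segment size, bytes

      class CongestionControl:
          def __init__(self):
              self.cwnd = 2 * SMSS       # initial window
              self.ssthresh = 65535      # initial slow start threshold

          def on_new_ack(self):
              if self.cwnd < self.ssthresh:
                  # Slow start: one segment per ACK received, so cwnd
                  # roughly doubles every round trip.
                  self.cwnd += SMSS
              else:
                  # Congestion avoidance: roughly one segment per RTT.
                  self.cwnd += max(1, SMSS * SMSS // self.cwnd)

          def on_retransmission_timeout(self):
              # A timeout is treated as a congestion signal: halve
              # ssthresh (FlightSize is approximated by cwnd here) and
              # return to slow start from one segment.
              self.ssthresh = max(self.cwnd // 2, 2 * SMSS)
              self.cwnd = SMSS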

2.2 Fast Retransmit and Fast Recovery [RFC2581]

   TCPs deliver data as a reliable byte-stream to applications, so
   when a segment is lost (due to either congestion or corruption)
   delivery of data to the receiving application must wait until
   the missing data is received. The receiver detects missing segments
   when later segments arrive with out-of-order sequence numbers.

   TCPs SHOULD immediately send an acknowledgement when data is
   received out-of-order, reporting the next expected sequence number
   with no delay, so that the sender can retransmit the required data
   and the receiver can resume delivery of data to the receiving
   application. When an acknowledgement carries the same expected
   sequence number as an acknowledgement already sent for the last
   in-order segment received, these acknowledgements are called
   "duplicate ACKs".

   Because IP networks are allowed to reorder packets, the receiver may
   send duplicate acknowledgements for segments that are still en route,
   but are arriving out of order due to routing changes, link-level
   retransmission, etc. When a TCP sender receives three duplicate
   ACKs, fast retransmit [RFC2581] allows it to infer that a segment
   was lost. The sender retransmits what it considers to be this lost
   segment without waiting for the full retransmission timeout, thus
   saving time.

   After a fast retransmit, a sender invokes the fast recovery
   [RFC2581] algorithm, whereby it enters congestion avoidance at half
   the previous window rather than falling back to slow start from a
   one-segment congestion window. This also saves time.
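
   A minimal sketch of this sender-side logic follows. It is
   illustrative only - segment transmission, SACK processing and
   ordinary window growth are omitted, and the numeric values are
   examples - but it shows the three-duplicate-ACK trigger, the halving
   of the window, and the fact that the sender does not fall back to a
   one-segment slow start.

      # Illustrative sketch of Fast Retransmit/Fast Recovery (RFC 2581).

      SMSS = 1460
      DUPACK_THRESHOLD = 3

      class FastRecoverySender:
          def __init__(self):
              self.cwnd = 8 * SMSS       # example starting window
              self.ssthresh = 65535
              self.dupacks = 0
              self.in_recovery = False

          def on_ack(self, is_duplicate):
              if is_duplicate:
                  self.dupacks += 1
                  if self.dupacks == DUPACK_THRESHOLD:
                      # Fast Retransmit: infer a loss, resend the missing
                      # segment and halve the window, but do NOT return
                      # to a one-segment slow start.
                      self.ssthresh = max(self.cwnd // 2, 2 * SMSS)
                      self.cwnd = self.ssthresh + 3 * SMSS
                      self.retransmit_missing_segment()
                      self.in_recovery = True
                  elif self.in_recovery:
                      # Each further dupack means another segment has
                      # left the network; inflate cwnd by one segment.
                      self.cwnd += SMSS
              else:
                  if self.in_recovery:
                      # New data acknowledged: recovery is complete, so
                      # deflate cwnd and continue congestion avoidance.
                      self.cwnd = self.ssthresh
                      self.in_recovery = False
                  self.dupacks = 0

          def retransmit_missing_segment(self):
              pass   # a real TCP resends the first unacknowledged segment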

   It's important to be realistic about the maximum throughput that
   TCP can achieve over a connection that traverses a high error-rate
   link. Even using Fast Retransmit/Fast Recovery, the sender will
   halve the congestion window each time a window contains one or
   more segments that are lost, and will re-open the window by only
   about one additional segment per round trip as congestion avoidance
   proceeds. If a connection path traverses a link that loses one or
   more segments during recovery, the one-half reduction takes place
   again, this time on a reduced congestion window - and this downward
   spiral will continue until the connection is able to recover
   completely without experiencing further loss.
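
   The "downward spiral" is easy to see with example numbers. The
   fragment below is arithmetic only - the starting window and the
   spacing of loss events are arbitrary assumptions - showing that each
   loss halves the window while only a few segments of linear growth
   fit in before the next loss.

      # Example arithmetic for repeated losses during recovery.
      cwnd = 32                   # congestion window, in segments
      rtts_between_losses = 4     # example: errors recur frequently

      for loss in range(5):
          cwnd = max(cwnd // 2, 1)        # halve on each loss event
          cwnd += rtts_between_losses     # ~1 segment per RTT afterwards
          print("after loss %d: cwnd = %d segments" % (loss + 1, cwnd))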

   In general, TCP can increase its congestion window beyond the
   delay-bandwidth product. However, in links with high error rates,
   the TCP window may remain rather small for long periods of time,
   due to any of the following reasons:

      1. HTTP/1.0, and HTTP/1.1 in the absence of persistent
      connections, close TCP connections to indicate boundaries
      between requested resources. This means that these applications
      are constantly closing "trained" TCP connections and opening
      "untrained" TCP connections which will execute slow start,
      beginning with one or two segments.

      2. TCP's congestion avoidance strategy is additive-increase,
      multiplicative-decrease, which means that if additional
      errors are encountered during recovery, the effect on the
      congestion window is a "downward spiral" - "reduce by 50 percent,
      recover by 20 percent, reduce by 50 percent due to the next
      error ...".

      3. Often small socket buffers are recommended with high
      error-rate links in order to prevent the RTO from inflating.

      4. The typical "file size" transferred over a connection
      experiencing high loss rates is relatively small (Web requests,
      Web document objects, email messages, etc.). In particular,
      users of links with high error rates are often unwilling to
      carry out large transfers, as the response time is so long.

      5. If a TCP path with high uncorrected error rates DOES cross
      a highly congested wireline Internet path, congestion losses
      on the Internet have the same effect as losses due to corruption.
      (Editorial: "sometimes even paranoids have enemies!")

   A small window - especially a window of less than four segments -
   effectively prevents the sender from taking advantage of Fast
   Retransmit, because a lost segment cannot be followed by enough
   segments to generate the three duplicate ACKs that trigger it.
   Moreover, efficient recovery from multiple losses within a single
   window requires adoption of newer proposals (NewReno [RFC2582]).

   Recommendation: Implement Fast Retransmit and Fast Recovery at
   this time. This is a widely-implemented optimization and is
   currently at Proposed Standard level. [RFC2488] recommends
   implementation of Fast Retransmit/Fast Recovery in satellite
   environments.  NewReno [RFC2582] apparently does help a sender
   better handle partial ACKs and multiple losses in a single
   window, but at this point is not recommended due to its
   experimental nature.  Instead, SACK (Selective Acknowledgements)
   is the preferred mechanism.

2.3 Selective Acknowledgements [RFC2018]

   Selective Acknowledgements allow the repair of multiple segment
   losses per window without requiring one round-trip per loss.
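
   The fragment below is an illustrative sketch of the receiver side of
   this idea; the sequence numbers are arbitrary examples and option
   encoding is ignored. The point is that one acknowledgement describes
   every hole in the window, so the sender can repair them all in a
   single round trip.

      # Illustrative sketch of SACK blocks (RFC 2018).  Byte ranges are
      # half-open [start, end); all values are examples.

      cumulative_ack = 1000         # next byte expected in order
      out_of_order = [(2000, 3000), (4000, 5000)]   # queued out of order

      def build_sack_blocks(segments):
          """Coalesce queued out-of-order data into SACK blocks."""
          blocks = []
          for start, end in sorted(segments):
              if blocks and start <= blocks[-1][1]:
                  blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))
              else:
                  blocks.append((start, end))
          return blocks

      # This single ACK tells the sender that only bytes 1000-1999 and
      # 3000-3999 are missing, so both holes can be retransmitted at
      # once instead of being discovered one round trip at a time.
      print("ACK", cumulative_ack, "SACK", build_sack_blocks(out_of_order))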

   Selective acknowledgements are most useful in LFNs ("Long Fat
   Networks"), because of the long round trip times that may be
   encountered in these environments, according to Section 1.1 of
   [RFC1323]. They are especially useful if large windows are required,
   because there is a considerable probability of multiple segment
   losses per window.

   In low-speed, high error-rate environments (for example, the
   wireless WAN environment), TCP windows are much smaller, and burst
   errors must be much longer in duration in order to damage multiple
   segments. Accordingly, the complexity of SACK may not be
   justifiable, unless there is a high probability of both burst
   errors and congestion.

   [SACK-EXT] proposes an extension to SACK that allows receivers to
   provide more information about the order of delivery of segments,
   allowing "more robust operation in an environment of reordered
   packets, ACK loss, packet replication, and/or early retransmit
   timeouts".

   Recommendation: SACK [RFC2018] is a Proposed Standard. Implement
   SACK now for compatibility with other TCPs. Monitor [SACK-EXT] for
   possible future use.

2.4 Delayed Duplicate Acknowledgements [MV97, VMPM99]

   When link layers try aggressively to correct a high underlying
   error rate, it is imperative to prevent interaction between
   link-layer retransmission and TCP retransmission as these layers
   duplicate each other's efforts. In such an environment it may
   make sense to delay TCP's efforts so as to give the link-layer a
   chance to recover. With this in mind, the Delayed Dupacks [MV97,
   VMPM99] scheme selectively delays duplicate acknowledgements
   at the receiver.  It may be preferable to allow a local mechanism
   to resolve a local problem, instead of invoking TCP's end-to-end
   mechanism and incurring the associated costs, both in terms of
   wasted bandwidth and in terms of its effect on TCP's window
   behavior.
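
   The sketch below illustrates the receiver-side idea. It is not taken
   from [MV97] or [VMPM99]; the delay value, the per-dupack timers and
   the helper names are assumptions made for the example. The point is
   simply that the third and later duplicate ACKs are held back
   briefly, and are cancelled if link-layer retransmission fills the
   hole first.

      # Illustrative sketch only; the delay value and structure are
      # assumptions, not part of the cited proposals.

      import threading

      DELAY_SECONDS = 0.1    # example; choosing this delay well is the
                             # open question discussed below

      class DelayedDupackReceiver:
          def __init__(self, send_ack):
              self.send_ack = send_ack   # function that emits an ACK
              self.dupacks = 0
              self.held = []             # timers for delayed dupacks

          def on_out_of_order_segment(self, ack_number):
              self.dupacks += 1
              if self.dupacks < 3:
                  self.send_ack(ack_number)       # send immediately
              else:
                  # Hold this dupack; release it only if the hole is
                  # still unfilled after the link layer has had a
                  # chance to retransmit locally.
                  t = threading.Timer(DELAY_SECONDS, self.send_ack,
                                      args=(ack_number,))
                  self.held.append(t)
                  t.start()

          def on_hole_filled(self):
              # The missing segment arrived (for example via link-layer
              # retransmission): cancel the held dupacks so the sender
              # never triggers an end-to-end fast retransmit.
              for t in self.held:
                  t.cancel()
              self.held.clear()
              self.dupacks = 0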

   At this time, it is not well understood how long the receiver
   should delay the duplicate acknowledgments. In particular, the
   impact of the medium access control (MAC) protocol on the
   choice of delay parameter needs to be studied. The MAC
   protocol may affect the ability to choose the appropriate
   delay (either statically or dynamically). In general,
   significant variabilities in link-level retransmission times
   can have an adverse impact on the performance of the Delayed
   Dupacks scheme.

   Recommendation: Delaying duplicate acknowledgements is not a
   standards-track mechanism. It may be useful in specific
   network topologies, but a general recommendation requires
   further research and experience.

2.5 Detecting Corruption Loss With Explicit Notifications

   As noted above, today's TCPs assume that any loss is due
   to congestion, and encounter difficulty in distinguishing
   between congestion loss and corruption loss because this
   "implicit notification" mechanism can't carry both meanings
   at once. [SF98] reports simulation results showing that
   performance improvements are possible when TCP can correctly
   distinguish between losses due to congestion and losses due to
   corruption.

   With explicit notification from the network it is possible to
   determine when a loss is due to corruption. Several proposals
   along these lines include:

   - Explicit Loss Notification (ELN) [BPSK96]

   - Explicit Bad State Notification (EBSN) [BBKVP96]

   - Explicit Loss Notification to the Receiver (ELNR), and
     Explicit Delayed Dupack Activation Notification (EDDAN)
     [MV97]

   - Space Communication Protocol Specification - Transport
     Protocol (SCPS-TP), which uses explicit "negative
     acknowledgements" to notify the sender that a damaged
     packet has been received.

   These proposals offer promise, but none has been proposed as a
   standards-track mechanism for adoption in the IETF.
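
   None of these proposals is reproduced here, but the following
   hypothetical sketch shows the kind of sender behavior explicit
   corruption notification would enable: retransmit the damaged
   segment, and reduce the congestion window only when the loss is not
   flagged as corruption. The flag and the helper function are
   assumptions made for illustration.

      # Hypothetical sketch; not taken from ELN, EBSN, ELNR/EDDAN or
      # SCPS-TP.  It only illustrates acting differently on losses
      # that are known to be corruption.

      SMSS = 1460

      class CorruptionAwareSender:
          def __init__(self):
              self.cwnd = 16 * SMSS
              self.ssthresh = 65535

          def on_loss(self, flagged_as_corruption):
              self.retransmit_missing_segment()
              if flagged_as_corruption:
                  # Corruption, not congestion: no reason to slow down.
                  return
              # Otherwise assume congestion, as today's TCPs must.
              self.ssthresh = max(self.cwnd // 2, 2 * SMSS)
              self.cwnd = self.ssthresh

          def retransmit_missing_segment(self):
              pass   # placeholder for the actual retransmission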

   Recommendation: Researchers should continue to investigate true
   corruption-notification mechanisms, especially mechanisms like
   ELNR and EDDAN [MV97], in which the only systems that need to be
   modified are the base station and the mobile device. We also note
   that the requirement that the base station be able to examine TCP
   headers at link speeds raises performance issues with respect to
   IPSEC-encrypted packets.

2.5.1 Why we need Explicit Corruption Notification

   Explicit Congestion Notification (ECN) [RFC2481] is likely
   closer to widespread deployment on the Internet than any of
   these techniques for explicit notification of corruption loss.
   It would be great if we could use Explicit Congestion Notification
   as a surrogate for Explicit Corruption Notification ("if it wasn't
   congestion, it must have been corruption"), but we can't.
   A word about ECN is in order.

   ECN requires changes to the routing infrastructure to perform
   "active queue management" - to detect impending buffer
   exhaustion, and to randomly drop packets when impending
   buffer exhaustion has been detected, so that sending hosts
   will respond to this implicit notification by slowing their
   transmission rates and avoiding total buffer exhaustion.

   ECN then builds on "active queue management" by providing
   a mechanism for hosts to mark packets as "ECN-capable", and
   for routers to mark ECN-capable packets as "congestion
   experienced" during periods of impending buffer exhaustion.
   This allows ECN-capable routers to provide congestion
   notification to ECN-capable hosts without dropping packets
   that would otherwise have been delivered (because the
   router still has available buffers when the packet arrives).
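
   The router-side behavior just described can be sketched as follows.
   This is illustrative only - the thresholds, the marking probability
   and the packet representation are assumptions, and real active
   queue management [RFC2309] is considerably more subtle - but it
   shows marking taking the place of dropping for ECN-capable packets
   while the router still has buffers available.

      # Illustrative sketch of marking instead of dropping; not a real
      # active queue management algorithm.  All values are examples.

      import random

      MARK_THRESHOLD = 50    # queued packets; example value
      QUEUE_LIMIT = 100      # total buffer, in packets; example value

      def enqueue(queue, packet):
          if len(queue) >= QUEUE_LIMIT:
              return "dropped"             # buffers truly exhausted
          if len(queue) > MARK_THRESHOLD and random.random() < 0.1:
              if packet.get("ecn_capable"):
                  packet["congestion_experienced"] = True   # mark, keep
              else:
                  return "dropped"         # implicit notification by loss
          queue.append(packet)
          return "queued"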

   At first glance, ECN looks like a reasonable alternative to
   the explicit corruption notification mechanisms previously
   discussed. It isn't, because the absence of packets marked
   "congestion experienced" cannot be interpreted by ECN-capable
   TCP connections as a "green light" for aggressive
   retransmission. On the contrary, during periods of extreme
   network congestion routers may drop ECN-marked packets
   because their buffers are exhausted - exactly the wrong time
   for a host to begin retransmitting aggressively.

   Recommendation: ECN is not a standards-track mechanism ([RFC2481]
   is an Experimental RFC). Researchers should implement ECN, but
   should not (mis)use it as a surrogate for explicit corruption
   notification.

2.6 Appropriate Byte Counting [ALL99] (Experimental)

   Researchers have pointed out an interaction between delayed
   acknowledgements and TCP acknowledgement-based self-clocking, and
   various proposals have been made to improve bandwidth utilization
   during slow start. One proposal, called "Appropriate Byte Counting",
   increases cwnd based on the number of bytes acknowledged, instead of
   the number of ACKs received. This proposal refines earlier proposals
   in two ways: it limits the increase in cwnd so that cwnd does not
   "spike" in the presence of "stretch ACKs", which cover more than two
   segments (whether this is intentional behavior by the receiver or
   the result of lost ACKs), and it limits cwnd growth based on byte
   counting to the initial slow-start exchange.
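
   As an illustration of the description above, the cwnd update during
   the initial slow start might look like the sketch below. The limit
   of two segments per ACK and the other values are examples chosen
   for this sketch, not a complete statement of [ALL99].

      # Illustrative sketch of byte counting during the initial slow
      # start; values and structure are examples only.

      SMSS = 1460

      class ByteCountingSlowStart:
          def __init__(self):
              self.cwnd = 2 * SMSS
              self.ssthresh = 65535
              self.initial_slow_start = True   # cleared after first loss

          def on_ack(self, bytes_newly_acked):
              if self.initial_slow_start and self.cwnd < self.ssthresh:
                  # Grow by the amount of data acknowledged, so lost or
                  # delayed ACKs do not slow window growth, but cap the
                  # increase so "stretch ACKs" cannot make cwnd spike.
                  self.cwnd += min(bytes_newly_acked, 2 * SMSS)
              elif self.cwnd < self.ssthresh:
                  # Later slow starts: ordinary ACK counting (RFC 2581).
                  self.cwnd += SMSS
              else:
                  # Congestion avoidance: roughly one segment per RTT.
                  self.cwnd += max(1, SMSS * SMSS // self.cwnd)

          def on_loss(self):
              self.initial_slow_start = False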

   This proposal is still at the experimental stage, but implementors
   may wish to follow this work, because the effect is that cwnd opens
   more aggressively when ACKs are lost during the initial slow-start
   exchange, yet this aggressiveness does not act to the detriment of
   other flows.

3.0 Summary of Recommendations

   Because existing TCPs have only one implicit loss feedback
   mechanism, it is not possible to use this mechanism to
   distinguish between congestion loss and corruption loss
   without additional information. Because congestion affects
   all traffic on a path while corruption affects only the
   specific traffic encountering uncorrected corruption,
   avoiding congestion has to take precedence over quickly
   repairing corruption loss. This means that the best that
   can be achieved without new feedback mechanisms is minimizing
   the amount of time spent unnecessarily in congestion avoidance.

   Fast Retransmit/Fast Recovery allows quick repair of loss
   without giving up the safety of congestion avoidance. In order
   for Fast Retransmit/Fast Recovery to work, the window size must
   be large enough that the receiver can send three duplicate
   acknowledgements before the retransmission timeout interval
   expires and forces a full TCP slow start.

   Selective Acknowledgements (SACK) extend the benefit of Fast
   Retransmit/Fast Recovery to situations where multiple "holes"
   in the window need to be repaired more quickly than can be
   accomplished by executing Fast Retransmit for each hole, only
   to discover the next hole.

   Delayed Duplicate Acknowledgements is an attractive scheme,
   especially when link layers use fixed retransmission timer
   mechanisms that may still be trying to recover when TCP-level
   retransmission timeouts occur, adding additional traffic to
   the network. This proposal is worthy of additional study,
   but is not recommended at this time, because we don't know
   how to calculate appropriate amounts of delay for an arbitrary
   network topology.

   It's not possible to use explicit congestion notification
   as a surrogate for explicit corruption notification (no matter how
   much we wish it was!).

   Of these mechanisms, Delayed Duplicate Acknowledgements applies only
   to wireless networks. The others cover both wireless and wireline
   environments. Their more general applicability attracts more
   attention and analysis from the research community.

   All of these mechanisms continue to work in the presence of IPSec.

3.1 HTTP and the dark side of the force

   The previous recommendations are based on one very important
   assumption - that TCP connections will stay open long enough for
   TCPs to learn the network characteristics between two endpoints, and
   that the TCPs will then inject packets into this connection as fast
   as possible - but no faster!

   HTTP/1.0 (and its predecessor, HTTP/0.9) used TCP connection closing
   to signal a receiver that all of a requested resource had been
   transmitted. Because WWW objects tend to be small in size (between
   five and twenty kilobytes), TCPs experienced difficulty in "training"
   on available bandwidth (a substantial portion of the transfer had
   already happened by the time the TCPs got out of slow start).
   Popular WWW browsers responded by using multiple parallel connections
   when retrieving objects embedded in HTML pages. This provided better
   performance, from the user's perspective, but since the use of
   multiple connections simply parallelized the time spent in slow
   start, the impact on the network was an increase in the number of
   "untrained" TCP connections. "Persistent connections" were
   introduced, relying on explicit size information instead of TCP
   connection closes, allowing the reuse of "trained" connections for
   retrieval of more than one object over a single connection. Improved
   support for persistent connections was one of the most significant
   enhancements as HTTP/1.0 became HTTP/1.1 [RFC2616].
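
   The difference is easy to see at the API level. The sketch below,
   in which the host name and paths are placeholders, fetches two
   resources over a single persistent HTTP/1.1 connection, so the
   second transfer reuses whatever congestion state the first one
   "trained" instead of starting a new connection in slow start.

      # Illustrative sketch: two objects over one persistent connection.

      import http.client

      conn = http.client.HTTPConnection("www.example.com")

      for path in ("/index.html", "/logo.png"):
          conn.request("GET", path)    # HTTP/1.1 keeps the connection open
          response = conn.getresponse()
          body = response.read()       # Content-Length, not a close,
                                       # marks the end of the resource
          print(path, response.status, len(body), "bytes")

      conn.close()                     # one close, several objects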

   Sadly, as HTTP/1.1 has been deployed on the Internet, we have not
   seen a corresponding increase in the use of persistent connections.
   Continued use of multiple parallel connections is happening for a
   number of reasons, including errors in the production of size
   information (which is critical to allow a receiver to distinguish
   between the end of one resource and the beginning of another), the
   desire of heavily-loaded web servers to close connections as quickly
   as possible, browsers which do not paint the user's screen until the
   TCP connection is closed, and - most unfortunate of all - users who
   have apparently been trained to prefer the effect of multiple
   connections as browsers paint multiple resources on the user's
   screen, instead of rendering each resource serially.

   Proposals which reuse TCP congestion information across connections,
   like TCP Control Block Interdependence [RFC2140], or the more recent
   Congestion Manager [BS99] proposal, will have the effect of making
   multiple parallel connections impact the network as if they were a
   single connection, "trained" after a single startup transient. These
   proposals are critical to the long-term stability of the Internet,
   because today's users always have the choice of clicking on the
   "reload" button in their browsers and cutting off TCP's exponential
   backoff - replacing connections which are building knowledge of the
   available bandwidth with connections with no knowledge at all.

4.0 Acknowledgements

   This recommendation has grown out of the Internet Draft "TCP Over
   Long Thin Networks", which was in turn based on work done in the
   IETF TCPSAT working group.

5.0 References

   [ALL99] Mark Allman. TCP Byte Counting Refinements, ACM Computer
   Communication Review, July 1999.
   Available as http://roland.grc.nasa.gov/~mallman/papers/bc-ccr.ps

   [BBKVP96] Bakshi, B., Krishna, P., Vaidya, N., Pradhan, D.K.,
   "Improving Performance of TCP over Wireless Networks," Technical
   Report 96-014, Texas A&M University, 1996.

   [BPSK96] Balakrishnan, H., Padmanabhan, V., Seshan, S., Katz, R.,
   "A Comparison of Mechanisms for Improving TCP Performance over
   Wireless Links," in ACM SIGCOMM, Stanford, California, August
   1996.

   [BS99] Hari Balakrishnan, Srinivasan Seshan, "The Congestion
   Manager", June 23, 1999. Work in progress, available at
   http://search.ietf.org/internet-drafts/draft-balakrishnan-cm-00.txt.

   [BV97] Biaz, S., Vaidya, N., "Using End-to-end Statistics to
   Distinguish Congestion and Corruption Losses: A Negative Result,"
   Texas A&M University, Technical Report 97-009, August 18, 1997.

   [BV98] Biaz, S., Vaidya, N., "Sender-Based heuristics for
   Distinguishing Congestion Losses from Wireless Transmission
   Losses," Texas A&M University, Technical Report 98-013, June
   1998.

   [BV98a] Biaz, S., Vaidya, N., "Discriminating Congestion Losses
   from Wireless Losses using Inter-Arrival Times at the Receiver,"
   Texas A&M University, Technical Report 98-014, June 1998.

   [MV97] Mehta, M., Vaidya, N., "Delayed
   Duplicate-Acknowledgements:  A Proposal to Improve Performance of
   TCP on Wireless Links," Texas A&M University, December 24, 1997.
   Available at http://www.cs.tamu.edu/faculty/vaidya/mobile.html

   [PILC-LINK] Phil Karn, Aaron Falk, Joe Touch, Marie-Jose Montpetit,
   "Advice for Internet Subnetwork Designers", June, 1999. Work in
   progress, available at http://people.qualcomm.com/karn/pilc.txt

   [PILC-PEP] J. Border, M. Kojo, Jim Griner, G. Montenegro,
   "Performance Implications of Link-Layer Characteristics: Performance
   Enhancing Proxies", June 25, 1999. Work in progress, available
   at http://www.ietf.org/internet-drafts/draft-ietf-pilc-pep-00.txt

   [PILC-SLOW] S. Dawkins, G. Montenegro, M. Kojo, V. Magret,
   "Performance Implications of Link-Layer Characteristics: Slow
   Links", September 1, 1999. Work in progress, available at
   http://www.ietf.org/internet-drafts/draft-ietf-pilc-slow-01.txt

   [RFC793] Jon Postel, "Transmission Control Protocol", September 1981.
   RFC 793.

   [RFC1122] Braden, R., Requirements for Internet Hosts --
   Communication Layers, October 1989. RFC 1122.

   [RFC1323] Van Jacobson, Robert Braden, and David Borman. TCP
   Extensions for High Performance, May 1992. RFC 1323.

   [RFC2018] Mathis, M., Mahdavi, J., Floyd, S., and Romanow, A.,
   "TCP Selective Acknowledgment Options," RFC 2018, October 1996.

   [RFC2140] J. Touch, "TCP Control Block Interdependence", RFC 2140,
   April 1997.

   [RFC2309] Braden, B. Clark, D., Crowcroft, J., Davie, B., Deering,
   S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., Partridge,
   C., Peterson, L., Ramakrishnan, K.K., Shenker, S., Wroclawski, J.,
   Zhang, L., "Recommendations on Queue Management and Congestion
   Avoidance in the Internet," RFC 2309, April 1998.

   [RFC2481] Ramakrishnan, K.K., Floyd, S., "A Proposal to add Explicit
   Congestion Notification (ECN) to IP", RFC 2481, January 1999.

   [RFC2488] Mark Allman, Dan Glover, Luis Sanchez. "Enhancing TCP
   Over Satellite Channels using Standard Mechanisms," RFC 2488
   (BCP 28), January 1999.

   [RFC2581] M. Allman, V. Paxson, W. Stevens, "TCP Congestion
   Control," April 1999. RFC 2581.

   [RFC2582] Floyd, S., Henderson, T., "The NewReno Modification to
   TCP's Fast Recovery Algorithm," April 1999. RFC 2582.

   [RFC2616] R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter,
   P. Leach, T. Berners-Lee. "Hypertext Transfer Protocol -- HTTP/1.1",
   RFC 2616, June 1999. (Draft Standard)

   [SACK-EXT] Sally Floyd, Jamshid Mahdavi, Matt Mathis, Matthew
   Podolsky, Allyn Romanow, "An Extension to the Selective
   Acknowledgement (SACK) Option for TCP", August 1999. Work in
   progress, available at
   http://www.ietf.org/internet-drafts/draft-floyd-sack-00.txt

   [SF98] Nihal K. G. Samaraweera and Godred Fairhurst, "Reinforcement
   of TCP error Recovery for Wireless Communication", Computer
   Communication Review, volume 28, number 2, April 1998. Available at
   http://www.acm.org/sigcomm/ccr/archive/1998/apr98/
   ccr-9804-samaraweera.pdf

   [VJ-DCAC] Van Jacobson, "Dynamic Congestion Avoidance / Control"
   e-mail dated February 11, 1988, available from
   http://www.kohala.com/~rstevens/vanj.88feb11.txt

   [VMPM99] N. H. Vaidya, M. Mehta, C. Perkins, G. Montenegro,
   "Delayed Duplicate Acknowledgements: A TCP-Unaware Approach to
   Improve Performance of TCP over Wireless," Technical Report
   99-003, Computer Science Dept., Texas A&M University, February
   1999.

Authors' addresses

   Questions about this document may be directed to:

          Spencer Dawkins
          Nortel Networks
          3 Crockett Ct
          Allen, Texas 75002

          Voice:    +1-972-684-4827
          Fax:      +1-972-685-3292
          E-Mail: sdawkins@nortelnetworks.com


          Gabriel E. Montenegro
          Sun Labs Networking and Security Group
          Sun Microsystems, Inc.
          901 San Antonio Road
          Mailstop UMPK 15-214
          Mountain View, California 94303

          Voice:    +1-650-786-6288
          Fax:      +1-650-786-6445
          E-Mail:   gab@sun.com


          Markku Kojo
          University of Helsinki/Department of Computer Science
          P.O. Box 26 (Teollisuuskatu 23)
          FIN-00014 HELSINKI
          Finland

          Voice:  +358-9-7084-4179
          Fax:    +358-9-7084-4441
          E-Mail: kojo@cs.helsinki.fi


          Vincent Magret
          Corporate Research Center
          Alcatel Network Systems, Inc
          1201 Campbell
          Mail stop 446-310
          Richardson Texas 75081 USA
          M/S 446-310

          Voice:    +1-972-996-2625
          Fax:    +1-972-996-5902
          E-mail: vincent.magret@aud.alcatel.com


          Nitin Vaidya
          Dept. of Computer Science
          Texas A&M University
          College Station, TX 77843-3112

          Voice:    +1 409-845-0512
          Fax:      +1 409-847-8578
          Email: vaidya@cs.tamu.edu
