Internet Engineering Task Force Mark Allman, Editor
INTERNET DRAFT Dan Glover
File: draft-ietf-tcpsat-res-issues-03.txt Jim Griner
John Heidemann
Keith Scott
Jeffrey Semke
Joe Touch
Diepchi Tran
May 27, 1998
Expires: November 27, 1998
Ongoing TCP Research Related to Satellites
Status of this Memo
This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as ``work in
progress.''
To view the entire list of current Internet-Drafts, please check
the "1id-abstracts.txt" listing contained in the Internet-Drafts
Shadow Directories on ftp.is.co.za (Africa), ftp.nordu.net
(Northern Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au
(Pacific Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu
(US West Coast).
NOTE
This document is not to be taken as a finished product. Some of the
sections are rough and are included in order to obtain comments from
the community that will benefit future iterations of this document.
This is simply a step in the ongoing conversation about this
document. Finally, the authors of this draft do not necessarily
all agree with and/or advocate the mechanisms outlined in this
document.
Abstract
This document outlines TCP mechanisms that may help better utilize
the available bandwidth in TCP transfers over long-delay satellite
channels. The work outlined in this document is preliminary and has
not yet been judged to be safe for use in the shared Internet. In
addition, some of the work outlined in this document has been shown
to be unsafe for shared networks, but may be acceptable for use in
private networks.
Expires: November 27, 1998 [Page 1]
draft-ietf-tcpsat-res-issues-03.txt May 1998
Table of Contents
1 Introduction. . . . . . . . . . . . . . . . . 3
2 Satellite Architectures . . . . . . . . . . . 3
2.1 Asymmetric Satellite Networks . . . . . . . . 4
2.2 Satellite Link as Last Hop. . . . . . . . . . 4
2.3 Hybrid Satellite Networks . . . . . . . . 4
2.4 Point-to-Point Satellite Networks . . . . . . 4
2.5 Point-to-Multipoint Satellite Networks . . . 4
2.6 Multiple Satellite Hops . . . . . . . . . . . 5
3 Mitigations . . . . . . . . . . . . . . . . . 5
3.1 Connection Setup. . . . . . . . . . . . . . . 5
3.1.1 Mitigation Description. . . . . . . . . . . . 5
3.1.2 Research. . . . . . . . . . . . . . . . . . . 5
3.1.3 Implementation Issues . . . . . . . . . . . . 5
3.1.4 Topology Considerations . . . . . . . . . . . 6
3.2 Slow Start. . . . . . . . . . . . . . . . . . 6
3.2.1 Larger Initial Window . . . . . . . . . . . . 6
3.2.1.1 Mitigation Description. . . . . . . . . . . . 6
3.2.1.2 Research. . . . . . . . . . . . . . . . . . . 7
3.2.1.3 Implementation Issues . . . . . . . . . . . . 7
3.2.1.4 Topology Considerations . . . . . . . . . . . 7
3.2.2 Byte Counting . . . . . . . . . . . . . . . . 7
3.2.2.1 Mitigation Description. . . . . . . . . . . . 7
3.2.2.2 Research. . . . . . . . . . . . . . . . . . . 8
3.2.2.3 Implementation Issues . . . . . . . . . . . . 8
3.2.2.4 Topology Considerations . . . . . . . . . . . 8
3.2.3 Disabling Delayed ACKs During Slow Start. . . 8
3.2.4 Terminating Slow Start. . . . . . . . . . . . 8
3.2.4.1 Mitigation Description. . . . . . . . . . . . 8
3.2.4.2 Research. . . . . . . . . . . . . . . . . . . 9
3.2.4.3 Implementation Issues . . . . . . . . . . . . 9
3.2.4.4 Topology Considerations . . . . . . . . . . . 9
3.3 Loss Recovery . . . . . . . . . . . . . . . . 9
3.3.1 Non-SACK Based Mechanisms . . . . . . . . . . 9
3.3.2 SACK Based Mechanisms . . . . . . . . . . . . 9
3.3.2.1 SACK "pipe" Algorithm . . . . . . . . . . . . 9
3.3.2.2 Forward Acknowledgments . . . . . . . . . . . 9
3.3.2.2.1 Mitigation Description. . . . . . . . . . . . 9
3.3.2.2.2 Research. . . . . . . . . . . . . . . . . . . 10
3.3.2.2.3 Implementation Issues . . . . . . . . . . . . 10
3.3.2.2.4 Topology Considerations . . . . . . . . . . . 10
3.3.3 Explicit Congestion Notification. . . . . . . 10
3.3.4 Detecting Corruption Loss . . . . . . . . . . 10
3.4 Spoofing. . . . . . . . . . . . . . . . . . . 10
3.4.1 Mitigation Description. . . . . . . . . . . . 10
3.4.2 Research. . . . . . . . . . . . . . . . . . . 11
3.4.3 Implementation Issues . . . . . . . . . . . . 11
3.4.4 Topology Considerations . . . . . . . . . . . 11
3.5 snoop . . . . . . . . . . . . . . . . . . . . 11
3.6 Multiple Data Connections . . . . . . . . . . 11
3.6.1 Mitigation Description. . . . . . . . . . . . 11
3.6.2 Research. . . . . . . . . . . . . . . . . . . 12
3.6.3 Implementation Issues . . . . . . . . . . . . 13
3.6.4 Topological Considerations. . . . . . . . . . 13
3.7 Pacing TCP Segments . . . . . . . . . . . . . 13
3.7.1 ACK Spacing . . . . . . . . . . . . . . . . . 13
3.7.1.1 Mitigation Description. . . . . . . . . . . . 13
3.7.1.2 Research. . . . . . . . . . . . . . . . . . . 13
3.7.1.3 Implementation Issues . . . . . . . . . . . . 13
3.7.1.4 Topology Considerations . . . . . . . . . . . 13
3.7.2 Rate-Based Pacing . . . . . . . . . . . . . . 14
3.7.2.1 Mitigation Description. . . . . . . . . . . . 14
3.7.2.2 Research. . . . . . . . . . . . . . . . . . . 14
3.7.2.3 Implementation Issues . . . . . . . . . . . . 14
3.7.2.4 Topology Considerations . . . . . . . . . . . 14
3.8 TCP Header Compression. . . . . . . . . . . . 15
3.8.1 Mitigation Description. . . . . . . . . . . . 15
3.8.2 Research. . . . . . . . . . . . . . . . . . . 17
3.8.3 Implementation Issues . . . . . . . . . . . . 17
3.8.4 Topology Considerations . . . . . . . . . . . 17
3.9 Sharing TCP State Among Similar Connections . 18
3.9.1 Mitigation Description. . . . . . . . . . . . 18
3.9.2 Research. . . . . . . . . . . . . . . . . . . 18
3.9.3 Implementation Issues . . . . . . . . . . . . 19
3.9.4 Topology Considerations . . . . . . . . . . . 19
3.10 ACK Congestion Control. . . . . . . . . . . . 20
3.11 ACK Filtering . . . . . . . . . . . . . . . . 20
4 SPCS. . . . . . . . . . . . . . . . . . . . . 20
5 Mitigation Interactions . . . . . . . . . . . 20
6 Conclusions . . . . . . . . . . . . . . . . . 20
7 References. . . . . . . . . . . . . . . . . . 20
8 Authors' Addresses. . . . . . . . . . . . . 24
1 Introduction
This document outlines mechanisms that may help the Transmission
Control Protocol (TCP) [Pos81] better utilize the bandwidth provided
by long-delay satellite environments. These mechanisms may also
help in other environments. The proposals outlined in this document
are currently being studied throughout the research community.
Therefore, these mechanisms SHOULD NOT be used in the shared
Internet. When these mechanisms are proven safe and appropriate
for general use, the appropriate IETF documents will be
written. Until that time, these mechanisms should be used for
research and in private networks only.
It should be noted that non-TCP mechanisms that help performance
over satellite channels do exist (e.g., application-level changes).
However, outlining these non-TCP mitigations is left as future
work.
2 Satellite Architectures
Satellite characteristics are discussed in [AG98]. This section
discusses several ways that satellites might be used in the
Internet.
2.1 Asymmetric Satellite Networks
Some satellite networks exhibit a bandwidth asymmetry, with a larger
data rate in one direction than the other, because of limits on the
transmission power and the antenna size at one end of the link.
Meanwhile, other satellite systems are one way only and use a
non-satellite return path. The nature of most TCP traffic is
asymmetric with data flowing in one direction and acknowledgements
in return. However, the term asymmetric in this document refers to
different physical capacities in the forward and return channels.
2.2 Satellite Link as Last Hop
Satellite links that provide service to end users may allow for
specialized design of protocols used over the last hop. Some
satellite providers use the satellite channel for a shared high
speed downlink to users with a lower speed, non-shared terrestrial
channel that is used for requests and acknowledgements. Many times
this creates an asymmetric network, as discussed in section 2.1.
2.3 Hybrid Satellite Networks
In the more general case, satellites may be located at any point in
the network topology. In this case, the satellite link carries real
network traffic and acts as just another channel between two
gateways. In this environment, a given connection may be sent over
terrestrial channels (including wireless), as well as satellite
channels. On the other hand, a connection could also travel over
only the terrestrial network or only over the satellite portion of
the network.
TCP is an end-to-end protocol. For a geosynchronous satellite, this
means that noise anywhere in the connection will have to be dealt
with over a long delay feedback path. Eliminating noise in the
satellite link will not solve the delay problem for the case of a
noisy link (e.g., wireless interference or noisy phone line)
elsewhere in the connection path.
2.4 Point-to-Point Satellite Networks
In point-to-point satellite networks, the only hop in the network is
over the satellite channel. There is no terrestrial traffic to
contend with in this environment. This pure satellite environment
exhibits only the problems associated with the satellite channels,
as outlined in [AG98]. Since this is a private network, some
mitigations to TCP's inefficiencies can be used that are not
suitable for shared networks, such as the Internet.
2.5 Point-to-Multipoint Satellite Networks
Satellites have an advantage in point-to-multipoint uses. Although
satellite communications began as a trunking method for telephony,
the broadcast advantages of satellites were quickly recognized and
utilized for television program distribution. One signal can be
transmitted up to a satellite and then relayed back down to a large
geographic area. Any ground station in that area can pick up the
signal if tuned to that channel. In the same way, data can be
transmitted to small ground stations located over large geographic
distances without loading terrestrial networks. Satellites have
found use in corporate intranets and VSAT (very small aperture
terminal) networks especially for database applications, but
advantages for WWW caching, distributing network news, and
multicasting are obvious and could help to reduce network
congestion. While this is a valuable use of satellite systems, it
is considered out of scope in this document, as TCP is a
unicast-only protocol.
2.6 Multiple Satellite Hops
In some cases, service may be provided over multiple satellite hops.
This aggravates the satellite characteristics described in [AG98].
3 Mitigations
The following sections will discuss various techniques for
mitigating the problems TCP faces in the satellite environment.
Each of the following sections is organized as follows: first,
the mitigation is briefly outlined. Next, research work
involving the mechanism in question is briefly discussed. The
implementation issues of the mechanism are discussed next.
Finally, the mechanism's benefits in each of the environments above
are outlined.
3.1 Connection Setup
3.1.1 Mitigation Description
TCP uses a three-way handshake to setup a connection between two
hosts. This connection setup requires 1 RTT or 1.5 RTTs, depending
upon whether the data sender started the connection actively or
passively. This startup time can be eliminated by using TCP
extensions for transactions (T/TCP) [Bra94]. In most situations,
T/TCP bypasses the three-way handshake. This allows the data sender
to begin transmitting data in the first packet sent (along with the
connection setup information). This is especially helpful for short
request/response traffic.
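As a rough illustration of the saving (simple arithmetic, not a result
from [Bra94]; the function name is ours):

```python
def request_response_time(rtt, data_rtts, t_tcp=False):
    """Approximate completion time of a request/response exchange.
    With standard TCP an active opener spends 1 RTT on the three-way
    handshake before data can flow; T/TCP carries data in the first
    packet and skips that round trip."""
    setup_rtts = 0 if t_tcp else 1
    return (setup_rtts + data_rtts) * rtt
```

Over a geosynchronous satellite channel with a 500 ms RTT, a one-RTT
exchange completes in 1.0 seconds with standard TCP but 0.5 seconds
with T/TCP.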
3.1.2 Research
T/TCP is outlined and analyzed in [Bra92] and [Bra94].
3.1.3 Implementation Issues
T/TCP requires changes in the TCP stacks of both the data sender and
the data receiver. There are some security implications of sending
data in the first data segment. These will be briefly presented
and/or pointed at in a future iteration of this document.
3.1.4 Topology Considerations
It is expected that T/TCP will be equally beneficial in all
environments outlined in section 2.
3.2 Slow Start
The slow start algorithm is used to gradually increase the size of
TCP's sliding window [Jac88] [Ste97]. The algorithm is an important
safe-guard against transmitting an inappropriate amount of data into
the network when the connection starts up. The algorithm begins by
sending a single data segment to the receiver. For each
acknowledgment (ACK) returned, the size of the window is increased
by 1 segment. This makes the window growth directly proportional to
the round-trip time (RTT). In long-delay environments, such as some
satellite channels, the large RTT increases the time needed to
increase the size of the window to an appropriate level. This
effectively wastes capacity [All97a] [Hay97]. Slow start is most
inefficient for transfers that are short compared to the
delay*bandwidth product of the network (e.g., WWW transfers).
Delayed ACKs are another source of wasted capacity during the slow
start phase. RFC 1122 [Bra89] allows data receivers to refrain from
ACKing every incoming data segment. However, every second
full-sized segment must be ACKed. If a second full-sized segment
does not arrive within a given timeout, an ACK must be generated
(this timeout cannot exceed 500 ms). Since the data sender
increases the size of the window based on the number of arriving
ACKs, reducing the number of ACKs slows the window's growth rate.
In addition, TCP begins a transfer by sending a single segment.
When delayed ACKs are in use, a second segment must arrive before an
ACK is sent, so the receiver must always wait for the delayed ACK
timer to expire before ACKing the first segment, which also
increases the transfer time.
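To make the cost concrete, the following back-of-the-envelope model
(illustrative only; not taken from the cited work) counts the round
trips slow start needs under different ACK policies:

```python
def rtts_to_reach(target, segs_per_ack):
    """Round trips for slow start to grow cwnd from 1 segment to
    target segments, when the receiver returns one ACK for every
    segs_per_ack data segments (a simplified, loss-free model)."""
    cwnd, rtts = 1, 0
    while cwnd < target:
        acks = max(cwnd // segs_per_ack, 1)  # delayed-ACK timer forces >= 1
        cwnd += acks                         # slow start: +1 segment per ACK
        rtts += 1
    return rtts
```

Reaching a 32-segment window takes 5 round trips when every segment
is ACKed, but 9 round trips when every second segment is ACKed; over
a 500 ms geosynchronous RTT those 4 extra round trips cost 2 seconds.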
Several proposals have suggested ways to make slow start less time
consuming. These proposals are briefly outlined below and
references to the research work given.
3.2.1 Larger Initial Window
3.2.1.1 Mitigation Description
One method that will reduce the amount of time required by slow
start (and therefore, the amount of wasted capacity) is to make the
initial window be more than a single segment. Recently, this
proposal has been outlined in an Internet-Draft [FAP97]. The
suggested size of the initial window is given in equation 1.
min (4*MSS, max (2*MSS, 4380 bytes)) (1)
By increasing the initial window, more packets are sent immediately,
which will trigger more ACKs, allowing the window to open more
rapidly. In addition, by sending at least 2 segments initially, the
first segment does not need to wait for the delayed ACK timer to
expire as is the case when the initial window is 1 segment (as
discussed above). Therefore, the window size given in equation 1
saves up to 3 RTTs and a delayed ACK timeout when compared to an
initial window of 1 segment.
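Equation 1 translates directly into code (the function name is
illustrative):

```python
def initial_window(mss):
    """Upper bound on the initial window, in bytes, from
    equation 1 [FAP97]: min(4*MSS, max(2*MSS, 4380 bytes))."""
    return min(4 * mss, max(2 * mss, 4380))
```

For the common Ethernet-derived MSS of 1460 bytes this allows a
3-segment (4380-byte) initial window, while small MSS values are
allowed up to 4 segments.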
Using a larger initial window is likely to cause an increased amount
of loss in highly congested networks (where each connection's share
of the router queue is less than the initial window size).
Therefore, this change must be studied further to ensure that it is
safe for the shared Internet.
3.2.1.2 Research
Several researchers have studied the use of a larger initial window
in various environments. [Nic97] and [KAGT98] show a reduction in
WWW page transfer time over hybrid fiber coax (HFC) and satellite
channels, respectively. Furthermore, it has been shown that using an
initial window of 4 packets does not negatively impact overall
performance over dialup modem channels with a small number of
buffers [SP97]. [All97c] shows an improvement in transfer time for 16
KB files across the Internet and dialup modem channels when using a
larger initial window. However, a slight increase in
retransmitted segments was also shown. Finally, [PN98] shows
improved transfer time for WWW traffic in simulations with competing
traffic. [PN98] also shows a small increase in the drop rate.
3.2.1.3 Implementation Issues
The use of larger initial windows requires changes to the sender's
TCP stack.
3.2.1.4 Topology Considerations
It is expected that the use of a large initial window would be
equally beneficial to all network architectures outlined in section
2.
3.2.2 Byte Counting
3.2.2.1 Mitigation Description
As discussed above, the wide-spread use of delayed ACKs increases
the time needed by a TCP sender to increase the size of its window
during slow start. One mechanism that can mitigate this problem is
the use of ``byte counting'' [All97a]. Using this mechanism, the
window increase is based on the number of previously unacknowledged
bytes ACKed, rather than on the number of ACKs received. This makes
the increase relative to the amount of data transmitted, rather than
being dependent on the ACK interval used by the receiver.
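A toy comparison of the two increase rules under delayed ACKs (an
illustrative model, not taken from [All97a]):

```python
def cwnd_after_rtts(rtts, mss=1460, byte_counting=False, segs_per_ack=2):
    """Congestion window (bytes) after a number of slow-start round
    trips when the receiver ACKs every segs_per_ack segments
    (a simplified, loss-free model)."""
    cwnd = mss
    for _ in range(rtts):
        acks = max((cwnd // mss) // segs_per_ack, 1)
        if byte_counting:
            cwnd += cwnd        # increase by all bytes ACKed this RTT
        else:
            cwnd += acks * mss  # one MSS per ACK actually received
    return cwnd
```

After 4 round trips with delayed ACKs, ACK counting has opened the
window to 8760 bytes, while byte counting has reached 23360 bytes,
the same as doubling every round trip.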
Byte counting leads to slightly larger line-rate bursts of segments.
This increase in burstiness may increase the loss rate on some
networks. The size of the line-rate burst increases if the receiver
generates ``stretch ACKs'' [Pax97] (either by design [Joh95] or due
to implementation bugs [All97b] [PADHV97]).
3.2.2.2 Research
Using byte counting, as opposed to standard ACK counting, has been
shown to reduce the amount of time needed to increase the window to
an appropriate size in satellite networks [All97a]. Byte counting,
however, has not been studied in a congested environment with
competing traffic.
3.2.2.3 Implementation Issues
Changing from ACK counting to byte counting requires changes to the
data sender's TCP stack.
3.2.2.4 Topology Considerations
It has been suggested by some (and roundly criticized by others)
that byte counting will allow TCP to exhibit the same properties
regardless of the network topology (outlined in section 2) being
used.
3.2.3 Disabling Delayed ACKs During Slow Start
(in progress)
3.2.4 Terminating Slow Start
3.2.4.1 Mitigation Description
The initial slow start phase is used by TCP to determine an
appropriate window size for the given network conditions [Jac88].
Slow start is terminated when TCP detects congestion, or when the
size of the window reaches the size of the receiver's advertised
window. The window size at which TCP ends slow start and begins
using the congestion avoidance [Jac88] algorithm is called
"ssthresh". The initial value for ssthresh is the receiver's
advertised window. TCP doubles the size of the window every RTT and
therefore can overwhelm the network with at most twice as many
segments as the network can handle. By setting ssthresh to a value
less than the receiver's advertised window initially, the sender may
avoid overwhelming the network with segments. Hoe [Hoe96] proposes
using the packet-pair algorithm [Kes91] to determine a more
appropriate value for ssthresh. The algorithm observes the spacing
between the first few returning ACKs to determine the bandwidth of
the bottleneck link. Together with the measured RTT, the
delay*bandwidth product is determined and ssthresh is set to this
value. When TCP's window reaches this reduced ssthresh, slow start
is terminated and transmission continues with congestion avoidance,
which is a more conservative algorithm for increasing the size of
the window.
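A minimal sketch of the packet-pair estimate (the function name and
units are our assumptions, not from [Hoe96]):

```python
def estimate_ssthresh(ack_spacing, segment_bytes, rtt):
    """Packet-pair ssthresh estimate [Hoe96]: the spacing (seconds)
    between the first returning ACKs approximates the bottleneck's
    per-segment service time, so the bottleneck bandwidth is
    segment_bytes / ack_spacing and ssthresh is set to the
    resulting delay*bandwidth product."""
    bottleneck_bw = segment_bytes / ack_spacing  # bytes per second
    return int(bottleneck_bw * rtt)              # bytes
```

For 1460-byte segments whose ACKs arrive 10 ms apart on a 500 ms RTT
path, ssthresh would be set to 73000 bytes (a 146 KB/s bottleneck).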
3.2.4.2 Research
It has been shown that estimating ssthresh can improve performance
and decrease packet loss in simulations [Hoe96]. However, before
this mechanism is widely deployed, it must be studied in a more
dynamic network environment.
3.2.4.3 Implementation Issues
Estimating ssthresh requires changes to the data sender's TCP
stack.
3.2.4.4 Topology Considerations
It is expected that this mechanism will work well in all symmetric
topologies outlined in section 2. However, asymmetric channels pose
a special problem, as the rate of the returning ACKs may not be the
bottleneck bandwidth in the forward direction. This can lead to the
sender setting ssthresh too low and hurting performance.
3.3 Loss Recovery
3.3.1 Non-SACK Based Mechanisms
(in progress)
3.3.2 SACK Based Mechanisms
3.3.2.1 SACK "pipe" Algorithm
(in progress)
3.3.2.2 Forward Acknowledgments
3.3.2.2.1 Mitigation Description
The Forward Acknowledgment (FACK) algorithm was developed to improve
TCP congestion control during recovery. FACK uses TCP SACK options
to glean additional information about the congestion state, adding
more precise control to the injection of data into the network
during recovery. FACK decouples the congestion control algorithms
from the data recovery algorithms to provide a simple and direct way
to use SACK to improve congestion control. Due to the separation of
these two algorithms, new data may be sent during recovery to
sustain TCP's self-clock when there is no further data to
retransmit.
The most recent version of FACK is Rate-Halving, in which one packet
is sent for every two ACKs received during recovery. Transmitting on
every other ACK has the result of reducing the window, in one round
trip, to half of the number of packets that were successfully
handled by the network. (So windows that are too large by more than
a factor of two still get reduced to half of what the network can
sustain.) Another important aspect of FACK with Rate-Halving is
that it sustains the ACK self-clock during recovery, because
transmitting a packet for every other ACK does not require half a
window of data to drain from the network before transmitting, as
required by the fast recovery algorithm [Ste97].
In addition, the FACK with Rate-Halving implementation provides
Thresholded Retransmission to each lost segment. Tcprexmtthresh is
the number of duplicate ACKs required by Reno to enter recovery.
FACK applies thresholded retransmission to all segments by waiting
until tcprexmtthresh SACK blocks indicate that a given segment is
missing before resending the segment. This allows reasonable
behavior on links that reorder segments. As described above, FACK
sends a segment for every second ACK received during recovery. New
segments are transmitted except when tcprexmtthresh SACK blocks have
been observed for a dropped segment, at which point the dropped
segment is retransmitted.
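The interaction of Rate-Halving with Thresholded Retransmission can
be sketched as a toy model of one recovery round trip (the
representation of SACK information and all names here are
assumptions of this sketch, not the actual implementation):

```python
def rate_halving_recovery(acks, rexmt_thresh=3):
    """acks is a list where each entry is the set of segments the
    corresponding SACK option reports missing.  A segment goes out
    on every second ACK (Rate-Halving); it is a retransmission once
    rexmt_thresh SACK blocks have reported a given segment missing,
    otherwise new data is sent, sustaining the self-clock."""
    reported = {}        # segment -> number of SACKs reporting it missing
    retransmitted = set()
    sends = []
    for ack_no, missing in enumerate(acks, start=1):
        for seg in missing:
            reported[seg] = reported.get(seg, 0) + 1
        if ack_no % 2:   # transmit only on every other ACK
            continue
        due = [s for s in sorted(reported)
               if reported[s] >= rexmt_thresh and s not in retransmitted]
        if due:
            retransmitted.add(due[0])
            sends.append(("rexmt", due[0]))
        else:
            sends.append(("new", None))
    return sends
```

With segment 5 reported missing by four successive SACKs, the sketch
sends new data first, retransmits segment 5 only after the third
report, and then resumes sending new data.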
3.3.2.2.2 Research
The original FACK algorithm was presented at Sigcomm'96 [MM96a].
The algorithm was later enhanced to include Rate-Halving [MM96b].
The real-world performance of FACK with Rate-Halving was shown to be
much closer to the theoretical maximum for TCP than either SACK or
Reno [MSMO97].
3.3.2.2.3 Implementation Issues
In order to use FACK, the sender's TCP stack must be modified. In
addition, the receiver must be able to generate SACK options to
obtain the full benefit of using FACK.
3.3.2.2.4 Topology Considerations
FACK is expected to improve performance in all environments. Since
it is better able to sustain its self-clock than Reno, it may be
particularly attractive over long delay paths.
3.3.3 Explicit Congestion Notification
3.3.4 Detecting Corruption Loss
3.4 Spoofing
3.4.1 Mitigation Description
TCP spoofing is a technique used to split a TCP connection between a
client (such as a mobile host or a hybrid terminal) and a server
(such as a fixed terminal or Internet server) into two parts: one
between the client and its gateway router over the satellite/wireless
link and another between the gateway router and the server over the
Internet/wired link. The gateway effectively breaks incoming TCP
connections in two by acting on the client's behalf in interactions
with the server. This allows the server to complete the transfer
without incurring delays introduced by the satellite. Furthermore,
spoofing allows the gateway to use a more appropriate transport
protocol (or version of TCP) over the satellite hop. This mechanism
is criticized by some as breaking the end-to-end semantics
associated with the TCP protocol.
3.4.2 Research
The TCP spoofing technique has been used to improve the overall
throughput for asymmetric Internet access over satellite-terrestrial
network [ASBD96] and for transferring data to mobile clients over
wireless-wired network [BPSK97] [BB95]. In addition, [ASBD96] with
spoofing and an increased ACK interval (i.e., decreased frequency of
ACKs), it has been found that the throughput increased up to 400Kbps
compare to 120Kbps of the system without these techniques. By using
spoofing and the SMART retransmission technique [KM97], [BPSK97]
shows that the TCP throughput improved from 0.7 Mbps to 1.3 Mbps in
LAN environments and from 0.3 Mbps to 1.1 Mbps in WAN environments.
3.4.3 Implementation Issues
The use of TCP spoofing requires modification to the gateway
routers to enable them to act on the behalf of the end hosts.
3.4.4 Topology Considerations
TCP spoofing should help performance over all topologies outlined
above. However, TCP spoofing is an especially useful technique in
asymmetric networks.
3.5 snoop
(Might better be handled by a "tcppep" document, if that group gets
going. Comments on this issue appreciated... -- allman)
3.6 Multiple Data Connections
3.6.1 Mitigation Description
One method that has been used to overcome TCP's inefficiencies in
the satellite environment is to use multiple TCP flows to transfer a
given file. The use of N TCP connections makes the sender N times
more aggressive and therefore can benefit throughput in some
situations. Using N multiple TCP connections can impact the
transfer and the network in a number of ways, which are listed
below.
1. The transfer is able to start transmission using an effective
window of N segments, rather than a single segment as one TCP
flow uses. This allows the transfer to more quickly increase
the effective window size to an appropriate size for the given
network. However, in some circumstances an initial window of N
segments is inappropriate for the network conditions. In this
case, a transfer utilizing more than one connection may
aggravate congestion.
2. During the congestion avoidance phase, the transfer increases
the window by N segments per RTT, rather than the one segment
per RTT that a single TCP connection would. Again, this can aid
the transfer by more rapidly increasing the window to an
appropriate point. However, this rate of increase can also be
too aggressive for the network conditions. In this case, the
use of multiple data connections can aggravate congestion in the
network.
3. Using multiple connections can provide a very large overall
window size. This can be an advantage for TCP implementations
that do not support the TCP larger window extension [JBB92].
However, the aggregate window size across all N connections is
equivalent to using a TCP implementation that supports large
windows.
4. The overall window decrease in the face of dropped segments is
reduced when using N parallel connections. A single TCP
connection reduces the window size to half when segment loss is
detected. Therefore, when utilizing N multiple connections each
using a window of W bytes, a single drop reduces the window to:
N * W * ((2N - 1)/2N)
Clearly this is a less dramatic reduction in window size than
when using a single TCP connection.
5. The use of multiple data connections can increase the ability of
non-SACK TCP implementations to quickly recover from multiple
dropped segments, assuming the dropped segments cross
connections.
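The reduction formula in item 4 can be checked numerically (the
helper name is illustrative):

```python
def aggregate_window_after_drop(n, w):
    """Aggregate window across n parallel connections, each with a
    window of w bytes, after a single drop halves one connection's
    window: (n - 1)*w + w/2, which equals n*w*(2n - 1)/(2n)."""
    return n * w * (2 * n - 1) / (2 * n)
```

With one connection and a 64000-byte window, a single drop halves the
aggregate window to 32000 bytes; with four parallel connections the
aggregate only shrinks to 7/8 of its size, 224000 bytes.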
The use of multiple parallel connections makes TCP overly aggressive
for many environments and can contribute to congestive collapse in
shared networks [FF98]. The advantages provided by using multiple
TCP connections are now largely provided by TCP extensions (larger
windows, SACKs, etc.). Therefore, the use of a single TCP
connection is more ``network friendly'' than using multiple parallel
connections. However, using multiple parallel TCP connections may
provide a performance improvement in private networks.
3.6.2 Research
Research on the use of multiple parallel TCP connections shows
improved performance [IL92] [Hah94] [AOK95] [AKO96]. In addition,
research has shown that multiple TCP connections can outperform a
single modern TCP connection (with large windows and SACK) [AHKO97].
However, these studies did not consider the impact of using multiple
TCP connections on competing traffic. [FF98] argues that using
multiple simultaneous connections to transfer a given file may lead
to congestive collapse in shared networks.
3.6.3 Implementation Issues
To utilize multiple parallel TCP connections a client application
and the corresponding server must be customized.
3.6.4 Topological Considerations
As stated above, [FF98] outlines that the use of multiple parallel
connections in a shared network, such as the Internet, may lead to
congestive collapse. However, the use of multiple connections may
be safe and beneficial in private networks. The specific topology
being used will dictate the number of parallel connections required.
Some work has been done to determine the appropriate number of
connections on the fly [AKO96], but such a mechanism is far from
complete.
3.7 Pacing TCP Segments
3.7.1 ACK Spacing
3.7.1.1 Mitigation Description
Routes with high bandwidth*delay products are capable of
utilizing large TCP window sizes. One possible cause of this
delay is small router buffers. In an idealized situation the
router buffer should be one half the bandwidth*delay product in
order to avoid losing segments [Par97]. This arises during slow
start, because it is possible for the sender to burst data at
twice the rate of the bottleneck router. When the router cannot
buffer the extra segments arriving from the sender, the segments
are dropped, causing the TCP sender to reduce the window size.
Using ACK spacing, the bursts can be spread over time by making
a gateway separate ACKs by at least two segments between ACKs
[Par97]. Since the ACK rate is used to determine the rate
packets at which are sent, ACK spacing may allow the sender to
transmit at the correct rate and thus avoid dropped segments.
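A sketch of what such a spacing gateway might do (times in
milliseconds; all names are illustrative, since no implementation
exists):

```python
def spaced_ack_times(arrivals_ms, segment_time_ms):
    """Forwarding times for ACKs at a hypothetical spacing gateway:
    each ACK is released no sooner than two segment transmission
    times after the previous one, smoothing the sender's bursts."""
    released, next_ok = [], 0
    for t in arrivals_ms:
        send = max(t, next_ok)       # hold the ACK if it arrived too soon
        released.append(send)
        next_ok = send + 2 * segment_time_ms
    return released
```

A burst of four back-to-back ACKs on a path with a 5 ms segment
transmission time would be released at 10 ms intervals, while ACKs
that already arrive well separated pass through unchanged.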
3.7.1.2 Research
Currently an implementation of ACK spacing does not exist. An
algorithm has not been developed to determine the proper ACK
spacing, which may be different depending on whether TCP is in
slow start or congestion avoidance.
3.7.1.3 Implementation Issues
ACK spacing is implemented at the router, which eliminates the
need to change either the sender or receiver's TCP stack.
3.7.1.4 Topology Considerations
It may not be necessary to use ACK spacing in asymmetrical routes,
because of the inherent delay incurred by the returning ACKs.
3.7.2 Rate-Based Pacing
3.7.2.1 Mitigation Description
Slow-start takes several round trips to fully open the TCP
congestion window over routes with high bandwidth-delay product.
For short TCP connections (common in web traffic with HTTP/1.0),
this slow-start overhead can preclude effective use of the
high-bandwidth satellite channels. When senders implement
slow-start restart after a TCP connection goes idle (suggested by
Jacobson and Karels [JK92]), performance is reduced in long-lived
(but bursty) connections [Hei97a].
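A back-of-the-envelope calculation illustrates the cost of slow start
on such paths. With a congestion window starting at one segment and
doubling every RTT, the number of round trips needed to fill the pipe
grows with the log of the bandwidth-delay product. The link numbers
below are purely illustrative:

```python
import math

# Illustrative GEO satellite path: ~1.5 Mb/s link, ~560 ms RTT,
# 1460-byte segments.
rtt = 0.560                    # seconds
bandwidth = 1_500_000          # bits/second
mss = 1460 * 8                 # bits per segment

# Window (in segments) needed to keep the pipe full:
bdp_segments = bandwidth * rtt / mss

# cwnd doubles each RTT starting from 1 segment:
rtts_needed = math.ceil(math.log2(bdp_segments))
time_to_fill = rtts_needed * rtt   # seconds before the pipe is full
```

For these numbers roughly 72 segments are needed to fill the pipe, so
slow start spends about 7 round trips (nearly 4 seconds) before the
link is fully utilized, longer than many web transfers last.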
Rate-based pacing is a technique, used in the absence of incoming
ACKs, where the data sender temporarily paces TCP segments at a
given rate to restart the ACK clock. Upon receipt of the first ACK,
pacing is discontinued and normal TCP ACK clocking resumes. The
pacing rate may either be known from recent traffic estimates (when
restarting an idle connection or from recent prior connections), or
may be known through external means (perhaps in a point-to-point or
point-to-multipoint satellite network where available bandwidth can
be assumed to be large).
In addition, pacing data during the first RTT of a transfer may
allow TCP to make effective use of high bandwidth-delay links even
for short transfers or intermittent senders. Pacing can also be
used to reduce bursts in general (due to buggy TCPs or byte
counting, see section 3.2.2 for a discussion on byte counting).
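A sketch of how a sender might schedule paced segments follows; the
function and parameter values are hypothetical (a real implementation
would use a kernel timer and revert to ACK clocking at the first ACK):

```python
# Sketch of rate-based pacing after an idle period (illustrative).
# Segments are scheduled at an estimated rate instead of being sent
# as one back-to-back burst.

def paced_send_times(start, num_segments, segment_bits, est_rate_bps):
    """Transmit times for pacing num_segments at est_rate_bps from start."""
    interval = segment_bits / est_rate_bps
    return [start + i * interval for i in range(num_segments)]

# Restart with a cwnd of 8 segments at a rate estimated before the
# connection went idle:
times = paced_send_times(0.0, 8, segment_bits=12000, est_rate_bps=1_200_000)
```

At 1.2 Mb/s the 8 segments are spread over 70 ms rather than leaving
as a single burst; pacing stops once the first ACK returns and normal
ACK clocking resumes.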
3.7.2.2 Research
Simulation studies of rate-based pacing for web-like traffic have
shown reduced router congestion and drop rates [VH97a]. In this
environment RBP substantially improves performance compared to
slow-start-after-idle for intermittent senders, and it slightly
improves performance over burst-full-cwnd-after-idle (because of
drops) [VH98]. More recently, pacing has been suggested as a way to
eliminate burstiness in networks with ACK filtering [BPK97].
3.7.2.3 Implementation Issues
RBP requires only sender-side changes to TCP. Prototype
implementations of RBP are available [VH97b]. RBP requires an
additional sender timer for pacing. The overhead of timer-driven
data transfer is often considered too high for practical use.
Preliminary experiments suggest that in RBP this overhead is minimal
because RBP only requires this timer for the first RTT of
transmission [VH98].
3.7.2.4 Topology Considerations
RBP could be used to restart an idle TCP connection for all
topologies in Section 2. Use at the beginning of new connections
would be restricted to topologies where available bandwidth can be
estimated out-of-band.
3.8 TCP Header Compression
The TCP and IP header information needed to reliably deliver packets
to a remote site across the Internet can add significant overhead,
especially for interactive applications. Telnet packets, for
example, typically carry only 1 byte of data per packet, and
standard IPv4 and TCP headers add at least 40 bytes to this;
IPv6/TCP headers add at least 60 bytes. Much of this information
remains relatively constant over the course of a session and so can
be replaced by a short session identifier.
3.8.1 Mitigation Description
Many fields in the TCP and IP headers either remain constant during
the course of a session, change very infrequently, or can be
inferred from other sources. For example, the source and
destination addresses, as well as the IP version, protocol, and port
fields generally do not change during a session. Packet length can
be deduced from the length field of the underlying link layer
protocol provided that the link layer packet is not padded. Packet
sequence numbers in a forward data stream generally change with
every packet, but increase in a predictable manner.
The TCP/IP header compression methods described in [DNP97], [DENP97]
and [Jac90] all reduce the overhead of TCP sessions by replacing the
data in the TCP and IP headers that remains constant, changes
slowly, or changes in a predictable manner with a short 'connection
number'. Using these methods, the sender first sends a full TCP
header, including in it a connection number that the sender will use
to reference the connection. The receiver stores the full header
and uses it as a template, filling in some fields from the limited
information contained in later, compressed headers. This
compression can reduce the size of a combined IPv4/TCP header from
40 bytes to as few as 3 to 5 bytes.
Compression and decompression happen below the IP layer, and there
is a separate compressor / decompressor pair for each serial link.
Each compression pair maintains some state about some number of TCP
connections which may use the link concurrently, and the
decompressor passes complete, uncompressed packets to the IP layer.
Thus header compression is transparent to routing, for example,
since an incoming packet with compressed headers is expanded before
being passed to the IP layer.
A variety of methods can be used by the endpoints of a connection to
negotiate the use of header compression. The PPP serial line
protocol allows for an option exchange, during which time the
endpoints can agree on whether or not to use header compression.
For older SLIP implementations, [Jac90] describes a mechanism that
uses the first bit in the IP packet as a flag.
The reduction in overhead is especially useful on bandwidth-limited
links, such as terrestrial wireless and mobile satellite links, where
the overhead associated with transmitting the header
bits is nontrivial. Header compression has the added advantage that
for the case of uniformly distributed bit errors, compressing TCP/IP
headers can provide a better quality of service by decreasing the
packet error probability. The shorter, compressed packets are less
likely to be corrupted, and the reduction in errors increases the
connection's throughput.
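The effect on packet error probability can be quantified with the
standard independent-bit-error model; the BER and packet sizes below
are illustrative, not taken from the cited work:

```python
# With uniformly distributed bit errors, the probability that a
# packet of a given size is corrupted is 1 - (1 - BER)**bits.
# Shorter (header-compressed) packets are corrupted less often.

def packet_error_prob(ber, total_bytes):
    """Probability that at least one bit of the packet is in error."""
    return 1.0 - (1.0 - ber) ** (8 * total_bytes)

ber = 1e-6                                      # illustrative BER
p_uncompressed = packet_error_prob(ber, 100 + 40)  # 40-byte TCP/IP header
p_compressed = packet_error_prob(ber, 100 + 5)     # ~5-byte compressed header
```

For a 100-byte payload the compressed packet's error probability is
roughly 25% lower, which translates directly into fewer retransmissions
on links where corruption, not congestion, dominates loss.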
Extra space is saved by encoding fields that change relatively
slowly as differences from their values in the previous packet,
rather than as absolute values. In order
to decode headers compressed this way, the receiver keeps a copy of
each full, reconstructed TCP header after it is decoded, and applies
the delta values from the next decoded compressed header to the
reconstructed full header template.
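A minimal sketch of the delta scheme follows, reduced to a single
sequence-number field. The class names are invented for illustration;
real compressors cover many header fields and tag packets with a
connection number:

```python
# Toy delta encoding of one header field (the sequence number).

class Compressor:
    def __init__(self):
        self.last_seq = None

    def compress(self, seq):
        if self.last_seq is None:
            self.last_seq = seq
            return ("FULL", seq)          # first packet carries a full header
        delta, self.last_seq = seq - self.last_seq, seq
        return ("DELTA", delta)           # later packets carry only the delta

class Decompressor:
    def __init__(self):
        self.template_seq = None

    def decompress(self, kind, value):
        if kind == "FULL":
            self.template_seq = value     # store the full header as template
        else:
            self.template_seq += value    # apply delta to the saved template
        return self.template_seq

c, d = Compressor(), Decompressor()
out = [d.decompress(*c.compress(s)) for s in (1000, 1500, 2000)]
```

While all packets arrive, the decompressor reconstructs 1000, 1500,
2000 exactly. If the middle packet were lost, the final delta would be
applied to the stale template, yielding the garbling described above.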
A caveat to this delta encoding scheme is that if a single
compressed packet is lost, subsequent packets with compressed
headers can become garbled if they contain fields which depend on
the lost packet. Consider a forward data stream of
packets with compressed headers and increasing sequence numbers. If
packet N is lost, the full header of packet N+1 will be
reconstructed at the receiver using packet N-1's full header as a
template. Thus the sequence number, which should have been
calculated from packet N's header, will be wrong, the checksum will
fail, and the packet will be discarded. When the sending TCP times
out, it retransmits a packet with a full header in order to
re-synchronize the decompressor.
It is important to note that the compressor does not maintain any
timers, nor does the decompressor know when an error occurred (only
the receiving TCP knows this, when the TCP checksum fails). A
single bit error will cause the decompressor to lose synch, and
subsequent packets with compressed headers will be dropped by the
receiving TCP, since they will all fail the TCP checksum. When this
happens, no duplicate acknowledgments will be generated, and the
decompressor can only resynch when it receives a packet with an
uncompressed header. This means that when header compression is
being used, neither fast retransmit nor selective acknowledgments
will be able to correct packets lost on a compressed link. The 'twice'
algorithm, described below, may be a partial solution to this.
[DNP97] and [DENP97] describe TCP/IPv4 and TCP/IPv6 compression
algorithms, including compression of the various IPv6 extension
headers, as well as methods for compressing non-TCP streams.
[DENP97] also
augments TCP header compression by introducing the 'twice'
algorithm. If a particular packet fails to decompress properly, the
'twice' algorithm modifies its assumptions about the inferred fields
in the compressed header, assuming that a packet identical to the
current one was dropped between the last correctly decoded packet
and the current one. 'Twice' then tries to decompress the received
packet under the new assumptions and, if the checksum passes, the
packet is passed to IP and the decompressor state has been
re-synchronized. This procedure can be extended to three or more
decoding attempts. Additional robustness can be achieved by caching
full copies of packets which do not decompress properly, in the hope
that later arrivals will fix the problem. [DENP97] also discusses the
performance improvement obtained when the decompressor can explicitly
request a full header. Simulation results show that 'twice', in
conjunction with the full header request mechanism, can improve
throughput over uncompressed streams.
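The recovery step can be sketched as follows. The function and its
stand-in "checksum" are hypothetical simplifications of the algorithm
in [DENP97]; a real decompressor verifies the TCP checksum of the
reconstructed packet:

```python
# Sketch of the 'twice' idea: if applying a delta once fails the
# checksum, assume one packet identical to the current one was lost
# and apply the same delta again (and possibly a third time).

def twice_decompress(template_seq, delta, checksum_ok, max_tries=3):
    """Apply `delta` up to `max_tries` times until `checksum_ok` accepts."""
    seq = template_seq
    for _ in range(max_tries):
        seq += delta
        if checksum_ok(seq):
            return seq          # decompressor is re-synchronized
    return None                 # give up and wait for a full header

# Packets carry seq 1000, 1500, 2000; the packet carrying 1500 is
# lost. The receiver holds template 1000 and sees delta 500 for the
# third packet; one extra application recovers the correct value.
result = twice_decompress(1000, 500, lambda s: s == 2000)
```

The first attempt reconstructs 1500 and fails the check; the second
reconstructs 2000, which passes, so the packet is delivered and the
template is repaired without waiting for a retransmitted full header.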
3.8.2 Research
[Jac90] outlines a simple header compression scheme for TCP/IP.
In [DENP97] the authors present the results of simulations showing
that header compression is advantageous for both low and medium
bandwidth links. Simulations show that the twice algorithm,
combined with an explicit header request mechanism, improved
throughput by 10-15% over uncompressed sessions across a wide range
of bit error rates.
Much of this improvement may have been due to the 'twice' algorithm
quickly re-synchronizing the decompressor when a packet is lost.
This is because the 'twice' algorithm, applied one or two times when
the decompressor becomes unsynchronized, will re-synchronize the
decompressor in between 83% and 99% of the cases. This is quite
valuable, since packets received correctly after 'twice' has
re-synchronized the decompressor will cause duplicate
acknowledgments. This re-enables the use of both fast retransmit and
SACK in conjunction with header compression.
3.8.3 Implementation Issues
Implementing TCP/IP header compression requires changes at both the
sending (compressor) and receiving (decompressor) ends of each link
that uses compression. The twice algorithm requires very little
extra machinery over and above header compression, while the
explicit header request mechanism of [DENP97] requires more
extensive modifications to the sending and receiving ends of each
link that employs header compression.
3.8.4 Topology Considerations
TCP header compression is applicable to all of the environments
discussed in section 2, but will provide relatively more improvement
in situations where packet sizes are small (i.e., overhead is large)
and there is medium to low bandwidth and/or higher BER. When TCP's
window size is large, implementing the explicit header request
mechanism, the 'twice' algorithm, and caching packets which fail to
decompress properly becomes more critical.
3.9 Sharing TCP State Among Similar Connections
3.9.1 Mitigation Description
Persistent TCP state information can be used to overcome limitations
in the configuration of the initial state, and to automatically tune
TCP to environments using satellite channels.
TCP includes a variety of parameters, many of which are set to
initial values which can severely affect the performance of
satellite connections, even though most TCP parameters are adjusted
later, during the life of the connection. These parameters include
the initial window size and the initial MSS. Various suggestions have
been
made to change these initial conditions, to more effectively support
satellite links. It is difficult to select any single set of
parameters which is effective for all environments, however.
Instead of attempting to select these parameters a priori, TCB
sharing keeps persistent state between incarnations of TCP
connections, and considers this state when initializing a new
connection. For example, if all connections to subnet 10 result in
extended windows of 1 megabyte, it is probably more efficient to
start new connections with this value, than to rediscover it by
window doubling over a period of dozens of round-trip times.
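One possible shape for such shared state is sketched below, keyed
here by destination /24 subnet. The structure, names, and aggregation
choice are illustrative assumptions, not taken from [Tou97]:

```python
# Sketch of a persistent TCB-state cache shared across connections.
from ipaddress import ip_network

class TCBCache:
    def __init__(self, defaults):
        self.defaults = defaults        # e.g. conservative initial values
        self.by_subnet = {}

    def _key(self, dst):
        # Crude aggregation choice: assume hosts in the same /24 share
        # a path (and hence path properties).
        return ip_network(f"{dst}/24", strict=False)

    def record(self, dst, **state):
        """Save observed state (e.g. cwnd, srtt) from a finished connection."""
        self.by_subnet.setdefault(self._key(dst), {}).update(state)

    def initial_state(self, dst):
        """Seed a new connection from prior connections to the same subnet."""
        return {**self.defaults, **self.by_subnet.get(self._key(dst), {})}

cache = TCBCache({"cwnd": 1, "srtt": None})     # cwnd in segments, srtt in s
cache.record("10.0.0.5", cwnd=64, srtt=0.560)   # learned over a satellite path
state = cache.initial_state("10.0.0.9")         # same /24 as the earlier peer
```

A new connection to a host in the cached subnet starts from the
learned window and RTT estimate, while connections to unknown subnets
fall back to the conservative defaults; the fairness and aging
questions raised above apply to exactly this kind of structure.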
Sharing state among connections brings up a number of questions such
as what to share, with whom to share, how to share it, and how to
age shared information. First, what information is to be shared
must be determined. Some information may be appropriate to share
among TCP connections, while some information sharing may be
inappropriate or not useful. Next, we need to determine with whom
to share information. Sharing may be appropriate for TCP
connections sharing a common path to a given host. Information may
be shared among connections within a host, or even among connections
between different hosts, such as hosts on the same LAN. However,
sharing information between connections not traversing the same
network may not be appropriate. Given the state to share and the
parties that share it, a mechanism for the sharing is
required. Simple state, like MSS and RTT, is easy to share, but
window information can be shared a variety of ways. The sharing
mechanism determines priorities among the sharing connections, and a
variety of fairness criteria need to be considered. Also, the
mechanisms by which information is aged require further study.
Finally, the security concerns associated with sharing a piece of
information need to be carefully considered before introducing such
a mechanism.
3.9.2 Research
The opportunity for such sharing, both among a sequence of
connections, as well as among concurrent connections, is described
in more detail in [Tou97]. The state management itself is largely
an implementation issue; the point of TCB sharing is to raise this
to a research issue, and to further specify the ways in which the
information should be shared, regardless of the implementation.
3.9.3 Implementation Issues
Much of TCB sharing is an implementation issue only. The TCP
specifications do not preclude sharing information across
connections, or using some information from previous connections to
affect the state of new connections.
The goal of TCB sharing is to decouple the effect of connection
initialization from connection performance, to obviate the desire to
have persistent connections solely to maintain efficiency. This
allows separate connections to be more correctly used to indicate
separate associations, distinct from the performance implications
current implementations suffer.
Each TCP connection maintains state, usually in a data structure
called the TCP Control Block (TCB). The TCB contains information
about the connection state, its associated local process, and
feedback parameters about the connection's transmission. As
originally specified, and usually implemented, the TCB is maintained
on a per-connection basis. An alternate implementation can share
some of this state across similar connection instances and among
similar simultaneous connections. The resulting implementation can
have better transient performance, especially where long-term TCB
parameters differ widely from their typical initial values. These
changes can be constrained to affect only the TCB initialization,
and so have no effect on the long-term behavior of TCP after a
connection has been established. They can also be more broadly
applied to coordinate concurrent connections.
We note that the notion of sharing TCB state was originally
documented in T/TCP [Bra92], and is used there to aggregate RTT
values across connection instances, to provide meaningful average
RTTs, even though most connections are expected to persist for only
one RTT. T/TCP also shares a connection identifier, a sequence
number separate from the window number and address/port pairs by
which TCP connections are typically distinguished. As a result of
this shared state, T/TCP allows a receiver to pass data in the SYN
segment to the receiving application, prior to the completion of the
three-way handshake, without compromising the integrity of the
connection. In effect, this shared state caches a partial handshake
from the previous connection, which is a variant of the more general
issue of TCB sharing.
Other implementation considerations are outlined in [Tou97] in
detail. Many instances of the implementation are the subject of
ongoing research.
3.9.4 Topology Considerations
TCB sharing aggregates state information. The set over which this
state is aggregated is critical to the performance of the
sharing. Worst case, nothing is shared, which degenerates to the
behavior of current implementations. Best case, information is
shared among connections sharing a critical property. In earlier
work [Tou97], the possibility of aggregating based on destination
subnet, or even routing path is considered.
For example, on a host connected to a satellite link, all
connections out of the host share the critical property of large
propagation latency, and are dominated by the bandwidth of the
satellite link. In this case, all connections with the same source
would share information.
It is expected that sharing state across TCP connections may be
useful in all network environments presented in section 2.
3.10 ACK Congestion Control
(in progress)
3.11 ACK Filtering
(in progress)
4 SPCS
(in progress)
5 Mitigation Interactions
6 Conclusions
7 References
[AHKO97] Mark Allman, Chris Hayes, Hans Kruse, Shawn Ostermann. TCP
Performance Over Satellite Links. In Proceedings of the 5th
International Conference on Telecommunication Systems, March
1997.
[AKO96] Mark Allman, Hans Kruse, Shawn Ostermann. An
Application-Level Solution to TCP's Satellite Inefficiencies.
In Proceedings of the First International Workshop on
Satellite-based Information Services (WOSBIS), November 1996.
[AG98] Mark Allman, Dan Glover. Enhancing TCP Over Satellite
Channels using Standard Mechanisms, February 1998.
Internet-Draft draft-ietf-tcpsat-stand-mech-03.txt (work in
progress).
[All97a] Mark Allman. Improving TCP Performance Over Satellite
Channels. Master's thesis, Ohio University, June 1997.
[All97b] Mark Allman. Fixing Two BSD TCP Bugs. Technical Report
CR-204151, NASA Lewis Research Center, October 1997.
[All97c] Mark Allman. An Evaluation of TCP with Larger Initial
Windows. 40th IETF Meeting -- TCP Implementations WG.
December, 1997. Washington, DC.
[AOK95] Mark Allman, Shawn Ostermann, Hans Kruse. Data Transfer
Efficiency Over Satellite Circuits Using a Multi-Socket
Extension to the File Transfer Protocol (FTP). In Proceedings
of the ACTS Results Conference, NASA Lewis Research Center,
September 1995.
[ASBD96] Vivek Arora, Narin Suphasindhu, John S. Baras, Douglas
Dillon. Asymmetric Internet Access Over Satellite-Terrestrial
Networks. Proceedings of the AIAA: 16th International
Communications Satellite Systems Conference and Exhibit, Part 1,
pp. 476-482, Washington, D.C., February 25-29, 1996.
[BB95] Ajay Bakre, B.R. Badrinath. I-TCP: Indirect TCP for Mobile
Hosts. In Proceeding of the 15th International Conference on
Distributed Computing Systems (ICDCS), May 1995.
[BPK97] Hari Balakrishnan, Venkata N. Padmanabhan, and Randy
H. Katz. The Effects of Asymmetry on TCP Performance. In
Proceedings of the ACM/IEEE Mobicom, Budapest, Hungary, ACM.
September, 1997.
[BPSK97] Hari Balakrishnan, Venkata N. Padmanabhan, Srinivasan
Seshan, Randy H. Katz. A Comparison of Mechanisms for Improving
TCP Performance over Wireless Links. IEEE/ACM Transactions on
Networking, December 1997.
[Bra89] Robert Braden. Requirements for Internet Hosts --
Communication Layers, October 1989. RFC 1122.
[Bra92] Robert Braden. Transaction TCP -- Concepts, September 1992.
RFC 1379.
[Bra94] Robert Braden. T/TCP -- TCP Extensions for Transactions:
Functional Specification, July 1994. RFC 1644.
[DENP97] Mikael Degermark, Mathias Engan, Bjorn Nordgren, Stephen
Pink. Low-Loss TCP/IP Header Compression for Wireless Networks.
Wireless Networks, vol. 3, no. 5, pp. 375-387, 1997.
[DNP97] Mikael Degermark, Bjorn Nordgren, and Stephen Pink. IP
Header Compression, December 1997. Internet-Draft
draft-degermark-ipv6-hc-05.txt (work in progress).
[FAP97] Sally Floyd, Mark Allman, Craig Partridge. Increasing TCP's
Initial Window, July 1997. Internet-Draft
draft-floyd-incr-init-win-00.txt (work in progress).
[FF98] Sally Floyd, Kevin Fall. Promoting the Use of End-to-End
Congestion Control in the Internet. Submitted to IEEE
Transactions on Networking.
[Hah94] Jonathan Hahn. MFTP: Recent Enhancements and Performance
Measurements. Technical Report RND-94-006, NASA Ames Research
Center, June 1994.
[Hay97] Chris Hayes. Analyzing the Performance of New TCP
Extensions Over Satellite Links. Master's Thesis, Ohio
University, August 1997.
[Hei97a] John Heidemann. Performance Interactions Between P-HTTP and
TCP Implementations. ACM Computer Communication Review,
27(2):65-73, April 1997.
[Hoe96] Janey Hoe. Improving the Startup Behavior of a Congestion
Control Scheme for TCP. In ACM SIGCOMM, August 1996.
[IL92] David Iannucci and John Lekashman. MFTP: Virtual TCP Window
Scaling Using Multiple Connections. Technical Report
RND-92-002, NASA Ames Research Center, January 1992.
[Jac88] Van Jacobson. Congestion Avoidance and Control. In
Proceedings of the SIGCOMM '88, ACM. August, 1988.
[Jac90] Van Jacobson. Compressing TCP/IP Headers, February 1990.
RFC 1144.
[JBB92] Van Jacobson, Robert Braden, and David Borman. TCP
Extensions for High Performance, May 1992. RFC 1323.
[JK92] Van Jacobson and Mike Karels. Congestion Avoidance and
Control. Originally appearing in the proceedings of SIGCOMM '88
by Jacobson only, this revised version includes an additional
appendix. The revised version is available at
ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z. 1992.
[Joh95] Stacy Johnson. Increasing TCP Throughput by Using an
Extended Acknowledgment Interval. Master's Thesis, Ohio
University, June 1995.
[Kes91] Srinivasan Keshav. A Control Theoretic Approach to Flow
Control. In ACM SIGCOMM, September 1991.
[KAGT98] Hans Kruse, Mark Allman, Jim Griner, Diepchi Tran. HTTP
Page Transfer Rates Over Geo-Stationary Satellite Links. March
1998. Proceedings of the Sixth International Conference on
Telecommunication Systems.
[KM97] S. Keshav, S. Morgan. SMART Retransmission: Performance with
Overload and Random Losses. Proceeding of Infocom. 1997.
[MM96a] M. Mathis, J. Mahdavi, "Forward Acknowledgment: Refining TCP
Congestion Control," Proceedings of SIGCOMM'96, August, 1996,
Stanford, CA. Available from
http://www.psc.edu/networking/papers/papers.html
[MM96b] M. Mathis, J. Mahdavi, "TCP Rate-Halving with Bounding
Parameters" Available from
http://www.psc.edu/networking/papers/FACKnotes/current.
[MSMO97] M. Mathis, J. Semke, J. Mahdavi, T. Ott, "The Macroscopic
Behavior of the TCP Congestion Avoidance Algorithm", Computer
Communication Review, volume 27, number 3, July 1997. Available from
http://www.psc.edu/networking/papers/papers.html
[Nic97] Kathleen Nichols. Improving Network Simulation with
Feedback. Com21, Inc. Technical Report. Available from
http://www.com21.com/pages/papers/068.pdf.
[PADHV97] Vern Paxson, Mark Allman, Scott Dawson, Ian Heavens,
Bernie Volz. Known TCP Implementation Problems, March 1998.
Internet-Draft draft-ietf-tcpimpl-prob-03.txt.
[Par97] Craig Partridge. ACK Spacing for High Delay-Bandwidth Paths
with Insufficient Buffering, July 1997. Internet-Draft
draft-partridge-e2e-ackspacing-00.txt.
[Pax97] Vern Paxson. Automated Packet Trace Analysis of TCP
Implementations. In Proceedings of ACM SIGCOMM, September 1997.
[PN98] Poduri, K., and Nichols, K., Simulation Studies of Increased
Initial TCP Window Size, February 1998. Internet-Draft
draft-ietf-tcpimpl-poduri-00.txt (work in progress).
[Pos81] Jon Postel. Transmission Control Protocol, September 1981.
RFC 793.
[SP97] Tim Shepard and Craig Partridge. When TCP Starts Up With
Four Packets Into Only Three Buffers, July 1997. Internet-Draft
draft-shepard-TCP-4-packets-3-buff-00.txt (work in progress).
[Ste97] W. Richard Stevens. TCP Slow Start, Congestion Avoidance,
Fast Retransmit, and Fast Recovery Algorithms, January 1997.
RFC 2001.
[Tou97] Joe Touch. TCP Control Block Interdependence, April 1997.
RFC 2140.
[VH97a] Vikram Visweswaraiah and John Heidemann. Improving Restart
of Idle TCP Connections. Technical Report 97-661, University of
Southern California, 1997.
[VH97b] Vikram Visweswaraiah and John Heidemann. Rate-based pacing
Source Code Distribution, Web page
http://www.isi.edu/lsam/publications/rate_based_pacing/README.html.
November, 1997.
[VH98] Vikram Visweswaraiah and John Heidemann. Improving Restart
of Idle TCP Connections (revised). Submitted for publication.
8 Author's Addresses:
Mark Allman
NASA Lewis Research Center/Sterling Software
21000 Brookpark Rd. MS 54-2
Cleveland, OH 44135
mallman@lerc.nasa.gov
http://gigahertz.lerc.nasa.gov/~mallman
Dan Glover
NASA Lewis Research Center
21000 Brookpark Rd. MS 54-2
Cleveland, OH 44135
Daniel.R.Glover@lerc.nasa.gov
Jim Griner
NASA Lewis Research Center
21000 Brookpark Rd. MS 54-2
Cleveland, OH 44135
jgriner@lerc.nasa.gov
John Heidemann
University of Southern California/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292-6695
johnh@isi.edu
Keith Scott
Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive MS 161-260
Pasadena, CA 91109-8099
Keith.Scott@jpl.nasa.gov
http://eis.jpl.nasa.gov/~kscott/
Jeffrey Semke
Pittsburgh Supercomputing Center
4400 Fifth Ave.
Pittsburgh, PA 15213
semke@psc.edu
http://www.psc.edu/~semke
Joe Touch
University of Southern California/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292-6695
USA
Phone: +1 310-822-1511 x151
Fax: +1 310-823-6714
URL: http://www.isi.edu/~touch
Email: touch@isi.edu
Diepchi Tran
NASA Lewis Research Center
21000 Brookpark Rd. MS 54-2
Cleveland, OH 44135
dtran@lerc.nasa.gov