Internet Engineering Task Force                      Mark Allman, Editor
INTERNET DRAFT                                                Dan Glover
File: draft-ietf-tcpsat-res-issues-01.txt                     Jim Griner
                                                             Keith Scott
                                                               Joe Touch
                                                       February 23, 1998
                                                Expires: August 23, 1998


               Ongoing TCP Research Related to Satellites


Status of this Memo

    This document is an Internet-Draft.  Internet-Drafts are working
    documents of the Internet Engineering Task Force (IETF), its areas,
    and its working groups.  Note that other groups may also distribute
    working documents as Internet-Drafts.

    Internet-Drafts are draft documents valid for a maximum of six
    months and may be updated, replaced, or obsoleted by other documents
    at any time.  It is inappropriate to use Internet-Drafts as
    reference material or to cite them other than as ``work in
    progress.''

    To learn the current status of any Internet-Draft, please check the
    ``1id-abstracts.txt'' listing contained in the Internet-Drafts
    Shadow Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
    munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
    ftp.isi.edu (US West Coast).

NOTE

    This document is not to be taken as a finished product.  Some of the
    sections are rough and are included in order to obtain comments from
    the community that will benefit future iterations of this document.
    This is simply a step in the ongoing conversation about this
    document.  Finally, the authors of this draft do not necessarily
    agree with, or advocate, all of the mechanisms outlined in this
    document.

Abstract

    This document outlines TCP mechanisms that may help better utilize
    the available bandwidth in TCP transfers over long-delay satellite
    channels.  The work outlined in this document is preliminary and has
    not yet been judged to be safe for use in the shared Internet.

1   Introduction

    This document outlines mechanisms that may help the Transmission
    Control Protocol (TCP) [Pos81] better utilize the bandwidth provided
    by long-delay satellite environments.  These mechanisms may also
    help in other environments.  The proposals outlined in this document
    are currently being studied throughout the research community.

Expires: August 23, 1998                                        [Page 1]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

    Therefore, these mechanisms SHOULD NOT be used in the shared
    Internet.  At the point these mechanisms are proved safe and
    appropriate for general use, the appropriate IETF documents will be
    written.  Until that time, these mechanisms should be used for
    research and in private networks only.

2   Satellite Architectures

2.1 Asymmetric Satellite Networks

    Some satellite networks exhibit a bandwidth asymmetry, with a larger
    data rate in one direction than the other, because of limits on the
    transmission power and the antenna size at one end of the link.
    Other systems are one-way only or use a different return path.

    For example, in some systems being used today, satellites are found
    at the edges of the Internet, providing end users with a
    connection to the shared Internet.  Many end users share a relatively
    high data rate downlink from a satellite and use a non-shared, low
    speed dialup modem connection as the return channel.

    [Picture in next iteration of draft.]

2.2 Hybrid Satellite Networks

    In the more general case, satellites may be located at any point in
    the network topology.  In this case, the satellite link carries real
    network traffic and acts as just another channel between two
    gateways.  In this environment, a given connection may be sent over
    terrestrial channels, as well as satellite channels.  On the other
    hand, a connection could also travel over only the terrestrial
    network or only over the satellite portion of the network.

    [Picture in next iteration of draft.]

2.3 Satellite Link as Last Hop

2.4 Point-to-Point Satellite Networks

2.5 Point-to-Multipoint Satellite Networks

2.6 Satellite to Mobile Host

3   Mitigations

    The following sections discuss various techniques for mitigating
    the problems TCP faces in the satellite environment.  Each section
    covers mitigations that apply to one aspect of a TCP connection and
    is organized as follows.  First, the mitigation is briefly
    outlined.  Next, research work involving the mechanism in question
    is briefly discussed.  Finally, the mechanism's benefits in each of
    the environments outlined in section 2 are discussed.

Expires: August 23, 1998                                        [Page 2]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998


3.1 Connection Setup

3.1.1 Mitigation Description

    TCP uses a three-way handshake to set up a connection between two
    hosts.  This connection setup requires 1 RTT or 1.5 RTTs, depending
    upon whether the data sender started the connection actively or
    passively.  This startup time can be eliminated by using TCP
    extensions for transactions (T/TCP) [Bra94].  In most situations,
    T/TCP bypasses the three-way handshake.  This allows the data sender
    to begin transmitting data in the first packet sent (along with the
    connection setup information).  This is especially helpful for short
    request/response traffic.
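
    As an illustration only, the following sketch (in Python, assuming
    a hypothetical 560 ms geostationary RTT and a request and response
    that each fit in a single segment) compares the completion time of
    a standard three-way handshake exchange with a T/TCP-style exchange
    that carries the request in the first segment.  The numbers are
    assumptions, not measurements.

        RTT = 0.560   # seconds, assumed geostationary round-trip time

        def standard_tcp_time():
            # One RTT for the SYN / SYN-ACK exchange (the request rides
            # on the final ACK), plus one RTT for the request/response.
            return 2 * RTT

        def ttcp_time():
            # T/TCP carries the request in the SYN, so the response can
            # return with the SYN-ACK: one RTT in total.
            return 1 * RTT

        if __name__ == "__main__":
            print("standard TCP: %.2f s" % standard_tcp_time())
            print("T/TCP:        %.2f s" % ttcp_time())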

3.1.2 Research

    T/TCP is outlined and analyzed in [Bra94] and [Bra92].

3.1.3 Implementation Issues

    T/TCP requires changes in the TCP stacks of both the data sender
    and the data receiver.

3.1.4 Topology Considerations

    It is expected that T/TCP will be equally beneficial in all
    environments outlined in section 2.

3.2 Slow Start

    The slow start algorithm is used to gradually increase the size of
    TCP's sliding window [JK88] [Ste97].  The algorithm is an important
    safe-guard against transmitting an inappropriate amount of data into
    the network when the connection starts up.  The algorithm begins by
    sending a single data segment to the receiver.  For each
    acknowledgment (ACK) returned, the size of the window is increased
    by 1 segment.  This makes the time required for the window to grow
    directly proportional to the round-trip time (RTT).  In long-delay
    environments, such as some
    satellite channels, the large RTT increases the time needed to
    increase the size of the window to an appropriate level.  This
    effectively wastes capacity [All97a].

    Delayed ACKs are another source of wasted capacity during the slow
    start phase.  RFC 1122 [Bra89] allows data receivers to refrain from
    ACKing every incoming data segment.  However, every second
    full-sized segment must be ACKed.  If a second full-sized segment
    does not arrive within a given timeout, an ACK must be generated
    (this timeout cannot exceed 500 ms).  Since the data sender
    increases the size of the window based on the number of arriving
    ACKs, reducing the number of ACKs slows the window's growth rate.
    In addition, when TCP starts sending, it sends only 1 segment.
    When delayed ACKs are in use, a second segment must arrive before
    an ACK is sent.  Therefore, the receiver must always wait for

Expires: August 23, 1998                                        [Page 3]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

    the delayed ACK timer to expire before ACKing the first segment.
    This also increases the transfer time.
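
    The effect described above can be seen with the following
    idealized, loss-free sketch, which counts the round trips slow
    start needs to reach a given window with a receiver that ACKs
    every segment and with a delayed-ACK receiver.  The 64 segment
    target and the 560 ms RTT are assumptions for illustration; this is
    not a model of any particular TCP implementation.

        # Idealized, loss-free slow start model (not a TCP stack).
        # The window grows by one segment per returning ACK; a
        # delayed-ACK receiver ACKs roughly every second segment.

        GEO_RTT = 0.560   # seconds, hypothetical geostationary RTT

        def rtts_to_reach(target_segments, acks_per_segment):
            cwnd, rtts = 1, 0
            while cwnd < target_segments:
                acks = max(int(cwnd * acks_per_segment), 1)
                cwnd += acks          # one segment of growth per ACK
                rtts += 1
            return rtts

        if __name__ == "__main__":
            for label, aps in (("per-segment ACKs", 1.0),
                               ("delayed ACKs    ", 0.5)):
                n = rtts_to_reach(64, aps)
                print("%s: %2d RTTs (%.2f s at a %.3f s RTT)"
                      % (label, n, n * GEO_RTT, GEO_RTT))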

    Several proposals have suggested ways to make slow start less
    problematic in long-delay environments.  These proposals are briefly
    outlined below and references to the research work given.

3.2.1 Larger Initial Window

3.2.1.1 Mitigation Description

    One method that will reduce the amount of time required by slow
    start (and therefore, the amount of wasted capacity) is to make the
    initial window larger than a single segment.  Recently, this
    proposal has been outlined in an Internet-Draft [FAP97].  The
    suggested size of the initial window is given in equation 1.

                  min (4*MSS, max (2*MSS, 4380 bytes))               (1)

    By increasing the initial window, more packets are sent immediately,
    which will trigger more ACKs, allowing the window to open more
    rapidly.  In addition, by sending at least 2 segments into the
    network initially, the ACK for the first segment does not have to
    wait for the delayed ACK timer to expire, as is the case when the
    initial window is 1 segment (as discussed above).  Therefore, the
    window size given
    in equation 1 saves up to 3 RTTs and a delayed ACK timeout when
    compared to an initial window of 1 segment.
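
    For concreteness, the following sketch computes the initial window
    given by equation 1 for a few example MSS values (the MSS values
    themselves are arbitrary).

        def initial_window(mss):
            # Equation 1: min(4*MSS, max(2*MSS, 4380 bytes))
            return min(4 * mss, max(2 * mss, 4380))

        if __name__ == "__main__":
            for mss in (536, 1460, 4096):     # example MSS values
                iw = initial_window(mss)
                print("MSS %4d -> initial window %5d bytes (%d segments)"
                      % (mss, iw, iw // mss))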

    Using a larger initial window is likely to cause an increased
    amount of loss in highly congested networks (where each
    connection's share of the router queue is less than the initial
    window size).  Therefore, this change must be studied further to
    ensure that it is safe for the shared Internet.

3.2.1.2 Research

    Several researchers have studied the use of a larger initial window
    in various environments.  Nichols [Nic97] found that using a larger
    initial window reduced the time required to load WWW pages over
    hybrid fiber coax (HFC) channels.  Furthermore, it has been found
    that using an initial window of 4 packets does not negatively impact
    overall performance over dialup modem channels [SP97].

3.2.1.3 Implementation Issues

    The use of larger initial windows requires changes to the sender's
    TCP stack.

3.2.1.4 Topology Considerations

    It is expected that the use of a larger initial window would be
    equally beneficial to all network architectures outlined in
    section 2.



Expires: August 23, 1998                                        [Page 4]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

3.2.2 Byte Counting

3.2.2.1 Mitigation Description

    As discussed above, the widespread use of delayed ACKs increases
    the time needed by a TCP sender to increase the size of its window
    during slow start.  One mechanism that can mitigate this problem is
    the use of ``byte counting'' [All97a].  Using this mechanism, the
    window increase is based on the number of previously unacknowledged
    bytes ACKed, rather than on the number of ACKs received.  This makes
    the increase relative to the amount of data transmitted, rather than
    being dependent on the ACK interval used by the receiver.

    Byte counting leads to slightly larger line-rate bursts of segments.
    This increase in burstiness may increase the loss rate on some
    networks.  The size of the line-rate burst increases if the receiver
    generates ``stretch ACKs'' [Pax97] (either by design [Joh95] or due
    to implementation bugs [All97b]).
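
    The following sketch contrasts ACK counting with byte counting
    during slow start, assuming an idealized, loss-free transfer and a
    delayed-ACK receiver that ACKs every second segment.  It
    illustrates the idea only and is not drawn from any particular
    implementation.

        def rtts_to_reach(target, byte_counting):
            cwnd, rtts = 1, 0             # window in segments
            while cwnd < target:
                acks = max(cwnd // 2, 1)  # delayed ACKs this RTT
                if byte_counting:
                    # credit every newly ACKed segment (2 per delayed
                    # ACK), but never more than was sent this RTT
                    cwnd += min(2 * acks, cwnd)
                else:
                    cwnd += acks          # one segment per ACK
                rtts += 1
            return rtts

        if __name__ == "__main__":
            print("ACK counting:  %d RTTs to a 64 segment window"
                  % rtts_to_reach(64, False))
            print("byte counting: %d RTTs to a 64 segment window"
                  % rtts_to_reach(64, True))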

3.2.2.2 Research

    Using byte counting, as opposed to standard ACK counting, has been
    shown to reduce the amount of time needed to increase the window to
    an appropriate size in satellite networks [All97a].  Byte counting,
    however, has not been studied in a congested environment with
    competing traffic.

3.2.2.3 Implementation Issues

    Changing from ACK counting to byte counting requires changes to the
    data sender's TCP stack.

3.2.2.4 Topology Considerations

    It has been suggested by some (and roundly criticized by others)
    that byte counting will exhibit the same properties regardless of
    the network topology (outlined in section 2) being used.

3.2.3 Disabling Delayed ACKs During Slow Start

    (in progress)

3.2.4 Terminating Slow Start

3.2.4.1 Mitigation Description

    The initial slow start phase is used by TCP to determine an
    appropriate window size [JK88].  Slow start is terminated when TCP
    detects congestion, or when the size of the window reaches the size
    of the receiver's advertised window.  The window size at which TCP
    ends slow start and begins using the congestion avoidance [JK88]
    algorithm is "ssthresh".  The initial value for ssthresh is the
    receiver's advertised window.  TCP doubles the size of the window
    every RTT and therefore can overwhelm the network with at most twice

Expires: August 23, 1998                                        [Page 5]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

    as many segments as the network can handle.  By setting ssthresh to
    something less than the receiver's advertised window initially, the
    sender may avoid overwhelming the network with segments.  Hoe
    [Hoe96] proposes using the packet-pair [Kes91] algorithm to
    determine a more appropriate value for ssthresh.  The algorithm
    observes the spacing between the first few ACKs to determine the
    bandwidth of the bottleneck link.  Together with the measured RTT,
    the delay*bandwidth product is determined and ssthresh is set to
    this value.  When TCP's window reaches this reduced ssthresh, slow
    start is terminated and transmission continues with congestion
    avoidance, which is a much more conservative algorithm for
    increasing the size of the window.
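
    The following sketch illustrates the flavor of the ssthresh
    estimate described above.  It assumes the sender can timestamp the
    first few returning ACKs, takes the closest ACK pair as revealing
    the bottleneck bandwidth, and sets ssthresh to the resulting
    delay*bandwidth product.  This is an illustration of the idea in
    [Hoe96], not its implementation, and all numeric values are
    assumptions.

        def estimate_ssthresh(ack_times, segment_size, rtt):
            """ack_times: arrival times (seconds) of the first few
            ACKs, each assumed to cover one segment_size segment."""
            gaps = [b - a for a, b in zip(ack_times, ack_times[1:])]
            spacing = min(g for g in gaps if g > 0)  # closest pair
            bottleneck_bw = segment_size / spacing   # bytes/second
            return int(bottleneck_bw * rtt)          # delay*bandwidth

        if __name__ == "__main__":
            # Hypothetical numbers: 1460 byte segments, 560 ms RTT,
            # ACKs arriving 7.5 ms apart (~1.5 Mb/s bottleneck).
            acks = [0.5600, 0.5675, 0.5750, 0.5825]
            print("ssthresh estimate: %d bytes"
                  % estimate_ssthresh(acks, 1460, 0.560))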

3.2.4.2 Research

    It has been shown that estimating ssthresh can improve performance
    and decrease packet loss [Hoe96].

3.2.4.3 Implementation Issues

    Estimating ssthresh requires changes to the data sender's TCP
    stack.

3.2.4.4 Topology Considerations

    It is expected that this mechanism will work well in all symmetric
    topologies outlined in section 2.  However, asymmetric channels pose
    a special problem, as the rate of the returning ACKs may not
    reflect the bottleneck bandwidth in the forward direction.

3.3 Loss Recovery

3.3.1 Non-SACK Based Mechanisms

    (in progress)

3.3.2 SACK Based Mechanisms

    (in progress)

3.3.3 Explicit Congestion Notification

3.3.4 Detecting Corruption Loss

3.4 Spoofing

    (in progress)

3.5 Multiple Data Connections

3.5.1 Mitigation Description

    One method that has been used to overcome TCP's inefficiencies in
    the satellite environment is to use multiple TCP flows to transfer a

Expires: August 23, 1998                                        [Page 6]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

    given file.  The use of N TCP connections makes TCP N times more
    aggressive and therefore can benefit throughput in some situations.
    Using N multiple TCP connections can impact the transfer and the
    network in a number of ways, which are listed below.

    1.  The transfer is able to start transmission using an effective
        window of N segments, rather than a single segment as one TCP
        flow uses.  This allows the transfer to more quickly increase
        the window size to an appropriate size for the given network.
        However, in some circumstances an initial window of N segments
        is inappropriate for the network conditions.  In this case, a
        transfer utilizing more than one connection may aggravate
        congestion.

    2.  During the congestion avoidance phase, the transfer increases
        the window by N segments per RTT, rather than the one segment
        per RTT that a single TCP connection would employ.  Again, this
        can aid the transfer by more rapidly increasing the window to an
        appropriate point.  However, this rate of increase can also be
        too aggressive for the current network conditions.  In this
        case, the use of multiple data connections can aggravate
        congestion in the network.

    3.  Using multiple connections can provide a very large overall
        window size.  This can be an advantage for TCP implementations
        that do not support the TCP large window extension.  However,
        an equivalent aggregate window size can be obtained by a single
        connection using a TCP implementation that supports large
        windows.

    4.  The overall window decrease in the face of dropped segments is
        less drastic when using N parallel connections.  A single TCP
        connection reduces its window to half its size when segment
        loss is detected.  Therefore, when utilizing N connections
        whose windows sum to an aggregate of W bytes, a single drop
        reduces the aggregate window to:

                W * ((2N - 1)/(2N))

        Clearly this is a less dramatic reduction in window size than
        when using a single TCP connection (a small numeric sketch of
        this reduction follows this list).

        The use of multiple data connections can increase the ability of
        non-SACK TCP implementations to recover from multiple dropped
        segments, assuming the dropped segments cross connections.
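
    As a worked illustration of the window reduction noted in item 4,
    the following sketch (with an arbitrary 64 KB aggregate window)
    computes the aggregate window remaining after a single loss for
    several values of N.

        # A single connection halves its window after one loss; with
        # N connections only the one that lost a segment is halved,
        # leaving W * (2N - 1) / (2N) of an aggregate window W.

        def window_after_loss(aggregate_window, n_connections):
            per_conn = aggregate_window / float(n_connections)
            return aggregate_window - per_conn / 2.0

        if __name__ == "__main__":
            W = 64 * 1024   # hypothetical aggregate window, bytes
            for n in (1, 2, 4, 8):
                after = window_after_loss(W, n)
                print("N=%d: %6d -> %6d bytes (%4.1f%% remains)"
                      % (n, W, after, 100.0 * after / W))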

    The use of multiple parallel connections makes TCP overly aggressive
    for many environments and can contribute to congestive collapse in
    shared networks [FF98].  The advantages provided by using multiple
    TCP connections are now largely provided by TCP extensions (larger
    windows, SACKs, etc.).  Furthermore, the use of a single TCP
    connection is more ``network friendly'' than using multiple parallel
    connections.

3.5.2 Research


Expires: August 23, 1998                                        [Page 7]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

    Research on the use of multiple parallel TCP connections shows
    improved performance [IL92] [Hah94] [AOK95] [AKO96].
    In addition, research has shown that multiple TCP connections can
    outperform a single modern TCP connection (with large windows
    and SACK) [AHKO97].  However, these studies did not consider the
    impact of using multiple TCP connections on competing traffic.
    [FF98] argues that using multiple simultaneous connections to
    transfer a given file will lead to congestive collapse in shared
    networks.

3.5.3 Implementation Issues

    To utilize multiple parallel TCP connections an application and the
    corresponding server must be customized.

3.5.4 Topological Considerations

    As stated above, [FF98] argues that the use of multiple parallel
    connections in a shared network, such as the Internet, leads to
    congestive collapse.  In addition, the specific topology being used
    will dictate the number of parallel connections required.  Some work
    has been done to determine the appropriate number of connections on
    the fly [AKO96], but such a mechanism is far from complete.

3.6 Pacing TCP Segments

3.6.1 ACK Spacing

3.6.1.1 Mitigation Description

    Routes with high bandwidth*delay products (such as those found in
    geostationary satellite links) are capable of utilizing large TCP
    window sizes.  However, it can take a long time before TCP can fully
    utilize this large window.  One possible cause of this delay is
    small router buffers, since in an idealized situation the router
    buffer should be one half the bandwidth*delay product in order to
    avoid losing segments [Par97].  This arises during slow start,
    because it is possible for the sender to burst data at twice the
    rate of the bottleneck router.  Using ACK spacing, the bursts can
    be spread over time by having a gateway separate consecutive ACKs
    by at least the transmission time of two segments [Par97].  Since
    the ACK rate is used to
    determine the rate packets are sent, ACK spacing may allow the
    sender to transmit at the correct rate.
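
    Since, as noted below, no concrete spacing algorithm has been
    settled, the following is only a rough sketch of what a
    gateway-based ACK spacer might do: ACKs destined for the data
    sender are forwarded no closer together than a configured
    interval.  The queueing model and the two-segment interval are
    assumptions for illustration, not details taken from [Par97].

        def space_acks(arrival_times, gap):
            """arrival_times: times (s) at which ACKs reach the
            gateway.  Returns the times they would be forwarded."""
            release, last = [], float("-inf")
            for t in arrival_times:
                out = max(t, last + gap)  # never closer than `gap`
                release.append(out)
                last = out
            return release

        if __name__ == "__main__":
            # A burst of four ACKs arriving nearly back-to-back,
            # spaced to two segment times at an assumed 1.5 Mb/s
            # bottleneck: 2 * 1460 bytes * 8 / 1.5e6 b/s ~= 15.6 ms.
            arrivals = [0.000, 0.001, 0.002, 0.003]
            for a, r in zip(arrivals, space_acks(arrivals, 0.0156)):
                print("arrived %.3f s -> forwarded %.4f s" % (a, r))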

3.6.1.2 Research

    Currently, no implementation of ACK spacing exists beyond a
    thought exercise.  An algorithm has not yet been developed to
    determine the proper ACK spacing, which may differ depending on
    whether TCP is in slow start or congestion avoidance.





Expires: August 23, 1998                                        [Page 8]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

3.6.1.3 Implementation Issues

    ACK spacing is implemented at the router, which eliminates the need
    to change either the sender's or the receiver's TCP stack.

3.6.1.4 Topology Considerations

    It may not be necessary to use ACK spacing on asymmetric routes,
    because of the inherent slowness of the returning ACKs.

3.6.2 Rate-Based Pacing

3.7 TCP Header Compression

    The TCP and IP header information needed to reliably deliver packets
    to a remote site across the Internet can add significant overhead,
    especially for interactive applications.  Telnet packets, for
    example, typically carry only 1 byte of data per packet, and
    standard IPv4 and TCP headers add at least 40 bytes to this;
    IPv6/TCP headers add at least 60 bytes.  Much of this information
    remains relatively constant over the course of a session and so can
    be replaced by a short session identifier.
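
    As a quick arithmetic illustration, the following sketch computes
    the per-packet overhead for a 1 byte Telnet payload using the
    header sizes quoted above.

        def header_overhead(header_bytes, payload_bytes=1):
            total = header_bytes + payload_bytes
            return 100.0 * header_bytes / total

        if __name__ == "__main__":
            for name, hdr in (("IPv4/TCP", 40), ("IPv6/TCP", 60)):
                print("%s: %d header bytes -> %.1f%% of the packet"
                      % (name, hdr, header_overhead(hdr)))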

3.7.1 Mitigation Description

    Many fields in the TCP and IP headers either remain constant during
    the course of a session, change very infrequently, or can be
    inferred from other sources.  For example, the source and
    destination addresses, as well as the IP version, protocol, and port
    fields generally do not change during a session.  Packet length can
    be deduced from the length field of the underlying link layer
    protocol provided that the link layer packet is not padded.  Packet
    sequence numbers in a forward data stream generally change with
    every packet, but increase in a predictable manner.

    The TCP/IP header compression methods described in [DNP97] [DENP97]
    and [Jac90] all attempt to reduce the overhead of a TCP session by
    replacing the data in the TCP and IP headers that remains constant,
    changes slowly, or changes in a predictable manner with a short
    'connection number'.  Using these methods, the sender first sends a
    full TCP header including in it a connection number that the sender
    will use to reference the connection.  The receiver stores the full
    header and uses it as a template, filling in some fields from the
    limited information contained in later, compressed headers.  This
    compression can reduce the size of an IPv4/TCP header from 40 bytes
    to as few as 3 or 4 bytes.

    The reduction in overhead is especially useful when the link is
    bandwidth-limited, as with terrestrial wireless and mobile
    satellite links, where the overhead associated with transmitting
    the header
    bits is nontrivial.  Header compression has the added advantage that
    for the case of uniformly distributed bit errors, compressing TCP/IP
    headers can provide a better quality of service by decreasing the
    packet error probability.

Expires: August 23, 1998                                        [Page 9]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998


    Extra space is saved by encoding changes in fields that change
    relatively slowly by sending only their difference from their values
    in the previous packet instead of their absolute values.  In order
    to decode headers compressed this way, the receiver keeps a copy of
    each full, reconstructed TCP header after it is decoded, and applies
    the delta values from the next decoded compressed header to the
    reconstructed full header template.
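
    The following sketch illustrates the delta encoding idea with a
    toy header containing only a sequence number and an IP ID field.
    The field layout is an assumption chosen for illustration; this is
    not the encoding defined by [Jac90], [DNP97], or [DENP97].

        def compress(prev_header, header):
            # send only the differences from the previous header
            return {k: header[k] - prev_header[k] for k in header}

        def decompress(saved_header, deltas):
            # apply deltas to the saved copy of the last full header
            full = {k: saved_header[k] + deltas[k] for k in deltas}
            saved_header.update(full)     # becomes the new template
            return full

        if __name__ == "__main__":
            sender_prev = {"seq": 1000, "ip_id": 42}
            receiver_template = dict(sender_prev)   # receiver's copy

            next_header = {"seq": 2460, "ip_id": 43}
            deltas = compress(sender_prev, next_header)
            print("deltas on the wire:", deltas)
            print("reconstructed:",
                  decompress(receiver_template, deltas))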

    A caveat to this delta encoding scheme is that if a single
    compressed packet is lost, subsequent packets with compressed
    headers can become garbled if they contain fields which depend on
    the lost packet.
    packets with compressed headers and increasing sequence numbers.  If
    packet N is lost, the full header of packet N+1 will be
    reconstructed at the receiver using packet N-1's full header as a
    template.  Thus the sequence number, which should have been
    calculated from packet N's header, will be wrong, the checksum will
    fail, and the packet will be discarded.  When the sending TCP times
    out it retransmits a packet with a full header in order to re-synch
    the decompresser.

    [DNP97] and [DENP97] describe TCP/IPv4 and TCP/IPv6 compression
    algorithms including compressing the various IPv6 extension headers.
    For plain TCP/IPv4 connections, these are similar to the VJ header
    compression of [Jac90].
    In addition, [DNP97] and [DENP97] discuss 'compression slow-start'
    for non-TCP streams, whereby full headers are transmitted with
    exponentially decreasing frequency until some steady state is
    reached.  Compression slow-start is used since many non-TCP streams,
    like UDP, have weaker checksums which could allow the compressor and
    decompresser to remain out of synch for long periods of time if a
    full header were lost or corrupted.

    [DENP97] also augments TCP header compression by introducing the
    'twice' algorithm.  If a particular packet fails to decompress
    properly, the 'twice' algorithm modifies its assumptions about the
    inferred fields in the compressed header, assuming that a packet
    identical to the current one was dropped between the last correctly
    decoded packet and the current one.  'Twice' then tries to
    decompress the received packet under the new assumptions and, if the
    checksum passes, the packet is passed to IP and the decompresser
    state has been re-synched.  This procedure can be extended to three
    or more decoding attempts.  Additional robustness can be achieved
    by caching full copies of packets which don't decompress properly,
    in the hope that later arrivals will fix the problem.  Finally,
    [DENP97] discusses the performance improvement possible when the
    decompresser can explicitly request a full header.  Simulation
    results show that twice, in
    conjunction with the full header request mechanism, can improve
    throughput over uncompressed streams.
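
    Continuing the toy example above, the following sketch illustrates
    the 'twice' idea: when a decompressed packet fails its checksum,
    the deltas are applied again on the assumption that an identical
    packet was lost.  The toy checksum stands in for TCP's checksum
    and, like the field layout, is an assumption of the sketch.

        def toy_checksum(header):
            return (header["seq"] + header["ip_id"]) & 0xffff

        def try_twice(saved_header, deltas, wire_checksum, tries=3):
            header = dict(saved_header)
            for attempt in range(1, tries + 1):
                # apply the deltas once more, as if one more
                # identical packet had been lost before this one
                header = {k: header[k] + deltas[k] for k in deltas}
                if toy_checksum(header) == wire_checksum:
                    saved_header.update(header)  # re-synchronized
                    return header, attempt
            return None, tries                   # request full header

        if __name__ == "__main__":
            template = {"seq": 1000, "ip_id": 42}
            deltas = {"seq": 1460, "ip_id": 1}
            # The packet carrying {seq 2460, ip_id 43} was lost; the
            # next compressed packet ({seq 3920, ip_id 44}) carries
            # the same deltas plus the checksum of its full header.
            wire = toy_checksum({"seq": 3920, "ip_id": 44})
            print(try_twice(template, deltas, wire))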





Expires: August 23, 1998                                       [Page 10]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

3.7.2 Research

    [Jac90] is a proposed standard.

    In [DENP97] the authors present the results of simulations showing
    that header compression is advantageous for both low and medium
    bandwidth links.  Simulations using ns show that the twice
    algorithm, combined with an explicit header request mechanism, can
    improve throughput over uncompressed sessions in a moderate BER
    (2*10^-7) environment.  An implementation that uses UDP as the link
    layer also exists.


3.7.3 Implementation Issues

    Implementing TCP header compression requires changes at both the
    sending (compressor) and receiving (decompresser) ends.  The twice
    algorithm requires very little extra machinery, while the explicit
    header request mechanism again requires modification of both the
    sender and receiver.


3.7.4 Topology Considerations

    TCP header compression is applicable to all of the environments
    discussed in section 2, but will provide relatively more improvement
    in situations where packet sizes are small and there is medium to
    low bandwidth and/or higher BER. When TCP's window size is large,
    implementing the explicit header request mechanism combined with
    caching packets which fail to decompress properly becomes more
    critical.

3.8 Sharing TCP State Among Multiple Similar Connections

3.8.1 Mitigation Description

    Persistent TCP state information can be used to overcome limitations
    in the configuration of the initial state, and to automatically tune
    TCP to environments using satellite channels.

    TCP includes a variety of parameters, many of which are set to
    initial values which can severely affect the performance of
    satellite connections, even though most TCP parameters are adjusted
    later, during the life of the connection.  These include the
    initial window size and the initial MSS.  Various suggestions have
    been made
    to change these initial conditions, to more effectively support
    satellite links. It is difficult to select any single set of
    parameters which is effective for all environments, however.

    Instead of attempting to select these parameters a priori, TCB
    sharing keeps persistent state between incarnations of TCP
    connections, and considers this state when initializing a new
    connection. For example, if all connections to subnet 10 result in
    extended windows of 1 megabyte, it is probably more efficient to

Expires: August 23, 1998                                       [Page 11]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

    start new connections with this value, than to rediscover it by
    window doubling over a period of dozens of round-trip times.
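
    The following sketch illustrates per-destination sharing with a
    cache, keyed here by destination /24 subnet, that seeds a new
    connection's ssthresh and smoothed RTT.  The field names, default
    values, and keying choice are assumptions for illustration and are
    not the mechanisms specified in [Tou97].

        DEFAULTS = {"ssthresh": 65535, "srtt": 3.0}  # conservative

        class TCBCache:
            def __init__(self):
                self.cache = {}

            @staticmethod
            def key(dst_ip):
                # aggregate shared state by destination /24 subnet
                return ".".join(dst_ip.split(".")[:3])

            def init_tcb(self, dst_ip):
                """Initial parameters for a new connection."""
                return dict(self.cache.get(self.key(dst_ip),
                                           DEFAULTS))

            def update(self, dst_ip, ssthresh, srtt):
                """Record what a closing connection learned."""
                state = {"ssthresh": ssthresh, "srtt": srtt}
                self.cache[self.key(dst_ip)] = state

        if __name__ == "__main__":
            cache = TCBCache()
            cache.update("10.0.0.5", ssthresh=1 << 20, srtt=0.560)
            print(cache.init_tcb("10.0.0.9"))   # same subnet, shared
            print(cache.init_tcb("192.0.2.1"))  # unknown, defaults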

3.8.2 Research

    The opportunity for such sharing, both among a sequence of
    connections, as well as among concurrent connections, is described
    in more detail in [Tou97]. Each TCP connection maintains state,
    usually in a data structure called the TCP Control Block (TCB). The
    TCB contains information about the connection state, its associated
    local process, and feedback parameters about the connection's
    transmission. As originally specified, and usually implemented, the
    TCB is maintained on a per-connection basis. An alternate
    implementation can share some of this state across similar
    connection instances and among similar simultaneous connections. The
    resulting implementation can have better transient performance,
    especially where long-term TCB parameters differ widely from their
    typical initial values.  These changes can be constrained to affect
    only the TCB initialization, and so have no effect on the long-term
    behavior of TCP after a connection has been established. They can
    also be more broadly applied to coordinate concurrent connections.

    There are several research issues with TCB state sharing. First is
    what data to share, and how to share it effectively. Second are the
    security implications of sharing, and how best to control them.

    We note that the notion of sharing TCB state was originally
    documented in T/TCP [Bra92], and is used there to aggregate RTT
    values across connection instances, to provide meaningful average
    RTTs, even though most connections are expected to persist for only
    one RTT. T/TCP also shares a connection identifier, a sequence
    number separate from the window number and address/port pairs by
    which TCP connections are typically distinguished. As a result of
    this shared state, T/TCP allows a receiver to pass data in the SYN
    segment to the receiving application, prior to the completion of the
    three-way handshake, without compromising the integrity of the
    connection. In effect, this shared state caches a partial handshake
    from the previous connection, which is a variant of the more general
    issue of TCB sharing.

3.8.3 Implementation Issues

    Much of TCB sharing is an implementation issue only. The TCP
    specifications do not preclude sharing information across
    connections, or using information from previous connections to
    affect the state of new connections.

    The goal of TCB sharing is to decouple the effect of connection
    initialization from connection performance, to obviate the desire to
    have persistent connections solely to maintain efficiency. This
    allows separate connections to be more correctly used to indicate
    separate associations, distinct from the performance implications
    current implementations suffer.


Expires: August 23, 1998                                       [Page 12]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

    Other implementation considerations are outlined in [Tou97] in
    detail.  Many instances of the implementation are the subject of
    ongoing research.

3.8.4 Topology Considerations

    TCB sharing aggregates state information. The set over which this
    state is aggregated is critical to the performance of the
    sharing. Worst case, nothing is shared, which degenerates to the
    behavior of current implementations. Best case, information is
    shared among connections sharing a critical property. In earlier
    work [Tou97], the possibility of aggregating based on destination
    subnet, or even routing path is considered.

    For example, on a host connected to a satellite link, all
    connections out of the host share the critical property of large
    propagation latency, and are dominated by the bandwidth of the
    satellite link. In this case, all connections with the same source
    would share information.

3.9 ACK Congestion Control

3.10 ACK Filtering

4   SPCS

    (in progress)

5   Mitigation Interactions

6   Conclusions

References

    [AHKO97] Mark Allman, Chris Hayes, Hans Kruse, Shawn Ostermann.  TCP
        Performance Over Satellite Links.  In Proceedings of the 5th
        International Conference on Telecommunication Systems, March
        1997.

    [AKO96] Mark Allman, Hans Kruse, Shawn Ostermann.  An
        Application-Level Solution to TCP's Satellite Inefficiencies.
        In Proceedings of the First International Workshop on
        Satellite-based Information Services (WOSBIS), November 1996.

    [AG97] Mark Allman, Dan Glover.  Enhancing TCP Over Satellite
        Channels using Standard Mechanisms, November 1997.
        Internet-Draft draft-ietf-tcpsat-stand-mech-01.txt (work in
        progress).

    [All97a] Mark Allman.  Improving TCP Performance Over Satellite
        Channels.  Master's thesis, Ohio University, June 1997.

    [All97b] Mark Allman.  Fixing Two BSD TCP Bugs.  Technical Report
        CR-204151, NASA Lewis Research Center, October 1997.

Expires: August 23, 1998                                       [Page 13]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998


    [AOK95] Mark Allman, Shawn Ostermann, Hans Kruse.  Data Transfer
        Efficiency Over Satellite Circuits Using a Multi-Socket
        Extension to the File Transfer Protocol (FTP).  In Proceedings
        of the ACTS Results Conference, NASA Lewis Research Center,
        September 1995.

    [Bra89] Robert Braden.  Requirements for Internet Hosts --
        Communication Layers, October 1989.  RFC 1122.

    [Bra92] Robert Braden.  Transaction TCP -- Concepts, September 1992.
        RFC 1379.

    [Bra94] Robert Braden.  T/TCP -- TCP Extensions for Transactions:
        Functional Specification, July 1994.  RFC 1644.

    [DENP97] Mikael Degermark, Mathias Engan, Bjorn Nordgren, Stephen
        Pink.  Low-Loss TCP/IP Header Compression for Wireless
        Networks.  Wireless Networks, vol. 3, no. 5, pp. 375-387, 1997.

    [DNP97] Mikael Degermark, Bjorn Nordgren, and Stephen Pink.  IP
        Header Compression, December 1997.  Internet-Draft
        draft-degermark-ipv6-hc-05.txt (work in progress).

    [FAP97] Sally Floyd, Mark Allman, Craig Partridge.  Increasing TCP's
        Initial Window, July 1997.  Internet-Draft
        draft-floyd-incr-init-win-00.txt (work in progress).

    [FF98] Sally Floyd, Kevin Fall.  Promoting the Use of End-to-End
        Congestion Control in the Internet.  Submitted to IEEE
        Transactions on Networking.

    [Hah94] Jonathan Hahn.  MFTP: Recent Enhancements and Performance
        Measurements.  Technical Report RND-94-006, NASA Ames Research
        Center, June 1994.

    [Hoe96] Janey Hoe.  Improving the Startup Behavior of a Congestion
        Control Scheme for TCP.  In ACM SIGCOMM, August 1996.

    [IL92] David Iannucci and John Lakashman.  MFTP: Virtual TCP Window
        Scaling Using Multiple Connections.  Technical Report
        RND-92-002, NASA Ames Research Center, January 1992.

    [Jac90]  Van Jacobson.  Compressing TCP/IP Headers, February 1990.
        RFC 1144.

    [JK88] Van Jacobson and Michael Karels.  Congestion Avoidance and
        Control.  In ACM SIGCOMM, 1988.

    [Joh95] Stacy Johnson.  Increasing TCP Throughput by Using an
        Extended Acknowledgment Interval.  Master's Thesis, Ohio
        University, June 1995.

    [Kes91] Srinivasan Keshav.  A Control Theoretic Approach to Flow
        Control.  In ACM SIGCOMM, September 1991.

Expires: August 23, 1998                                       [Page 14]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998


    [Nic97] Kathleen Nichols.  Improving Network Simulation with
        Feedback.  Submitted to InfoCom 97.

    [Par97] Craig Partridge.  ACK Spacing for High Delay-Bandwidth Paths
        with Insufficient Buffering, July 1997.  Internet-Draft
        draft-partridge-e2e-ackspacing-00.txt.

    [Pax97] Vern Paxson.  Automated Packet Trace Analysis of TCP
        Implementations.  In Proceedings of ACM SIGCOMM, September 1997.

    [Pos81] Jon Postel.  Transmission Control Protocol, September 1981.
        RFC 793.

    [SP97] Tim Shepard and Craig Partridge.  When TCP Starts Up With
        Four Packets Into Only Three Buffers, July 1997.  Internet-Draft
        draft-shepard-TCP-4-packets-3-buff-00.txt (work in progress).

    [Ste97] W. Richard Stevens.  TCP Slow Start, Congestion Avoidance,
        Fast Retransmit, and Fast Recovery Algorithms, January 1997.
        RFC 2001.

    [Tou97] Joe Touch.  TCP Control Block Interdependence, April 1997.
        RFC 2140.

Author's Addresses:

    Mark Allman
    NASA Lewis Research Center/Sterling Software
    21000 Brookpark Rd.  MS 54-2
    Cleveland, OH  44135
    mallman@lerc.nasa.gov
    http://gigahertz.lerc.nasa.gov/~mallman

    Dan Glover
    NASA Lewis Research Center
    21000 Brookpark Rd.  MS 54-2
    Cleveland, OH  44135
    Daniel.R.Glover@lerc.nasa.gov

    Jim Griner
    NASA Lewis Research Center
    21000 Brookpark Rd.  MS 54-2
    Cleveland, OH  44135
    jgriner@lerc.nasa.gov

    Keith Scott
    Jet Propulsion Laboratory
    California Institute of Technology
    4800 Oak Grove Drive MS 161-260
    Pasadena, CA 91109-8099
    Keith.Scott@jpl.nasa.gov
    http://eis.jpl.nasa.gov/~kscott/


Expires: August 23, 1998                                       [Page 15]


draft-ietf-tcpsat-res-issues-01.txt                        February 1998

    Joe Touch
    University of Southern California/Information Sciences Institute
    4676 Admiralty Way
    Marina del Rey, CA 90292-6695
    USA
    Phone: +1 310-822-1511 x151
    Fax:   +1 310-823-6714
    URL:   http://www.isi.edu/~touch
    Email: touch@isi.edu














































Expires: August 23, 1998                                       [Page 16]