Network Working Group                                          M. Tuexen
Internet-Draft                                                Siemens AG
Expires: May 5, 2003                                              Q. Xie
                                                          Motorola, Inc.
                                                              R. Stewart
                                                                M. Shore
                                                     Cisco Systems, Inc.
                                                                  L. Ong
                                                       Ciena Corporation
                                                             J. Loughney
                                                   Nokia Research Center
                                                             M. Stillman
                                                                   Nokia
                                                        November 4, 2002


                Architecture for Reliable Server Pooling
                    draft-ietf-rserpool-arch-04.txt

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at http://
   www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 5, 2003.

Copyright Notice

   Copyright (C) The Internet Society (2002).  All Rights Reserved.

Abstract

   This document describes an architecture and protocols for the
   management and operation of server pools supporting highly reliable
   applications, and for client access mechanisms to a server pool.

Table of Contents

   1.    Introduction
   1.1   Overview
   1.2   Terminology
   1.3   Abbreviations
   2.    Reliable Server Pooling Architecture
   2.1   RSerPool Functional Components
   2.2   RSerPool Protocol Overview
   2.2.1 Endpoint Name Resolution Protocol
   2.2.2 Aggregate Server Access Protocol
   2.2.3 PU <-> NS Communication
   2.2.4 PE <-> NS Communication
   2.2.5 PU <-> PE Communication
   2.2.6 NS <-> NS Communication
   2.2.7 PE <-> PE Communication
   2.3   Failover Support
   2.3.1 Testament
   2.3.2 Cookies
   2.3.3 Application level acknowledgements
   2.3.4 Business Cards
   2.4   Typical Interactions between RSerPool Components
   3.    Examples
   3.1   Two File Transfer Examples
   3.1.1 The RSerPool Aware Client
   3.1.2 The RSerPool Unaware Client
   3.2   Telephony Signaling Example
   3.2.1 Decomposed GWC and GK Scenario
   3.2.2 Collocated GWC and GK Scenario
   4.    Acknowledgements
         References
         Authors' Addresses
         Full Copyright Statement

1. Introduction

1.1 Overview

   This document defines an architecture for providing a highly
   available, reliable server function in support of some service.
   This is achieved by forming a pool of servers, each of which is
   capable of supporting the desired service, and by providing a name
   service that resolves requests from a service user to the identity
   of a working server in the pool.

   To access a server pool, the pool user consults a name server.  The
   name service itself can be provided by a pool of name servers using a
   shared protocol to make the name resolution function fault-tolerant.
   It is assumed that the name space is kept flat and designed for a
   limited scale in order to keep the protocols simple, robust and fast.

   The server pool itself is supported by a shared protocol between
   servers and the name service allowing servers to enter and exit the
   pool.  Several server selection mechanisms, called server pool
   policies, are supported for flexibility.

1.2 Terminology

   This document uses the following terms:

   Home Name Server: The Name Server a Pool Element has registered with.
      This Name Server supervises the Pool Element.

   Operation scope: The part of the network visible to pool users
      through a specific instance of the reliable server pooling
      protocols.

   Pool (or server pool): A collection of servers providing the same
      application functionality.

   Pool handle (or pool name): A logical pointer to a pool.  Each server
      pool will be identifiable in the operation scope of the system by
      a unique pool handle or "name".

   Pool element: A server entity that has registered with a pool.

   Pool user: A client being served by a server pool.

   Pool element handle (or endpoint handle): A logical pointer to a
      particular pool element in a pool, consisting of the name of the
      pool and a destination transport address of the pool element.

   Name space: A cohesive structure of pool names and relations that may
      be queried by an internal or external agent.

   Name server: Entity which is responsible for managing and maintaining
      the name space within the RSerPool operation scope.


1.3 Abbreviations

   ASAP: Aggregate Server Access Protocol

   ENRP: Endpoint Name Resolution Protocol

   Home NS: Home Name Server

   NS: Name Server

   PE: Pool element

   PU: Pool user

   SCTP: Stream Control Transmission Protocol

   TCP: Transmission Control Protocol

2. Reliable Server Pooling Architecture

   In this section, we define a reliable server pool architecture.

2.1 RSerPool Functional Components

   There are three classes of entities in the RSerPool architecture:

   o  Pool Elements (PEs).

   o  Name Servers (NSs).

   o  Pool Users (PUs).

   A server pool is defined as a set of one or more servers providing
   the same application functionality.  These servers are called Pool
   Elements (PEs).  PEs form the first class of entities in the RSerPool
   architecture.  Multiple PEs in a server pool can be used to provide
   fault tolerance or load sharing, for example.

   Each server pool will be identifiable by a unique name, which is
   simply a byte string called the pool handle.  This allows binary
   names to be used.

   These names are not valid in the whole Internet but only within a
   smaller domain, called the operational scope.  Furthermore, the
   namespace is assumed to be flat, so that multiple levels of query
   are not necessary to resolve a name request.

   The second class of entities in the RSerPool architecture is the
   class of name servers (NSs).  These name servers can resolve a pool
   handle to a list of information which allows the PU to access a PE of
   the server pool identified by the handle.  This information includes:

   o  A list of IPv4 and/or IPv6 addresses.

   o  The value of the protocol field of the IP header, specifying the
      transport layer protocol or protocols.

   o  A port number associated with the transport protocol, e.g.
      SCTP, TCP or UDP.

   Please note that the RSerPool architecture supports both IPv4 and
   IPv6 addressing.  A PE can also support multiple transport layers.
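
   The following Python sketch (illustrative only, not part of any
   protocol specification) shows the kind of flat mapping a name
   server maintains from a pool handle to such PE transport
   information; all class and field names are assumptions made for
   this sketch.

      from dataclasses import dataclass, field

      @dataclass
      class TransportInfo:
          addresses: list   # IPv4 and/or IPv6 addresses of the PE
          protocol: int     # IP protocol number, e.g. 132 for SCTP
          port: int         # port number used with that protocol

      @dataclass
      class PoolElement:
          identifier: bytes
          transports: list = field(default_factory=list)

      class NameSpace:
          """Flat namespace: a pool handle (an opaque byte string)
          maps directly to the list of registered pool elements."""

          def __init__(self):
              self.pools = {}   # handle (bytes) -> [PoolElement]

          def register(self, handle, pe):
              self.pools.setdefault(handle, []).append(pe)

          def resolve(self, handle):
              # One flat lookup; no multi-level query is needed.
              return self.pools.get(handle, [])

      ti = TransportInfo(["10.1.2.3"], 132, 4000)
      ns = NameSpace()
      ns.register(b"file transfer pool", PoolElement(b"PE-A", [ti]))
      print(ns.resolve(b"file transfer pool"))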

   In each operational scope there must be at least one name server.
   All name servers within the operational scope have knowledge of all
   server pools within the operational scope.

   A third class of entities in the architecture is the Pool User (PU)
   class, consisting of the clients being served by the PEs of a server
   pool.

2.2 RSerPool Protocol Overview

   The features required of RSerPool can be provided by a combination
   of two protocols: ENRP (Endpoint Name Resolution Protocol) and ASAP
   (Aggregate Server Access Protocol).

2.2.1 Endpoint Name Resolution Protocol

   The name servers use a protocol called Endpoint Name Resolution
   Protocol (ENRP) for communication with each other to make sure that
   all have the same information about the server pools.

   ENRP is designed to provide a fully distributed fault-tolerant real-
   time translation service that maps a name to a set of transport
   addresses pointing to a specific group of networked communication
   endpoints registered under that name.  ENRP employs a client-server
   model in which a name server responds to name translation service
   requests from endpoint clients running on the same host or on
   different hosts.

   RFC3237 [7] also requires that the name servers do not resolve a
   pool handle to a transport layer address of a PE which is not in
   operation.  Therefore each PE is supervised by one specific name
   server, called the home NS of that PE.  If the home NS detects that
   the PE is out of service, it informs all other name servers using
   ENRP.
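
   The following Python sketch illustrates this supervision role; the
   keep-alive check and the peer notification stand in for the ASAP
   monitoring exchange and the corresponding ENRP update, and all
   names are illustrative assumptions.

      class PeerNS:
          def handle_update(self, handle, removed):
              # A real peer would update its own view of the pool.
              print("ENRP update:", removed, "left", handle)

      class HomeNameServer:
          def __init__(self, namespace, peers):
              self.namespace = namespace   # handle -> list of PE names
              self.peers = peers           # other NSs in the scope

          def supervise(self, handle, pe, reachable):
              # 'reachable' would come from an ASAP keep-alive check.
              if reachable:
                  return
              # Drop the failed PE locally ...
              self.namespace[handle].remove(pe)
              # ... then inform the other NSs (ENRP-style update).
              for peer in self.peers:
                  peer.handle_update(handle, removed=pe)

      view = {b"pool": ["PE-1", "PE-2"]}
      home = HomeNameServer(view, [PeerNS()])
      home.supervise(b"pool", "PE-1", reachable=False)
      print(view)   # {b'pool': ['PE-2']}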

2.2.2 Aggregate Server Access Protocol

   The PU wanting service from the pool uses the Aggregate Server Access
   Protocol (ASAP) to access members of the pool.  Depending on the
   level of support desired by the application, use of ASAP may be
   limited to an initial query for an active PE, or ASAP may be used to
   mediate all communication between the PU and PE, so that automatic
   failover from a failed PE to an alternate PE can be supported.

   ASAP in conjunction with ENRP provides a fault tolerant data transfer
   mechanism over IP networks.  ASAP uses a name-based addressing model
   which isolates a logical communication endpoint from its IP
   address(es), thus effectively eliminating the binding between the
   communication endpoint and its physical IP address(es) which normally
   constitutes a single point of failure.

   In addition, ASAP defines each logical communication destination as a
   server pool, providing full transparent support for server-pooling
   and load sharing.

   ASAP is also used by a server to join or leave a server pool.  The
   server registers or deregisters itself by communicating with a name
   server, which will normally be its home NS.  ASAP supports dynamic
   system scalability, allowing the pool membership to change at any
   time without interruption of the service.
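
   A minimal sketch of this registration logic on the PE side is shown
   below; the 'send' callable stands for an SCTP association to the
   home NS, and the dictionary layout is an illustrative assumption
   rather than the ASAP wire format.

      class PoolElementAgent:
          def __init__(self, send, handle, transports):
              self.send = send              # delivers messages to the NS
              self.handle = handle          # pool to join
              self.transports = transports  # (addr, proto, port) tuples

          def join_pool(self):
              self.send({"type": "registration",
                         "handle": self.handle,
                         "transports": self.transports})

          def leave_pool(self):
              # Graceful exit; the pool keeps serving via other PEs.
              self.send({"type": "deregistration",
                         "handle": self.handle})

      sent = []
      pe = PoolElementAgent(sent.append, b"file transfer pool",
                            [("10.1.2.3", 132, 4000)])
      pe.join_pool()
      pe.leave_pool()
      print(sent)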

2.2.3 PU <-> NS Communication

   The PU <-> NS communication is used for name queries.  The PU sends
   a pool handle to the NS and gets back the information necessary for
   accessing a server in a server pool.

                       ********        ********
                       *  PU  *        *  NS  *
                       ********        ********

                       +------+        +------+
                       | ASAP |        | ASAP |
                       +------+        +------+
                       | SCTP |        | SCTP |
                       +------+        +------+
                       |  IP  |        |  IP  |
                       +------+        +------+

                    Protocol stack between PU and NS
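
   The sketch below shows the PU side of such a query combined with a
   simple server pool policy; the 'query_ns' callable stands for the
   ASAP request/response shown in the figure, and round robin is only
   one example of a pool policy.  All names are assumptions made for
   this sketch.

      import itertools

      class PoolUserAgent:
          def __init__(self, query_ns):
              self.query_ns = query_ns   # callable: handle -> PE list
              self.cache = {}            # handle -> round robin iter

          def select_pe(self, handle):
              if handle not in self.cache:
                  pe_list = self.query_ns(handle)    # ask the NS once
                  self.cache[handle] = itertools.cycle(pe_list)
              return next(self.cache[handle])        # pool policy

      def fake_ns(handle):
          return [("10.0.0.1", 132, 4000), ("10.0.0.2", 132, 4000)]

      pu = PoolUserAgent(fake_ns)
      print(pu.select_pe(b"file transfer pool"))   # first PE
      print(pu.select_pe(b"file transfer pool"))   # second PE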


2.2.4 PE <-> NS Communication

   The PE <-> NS communication is used for registration and
   deregistration of the PE in one or more pools and for the
   supervision of the PE by the home NS.  This communication is based
   on SCTP; the protocol stack is shown in the following figure.

                       ********        ********
                       *  PE  *        *  NS  *
                       ********        ********

                       +------+        +------+
                       | ASAP |        | ASAP |
                       +------+        +------+
                       | SCTP |        | SCTP |
                       +------+        +------+
                       |  IP  |        |  IP  |
                       +------+        +------+
                    Protocol stack between PE and NS

2.2.5 PU <-> PE Communication

   The PU <-> PE communication can be divided into two parts:

   o  control channel

   o  data channel

   The data channel is used for the transmission of the upper layer
   data.  The ASAP layer at the PU and PE may or may not be involved in
   the handling of the data channel.

   The control channel can be established from the PU side, if needed,
   to transport the following information:

   o  The PE can send a testament to the PU, indicating to which other
      PE the PU should fail over in case of a failure.

   o  The PE can send cookies to the PU.  The PU stores only the last
      cookie and sends it to the new PE in case of a failover.

   o  Both the PE and PU can send application level acknowledgements
      to provide user controlled buffer management at the RSerPool
      layer.

   See Section 2.3 for further details.

   The control channel is transported using the ASAP protocol, which
   uses SCTP as its transport protocol.  The control and data channels
   may be transported over a single transport layer connection.
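
   A minimal sketch of such multiplexing is given below; a real
   implementation could, for instance, map the two channels to
   different SCTP streams, while here a simple per-message tag is used
   and all names are assumptions.

      CONTROL, DATA = 0, 1

      class Channel:
          def __init__(self, transport_send):
              # 'transport_send' stands for one SCTP association.
              self.transport_send = transport_send

          def send_control(self, msg):    # testament, cookie, ack, ...
              self.transport_send((CONTROL, msg))

          def send_data(self, payload):   # upper layer data
              self.transport_send((DATA, payload))

      wire = []
      ch = Channel(wire.append)
      ch.send_control({"cookie": b"opaque-state"})
      ch.send_data(b"application bytes")
      print(wire)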

2.2.6 NS <-> NS Communication

   The communication between name servers is used to share the knowledge
   about all server pools between all name servers in an operational
   scope.

                       ********        ********
                       *  NS  *        *  NS  *
                       ********        ********

                       +------+        +------+
                       | ENRP |        | ENRP |
                       +------+        +------+
                       | SCTP |        | SCTP |
                       +------+        +------+
                       |  IP  |        |  IP  |
                       +------+        +------+
                    Protocol stack between NS and NS

   For this communication ENRP over SCTP is used.

   When a name server boots up, it may send a UDP multicast message
   for the initial detection of other name servers in the operational
   scope.  The other name servers answer using a unicast UDP message.
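
   The following sketch shows the two sides of this discovery step
   using plain UDP sockets; the multicast group, port and payloads are
   illustrative assumptions, not values defined by ENRP.

      import socket

      MCAST_GROUP, MCAST_PORT = "239.255.0.1", 5000  # assumed values

      def announce():
          """Booting NS: send one multicast message to find peers."""
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
          s.sendto(b"NS-DISCOVERY", (MCAST_GROUP, MCAST_PORT))
          s.close()

      def answer(requester_address):
          """Running NS: reply to the requester with unicast UDP."""
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.sendto(b"NS-HERE", requester_address)
          s.close()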

2.2.7 PE <-> PE Communication

   This is a special case of the PU <-> PE communication.  In this case
   the PU is also a PE in a server pool.

   There is one additional point here: the PE acting as a PU can
   inform the other PE that it is actually a PE of a pool.  This means
   that the pool handle is transferred via the control channel.  A
   testament can also be sent from the PE acting as a PU to the PE.
   See Section 2.3 for further details.

2.3 Failover Support

   If the PU detects the failure of a PE it may fail over to a
   different PE.  The new PE should be selected such that it is most
   likely not affected by the failure of the old one.  This means, for
   example, that in case of the failure of a TCP connection between a
   PU and a PE the PU should not fail over to an SCTP association on
   the same host.  It is better to use a different host.  Therefore it
   is possible for a PE to register multiple transports.
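
   The host-separation rule above can be sketched as follows; each PE
   is represented here simply as a name plus the set of addresses it
   registered, which is an illustrative simplification.

      def pick_failover_pe(pe_list, failed_pe):
          """Prefer a PE that shares no address with the failed one,
          i.e. a PE that is most likely on a different host."""
          failed_name, failed_addrs = failed_pe
          for name, addrs in pe_list:
              if name != failed_name and not (addrs & failed_addrs):
                  return (name, addrs)
          # Otherwise fall back to any PE other than the failed one.
          others = [pe for pe in pe_list if pe[0] != failed_name]
          return others[0] if others else None

      pool = [("PE-1", {"10.0.0.1"}),
              ("PE-2", {"10.0.0.1", "10.0.0.2"}),
              ("PE-3", {"10.0.0.3"})]
      print(pick_failover_pe(pool, ("PE-2", {"10.0.0.1",
                                             "10.0.0.2"})))
      # -> ('PE-3', {'10.0.0.3'})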

   There are some mechanisms provided by RSerPool to support the
   failover to a new PE.

2.3.1 Testament

   Consider the scenario given in the following figure.

                   .......................
                   .      +-------+      .
                   .      |       |      .
                   .      |  PE 1 |      .
                   .      |       |      .
                   .      +-------+      .        .
                   .                     .
                   .     Server Pool     .
                   .                     .
                   .                     .
    +-------+      .      +-------+      .       +-------+
    |       |      .      |       |      .       |       |
    |  PU 1 |------.------|  PE 2 |------.-------|  PU 2 |
    |       |      .      |       |      .       |       |
    +-------+      .      +-------+      .       +-------+
                   .                     .
                   .                     .
                   .                     .
                   .                     .
                   .      +-------+      .
                   .      |       |      .
                   .      |  PE 3 |      .
                   .      |       |      .
                   .      +-------+      .
                   .......................
                  Two PUs accessing the same PE

   PU 1 is using PE 2 of the server pool.  Assume that PE 1 and PE 2
   share state but not PE 2 and PE 3.  Using the testament of PE 2 it is
   possible for PE 2 to inform PU 1 that it should fail over to PE 1 in
   case of a failure.

   A slightly more complicated situation arises if two pool users,
   PU 1 and PU 2, use PE 2 and both need to keep using the same PE
   after a failure.  Then it is important that PU 1 and PU 2 fail over
   to the same PE.  This can be handled by PE 2 giving the same
   testament to PU 1 and PU 2.
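
   The sketch below illustrates how a PU could handle a testament
   received over the control channel, including the case where PE 2
   hands the same testament to both PUs; the message layout and names
   are assumptions made for this sketch.

      class PoolUser:
          def __init__(self, name, connect):
              self.name = name
              self.connect = connect   # callable: pe -> new connection
              self.testament = None

          def on_control_message(self, msg):
              if msg.get("type") == "testament":
                  self.testament = msg["failover_pe"]

          def on_pe_failure(self):
              # Follow the testament if one was received.
              if self.testament is not None:
                  self.connect(self.testament)

      log = []
      pu1 = PoolUser("PU 1", lambda pe: log.append(("PU 1", pe)))
      pu2 = PoolUser("PU 2", lambda pe: log.append(("PU 2", pe)))
      for pu in (pu1, pu2):
          pu.on_control_message({"type": "testament",
                                 "failover_pe": "PE 1"})
          pu.on_pe_failure()
      print(log)   # both pool users fail over to PE 1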

2.3.2 Cookies

   Cookies may be sent from the PE to the PU if the PE wishes to do
   so.  The PU only stores the last received cookie.  In case of a
   failover it sends this last received cookie to the new PE.  This
   method provides a simple way of state sharing between the PEs.
   Please note that the old PE should sign the cookie and the
   receiving PE should verify the signature.  For the PU, the cookie
   has no structure and is only stored and transmitted to the new PE.
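
   A minimal sketch of this cookie handling is shown below, using an
   HMAC as one possible signature; how the PEs share the key is
   outside the scope of this sketch and all names are illustrative.

      import hmac, hashlib

      POOL_KEY = b"key shared by the PEs"   # assumed to be pre-shared

      def make_cookie(state: bytes) -> bytes:
          """Old PE: sign the serialized state before sending it."""
          mac = hmac.new(POOL_KEY, state, hashlib.sha256).digest()
          return mac + state

      def accept_cookie(cookie: bytes) -> bytes:
          """New PE: verify the signature and recover the state."""
          mac, state = cookie[:32], cookie[32:]
          good = hmac.new(POOL_KEY, state, hashlib.sha256).digest()
          if not hmac.compare_digest(mac, good):
              raise ValueError("cookie signature check failed")
          return state

      # The PU treats the cookie as opaque: it only keeps the last
      # one and hands it to the new PE after a failover.
      last_cookie = make_cookie(b"file=foo.txt offset=4096")
      print(accept_cookie(last_cookie))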

2.3.3 Application level acknowledgements

   In case of a failure an upper layer might want to retrieve some
   data from the communication with the failed PE and transfer it to
   the new one.  Because this data retrieval problem cannot be solved
   completely in a general way (avoiding both message loss and message
   duplication), the ASAP layer only provides support for application
   layer acknowledgements.  ASAP uses this for upper layer supported
   buffer management in the ASAP layer.
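
   The sketch below shows one way the ASAP layer could use such
   acknowledgements for buffer management; the interfaces are
   assumptions for illustration only.

      class AckedSendBuffer:
          """Keep every message buffered until the peer application
          acknowledges it, so anything unacknowledged is still
          available for retransmission to a new PE after a
          failover."""

          def __init__(self, send):
              self.send = send
              self.unacked = {}   # sequence number -> message
              self.next_seq = 0

          def submit(self, msg):
              seq, self.next_seq = self.next_seq, self.next_seq + 1
              self.unacked[seq] = msg
              self.send(seq, msg)
              return seq

          def ack(self, seq):
              # Application level acknowledgement frees the buffer.
              self.unacked.pop(seq, None)

          def resend_to(self, send_to_new_pe):
              for seq, msg in sorted(self.unacked.items()):
                  send_to_new_pe(seq, msg)

      buf = AckedSendBuffer(lambda seq, msg: None)
      first = buf.submit(b"request 1")
      buf.submit(b"request 2")
      buf.ack(first)
      buf.resend_to(lambda seq, msg: print("resend", seq, msg))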

2.3.4 Business Cards

   In case of a PE to PE communication one of the PEs acts as a PU for
   establishing the communication.  The receiving PE may not know the
   pool handle of the PE which initiated the communication.  A
   business card can be used by the initiating PE to provide its peer
   with a pool handle, allowing the peer PE to fail over the
   communication in case the initiating PE fails.
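
   A small sketch of this exchange follows; the message layout and the
   'resolve' callable are illustrative assumptions.

      def make_business_card(own_pool_handle: bytes) -> dict:
          # Sent by the initiating PE over the control channel.
          return {"type": "business card", "handle": own_pool_handle}

      class ReceivingPE:
          def __init__(self):
              self.peer_pool = None   # unknown until the card arrives

          def on_control_message(self, msg):
              if msg.get("type") == "business card":
                  self.peer_pool = msg["handle"]

          def fail_over_peer(self, resolve):
              # Knowing the peer's pool handle, the pool can be
              # resolved again and another member contacted.
              if self.peer_pool is None:
                  return None
              return resolve(self.peer_pool)

      rx = ReceivingPE()
      rx.on_control_message(make_business_card(b"gateway pool"))
      print(rx.fail_over_peer(lambda handle: ["PE-A", "PE-B"]))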

2.4 Typical Interactions between RSerPool Components

   The following drawing shows the typical RSerPool components and their
   possible interactions with each other:

     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     ~                                                  operation scope ~
     ~  .........................          .........................    ~
     ~  .        Server Pool 1  .          .        Server Pool 2  .    ~
     ~  .  +-------+ +-------+  .    (d)   .  +-------+ +-------+  .    ~
     ~  .  |PE(1,A)| |PE(1,C)|<-------------->|PE(2,B)| |PE(2,A)|<---+  ~
     ~  .  +-------+ +-------+  .          .  +-------+ +-------+  . |  ~
     ~  .      ^            ^   .          .      ^         ^      . |  ~
     ~  .      |      (a)   |   .          .      |         |      . |  ~
     ~  .      +----------+ |   .          .      |         |      . |  ~
     ~  .  +-------+      | |   .          .      |         |      . |  ~
     ~  .  |PE(1,B)|<---+ | |   .          .      |         |      . |  ~
     ~  .  +-------+    | | |   .          .      |         |      . |  ~
     ~  .      ^        | | |   .          .      |         |      . |  ~
     ~  .......|........|.|.|....          .......|.........|....... |  ~
     ~         |        | | |                     |         |        |  ~
     ~      (c)|     (a)| | |(a)               (a)|      (a)|     (c)|  ~
     ~         |        | | |                     |         |        |  ~
     ~         |        v v v                     v         v        |  ~
     ~         |     +++++++++++++++    (e)     +++++++++++++++      |  ~
     ~         |     +      NS     +<---------->+      NS     +      |  ~
     ~         |     +++++++++++++++            +++++++++++++++      |  ~
     ~         v            ^                          ^             |  ~
     ~     *********        |                          |             |  ~
     ~     * PU(A) *<-------+                       (b)|             |  ~
     ~     *********   (b)                             |             |  ~
     ~                                                 v             |  ~
     ~         :::::::::::::::::      (f)      *****************     |  ~
     ~         : Other Clients :<------------->* Proxy/Gateway * <---+  ~
     ~         :::::::::::::::::               *****************        ~
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            RSerPool components and their possible interactions.


   In this figure we can identify the following possible interactions:

   (a) Server Pool Elements <-> NS: (ASAP) Each PE in a pool uses ASAP
      to register or de-register itself as well as to exchange other
      auxiliary information with the NS.  The NS also uses ASAP to
      monitor the operational status of each PE in a pool.

   (b) PU <-> NS: (ASAP) A PU normally uses ASAP to request a name-to-
      address translation service from the NS before the PU can send
      user messages addressed to a server pool by the pool's name.

   (c) PU <-> PE: (ASAP) ASAP can be used to exchange some auxiliary
      information between the two parties before they engage in user
      data transfer.

   (d) Server Pool <-> Server Pool: (ASAP) A PE in a server pool can
      become a PU to another pool when the PE tries to initiate
      communication with the other pool.  In such a case, the
      interactions described in (a) and (c) above will apply.

   (e) NS <-> NS: (ENRP) ENRP can be used to fulfill various Name Space
      operation, administration, and maintenance (OAM) functions.

   (f) Other Clients <-> Proxy/Gateway: (standard protocols) The
      proxy/gateway enables clients ("other clients"), which are not
      RSerPool aware, to access services provided by an RSerPool based
      server pool.  It should be noted that these proxies/gateways may
      become a single point of failure.

3. Examples

   [Editors note] This section has not been updated.  The examples will
   be updated after the architecture has been finalized.

   In this section the basic concepts of ENRP and ASAP will be
   described.  First an RSerPool aware FTP server is considered.  The
   interaction with an RSerPool aware and a non-aware client is given.
   Finally, a telephony example is considered.

3.1 Two File Transfer Examples

   In this section we present two separate file transfer examples
   using ENRP and ASAP: one demonstrating an ENRP/ASAP aware client
   and one with a client that uses a Proxy or Gateway to perform the
   file transfer.  In these examples we use an FTP RFC959 [2] model
   with some modifications.  The first example (the RSerPool aware
   one) modifies FTP concepts so that the file transfer takes place
   over SCTP.  In the second example we use TCP between the unaware
   client and the Proxy.  The Proxy itself uses the modified FTP with
   RSerPool as illustrated in the first example.

   Please note that in the examples we do NOT follow FTP RFC959 [2]
   precisely but use FTP-like concepts and attempt to adhere to the
   basic FTP model.  These examples use FTP for illustrative purposes;
   FTP was chosen since many of its basic concepts are well known and
   should be familiar to readers.

3.1.1 The RSerPool Aware Client

   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   ~                                                  operation scope ~
   ~  .........................                                       ~
   ~  . "File Transfer Pool"  .                                       ~
   ~  .  +-------+ +-------+  .                                       ~
   ~ +-> |PE(1,A)| |PE(1,C)|  .                                       ~
   ~ |.  +-------+ +-------+  .                                       ~
   ~ |.      ^            ^   .                                       ~
   ~ |.      +----------+ |   .                                       ~
   ~ |.  +-------+      | |   .                                       ~
   ~ |.  |PE(1,B)|<---+ | |   .                                       ~
   ~ |.  +-------+    | | |   .                                       ~
   ~ |.      ^        | | |   .                                       ~
   ~ |.......|........|.|.|....                                       ~
   ~ |  ASAP |    ASAP| | |ASAP                                       ~
   ~ |(d)    |(c)     | | |                                           ~
   ~ |       v        v v v                                           ~
   ~ |   *********   +++++++++++++++                                  ~
   ~ + ->* PU(X) *   +      NS     +                                  ~
   ~     *********   +++++++++++++++                                  ~
   ~         ^     ASAP     ^                                         ~
   ~         |     <-(b)    |                                         ~
   ~         +--------------+                                         ~
   ~               (a)->                                              ~
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

               Architecture for RSerPool aware client.

   To effect a file transfer, the following steps would take place (an
   illustrative code sketch follows the list).

   1.  The application in PU(X) would send a login request.  PU(X)'s
       ASAP layer would send an ASAP request to its NS to request the
       list of pool elements (using (a)).  The pool handle to identify
       the pool would be "File Transfer Pool".  The ASAP layer queues
       the login request.

   2.  The NS would return a list of the three PEs PE(1,A), PE(1,B) and
       PE(1,C) to the ASAP layer in PU(X) (using (b)).

   3.  The ASAP layer selects one of the PEs, for example PE(1,B).  It
       transmits the login request and the other FTP control data and
       finally starts the transmission of the requested files (using
       (c)).  For this the multiple stream feature of SCTP could be
       used.

   4.  If during the file transfer conversation PE(1,B) fails, and
       assuming the PEs were sharing the state of the file transfer, a
       fail-over to PE(1,A) could be initiated.  PE(1,A) would
       continue the transfer until complete (see (d)).  In parallel a
       request from PE(1,A) would be made to ENRP to request a cache
       update for the server pool "File Transfer Pool" and a report
       would also be made that PE(1,B) is non-responsive (this would
       cause appropriate audits that may remove PE(1,B) from the pool
       if the NS had not already detected the failure) (using (a)).
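
   The steps above can be summarized in the following sketch;
   'resolve' stands for the ASAP name query of steps 1 and 2, the PE
   objects for the SCTP based FTP exchange of step 3, and the except
   branch for the failover and failure report of step 4.  All names
   are illustrative assumptions.

      def transfer_file(resolve, report_failure, filename):
          pe_list = resolve(b"File Transfer Pool")   # steps 1 and 2
          for pe in pe_list:                         # step 3
              try:
                  pe.login("user", "password")
                  return pe.retrieve(filename)
              except ConnectionError:                # step 4
                  report_failure(b"File Transfer Pool", pe)
          raise RuntimeError("no pool element available")

      class StubPE:
          def __init__(self, alive):
              self.alive = alive

          def login(self, user, password):
              if not self.alive:
                  raise ConnectionError("pool element unreachable")

          def retrieve(self, name):
              return b"contents of " + name

      data = transfer_file(lambda h: [StubPE(False), StubPE(True)],
                           lambda h, pe: print("report failure"),
                           b"draft.txt")
      print(data)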


3.1.2 The RSerPool Unaware Client

   In this example we investigate the use of a Proxy server assuming
   the same scenario as illustrated above.

   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   ~                                                  operation scope ~
   ~  .........................                                       ~
   ~  . "File Transfer Pool"  .                                       ~
   ~  .  +-------+ +-------+  .                                       ~
   ~  .  |PE(1,A)| |PE(1,C)|  .                                       ~
   ~  .  +-------+ +-------+  .                                       ~
   ~  .      ^            ^   .                                       ~
   ~  .      +----------+ |   .                                       ~
   ~  .  +-------+      | |   .                                       ~
   ~  .  |PE(1,B)|<---+ | |   .                                       ~
   ~  .  +-------+    | | |   .                                       ~
   ~  .......^........|.|.|....                                       ~
   ~         |        | | |                                           ~
   ~         |    ASAP| | |ASAP                                       ~
   ~         |        | | |                                           ~
   ~         |        v v v                                           ~
   ~         |       +++++++++++++++          +++++++++++++++         ~
   ~         |       +      NS     +<--ENRP-->+      NS     +         ~
   ~         |       +++++++++++++++          +++++++++++++++         ~
   ~         |                                ASAP   ^                ~
   ~         |     ASAP       (c)                (b) |  ^             ~
   ~         +---------------------------------+  |  |  |             ~
   ~                                           |  v  | (a)            ~
   ~                                           v     v                ~
   ~         :::::::::::::::::     (e)->     *****************        ~
   ~         :   FTP Client  :<------------->* Proxy/Gateway *        ~
   ~         :::::::::::::::::     (f)       *****************        ~
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Architecture for RSerPool unaware client.

   In this example the following steps occur:

   1.  The FTP client and the Proxy/Gateway use the TCP-based FTP
       protocol.  The client sends the login request to the proxy
       (using (e)).

   2.  The proxy behaves like a client and performs the actions
       described under (1), (2) and (3) of the above description (using
       (a), (b) and (c)).

   3.  The FTP communication continues and will be translated by the
       proxy into the RSerPool aware dialect.  This interworking uses
       (f) and (c).

   Note that in this example high availability is maintained between
   the Proxy and the server pool, but a single point of failure exists
   between the FTP client and the Proxy, i.e. the TCP control
   connection and the single IP address it uses for commands.
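
   A sketch of the proxy behaviour follows; the 'resolve' callable and
   the PE objects stand for the ASAP interactions (a) to (c) above,
   the FTP reply codes are only examples, and the single client-facing
   TCP connection remains the point of failure just noted.

      class FtpProxy:
          def __init__(self, resolve):
              self.resolve = resolve   # ASAP name query to the NS

          def handle_client_command(self, command, reply):
              # Client side: plain TCP/FTP.  Pool side: RSerPool.
              for pe in self.resolve(b"File Transfer Pool"):
                  try:
                      reply(pe.forward(command))   # translate, relay
                      return
                  except ConnectionError:
                      continue                     # try the next PE
              reply(b"421 Service not available")

      class StubPE:
          def forward(self, command):
              return b"230 OK (" + command + b")"

      proxy = FtpProxy(lambda handle: [StubPE()])
      proxy.handle_client_command(b"USER anonymous", print)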

3.2 Telephony Signaling Example

   This example shows the use of ASAP/RSerPool to support server pooling
   for high availability of a telephony application such as a Voice over
   IP Gateway Controller (GWC) and Gatekeeper services (GK).

   In this example, we show two different scenarios of deploying these
   services using RSerPool in order to illustrate the flexibility of the
   RSerPool architecture.

3.2.1 Decomposed GWC and GK Scenario

   In this scenario, both GWC and GK services are deployed as separate
   pools with some number of PEs, as shown in the following diagram.
   Each of the pools will register its unique pool handle (i.e. name)
   with the NS.  We also assume that a Signaling Gateway (SG) and a
   Media Gateway (MG) are present and both are RSerPool aware.

                              ...................
                              .    Gateway      .
                              . Controller Pool .
       .................      .   +-------+     .
       .   Gatekeeper  .      .   |PE(2,A)|     .
       .     Pool      .      .   +-------+     .
       .   +-------+   .      .   +-------+     .
       .   |PE(1,A)|   .      .   |PE(2,B)|     .
       .   +-------+   .      .   +-------+     .
       .   +-------+   . (d)  .   +-------+     .
       .   |PE(1,B)|<------------>|PE(2,C)|<-------------+
       .   +-------+   .      .   +-------+     .        |
       .................      ........^..........        |
                                      |                  |
                                   (c)|               (e)|
                                      |                  v
           +++++++++++++++        *********       *****************
           +      NS     +        * SG(X) *       * Media Gateway *
           +++++++++++++++        *********       *****************
                  ^                   ^
                  |                   |
                  |     <-(a)         |
                  +-------------------+
                         (b)->

               Deployment of Decomposed GWC and GK.

   As shown in the previous figure, the following sequence takes place:

   1.  The Signaling Gateway (SG) receives an incoming signaling
       message to be forwarded to the GWC.  SG(X)'s ASAP layer would
       send an ASAP request to its "local" NS to request the list of
       pool elements (PEs) of the GWC pool (using (a)).  The key used
       for this query is the pool handle of the GWC.  The ASAP layer
       queues the data to be sent in local buffers until the NS
       responds.

   2.  The NS would return a list of the three PEs A, B and C to the
       ASAP layer in SG(X), together with information to be used for
       load-sharing traffic across the gateway controller pool (using
       (b)).

   3.  The ASAP layer in SG(X) will select one PE (e.g., PE(2,C)) and
       send the signaling message to it (using (c)).  The selection is
       based on the load sharing information of the gateway controller
       pool.

   4.  To progress the call, PE(2,C) finds that it needs to talk to
       the Gatekeeper.  Assuming it already has the gatekeeper pool's
       information in its local cache (e.g., obtained and stored from
       a recent query to the NS), PE(2,C) selects PE(1,B) and sends
       the call control message to it (using (d)).

   5.  We assume PE(1,B) responds to PE(2,C) and authorizes the call
       to proceed.

   6.  PE(2,C) issues media control commands to the Media Gateway (using
       (e)).

   RSerPool will provide service robustness to the system should a
   failure occur.

   For instance, if PE(1,B) in the Gatekeeper Pool crashed after
   receiving the call control message from PE(2,C) in step (d) above,
   what most likely will happen is that, due to the absence of a reply
   from the Gatekeeper, a timer expiration event will trigger the call
   state machine within PE(2,C) to resend the control message.  The
   ASAP layer at PE(2,C) will then notice the failure of PE(1,B),
   likely through the endpoint unreachability detection of the
   transport protocol beneath ASAP, and automatically deliver the re-
   sent call control message to the alternate GK pool member PE(1,A).
   With appropriate intra-pool call state sharing support, PE(1,A)
   will be able to correctly handle the call, reply to PE(2,C), and
   hence progress the call.
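
   The failover behaviour just described can be sketched as follows;
   'send_and_wait' stands for sending the call control message and
   waiting for the reply, returning None when the timer expires or the
   transport reports the PE unreachable.  All names are illustrative
   assumptions.

      def send_with_failover(pe_list, message):
          # pe_list is ordered, e.g. [PE(1,B), PE(1,A)].
          for pe in pe_list:
              reply = pe.send_and_wait(message)
              if reply is not None:
                  return reply              # the call can proceed
          raise RuntimeError("no gatekeeper pool element answered")

      class StubGatekeeper:
          def __init__(self, alive):
              self.alive = alive

          def send_and_wait(self, message):
              return b"call authorized" if self.alive else None

      # PE(1,B) has crashed; PE(1,A) shares the call state and
      # answers instead.
      print(send_with_failover([StubGatekeeper(False),
                                StubGatekeeper(True)],
                               b"admission request"))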

3.2.2 Collocated GWC and GK Scenario

   In this scenario, the GWC and GK services are collocated (e.g., they
   are implemented as a single process).  In such a case, one can form a
   pool that provides both GWC and GK services as shown in the figure
   below.

        ........................................
        .  Gateway Controller/Gatekeeper Pool  .
        .                  +-------+           .
        .                  |PE(3,A)|           .
        .                  +-------+           .
        .           +-------+                  .
        .           |PE(3,C)|<---------------------------+
        .           +-------+                  .         |
        .    +-------+  ^                      .         |
        .    |PE(3,B)|  |                      .         |
        .    +-------+  |                      .         |
        ................|.......................         |
                        |                                |
                        +-------------+                  |
                                      |                  |
                                   (c)|               (e)|
                                      v                  v
           +++++++++++++++        *********       *****************
           +      NS     +        * SG(X) *       * Media Gateway *
           +++++++++++++++        *********       *****************
                  ^                   ^
                  |                   |
                  |     <-(a)         |
                  +-------------------+
                         (b)->

               Deployment of Collocated GWC and GK.

   The same sequence as described in Section 3.2.1 takes place, except
   that step (4) now becomes internal to PE(3,C) (again, we assume
   server C is selected by the SG).

4. Acknowledgements

   The authors would like to thank Bernard Aboba, Harrie Hazewinkel,
   Matt Holdrege, Christopher Ross, Werner Vogels and many others for
   their invaluable comments and suggestions.

References

   [1]  Postel, J., "Transmission Control Protocol", STD 7, RFC 793,
        September 1981.

   [2]  Postel, J. and J. Reynolds, "File Transfer Protocol", STD 9, RFC
        959, October 1985.

   [3]  Bradner, S., "The Internet Standards Process -- Revision 3", BCP
        9, RFC 2026, October 1996.

   [4]  Guttman, E., Perkins, C., Veizades, J. and M. Day, "Service
        Location Protocol, Version 2", RFC 2608, June 1999.

   [5]  Ong, L., Rytina, I., Garcia, M., Schwarzbauer, H., Coene, L.,
        Lin, H., Juhasz, I., Holdrege, M. and C. Sharp, "Framework
        Architecture for Signaling Transport", RFC 2719, October 1999.

   [6]  Stewart, R., Xie, Q., Morneault, K., Sharp, C., Schwarzbauer,
        H., Taylor, T., Rytina, I., Kalla, M., Zhang, L. and V. Paxson,
        "Stream Control Transmission Protocol", RFC 2960, October 2000.

   [7]  Tuexen, M., Xie, Q., Stewart, R., Shore, M., Ong, L., Loughney,
        J. and M. Stillman, "Requirements for Reliable Server Pooling",
        RFC 3237, January 2002.


Authors' Addresses

   Michael Tuexen
   Siemens AG
   ICN WN CC SE 7
   D-81359 Munich
   Germany

   Phone: +49 89 722 47210
   EMail: Michael.Tuexen@siemens.com


   Qiaobing Xie
   Motorola, Inc.
   1501 W. Shure Drive, #2309
   Arlington Heights, IL  60004
   USA

   Phone: +1-847-632-3028
   EMail: qxie1@email.mot.com


   Randall R. Stewart
   Cisco Systems, Inc.
   8725 West Higgins Road
   Suite 300
   Chicago, IL  60631
   USA

   Phone: +1-815-477-2127
   EMail: rrs@cisco.com


   Melinda Shore
   Cisco Systems, Inc.
   809 Hayts Rd
   Ithaca, NY  14850
   USA

   Phone: +1 607 272 7512
   EMail: mshore@cisco.com


   Lyndon Ong
   Ciena Corporation
   10480 Ridgeview Drive
   Cupertino, CA  95014
   USA

   EMail: lyong@ciena.com


   John Loughney
   Nokia Research Center
   PO Box 407
   FIN-00045 Nokia Group
   Finland

   EMail: john.loughney@nokia.com


   Maureen Stillman
   Nokia
   127 W. State Street
   Ithaca, NY  14850
   USA

   Phone: +1-607-273-0724
   EMail: maureen.stillman@nokia.com

Full Copyright Statement

   Copyright (C) The Internet Society (2002).  All Rights Reserved.

   This document and translations of it may be copied and furnished to
   others, and derivative works that comment on or otherwise explain it
   or assist in its implementation may be prepared, copied, published
   and distributed, in whole or in part, without restriction of any
   kind, provided that the above copyright notice and this paragraph are
   included on all such copies and derivative works.  However, this
   document itself may not be modified in any way, such as by removing
   the copyright notice or references to the Internet Society or other
   Internet organizations, except as needed for the purpose of
   developing Internet standards in which case the procedures for
   copyrights defined in the Internet Standards process must be
   followed, or as required to translate it into languages other than
   English.

   The limited permissions granted above are perpetual and will not be
   revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on an
   "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
   TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
   BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
   HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.
