Internet Engineering Task Force                          Walter Weiss
Internet Draft                                    Lucent Technologies
Expiration: September 1998                                 March 1998



               Providing Differentiated Services through
               Cooperative Dropping and Delay Indication

                 <draft-weiss-cooperative-drop-00.txt>


Status of this Memo

   This document is an Internet Draft. Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups. Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months. Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time. It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a "working
   draft" or "work in progress."

   Please check the 1id-abstracts.txt listing contained in the
   internet-drafts Shadow Directories on nic.ddn.mil, nnsc.nsf.net,
   nic.nordu.net, ftp.nisc.sri.com, or munnari.oz.au to learn the
   current status of any Internet Draft.


Abstract

   The current state of the Internet only supports a single class of
   service.  To further the success of the Internet, new capabilities
   must be deployed which allow for deterministic end to end behavior
   irrespective of location or the number of domains along the path.
   Experience has shown that existing signaling based protocols are dif-
   ficult to deploy due to technical and economic factors.  This document
   proposes using in-band frame marking to support a diverse set of ser-
   vices.  In addition, the mechanisms described here will provide end
   users and/or enterprise backbones with new capabilities such as ser-
   vice validation, congestion localization, and uniform service
   irrespective of the type of service contract.  For ISPs this document
   proposes mechanisms providing more available bandwidth by creating
   strong incentives for adaptive behavior in applications as well as
   mechanisms for providing both sender based and receiver based service
   contracts.




1. Introduction

   It is widely recognized that there are many types of services which
   can be offered and many means for providing them.  These services can
   be reduced to three basic components: the control of the bandwidth,
   the control of the delay, and the control of delay variation.  The
   control of delay variation requires frame scheduling at the granular-
   ity of flows.  This, in turn, requires per flow state in each hop
   along the path of the flow.  This capability requires configured or
   signaled reservations.  Therefore the management of delay variation
   is beyond the scope of this document.

   Although in-band delay variation is too difficult to support within a
   Differentiated Services framework, it is feasible to provide
   bandwidth and delay management using a less sophisticated model.
   However, any model which attempts to satisfy service contracts
   without an awareness of available network capacities along the path
   faces two issues.

   First of all, how can end to end (or edge to edge) bandwidth guaran-
   tees be satisfied when the capacity and available bandwidth of down-
   stream links are unknown?  Some indication of capacity can be gleaned
   through routing protocols like OSPF.  However, there is no effective
   mechanism for detecting the level of persistent downstream conges-
   tion, whether it results from capacity limits such as cross-oceanic
   links or from singular but long lasting events such as NASA's "Life
   on Mars" announcement.  Thus, attempting to use a profile to promise
   even minimal bandwidth guarantees is virtually impossible, given
   factors such as distance, time of day, the specific path taken, and
   link consumption.

   VPN and Virtual Leased Line services can be supported by configuring
   reserved capacity.  However, this does not diminish the benefits of
   congestion awareness.  As with traditional network links, Virtual
   Networks have a large cross section of applications using them at any
   given time.  Some applications may need to reserve strict bandwidth
   and delay guarantees.  However, there are other applications which
   can adapt to changes in available bandwidth.  These adaptive applica-
   tions are dependent on effective congestion awareness to operate
   properly.

   In addition, ISPs need service differentiation to deploy Electronic
   Commerce solutions.  ISPs desire the ability to provide individual
   users with different service offerings.  In these cases bandwidth
   cannot be pre-allocated because the destinations for these services
   are not pinned.  Therefore congestion awareness is crucial to antici-
   pating and adapting to available bandwidth along the path to a desti-
   nation.



   The second issue with a service contract model that does not employ
   signaling is that one router is not aware of another router's conges-
   tion control actions.  Hence, when congestion occurs on multiple hops
   along the path of a particular flow, individual congestion control
   algorithms could independently drop frames.  But far worse, they
   could collectively drop a sequence of frames, causing stalling or
   slow start behavior.  Hence, to the end user, the service is per-
   ceived as erratic.  Even if a customer is guaranteed more bandwidth
   with a more expensive profile, the "perceived" benefit is greatly
   diminished by choppy service.

   This document describes a differentiated services architecture that
   allows maximal flexibility for specifying bandwidth and delay sensi-
   tive policies while reducing or eliminating the choppy behavior that
   exists in the Internet today.


2. The Differentiated Services service model

2.1. Congestion Control

   Differentiated services without the use of signaling relies on two
   basic components: a user or group profile specifying the level of
   service to which the flow, user, or group is entitled and a means for
   encoding this profile into each packet to which the profile
   applies.  To provide consistency in the network, all traffic within a
   routing domain must be administered with an associated traffic pro-
   file.  The most economical way of enforcing this requirement is to
   apply or enforce the profile at all edges of the domain as described
   in Clark/Wroclawski [1].  Network Edge devices are defined in this
   document to be those devices capable of metering, assigning and
   enforcing bandwidth profiles and delay profiles.

   Because the amount of traffic on the egress link at any given time is
   non-deterministic, the encoding of the profile in each packet pro-
   vides an effective means for taking appropriate action on each
   packet.  By comparing the current congestion conditions of a link
   with the profile associated with the packet, a consistent action can
   be applied.  Possible actions on the packet could range
   from dropping the packet to giving it preferential access to the out-
   bound link.

   This document proposes using the TOS octet in the IP header to encode
   a profile into the packet stream of each session.  The profile will
   be capable of specifying a delay priority as well as a relative
   bandwidth share.  Relative bandwidth share is the share of bandwidth
   to which the profile's owner is entitled relative to other profiles.
   Through a combination of profile enforcement and the judicious use of
   the TOS octet, share-based bandwidth allocation, as described in USD
   [2], can be provided without proliferating individual profiles to
   each router in the network.

   The problem of coordinating dropping policies between routers on a
   per flow basis is too complex to be feasible.  However, to provide
   effective coordination, each router does not need to know what pack-
   ets other routers have dropped.  Instead, a router can create a set
   of hierarchical drop Priority Classes for each link.  These Priority
   Classes are only for congestion control; they do not affect the ord-
   ering of packets.  A new 3 bit field, called the Priority Class
   field, is provided in the TOS octet to allow Priority Class assign-
   ment on a per packet basis.  A flow distributes its packets evenly
   across the Priority Classes by assigning each packet's Priority
   Class value to this 3 bit field.
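
   As a purely illustrative sketch (the structure and function names
   below are not part of this proposal), a sender or edge device could
   rotate through the eight Priority Classes with a simple counter:

      /* Illustrative sketch: distribute a flow's packets evenly
       * across the eight Priority Classes with a rotating counter.
       */
      #include <stdint.h>

      struct flow_state {
          uint8_t next_pc;          /* next Priority Class, 0..7 */
      };

      /* Return the Priority Class (0..7) to place in the TOS octet
       * of the next packet of this flow.  Classes are used here in
       * simple round-robin order; other orderings are equally valid,
       * as discussed later in this section.
       */
      static uint8_t next_priority_class(struct flow_state *f)
      {
          uint8_t pc = f->next_pc;
          f->next_pc = (uint8_t)((f->next_pc + 1) & 0x7);
          return pc;
      }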

   When a congestion threshold is reached, dropping will be initiated
   for the lowest Priority Class.  As congestion increases more and more
   packets will be dropped from this Priority Class.  If congestion con-
   tinues to increase, all packets in the Priority Class will be dropped
   and partial dropping will begin in the next higher Priority Class.
   However, the packets in the flow that use the remaining, higher
   Priority Classes will be unaffected.
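
   A minimal sketch of the per-packet drop test this implies at a
   congested link follows; the notion of a per-link "drop level" and
   partial-drop probability, and all names used, are illustrative
   assumptions rather than a required implementation:

      /* Sketch of the per-packet drop test described above.  A link
       * keeps a drop level (the highest Priority Class being dropped)
       * and a partial-drop probability for that class.
       */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdlib.h>

      struct link_congestion {
          int    drop_level;   /* -1 = no congestion; else 0..7      */
          double partial_p;    /* drop probability within drop_level */
      };

      static bool should_drop(const struct link_congestion *lc,
                              uint8_t priority_class)
      {
          if (lc->drop_level < 0)
              return false;                   /* no congestion       */
          if (priority_class < (uint8_t)lc->drop_level)
              return true;                    /* class fully dropped */
          if (priority_class == (uint8_t)lc->drop_level)
              return ((double)rand() / RAND_MAX) < lc->partial_p;
          return false;                       /* higher class: keep  */
      }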

   Dropping packets only within a single Priority Class creates many
   beneficial side effects.  First, routers today have difficulty deter-
   mining how to drop packets fairly across all flows.  The main issue
   is that routers have no knowledge of the profiles of individual
   flows, so they also have no knowledge of the relative amount of
   bandwidth to which the flow is entitled.  Also, most drop algorithms
   are based on a random discard model.  Without per flow state it is
   possible (and probable for high bandwidth applications) for multiple
   and successive drops to be performed against the same flow.  Even
   algorithms
   with per flow state have limitations due to a lack of profile aware-
   ness.  However, the Priority Class encoding mechanism described in
   this document allows routers to drop packets fairly based on the pro-
   file encoded in each flow.

   Further, this approach provides implicit cooperation between routers.
   If two or more routers along the path of a flow are dropping packets
   in the lowest Priority Class, all traffic in the higher Priority
   Classes is protected irrespective of the number of routers which
   experience congestion.  This provides much more predictable service
   irrespective of the location or the current condition of the network
   as a whole.

   Also, with profiles which limit the number of packets that can be
   sent in each Priority Class (or alternatively the time interval
   between packets in a given Priority Class), a bandwidth share is
   implicitly assigned to each user.  The proportional share to which
   each user is entitled remains constant irrespective of the level of
   congestion or the number of flows on the link.
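
   One way such per-class limits might be metered is sketched below
   using a token bucket per Priority Class; the document does not
   mandate any particular metering discipline, so the token bucket
   model and all names here are assumptions for illustration only:

      /* Illustrative only: one token bucket per Priority Class. */
      #include <stdbool.h>

      #define NUM_PC 8

      struct pc_bucket {
          double tokens;        /* current credit, in packets     */
          double rate;          /* allowed packets per second     */
          double burst;         /* maximum credit                 */
          double last;          /* time of the previous update    */
      };

      struct profile {
          struct pc_bucket pc[NUM_PC];   /* one bucket per class  */
      };

      /* Charge one packet against the bucket of its Priority Class;
       * returns false when the packet is out of profile.
       */
      static bool profile_charge(struct pc_bucket *b, double now)
      {
          b->tokens += b->rate * (now - b->last);
          if (b->tokens > b->burst)
              b->tokens = b->burst;
          b->last = now;
          if (b->tokens < 1.0)
              return false;
          b->tokens -= 1.0;
          return true;
      }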

   This model provides an implicit congestion notification mechanism to
   senders for TCP based applications.  When a sender keeps track of the
   Priority Classes of sent packets, TCP acknowledgments provide infor-
   mation on the level of congestion along the path.  This provides end
   users with an easy tool for service contract verification.  Further,
   it provides equivalent functionality to ECN [3] without consuming
   additional bits in the TOS octet.
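
   A sketch of the bookkeeping a sender might perform is shown below;
   the structure and the way losses are detected are assumptions, not
   part of TCP or of this proposal:

      /* Sketch of sender-side congestion inference: record the
       * Priority Class used for each segment and, as acknowledgments
       * or loss indications arrive, track which classes are seeing
       * drops.
       */
      #include <stdint.h>

      #define NUM_PC 8

      struct pc_feedback {
          unsigned long sent[NUM_PC];
          unsigned long lost[NUM_PC];
      };

      static void record_sent(struct pc_feedback *fb, uint8_t pc)
      {
          fb->sent[pc & 0x7]++;
      }

      static void record_lost(struct pc_feedback *fb, uint8_t pc)
      {
          fb->lost[pc & 0x7]++;
      }

      /* Estimate the path congestion level: the highest Priority
       * Class on which losses are being observed, or -1 if none.
       */
      static int estimated_congestion_level(const struct pc_feedback *fb)
      {
          int pc;
          for (pc = NUM_PC - 1; pc >= 0; pc--)
              if (fb->lost[pc] > 0)
                  return pc;
          return -1;
      }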

   When routers are only dropping packets up to a specific Priority
   Class, the other Priority Classes are implicitly protected.  This
   allows Service Providers to charge more accurately for end-to-end
   Goodput rather than Throughput.

   This mechanism can greatly reduce or eliminate bandwidth consumed by
   packets that will be dropped somewhere along the path.  With implicit
   congestion notification, applications can stop sending packets in
   Priority Classes that they know will be dropped.  In fact, if an
   application knows that packets in a given Priority Class are
   guaranteed to be dropped, it benefits by not sending the packets
   because it can use the in-profile bandwidth in that Priority Class
   for a different flow to an uncongested destination.  If the edge
   routers which enforce profiles also snoop the TCP sessions (or use
   the Congestion Check mechanism described below), they could perform
   aggressive policing by dropping packets in unavailable Priority
   Classes, thus providing additional network bandwidth and encouraging
   adaptive behavior in end systems.

   While congestion awareness can be used to restrict aggressive or
   abusive bandwidth consumption, it can also be used to allow bandwidth
   to grow beyond normal limits when there is no congestion.  This can
   have the effect of maximizing available bandwidth when it is avail-
   able.

   Using this mechanism in conjunction with TraceRoute, end users and
   network administrators could verify service contracts by identifying
   the precise location of the highest level of congestion.  This
   clearly fixes blame when service contracts are not met and also
   easily identifies those links which need to be upgraded.

   Current TCP congestion control algorithms grow bandwidth incremen-
   tally and cut bandwidth in half when congestion occurs.  With an
   awareness of which Priority Classes are being dropped, TCP growth and
   cutback algorithms could be applied to the same Priority Class that
   is performing the partial drop.  This smoothes the bandwidth in the
   flow while still adjusting to current bandwidth availability as it
   increases and decreases.
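
   A hypothetical sketch of such a per-class adjustment follows;
   dividing the congestion window into per-class shares is an
   assumption made purely for illustration:

      /* Hypothetical refinement of TCP-style adaptation: keep a per
       * Priority Class share of the congestion window, grow each
       * share additively on acknowledgment, and halve only the share
       * of the class reported as being partially dropped.  Shares are
       * assumed to start at 1.0 segment.
       */
      #define NUM_PC 8

      struct pc_window {
          double share[NUM_PC];    /* per-class window, in segments */
      };

      static void on_ack(struct pc_window *w, int pc)
      {
          if (w->share[pc] < 1.0)
              w->share[pc] = 1.0;
          w->share[pc] += 1.0 / w->share[pc];   /* additive increase */
      }

      static void on_partial_drop(struct pc_window *w, int dropped_pc)
      {
          /* Cut back only the class experiencing partial drops; the
           * higher classes, which the network is protecting, keep
           * their current share.
           */
          w->share[dropped_pc] /= 2.0;
          if (w->share[dropped_pc] < 1.0)
              w->share[dropped_pc] = 1.0;
      }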

   Another benefit is that the Priority Class assignment can be
   sequenced in any order based on transport specific criteria.  If it
   is desirable to lose a sequence of packets with congestion, a
   sequence such as 0,1,2,3,4,5,6,7 would drop a block of packets based
   on the current congestion level. If it is desirable to spread the
   dropping of packets out, a sequence such as 0,4,1,5,2,6,3,7 provides
   a very high probability that two packets will not be dropped in a
   row.  If it is desirable to distinguish important packets from less
   important ones, Priority Class can be assigned in a more discretion-
   ary manner.
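
   The two example sequencings above can be generated as follows; this
   small program is illustrative only:

      /* Two example sequencing strategies for assigning Priority
       * Classes to successive packets of a flow.
       */
      #include <stdio.h>

      /* Block order: 0,1,2,3,4,5,6,7 - congestion removes a
       * contiguous run of packets. */
      static int pc_block(int i)
      {
          return i % 8;
      }

      /* Interleaved order: 0,4,1,5,2,6,3,7 - consecutive packets
       * land in widely separated classes, so two packets in a row
       * are unlikely to be dropped together. */
      static int pc_interleaved(int i)
      {
          int j = i % 8;
          return (j % 2 == 0) ? j / 2 : j / 2 + 4;
      }

      int main(void)
      {
          int i;
          for (i = 0; i < 8; i++)
              printf("%d %d\n", pc_block(i), pc_interleaved(i));
          return 0;
      }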

   The last benefit is that this model is extremely efficient and simple
   to implement in routers.  A router only needs to set a congestion
   threshold and apply a dropper algorithm to that single Priority
   Class.  If congestion increases, all packets in the current Priority
   Class are dropped and the dropper algorithm is applied to the next
   higher Priority Class.
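
   The following sketch shows one way a router might derive the drop
   level and the partial-drop probability from outbound queue
   occupancy; the linear mapping and its thresholds are assumptions:

      /* Sketch of mapping outbound queue occupancy to the drop level
       * and partial-drop probability used earlier.
       */
      struct drop_state {
          int    drop_level;   /* -1 = none, else 0..7               */
          double partial_p;    /* drop probability within drop_level */
      };

      static void update_drop_state(struct drop_state *ds,
                                    double queue_len,
                                    double min_thresh,
                                    double max_thresh)
      {
          double span, level;

          if (queue_len <= min_thresh) {
              ds->drop_level = -1;
              ds->partial_p  = 0.0;
              return;
          }
          if (queue_len >= max_thresh) {
              ds->drop_level = 7;     /* dropping into the top class */
              ds->partial_p  = 1.0;
              return;
          }
          /* Divide the region between the thresholds into eight
           * equal bands, one per Priority Class; the fractional
           * position within the band gives the partial-drop
           * probability.
           */
          span  = (max_thresh - min_thresh) / 8.0;
          level = (queue_len - min_thresh) / span;
          ds->drop_level = (int)level;
          ds->partial_p  = level - (double)ds->drop_level;
      }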

2.2. Delay Control

   The other mechanism necessary to support Quality of Service is a
   means for controlling link access based on a combination of service
   contract and profile.  This document proposes using a 2 bit field in
   the TOS octet to provide up to 4 Delay Classes.  It is believed that
   4 classes are adequate for current needs.  Vendors may choose to map
   these 4 classes onto as few as 2 internal delay classes.

   There are two issues that must be addressed when providing control
   over packet delay.  One is how the packet scheduling is handled
   across delay classes.  For example, both Class Based and Priority
   Queuing provide specific features.  These capabilities may be more or
   less appropriate depending on customer needs.  Therefore, it is left
   to vendors to choose the technology most appropriate to the specific
   market.

   The other issue is how Delay Classes and Priority Classes work
   together.  Is it better for Priority Class droppers to operate auto-
   nomously in each Delay Class, or is it better to have a single Pri-
   ority Class dropper that is indifferent to the Delay Class to which
   a frame belongs?  This issue is in reality identical to the CBQ vs.
   Priority Queuing issue.  The former creates specific bandwidth limits
   thereby creating specific delay limits.  The latter allows more
   flexible/dynamic bandwidth allocations at the expense of possible
   starvation and looser delay guarantees.  In certain environments like
   the Internet, where the number of hops is fairly non-deterministic,
   it may make more sense to use a single Priority Class dropper across
   all Delay Classes.  However, in most private networks where the
   number of hops is deterministic, it is feasible to provide specific
   delay limits.  Therefore it also makes sense to support independent
   Priority Class droppers within each Delay Class.  It is therefore
   left to the vendor to choose the model most appropriate to their
   market and customers.
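
   The two organizations can be pictured as the following data
   layouts; both are illustrative sketches rather than required
   implementations:

      /* The two dropper organizations described above. */
      #define NUM_DC 4

      struct dropper {              /* one Priority Class dropper    */
          int    drop_level;        /* -1 = none, else 0..7          */
          double partial_p;
      };

      /* (a) A single dropper shared by all Delay Classes: congestion
       *     in any delay queue advances one common drop level.
       */
      struct link_shared {
          struct dropper common;
      };

      /* (b) Independent droppers, one per Delay Class: each delay
       *     queue advances its own drop level, giving tighter per
       *     class delay and bandwidth limits at the cost of less
       *     flexible sharing.
       */
      struct link_per_dc {
          struct dropper per_dc[NUM_DC];
      };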


3. The TOS octet

   This document proposes using the 8-bit TOS field in the IPv4[4]
   header to provide differentiated services. The identical format would
   also be used in the Class field of the IPv6[5] header.  The format of
   the TOS field is shown below.

           0   1   2   3   4   5   6   7
         +---+---+---+---+---+---+---+---+
          | CC|     PC    | RR|   DC  | RB|
         +---+---+---+---+---+---+---+---+

           CC:  Congestion Check
           PC:  Priority Class
           RR:  Request/Response
           DC:  Delay Class
           RB:  Receiver Billing

   The Priority Class field is used to provide congestion control as
   described earlier. This field allows for 8 possible values.  From a
   congestion management perspective, this provides congestion/traffic
   management in 12.5% chunks.  The Priority Class semantics are as fol-
   lows:

        7:  Least likely to be dropped
        .
        .
        .
        0:  Most likely to be dropped

   The Delay Class field is used to specify the delay sensitivity of the
   packet.  It is strongly recommended that a flow not use different
   Delay Class values.  This would create packet ordering problems and
   make effective congestion management more difficult.  The Delay Class
   semantics are as follows:

        3:  Low delay (most delay sensitive)
        2:  Moderate delay
        1:  Modest delay
        0:  High delay (indifferent to delay)
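
   A sketch of encoding and decoding the octet is shown below,
   assuming the usual IP convention that bit 0 in the figure is the
   most significant bit of the octet; the macro and function names are
   illustrative:

      /* Illustrative encoding/decoding of the proposed TOS octet. */
      #include <stdint.h>

      #define TOS_CC   0x80u        /* bit 0:    Congestion Check   */
      #define TOS_PC   0x70u        /* bits 1-3: Priority Class     */
      #define TOS_RR   0x08u        /* bit 4:    Request/Response   */
      #define TOS_DC   0x06u        /* bits 5-6: Delay Class        */
      #define TOS_RB   0x01u        /* bit 7:    Receiver Billing   */

      static uint8_t tos_encode(int cc, uint8_t pc, int rr,
                                uint8_t dc, int rb)
      {
          return (uint8_t)((cc ? TOS_CC : 0) |
                           ((pc & 0x7u) << 4) |
                           (rr ? TOS_RR : 0) |
                           ((dc & 0x3u) << 1) |
                           (rb ? TOS_RB : 0));
      }

      static uint8_t tos_priority_class(uint8_t tos)
      {
          return (uint8_t)((tos & TOS_PC) >> 4);
      }

      static uint8_t tos_delay_class(uint8_t tos)
      {
          return (uint8_t)((tos & TOS_DC) >> 1);
      }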

   The Congestion Check bit is used as an efficient means for determin-
   ing the current congestion level along the path to a destination.
   When this bit is set, each hop will write the congestion level of
   the (downstream) target link into the Priority Class field if that
   congestion level is greater than the value currently assigned to the
   field.  When the packet arrives at the destination, the Priority
   Class field will contain the highest Priority Class on which packets
   are being dropped anywhere along the path.
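
   The per-hop behavior might be sketched as follows, reusing the bit
   layout of this section; the treatment of the Request/Response bit
   anticipates the rules given below, and all names are illustrative:

      /* Sketch of the per-hop Congestion Check behavior: when the CC
       * bit is set (and the packet is not a response), raise the
       * Priority Class field to this hop's congestion level for the
       * packet's Delay Class if that level is higher.
       */
      #include <stdint.h>

      #define TOS_CC 0x80u
      #define TOS_RR 0x08u
      #define TOS_PC 0x70u

      static void congestion_check_update(uint8_t *tos,
                                          int link_congestion_level)
      {
          uint8_t current;

          if (!(*tos & TOS_CC) || (*tos & TOS_RR))
              return;        /* not a request: leave field alone    */
          if (link_congestion_level < 0)
              return;        /* this hop is not dropping anything   */

          current = (uint8_t)((*tos & TOS_PC) >> 4);
          if ((uint8_t)link_congestion_level > current) {
              *tos &= (uint8_t)~TOS_PC;
              *tos |= (uint8_t)((link_congestion_level & 0x7) << 4);
          }
      }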

   The usage of the Congestion Check bit is sensitive to the value of
   the Delay Class field.  The Priority Class field will be assigned to
   the congestion level of the delay class specified in the Delay Class
   field.  This bit could be set during connection establishment to
   optimize the initial windowing and congestion control algorithms. The
   Priority Class for this packet should be considered to have a value
   of 7 and should be charged against sender profiles accordingly.

   When both the Congestion Check bit and the Request/Response bit are
   set, this indicates that the Priority Class field contains the
   congestion level observed for traffic in the opposite direction.
   The Request/Response bit is used to indicate that the current Prior-
   ity Class value is a response to a Congestion Check request.  The
   Priority Class field is to be ignored by intermediate routers and the
   Priority Class for this packet should be treated as if it contained a
   value of 7.

   It is possible for the response capability to be provided out of
   band. However, if the end station is not capable of supporting the
   new TOS octet and the edge router wants to perform the TOS byte
   assignments on behalf of the end station (or is performing aggressive
   dropping), this is an effective mechanism for snooping congestion
   levels without new protocols or extra bandwidth.

   Because this Congestion Check request and response mechanism behaves
   as a packet with a Priority Class of 7, profile meters should treat
   (and charge for) these packets as Priority Class 7 packets to prevent
   abuse.  Further, because congestion checking is sensitive to the
   Receiver Billing bit, these request and response packets are always
   charged to the sender.

   The Receiver Billing bit is provided to indicate that bandwidth will
   be charged to the receiver.  This bit may only be set when the
   receiver's bandwidth profile has been provided to the sender.  The
   mechanisms or protocol extensions used to propagate bandwidth pro-
   files to senders are beyond the scope of this document.



   Because Receiver Billing requires a different profile, and because
   Priority Class based dropping may be applied when a profile has been
   exceeded, two types of Congestion Checks are possible: one for sender
   billing and one for receiver billing.  The Receiver Billing bit is
   set in conjunction with the Congestion Check bit, to determine the
   congestion level for Receiver Billing packets.  More details on pro-
   file based dropping and Receiver Billing will be provided later in
   this document.

   It is not clear at this time whether charging of a Congestion Check
   Response packet should be against the sender or receiver.  It makes
   sense to charge Congestion Check Request packets to the sender when
   the Receiver Billing bit is not set and to the receiver when the
   Receiver Billing bit is set.  However, if Congestion Check Response
   packets are charged based on the value of the Receiver Billing bit,
   then it may preclude concurrent sender and receiver charging within
   the same flow.

   This new use of the IPv4 TOS octet subsumes the previous functional-
   ity of this field as described in RFC1349[4] and similarly in
   IPv6[5].  Current usage of this field leaves little room for the
   coexistence of the original semantics with the semantics described in
   this document.  This document concurs with Nichols et al. [6] in
   requiring the remarking of packets between differentiated services
   networks and non-differentiated services networks.  This will
   minimally require configuration support to demarcate differentiated
   services network boundaries.


4. Operational Model

   The operational model is based on the assumption that all traffic
   entering a network domain will be verified against a user or group
   profile.  This profile has a number of potential components.  One
   component is the allowable Delay Class(es).  Another component may be
   the maximum bandwidth allocation. Maximum bandwidth allocation is
   particularly important for receiver billing to prevent excessive
   sending and overcharging.  Another component may be a maximum
   bandwidth allocation for each given Priority Class.  It is generally
   more useful to distribute bandwidth evenly among all Priority
   Classes.  However, some policy models may choose to block or reserve
   certain Priority Classes for specific applications.  Alternatively a
   policy may provide more bandwidth to a specific Priority Class to
   support specialized services such as premium service as described in
   Nichols[7].






4.1 Inter-Domain Edge devices

   An Inter-Domain Edge Device is defined in this document as a dif-
   ferentiated services capable device that is part of one differen-
   tiated services aware network and connected to another network
   domain. As service contracts between Inter-Domain Edge Devices usu-
   ally assume a statistical limit on the bandwidth between domains,
   the actual bandwidth at any given moment may be higher or lower,
   depending on the number of active sessions.

   When bandwidth falls below the specified service contract, it can be
   beneficial to increase the Priority Class values on some or all pack-
   ets to take optimal advantage of service agreements.  This can allow
   the packets a higher probability of getting through.  However, there
   is only marginal benefit in increasing Priority Class values because
   the congestion check mechanism would be unaware of this action and
   would not increase the bandwidth to take full advantage of this
   option.

   If the service contract between the two domains is exceeded, the
   correct behavior must be to begin dropping packets in the lowest
   Priority Class.  If the Priority Class values in packets were decre-
   mented, there would be potential anomalies between the Congestion
   Check algorithm and the original Priority Class values assigned to
   packets.

4.2. Between the End Station and Network Edge devices

   End Stations can choose to use the new TOS octet semantics or not.
   Network Edge devices should be aware of the End Station's TOS field
   semantics assumptions. If the Network Edge device knows that con-
   nected End Stations are performing TOS octet assignments themselves,
   then the Network Edge device must operate as a profile meter.

   Profile meters are forwarding devices that match each packet with the
   appropriate profile and verify that the TOS octet assignments are
   within the profile associated with the user, group, or application.
   Their behavior should be identical to that of Inter-Domain Edge Dev-
   ices.  When packets are arriving below the bandwidth profile, profile
   meters may choose to increase the Priority Class of some or all pack-
   ets.  If the arriving packets exceed a maximum bandwidth profile (if
   any), packets in all Priority Classes must be dropped.
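
   The decision a profile meter (or Inter-Domain Edge Device) makes on
   each packet might be sketched as follows; the rate inputs and names
   are assumptions for illustration:

      /* Sketch of the profile-meter decision at a Network Edge or
       * Inter-Domain Edge device: packets within the profile pass,
       * packets arriving while the flow is under its contracted rate
       * may optionally be promoted to a higher Priority Class, and
       * packets beyond the maximum profile are dropped.
       */
      enum meter_action { METER_FORWARD, METER_PROMOTE, METER_DROP };

      static enum meter_action profile_meter(double arrival_rate,
                                             double contracted_rate,
                                             double max_rate)
      {
          if (max_rate > 0.0 && arrival_rate > max_rate)
              return METER_DROP;       /* exceeds maximum profile     */
          if (arrival_rate < contracted_rate)
              return METER_PROMOTE;    /* may raise Priority Class    */
          return METER_FORWARD;
      }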

   When Network Edge Devices receive packets destined to an End Station
   which does not support differentiated services and the bandwidth
   exceeds the capabilities of the End Station, the Network Edge device
   connected to the End Station should treat this as congestion and
   begin dropping low Priority Class packets.



4.3. Bandwidth Scaling

   It is important for bandwidth to be able to grow and shrink across
   the Priority Classes.  For example, a server may have a very large
   bandwidth profile, but the clients it connects to may have drasti-
   cally different bandwidth limits.  Traditionally bandwidth grows
   until congestion occurs and is then cut back.  So far, there has been
   detailed discussion about how congestion can be managed.  However,
   there still needs to be a mechanism to determine how much bandwidth
   each end of a connection can tolerate.

   The best way to achieve this is to gradually increase the bandwidth
   across all Priority Classes up to the limit of the profile.  In the
   past, when the receiver became incapable of keeping up with the
   sender, it usually began dropping packets.  This mechanism needs to
   be refined so that a sender can be notified that a bandwidth limit
   has been reached.  For this scenario, it is reasonable for a receiver
   to absorb all packets up to its capability.  After that point, it
   begins to randomly drop packets.  When a sender discovers that pack-
   ets are randomly being discarded, it will throttle its bandwidth back
   evenly across all Priority Classes.  Some research will be required
   to determine the most appropriate bandwidth growth and cutback rates.
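
   A sketch of this sender behavior follows; the growth and cutback
   factors are arbitrary placeholders, since, as noted above,
   appropriate values are a subject for research:

      /* Sketch of the sender-side scaling described above: grow the
       * rate evenly across all Priority Classes up to the profile
       * limit, and cut back evenly when the receiver begins to drop
       * packets at random.
       */
      struct send_rate {
          double rate;           /* aggregate rate over all classes */
          double profile_limit;  /* upper bound from the profile    */
      };

      static void on_interval_no_loss(struct send_rate *s)
      {
          s->rate *= 1.05;                      /* gradual growth */
          if (s->rate > s->profile_limit)
              s->rate = s->profile_limit;
      }

      static void on_random_receiver_drops(struct send_rate *s)
      {
          s->rate *= 0.75;                      /* even cutback   */
      }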

4.4. Receiver Billing

   Receiver based billing is a model that charges bandwidth and delay
   services to the receiver's profile rather than the sender's.  This is
   an important capability because a service is usually bought or given.
   The cost of a telephone conversation is not typically shared between
   both parties.  It is usually paid for by the caller or by the callee
   (an 800 number).

   There are three main issues with a Receiver Billing model.  First,
   the sender must know what the profile limits of the receiver are.
   Second, the receiver must be charged for the traffic that fits the
   profile.  Third, a receiver must be protected from excessive
   bandwidth sent by a malicious sender.

   As mentioned earlier, a sender should not be allowed to set the
   Receiver Billing bit unless it has received the receiver's profile.
   The means for sending this profile is beyond the scope of this docu-
   ment.  However, there are a number of alternate mechanisms including
   static configuration, a standardized profile header in the TCP
   options header, or extensions to application headers.

   In order for the receiver to be charged for the traffic, profiles
   must be defined in bi-directional terms.  It is conceivable that a
   single profile is an aggregation of both the bandwidth sent and the



Weiss                     Expiration: May 1998                 [Page 11]


Internet Draft            Cooperative Dropping                  Nov 1997


   bandwidth received.  However, usually the amount of bandwidth sent is
   different from the bandwidth received.  Therefore, independent
   accounting will be required at a minimum.  Because traffic through a
   Domain Edge could be charged to a sender or a receiver, different
   accounting may be required for each.

   As mentioned earlier, the traffic sent and the traffic received are
   seldom symmetric.  Therefore, when sender and receiver billing pro-
   files are specifically defined, a Priority Class dropper will need to
   be supported for each.  Receiver based bandwidth management of this
   kind addresses the second issue.

   The malicious sender is a sender that sends packets using the
   receiver's bandwidth.  This is a difficult problem to solve because
   it requires all networks along the path between the sender and the
   receiver to be aware of the sender's right to send using receiver
   billing.  This problem can really be broken up into three problems.
   One is malicious deterioration of the end receiver's bandwidth with
   no interest in the data.  Another is malicious deterioration of
   intermediate ISP bandwidth with no interest in the data.  The last is
   an attempt to charge bandwidth to the receiver without the receiver's
   consent with an interest in the data.

   The problem of dealing with users who are incorrectly attempting to
   reverse charges for services is fairly easy to solve.  When a
   receiver determines that a packet is sent with Receiver Billing set
   and the receiver did not ask for it, the packet can be dropped.  This
   is in effect denial of service.  If a receiver determines it is worth
   receiving the packet, it can accept it.  The issue of determining
   when Receiver Billing is and is not acceptable will need to be
   resolved when mechanisms are put in place for propagating profiles to
   senders.

   The first and second problems associated with malicious deteriora-
   tion of bandwidth exist in the Internet today.  A subset of these cases
   can be handled by terminating the session.  For other cases, this
   problem will likely require a protocol between Network Edge Devices
   which propagates denial of service indications from the receiver
   back to the source.  This type of protocol is likely to be required
   irrespective of the Receiver Billing issue to resolve current possi-
   bilities for malicious Internet abuse.


5. Supported Service Models

   There are a number of services that have been suggested by the Dif-
   ferentiated Services Working Group.  One type of service, described
   by Clark/Wroclawski[1], has been commonly referred to as Assured
   Service.  The premise for this service is that packets can be marked
   as either "in" profile or "out" of profile.  During congestion, the
   packets marked as out of profile are dropped.  The proposals in this
   document support Assured Service directly using the same model.  The
   only distinction is that this document provides layers or classes of
   assurance.  As mentioned earlier, this proposal has the unique, addi-
   tional benefit of allowing cooperative congestion control between
   forwarding devices.

   Another service model, described by Van Jacobson[7], is called prem-
   ium service.  This service provides preferential treatment for all
   traffic marked as premium.  All unmarked traffic would continue to be
   treated as Best Effort.  On the other hand, premium traffic would
   have a guarantee of delivery, provided that the traffic is within
   profile.  All traffic exceeding the profile would be dropped by the
   profile meter.  The mechanisms described in this document can satisfy
   the service described by Jacobson through a combination of forced
   dropping at the profile meter and by setting packets to higher (or the
   highest) Priority Classes as congestion occurs.  Premium Service also
   gives preferential access to all links over Best Effort traffic.
   This aspect could be accommodated using high Delay Classes.

   In addition, other service models can also be supported.  When
   congestion occurs along the path of a flow, Congestion Check can be
   used to prevent the sending of all packets which fall below the
   current highest congestion level. This would leave additional
   bandwidth available in the profile that could be used to communicate
   with destinations experiencing less congestion or no congestion.

   This strategy provides very flexible and optimized communication
   throughout the Internet.  Further, any combination of Priority Class
   values is possible.  For connections that are considered less impor-
   tant but which must be kept alive, packets with higher Priority Class
   values could be used to keep the session alive while lower Priority
   Class values would be used to send data when congestion decreased
   enough to permit it.

   Another possible service model that provides bandwidth guarantees
   irrespective of the level of congestion could be supported through a
   combination of Congestion Checking and adaptive assignment of the
   Priority Class values by the End Station.  Various combinations of
   the services described above can be supported as well.


6. Acknowledgments

   This document is a collection of ideas taken from David Clark, Van
   Jacobson, Zheng Wang, Kalevi Kilkki, Paul Ferguson and Kathleen
   Nichols.  In addition, many of the opportunities described in this
   document were inspired by issues raised on the Diff-Serv mailing
   list.


7. References

   [1]  D. Clark and J. Wroclawski, "An Approach to Service
        Allocation in the Internet", Internet Draft
        <draft-clark-diff-svc-alloc-00.txt>, July 1997.

   [2]  Z. Wang, "User-Share Differentiation (USD), Scalable
        bandwidth allocation for differentiated services",
        Internet Draft <draft-wang-diff-serv-usd-00.txt>, May 1998.

   [3]  S. Floyd, "TCP and Explicit Congestion Notification",
        ACM Computer Communications Review, Vol. 24 no. 5, pp. 10-23,
        October 1994.

   [4]  P. Almquist, "Type of Service in the Internet Protocol Suite",
        RFC 1349, July 1992.

   [5]  S. Deering and R. Hinden, "Internet Protocol, Version 6
        (IPv6) Specification", Internet Draft
        <draft-ietf-ipngwg-ipv6-spec-v2-01.txt>, November 1997.

   [6]  K. Nichols, et al., "Differentiated Services Operational Model
        and Definitions", Internet Draft
        <draft-nichols-dsopdef-00.txt>, August 1998.

   [7]  K. Nichols, V. Jacobson, L. Zhang, "A Two-bit Differentiated
        Services Architecture for the Internet", Internet Draft
        <draft-nichols-diff-svc-arch-00.txt>, May 1998.


8. Author's address

   Walter Weiss
   Lucent Technologies
   300 Baker Avenue, Suite 100,
   Concord, MA USA 01742-2168
   Email: wweiss@lucent.com








