Internet Engineering Task Force                           Anoop Ghanwani
INTERNET DRAFT                                             J. Wayne Pace
Expires May 1998                                        Vijay Srinivasan
                                                               IBM Corp.
                                                            Andrew Smith
                                                        Extreme Networks
                                                             Mick Seaman
                                                              3Com Corp.
                                                           November 1997

             A Framework for Providing Integrated Services
          Over Shared and Switched IEEE 802 LAN Technologies


Status of This Memo

   This document is an Internet-Draft.  Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet Drafts. Internet Drafts are draft
   documents valid for a maximum of six months, and may be updated,
   replaced, or obsoleted by other documents at any time.  It is not
   appropriate to use Internet Drafts as reference material, or to cite
   them other than as a ``working draft'' or ``work in progress.''  To
   view the entire list of current Internet-Drafts, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ftp.is.co.za (Africa), ftp.nordu.net (Europe),
   munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
   ftp.isi.edu (US West Coast). This document is a product of the IS802
   subgroup of the ISSLL working group of the Internet Engineering Task
   Force.  Comments are solicited and should be addressed to the working
   group's mailing list at issll@mercury.lcs.mit.edu and/or the authors.


This memo describes a framework for supporting IETF Integrated Services
on shared and switched LAN infrastructures. It includes background
material on the capabilities of IEEE 802-like networks with regard to
parameters that affect Integrated Services such as access latency, delay
variation and queueing support in LAN switches. It discusses aspects of
IETF's Integrated Services model that cannot easily be accommodated in
different LAN environments. It outlines a functional model for
supporting the Resource Reservation Protocol (RSVP) in such LAN
environments.  Details of extensions to RSVP for use over LANs are
described in an accompanying memo [14]. Mappings of the assorted
Integrated Services onto IEEE LANs are described in another memo [13].

Ghanwani et al.             Expires May 1998            [Page 1]

INTERNET DRAFT    Framework for Int-Serv over IEEE 802     November 1997

1 Introduction

The Internet has traditionally provided support for best effort traffic
only.  However, with the recent advances in link layer technology, and
with numerous emerging real-time applications such as video conferencing
and Internet telephony, there has been much interest in developing
mechanisms which enable real-time services over the Internet.  A
framework for meeting these new requirements was set out in RFC1633 [8]
and this has driven the specification of various classes of network
service by the Integrated Services working group of the IETF, such as
Controlled Load RFC 2211 [6] and Guaranteed Service RFC 2212 [7].  Each
of these service classes is designed to provide certain Quality of
Service (QoS) to traffic conforming to a specified set of parameters.
Applications are expected to choose one of these classes according to
their QoS requirements.  One mechanism for end-stations to utilise such
services in an IP network is provided by a QoS signaling protocol, the
Resource Reservation Protocol (RSVP) RFC 2205 [5] developed by the RSVP
working group of the IETF.  The IEEE under its Project 802 has defined
standards for many different local area network technologies. These all
typically offer the same "MAC-layer" datagram service [1] to upper-layer
protocols such as IP although they often provide different dynamic
behaviour characteristics - it is these that are important when
considering their ability to support real-time services. Later in this
memo we describe some of the relevant characteristics of different MAC-
layer LAN technologies.   In addition, IEEE 802 has defined standards
for bridging multiple LAN segments together using devices known as "MAC
Bridges" or "Switches" [2]. Newer work has also defined enhanced queuing
[3] and "virtual LAN" [4] capabilities for these devices.  Such LANs
often constitute the last hop or hops between users and the Internet as
well as being a primary building-block for complete private campus
networks.  It is therefore necessary to provide standardized mechanisms
for using these technologies to support end-to-end real-time services.
In order to do this, there must be some mechanism for resource
management at the data link layer.  Resource management in this
context encompasses the functions of admission control, scheduling,
traffic policing, etc.  The ISSLL (Integrated Services over Specific
Link Layers) working group in the IETF was chartered with the purpose of
exploring and standardizing such mechanisms for various link layer
technologies.

2 Document Outline

This document is concerned with specifying a framework for providing
Integrated Services over shared and switched LAN technologies such as
Ethernet/802.3, token ring/802.5, FDDI, etc. We begin in section 4 with
a discussion of the capabilities of various IEEE 802 MAC-layer
technologies. Section 5 lists the requirements and goals for a mechanism
capable of providing Integrated Services in a LAN. The resource
management functions outlined in Section 5 are provided by an entity
referred to as a Bandwidth Manager (BM): the architectural model of the
BM is described in section 6 and its various components are
discussed in section 7.  Some implementation issues with respect to link
layer support for Integrated Services are examined in Section 8. We then
in section 9 discuss a taxonomy of topologies for the LAN technologies
under consideration with an emphasis on the capabilities of each which
can be leveraged for enabling Integrated Services. In this framework, no
assumptions are made about the topology at the link layer.  The
framework is intended to be as exhaustive as possible; it is possible
that not all of the functions discussed will be supportable by a
particular topology or technology, but this should not preclude the use
of this model for that topology or technology.

3 Definitions

   The following is a list of terms used in this and other ISSLL
documents.

-  Link Layer or Layer 2 or L2: We refer to data-link layer technologies
such as IEEE 802.3/Ethernet as L2 or layer 2.

- Link Layer Domain or Layer 2 domain or L2 domain: a set of nodes and
links interconnected without passing through a L3 forwarding function.
One or more IP subnets can be overlaid on a L2 domain.

- Layer 2 or L2 devices: We refer to devices that only implement Layer 2
functionality as Layer 2 or L2 devices. These include 802.1D bridges and
switches.

- Internetwork Layer or Layer 3 or L3: Layer 3 of the ISO 7 layer model.
This memo is primarily concerned with networks that use the Internet
Protocol (IP) at this layer.

- Layer 3 Device or L3 Device or End-Station: these include hosts and
routers that use L3 and higher layer protocols or application programs
that need to make resource reservations.

- Segment: A L2 physical segment that is shared by one or more senders.
Examples of segments include (a) a shared Ethernet or Token-Ring wire
resolving contention for media access using CSMA or token passing, (b) a
half duplex link between two stations or switches, (c) one direction of
a switched full-duplex link.

-  Managed segment: A managed segment is a segment with a DSBM present
and responsible for exercising admission control over requests for
resource reservation. A managed segment includes those interconnected
parts of a shared LAN that are not separated by DSBMs.

- Traffic Class:  An aggregation of data flows which are given similar
service within a switched network.

- Subnet: used in this memo to indicate a group of L3 devices sharing a
common L3 network address prefix along with the set of segments making
up the L2 domain in which they are located.

- Bridge/Switch: a layer 2 forwarding device as defined by IEEE 802.1D.
The terms bridge and switch are used synonymously in this memo.

4 Frame Forwarding in IEEE 802 Networks

4.1 General IEEE 802 Service Model

User_priority is a value associated with the transmission and reception
of all frames in the IEEE 802 service model: it is supplied by the
sender that is using the MAC service. It is provided along with the data
to a receiver using the MAC service. It may or may not be actually
carried over the network: Token-Ring/802.5 carries this value (encoded
in its FC octet), basic Ethernet/802.3 does not, 802.12 may or may not
depending on the frame format in use. 802.1p defines a consistent way to
carry this value over the bridged network on Ethernet, Token Ring,
Demand-Priority, FDDI or other MAC-layer media using an extended frame
format. The usage of user_priority is summarised below but is more fully
described in section 2.5 of 802.1D [2] and 802.1p [3] "Support of the
Internal Layer Service by Specific MAC Procedures" and readers are
referred to these documents for further information.

If the "user_priority" is carried explicitly in packets, its utility is
as a simple label in the data stream enabling packets in different
classes to be discriminated easily by downstream nodes without their
having to parse the packet in more detail.

Apart from making the job of desktop or wiring-closet switches easier,
an explicit field means they do not have to change hardware or software
as the rules for classifying packets evolve (e.g. based on new protocols
or new policies). More sophisticated layer-3 switches, perhaps deployed
towards the core of a network, can provide added value here by
performing the classification more accurately and, hence, utilising
network resources more efficiently or providing better protection of
flows from one another: this appears to be a good economic choice since
there are likely to be very many more desktop/wiring closet switches in
a network than switches requiring layer-3 functionality.

The IEEE 802 specifications make no assumptions about how user_priority
is to be used by end stations or by the network.  In particular, it can
only be considered a "priority" in a loose sense.  802.1p defines static
priority queuing as the default mode of operation of switches that
implement multiple queues; user_priority is defined as a 3-bit quantity,
so under strict priority queuing value 7 is the highest priority and 0
the lowest.  The general switch algorithm is as follows:
packets are placed onto a particular queue based on the received
user_priority (perhaps directly from the packet if a 802.1p header or
802.5 network was used or else invented according to some local policy
if not). The selection of queue is based on a mapping from user_priority
[0,1,2,3,4,5,6 or 7] onto the number of available queues. Note that
switches may implement any number of queues from 1 upwards and it may
not be visible externally, except through any advertised int-serv
parameters and the switch's admission control behaviour, which
user_priority values get mapped internally onto the same vs. different
queues.  Other algorithms that a switch might implement include, for
example, weighted fair queuing and round robin.
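As an illustration of the queue selection just described, the sketch
below partitions the eight user_priority values over a switch's
available queues. The function name and the even partitioning are our
own illustrative assumptions, not the normative 802.1D mapping tables.

```python
def select_queue(user_priority: int, num_queues: int) -> int:
    """Map a frame's user_priority onto one of the switch's queues.

    Queue 0 is the lowest priority.  With a single queue every value
    maps to queue 0; with eight queues the mapping is the identity.
    """
    if not 0 <= user_priority <= 7:
        raise ValueError("user_priority is a 3-bit quantity (0..7)")
    if num_queues < 1:
        raise ValueError("a switch implements at least one queue")
    # Evenly partition the 8 priority values over the queues.
    return user_priority * num_queues // 8
```

A switch with two queues would thus serve user_priority 0..3 from the
lower queue and 4..7 from the higher one under this sketch.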

In particular, IEEE makes no recommendations about how a sender should
select the value for user_priority: one of the main purposes of this
current document is to propose such usage rules and how to communicate
the semantics of the values between switches, end-stations and routers.
In the remainder of this document we use the term "traffic class"
synonymously with user_priority.

4.2 Ethernet/802.3

There is no explicit traffic class or user_priority field carried in
Ethernet packets. This means that user_priority must be regenerated at a
downstream receiver or switch according to some defaults or by parsing
further into higher-layer protocol fields in the packet. Alternatively,
the IEEE 802.1Q encapsulation [4] may be used which provides an explicit
traffic class field on top of a basic MAC format.

For the different IP packet encapsulations used over Ethernet/802.3, it
will be necessary to adjust any admission-control calculations according
to the framing and to the padding requirements:

Encapsulation                          Framing Overhead  IP MTU
                                          bytes/pkt       bytes

IP EtherType (ip_len<=46 bytes)             64-ip_len    1500
             (1500>=ip_len>=46 bytes)         18         1500

IP EtherType over 802.1p/Q (ip_len<=42)     64-ip_len    1500*
             (1500>=ip_len>=42 bytes)         22         1500*

IP EtherType over LLC/SNAP (ip_len<=40)     64-ip_len    1492
             (1500>=ip_len>=40 bytes)         24         1492

* note that the draft IEEE 802.1Q specification exceeds the current IEEE
802.3 maximum packet length values by 4 bytes although work is
proceeding within IEEE to address this issue.
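The first two rows of the table can be captured in a small helper
(function name is ours) for the plain IP EtherType encapsulation:
datagrams below the 46-byte Ethernet minimum payload are padded, so
the per-packet overhead is 64 - ip_len; larger datagrams carry a fixed
18 bytes of framing.

```python
def ethernet_overhead(ip_len: int) -> int:
    """Framing overhead in bytes/pkt for IP EtherType over Ethernet.

    Follows the table above: short datagrams are padded up to the
    64-byte minimum frame size, larger ones add 18 bytes of
    header/trailer.
    """
    if not 1 <= ip_len <= 1500:
        raise ValueError("IP datagram must fit the 1500-byte MTU")
    return 64 - ip_len if ip_len <= 46 else 18
```

An admission-control calculation would add this overhead to each
packet of a flow's TSpec before comparing against link capacity.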

4.3 Token-Ring/802.5

The token ring standard [6] provides a priority mechanism that can be
used to control both the queuing of packets for transmission and the
access of packets to the shared media. The priority mechanisms are
implemented using bits within the Access Control (AC) and the Frame
Control (FC) fields of an LLC frame. The first three bits of the AC
field, the Token Priority bits, together with the last three bits of the
AC field, the Reservation bits, regulate which stations get access to
the ring. The last three bits of the FC field of an LLC frame, the User
Priority bits, are obtained from the higher layer in the user_priority
parameter when it requests transmission of a packet. This parameter also
establishes the Access Priority used by the MAC. The user_priority value
is conveyed end-to-end by the User Priority bits in the FC field and is
typically preserved through Token-Ring bridges of all types. In all
cases, 0 is the lowest priority.

Token-Ring also uses a concept of Reserved Priority: this relates to the
value of priority which a station uses to reserve the token for the next
transmission on the ring.  When a free token is circulating, only a
station having an Access Priority greater than or equal to the Reserved
Priority in the token will be allowed to seize the token for
transmission. Readers are referred to [14] for further discussion of
this topic.
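The bit positions described above can be sketched as follows. We
assume here that "first three bits" means the most significant bits of
the octet; the helper names are ours, and the token-seizure check
restates the Reserved Priority rule from the preceding paragraph.

```python
def token_priority(ac: int) -> int:
    # First three bits of the AC octet: the Token Priority.
    return (ac >> 5) & 0x7

def reservation(ac: int) -> int:
    # Last three bits of the AC octet: the Reservation bits.
    return ac & 0x7

def user_priority(fc: int) -> int:
    # Last three bits of the FC octet: the User Priority,
    # conveyed end-to-end through Token-Ring bridges.
    return fc & 0x7

def may_seize_token(access_priority: int, ac: int) -> bool:
    # A station may seize a free token only if its Access Priority is
    # greater than or equal to the Reserved Priority in the token.
    return access_priority >= reservation(ac)
```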

A token ring station is theoretically capable of separately queuing each
of the eight levels of requested user priority and then transmitting
frames in order of priority.  A station sets Reservation bits according
to the user priority of frames that are queued for transmission in the
highest priority queue.  This allows the access mechanism to ensure that
the frame with the highest priority throughout the entire ring will be
transmitted before any lower priority frame.  Annex I to the IEEE 802.5
token ring standard recommends that stations send/relay frames as
follows:

            Application             user_priority

            non-time-critical data      0
                  -                     1
                  -                     2
                  -                     3
            LAN management              4
            time-sensitive data         5
            real-time-critical data     6
            MAC frames                  7

To reduce frame jitter associated with high-priority traffic, the annex
also recommends that only one frame be transmitted per token and that
the maximum information field size be 4399 octets whenever delay-
sensitive traffic is traversing the ring.  Most existing implementations
of token ring bridges forward all LLC frames with a default access
priority of 4.  Annex I recommends that bridges forward LLC frames that
have a user priority greater than 4 with a reservation equal to the
user priority (although the draft IEEE P802.1p [3] permits network
management to override this behaviour). The capabilities provided by token
ring's user and reservation priorities and by IEEE 802.1p can provide
effective support for Integrated Services flows that request QoS using
RSVP. These mechanisms can provide, with few or no additions to the
token ring architecture, bandwidth guarantees with the network flow
control necessary to support such guarantees.

For the different IP packet encapsulations used over Token Ring/802.5,
it will be necessary to adjust any admission-control calculations
according to the framing requirements:

Encapsulation                          Framing Overhead  IP MTU
                                          bytes/pkt       bytes

IP EtherType over 802.1p/Q                    29          4370*
IP EtherType over LLC/SNAP                    25          4370*

*the suggested MTU from RFC 1042 [13] is 4464 bytes but there are issues
related to discovering the maximum supported MTU between any two
points both within and between Token Ring subnets. We recommend here an
MTU consistent with the 802.5 Annex I recommendation.

4.4 FDDI

The Fiber Distributed Data Interface standard [16] provides a priority
mechanism that can be used to control both the queuing of packets for
transmission and the access of packets to the shared media. The priority
mechanisms are implemented using similar mechanisms to Token-Ring
described above. The standard also makes provision for "Synchronous"
data traffic with strict media access and delay guarantees - this mode
of operation is not discussed further here: this is an area within the
scope of the ISSLL WG that requires further work. In the remainder of
this document we treat FDDI as a 100Mbps Token Ring (which it is) using
a service interface compatible with IEEE 802 networks.

4.5 Demand-Priority/802.12

IEEE 802.12 [19] is a standard for a shared 100Mbit/s LAN. Data packets
are transmitted using either 802.3 or 802.5 frame formats. The MAC
protocol is called Demand Priority. Its main characteristics with respect
to QoS are the support of two service priority levels (normal- and high-
priority) and the service order: data packets from all network nodes
(e.g. end-hosts and bridges/switches) are served using a simple round
robin algorithm.

If the 802.3 frame format is used for data transmission then
user_priority is encoded in the starting delimiter of the 802.12 data
packet. If the 802.5 frame format is used then the priority is
additionally encoded in the YYY bits of the AC field in the 802.5 packet
header (see also section 4.3). Furthermore, the 802.1p/Q encapsulation
may also be applied in 802.12 networks with its own user_priority field.
Thus, in all cases, switches are able to recover any user_priority
supplied by a sender.

The same rules apply for 802.12 user_priority mapping through a bridge
as with other media types: the only additional information is that
"normal" priority is used by default for user_priority values 0 through
4 inclusive and "high" priority is used for user_priority levels 5
through 7: this ensures that the default Token-Ring user_priority level
of 4 for 802.5 bridges is mapped to "normal" on 802.12 segments.

The medium access in 802.12 LANs is deterministic: the demand priority
mechanism ensures that, once the normal priority service has been pre-
empted, all high priority packets have strict priority over packets with
normal priority. In the abnormal situation that a normal-priority packet
has been waiting at the front of a MAC transmit queue for a time period
longer than PACKET_PROMOTION (200 - 300 ms [15]), its priority is
automatically 'promoted' to high priority. Thus, even normal-priority
packets have a maximum guaranteed access time to the medium.
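The default mapping and the packet-promotion rule above can be
sketched as follows. The function names are ours, and the 250 ms
threshold is an arbitrary illustrative value inside the 200 - 300 ms
PACKET_PROMOTION range quoted above.

```python
def demand_priority(user_priority: int) -> str:
    """Default 802.12 mapping: user_priority 0..4 -> "normal",
    5..7 -> "high" (so the Token-Ring bridge default of 4 stays
    "normal")."""
    return "high" if user_priority >= 5 else "normal"

def effective_priority(user_priority: int, waiting_ms: float,
                       promotion_ms: float = 250.0) -> str:
    # A normal-priority packet waiting at the front of the MAC
    # transmit queue longer than PACKET_PROMOTION is promoted,
    # bounding the medium access time even for normal traffic.
    if waiting_ms > promotion_ms:
        return "high"
    return demand_priority(user_priority)
```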

Integrated Services can be built on top of the 802.12 medium access
mechanism. When combined with admission control and bandwidth
enforcement mechanisms, delay guarantees as required for a Guaranteed
Service can be provided without any changes to the existing 802.12 MAC
protocol.

Since the 802.12 standard supports the 802.3 and 802.5 frame formats,
the same framing overhead as reported in sections 4.2 and 4.3 must be
considered in the admission control equations for 802.12 links.

5 Requirements and Goals

This section discusses the requirements and goals which should drive the
design of an architecture for supporting Integrated Services over LAN
technologies.  The requirements refer to functions and features which
must be supported, while goals refer to functions and features which are
desirable, but are not an absolute necessity.  Many of the requirements
and goals are driven by the functionality supported by Integrated
Services and RSVP.

5.1 Requirements

- Resource Reservation: The mechanism must be capable of reserving
resources on a single segment or multiple segments and at
bridges/switches connecting them.  It must be able to provide
reservations for both unicast and multicast sessions.  It should be
possible to change the level of reservation while the session is in
progress.
- Admission Control: The mechanism must be able to estimate the level of
resources necessary to meet the QoS requested by the session in order to
decide whether or not the session can be admitted.  For the purpose of
management, it is useful to provide the ability to respond to queries
about availability of resources. It must be able to make admission
control decisions for different types of services such as guaranteed
delay, controlled load, etc.

- Flow Separation and Scheduling:  It is necessary to provide a
mechanism for traffic flow separation so that real-time flows can be
given preferential treatment over best effort flows.  Packets of real-
time flows can then be isolated and scheduled according to their service
class.
- Policing:  Traffic policing must be performed in order to ensure that
sources adhere to their negotiated traffic specifications. Policing must
be implemented at the sources and must ensure that violating traffic is
either dropped or transmitted as best effort. Policing may optionally be
implemented in the bridges and switches.  Alternatively, traffic may be
shaped to ensure conformance to the negotiated parameters.
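As one concrete (hypothetical) realisation of the source-side policing
described above, a token-bucket policer can demote violating packets
to best effort rather than dropping them. The class and parameter
names are ours; the rate/bucket pair loosely echoes the int-serv
TSpec's token-bucket descriptors.

```python
class TokenBucketPolicer:
    """Classify packets against a token bucket (rate in bytes/s,
    bucket depth in bytes)."""

    def __init__(self, rate: float, bucket: float):
        self.rate, self.bucket = rate, bucket
        self.tokens, self.last = bucket, 0.0

    def classify(self, pkt_len: int, now: float) -> str:
        # Refill tokens for the elapsed interval, capped at the
        # bucket depth, then spend tokens if the packet conforms.
        self.tokens = min(self.bucket,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return "reserved"
        return "best-effort"   # or drop, per the requirement above
```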

- Soft State:  The mechanism must maintain soft state information about
the reservations.  This means that state information must be
periodically refreshed if the reservation is to be maintained; otherwise
the state information and corresponding reservations will expire after
some pre-specified interval.
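The soft-state rule can be sketched as a table of reservations keyed
by a last-refresh timestamp: a refresh updates the timestamp, and any
entry not refreshed within the timeout is purged. The names and the
30-second default are illustrative, not taken from RSVP or any
protocol specification.

```python
import time

class SoftStateTable:
    def __init__(self, timeout_s: float = 30.0):
        self.timeout_s = timeout_s
        self.entries = {}        # reservation id -> last refresh time

    def refresh(self, rid, now=None):
        # A periodic refresh keeps the reservation alive.
        self.entries[rid] = time.monotonic() if now is None else now

    def expire(self, now=None):
        # Purge reservations whose state was not refreshed in time.
        now = time.monotonic() if now is None else now
        dead = [r for r, t in self.entries.items()
                if now - t > self.timeout_s]
        for r in dead:
            del self.entries[r]
        return dead
```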

- Centralized or Distributed Implementation: In the case of a
centralized implementation, a single entity manages the resources of the
entire subnet. This approach has the advantage of being easier to deploy
since bridges and switches may not need to be upgraded with additional
functionality. However, this approach scales poorly with geographical
size of the subnet and the number of end stations attached. In a fully
distributed implementation, each segment will have a local entity
managing its resources. This approach has better scalability than the
former.  However, it requires that all bridges and switches in the
network support new mechanisms. It is also possible to have a semi-
distributed implementation where there is more than one entity, each
managing the resources of a subset of segments and bridges/switches
within the subnet.  Ideally, implementation should be flexible; i.e. a
centralized approach may be used for small subnets and a distributed
approach can be used for larger subnets.  Examples of centralized and
distributed implementations are discussed in Section 4.

- Scalability:  The mechanism and protocols should have a low overhead
and should scale to the largest receiver groups likely to occur within a
single link layer domain.

- Fault Tolerance and Recovery:  The mechanism must be able to function
in the presence of failures; i.e.  there should not be a single point of
failure.  For instance, in a centralized implementation, some mechanism
must be specified for back-up and recovery in the event of failure.

- Interaction with Existing Resource Management Controls: The
interaction with existing infrastructure for resource management needs
to be specified.  For example, FDDI has a resource management mechanism
called the "Synchronous Bandwidth Manager". The mechanism must be
designed so that it takes advantage of, and specifies the interaction
with, existing controls where available.

5.2 Goals

- Independence from higher layer protocols: The mechanism should, as far
as possible, be independent of higher layer protocols such as RSVP and
IP. Independence from RSVP is desirable so that it can interwork with
other reservation protocols such as ST2 [10]. Independence from IP is
desirable so that it can interwork with network layer protocols such as
IPX, NetBIOS, etc.

- Receiver heterogeneity: this refers to multicast communication where
different receivers request different levels of service. For example, in
a multicast group with many receivers, it is possible that one of the
receivers desires a lower delay bound than the others.  A better delay
bound may be provided by increasing the amount of resources reserved
along the path to that receiver while leaving the reservations for the
other receivers unchanged.  In its most complex form, receiver
heterogeneity implies the ability to simultaneously provide various
levels of service as requested by different receivers.  In its simplest
form, receiver heterogeneity will allow a scenario where some of the
receivers use best effort service and those requiring service guarantees
make a reservation.  Receiver heterogeneity, especially for the
reserved/best effort scenario, is a very desirable function.  More
details on supporting receiver heterogeneity are provided in Section 6.

- Support for different filter styles: It is desirable to provide
support for the different filter styles defined by RSVP such as fixed
filter, shared explicit and wildcard.  Some of the issues with respect
to supporting such filter styles in the link layer domain are examined
in Section 6.

- Path Selection: In source routed LAN technologies such as token
ring/802.5, it may be useful for the mechanism to incorporate the
function of path selection. Using an appropriate path selection
mechanism may optimize utilization of network resources.

5.3 Non-goals

This document describes service mappings onto existing IEEE- and ANSI-
defined standard MAC layers and uses standard MAC-layer services as in
IEEE 802.1 bridging. It does not attempt to make use of or describe the
capabilities of other proprietary or standard MAC-layer protocols
although it should be noted that there exists published work regarding
MAC layers suitable for QoS mappings: these are outside the scope of the
IETF ISSLL working group charter.

5.4 Assumptions

For this framework, it is assumed that typical subnetworks that are
concerned about quality-of-service will be "switch-rich": that is to say
most communication between end stations using integrated services
support will pass through at least one switch. The mechanisms and
protocols described will be trivially extensible to communicating
systems on the same shared media, but it is important not to allow
problem generalisation to complicate the practical application that we
target: the access characteristics of Ethernet and Token-Ring LANs are
forcing a trend to switch-rich topologies. In addition, there have been
developments in the area of MAC enhancements to ensure delay-
deterministic access on network links e.g. IEEE 802.12 [19] and also
proprietary schemes.

Note that we illustrate most examples in this model using RSVP as an
"upper-layer" QoS signaling protocol but there are actually no real
dependencies on this protocol: RSVP could be replaced by some other
dynamic protocol or else the requests could be made by network
management or other policy entities. In particular, the SBM signaling
protocol [14], which is based upon RSVP, is designed to work seamlessly
in the architecture described in this memo.

There may be a heterogeneous mixture of switches with different
capabilities, all compliant with IEEE 802.1D [2] [3], but implementing
queuing and forwarding mechanisms in a range from simple 2-queue per
port, strict priority, up to more complex multi-queue (maybe even one
per-flow) WFQ or other algorithms.

The problem is broken down into smaller independent pieces: this may
lead to sub-optimal usage of the network resources but we contend that
such benefits are often equivalent to very small improvements in network
efficiency in a LAN environment. Therefore, it is a goal that the
switches in the network operate using a much simpler set of information
than the RSVP engine in a router. In particular, it is assumed that such
switches do not need to implement per-flow queuing and policing
(although they might do so).

It is a fundamental assumption of the int-serv model that flows are
isolated from each other throughout their transit across a network.
Intermediate queueing nodes are expected to police the traffic to ensure
that it conforms to the pre-agreed traffic flow specification. In the
architecture proposed here for mapping to layer-2, we diverge from that
assumption in the interests of simplicity: the policing function is
assumed to be implemented in the transmit schedulers of the layer-3
devices (end stations, routers). In the LAN environments envisioned, it
is reasonable to assume that end stations are "trusted" to adhere to
their agreed contracts at the inputs to the network and that we can
afford to over-allocate resources at admission-control time to
compensate for the inevitable extra jitter/bunching introduced by the
switched network itself.

These divergences have some implications for the types of receiver
heterogeneity that can be supported and the statistical multiplexing
gains that might have been exploited, especially for Controlled Load
flows: this is discussed in a later section of this document.

6 Basic Architecture

The functional requirements described in Section 5 will be performed by
an entity which we refer to as the Bandwidth Manager (BM). The BM is
responsible for providing mechanisms for an application or higher layer
protocol to request QoS from the network.  For architectural purposes,
the BM consists of the following components.

6.1 Components

6.1.1 Requester Module

The Requester Module (RM) resides in every end station in the subnet.

Ghanwani et al.             Expires May 1998           [Page 12]

INTERNET DRAFT    Framework for Int-Serv over IEEE 802     November 1997

One of its functions is to provide an interface between applications or
higher layer protocols such as RSVP, STII, SNMP, etc. and the BM. An
application can invoke the various functions of the BM by using the
primitives for communication with the RM and providing it with the
appropriate parameters.  To initiate a reservation in the link layer
domain, the following parameters must be passed to the RM: the service
desired (Guaranteed Service or Controlled Load), the traffic descriptors
contained in the TSpec, and an RSpec specifying the amount of resources
to be reserved [9].  More information on these parameters may be found
in the relevant Integrated Services documents [6,7,8,9].  When RSVP is
used for signaling at the network layer, this information is available
and needs to be extracted from the RSVP PATH and RSVP RESV messages (see
[5] for details).  In addition to these parameters, the network layer
addresses of the end points must be specified.  The RM must then
translate the network layer addresses to link layer addresses and
convert the request into an appropriate format which is understood by
the other components of the BM responsible for admission control.  The
RM is
also responsible for returning the status of requests processed by the
BM to the invoking application or higher layer protocol.

6.1.2 Bandwidth Allocator

The Bandwidth Allocator (BA) is responsible for performing admission
control and maintaining state about the allocation of resources in the
subnet.  An end station can request various services, e.g. bandwidth
reservation, modification of an existing reservation, queries about
resource availability, etc.  These requests are processed by the BA. The
communication between the end station and the BA takes place through the
RM. The location of the BA will depend largely on the implementation
method.  In a centralized implementation, the BA may reside on a single
station in the subnet. In a distributed implementation, the functions of
the BA may be distributed in all the end stations and bridges/switches
as necessary.  The BA is also responsible for deciding how to label
flows; e.g. based on the admission control decision, the BA may
indicate to the RM that packets belonging to a particular flow should be
tagged with some priority value which maps to the appropriate traffic
class.
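
As a concrete illustration of the BA's two roles, admission control and
flow labeling, the following sketch tracks committed bandwidth on a
single link and returns a priority label on success. The class name,
the service-to-priority map and the single-link accounting are
illustrative assumptions only, not part of this framework.

```python
# Hypothetical Bandwidth Allocator sketch; SERVICE_CLASS_MAP and the
# single-link accounting are illustrative, not defined by ISSLL.

# Assumed mapping of int-serv service types to user_priority values.
SERVICE_CLASS_MAP = {"guaranteed": 5, "controlled_load": 4,
                     "best_effort": 0}

class BandwidthAllocator:
    def __init__(self, link_capacity_bps):
        self.capacity = link_capacity_bps
        self.allocated = 0          # bits/s currently reserved
        self.flows = {}             # flow_id -> (service, rate)

    def admit(self, flow_id, service, rate_bps):
        """Admission control: accept the flow if uncommitted bandwidth
        remains; return the user_priority tag the RM should apply."""
        if service not in SERVICE_CLASS_MAP:
            return None
        if self.allocated + rate_bps > self.capacity:
            return None             # reject: would over-commit the link
        self.allocated += rate_bps
        self.flows[flow_id] = (service, rate_bps)
        return SERVICE_CLASS_MAP[service]

    def release(self, flow_id):
        """Free the resources committed to a torn-down session."""
        service, rate = self.flows.pop(flow_id)
        self.allocated -= rate
```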

6.1.3 Communication Protocols

The protocols for communication between the various components of the BM
system must be specified.  These include the following:

- Communication between the higher layer protocols and the RM:  The BM
must define primitives for the application to initiate reservations,
query the BA about available resources, and change or delete
reservations, etc.  These primitives could be implemented as an API for
an application to invoke functions of the BM via the RM.

- Communication between the RM and the BA: A signaling mechanism must be
defined for the communication between the RM and the BA. This protocol
will specify the messages which must be exchanged between the RM and the
BA in order to service various requests by the higher layer entity.

- Communication between peer BAs: If there is more than one BA in the
subnet, a means must be specified for inter-BA communication.
Specifically, the BAs must be able to decide among themselves which BA
is responsible for which segments and bridges or switches.  Further, if
a resource reservation request spans the domains of multiple BAs, the
BAs must be able to handle such a scenario correctly.  Inter-BA
communication will also be responsible for back-up and recovery in the
event of failure.

6.2 Centralised vs. Distributed Implementations

Example scenarios are provided showing the location of the components
of the bandwidth manager in centralized and fully distributed
implementations.  Note that in either case, the RM must be present in
all end stations which desire to make reservations. Essentially,
centralized or distributed refers to the implementation of the BA, the
component responsible for resource reservation and admission control.
In the figures below, "App" refers to the application making use of the
BM. It could either be a user application, or a higher layer protocol
process such as RSVP.

                           .-->|  BA     |<--.
                          /    +---------+    \
                         / .-->| Layer 2 |<--. \
                        / /    +---------+    \ \
                       / /                     \ \
                      / /                       \ \
  +---------+        / /                         \ \       +---------+
  |  App    |<----- /-/---------------------------\-\----->|  App    |
  +---------+      / /                             \ \     +---------+
  |  RM     |<----. /                               \ .--->|  RM     |
  +---------+      / +---------+        +---------+  \     +---------+
  | Layer 2 |<------>| Layer 2 |<------>| Layer 2 |<------>| Layer 2 |
  +---------+        +---------+        +---------+        +---------+

  RSVP Host/         Intermediate       Intermediate       RSVP Host/
     Router          Bridge/Switch      Bridge/Switch         Router

   Figure 1 - Bandwidth Manager with centralized Bandwidth Allocator

Figure 1 shows a centralized implementation where a single BA is
responsible for admission control decisions for the entire subnet. Every
end station contains an RM. Intermediate bridges and switches in the
network need not have any functions of the BM since they will not be
actively participating in admission control.  The RM at the end station
requesting a reservation initiates communication with its BA. For larger
subnets, a single BA may not be able to handle the reservations for the
entire subnet.  In that case it would be necessary to deploy multiple
BAs, each managing the resources of a non-overlapping subset of
segments.  In a centralized implementation, the BA must have some model
of the layer-2 topology of the subnet, e.g. link layer spanning tree
information, in order to be able to reserve resources on appropriate
segments.  Without this topology information, the BM would have to
reserve resources on all segments for all flows which, in a switched
network, would lead to very inefficient utilization of resources.
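
The value of topology information can be seen in the following sketch:
given a map of the spanning tree (an illustrative data structure, not
something this framework defines), the BA can restrict a reservation to
the segments on the path between the endpoints instead of debiting
every segment in the subnet.

```python
# Illustrative only: with spanning tree knowledge, a centralized BA
# reserves only on the segments along the source-to-destination path.

def path_segments(tree, src, dst):
    """Return the segments on the unique spanning tree path src -> dst.
    tree maps each station/switch to a list of (neighbour, segment)."""
    def walk(node, target, visited):
        if node == target:
            return []
        visited.add(node)
        for neighbour, segment in tree[node]:
            if neighbour not in visited:
                sub = walk(neighbour, target, visited)
                if sub is not None:
                    return [segment] + sub
        return None
    return walk(src, dst, set())

# Two hosts joined through two switches: only three of the subnet's
# segments lie on the path, so only those need resources reserved.
tree = {
    "hostA": [("sw1", "segA")],
    "sw1":   [("hostA", "segA"), ("sw2", "seg1"), ("hostC", "segC")],
    "sw2":   [("sw1", "seg1"), ("hostB", "segB")],
    "hostB": [("sw2", "segB")],
    "hostC": [("sw1", "segC")],
}
```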

  +---------+                                              +---------+
  |  App    |<-------------------------------------------->|  App    |
  +---------+        +---------+        +---------+        +---------+
  |  RM/BA  |<------>|  BA     |<------>|  BA     |<------>|  RM/BA  |
  +---------+        +---------+        +---------+        +---------+
  | Layer 2 |<------>| Layer 2 |<------>| Layer 2 |<------>| Layer 2 |
  +---------+        +---------+        +---------+        +---------+

  RSVP Host/         Intermediate       Intermediate       RSVP Host/
     Router          Bridge/Switch      Bridge/Switch         Router

Figure 2 - Bandwidth Manager with fully distributed Bandwidth Allocator

Figure 2 depicts the scenario of a fully distributed bandwidth manager.
In this case, all devices in the subnet have BM functionality.  All the
end hosts are still required to have an RM. In addition, all stations
actively participate in admission control. With this approach, each BA
would need only local topology information since it is responsible for
the resources on segments that are directly connected to it.  This local
topology information, such as a list of ports active on the spanning
tree and which unicast addresses are reachable from which ports, is
readily available in today's switches. Note that in the figures above,
the arrows between peer layers are used to indicate logical
connectivity.

7 Model of the Bandwidth Manager in a Network

In this section we describe how the model above fits with the existing
IETF Integrated Services model of IP hosts and IP routers. First we
describe layer-3 host and router implementations; later we describe how

the model is applied in layer-2 switches. Throughout we indicate any
differences between centralised and distributed implementations.

7.1 End-station model

7.1.1 Layer-3 Client Model

We assume the same client model as int-serv and RSVP where we use the
term "client" to mean the entity handling QoS in the layer-3 device at
each end of a layer-2 hop (e.g. end-station, router). In this model, the
sending client is responsible for local admission control and scheduling
packets onto its link in accordance with the service agreed. As with the
current int-serv model, this involves per-flow scheduling (a.k.a.
traffic shaping) in every such originating source.

For now, we assume that the client is running an RSVP process which
presents a session establishment interface to applications, signals over
the network, programs a scheduler and classifier in the driver and
interfaces to a policy control module. In particular, RSVP also
interfaces to a local admission control module: it is this entity that
we focus on here.

The following diagram is taken from the RSVP specification [5]:
                     |  _______                    |
                     | |       |   _______         |
                     | |Appli- |  |       |        |   RSVP
                     | | cation|  | RSVP <-------------------->
                     | |       <-->       |        |
                     | |       |  |process|  _____ |
                     | |_._____|  |       -->Polcy||
                     |   |        |__.__._| |Cntrl||
                     |   |data       |  |   |_____||
                     |   |   --------|  |    _____ |
                     |   |  |        |  ---->Admis||
                     |  _V__V_    ___V____  |Cntrl||
                     | |      |  |        | |_____||
                     | |Class-|  | Packet |        |
                     | | ifier|==>Schedulr|====================>
                     | |______|  |________|        |    data
                     |                             |

                    Figure 3 - RSVP in Sending Hosts

Note that we illustrate examples in this document using RSVP as the
"upper-layer" signaling protocol but there are no actual dependencies on

this protocol: RSVP could be replaced by some other dynamic protocol,
or the requests could be made by network management or other policy
entities.

7.1.2 Requests to layer-2 ISSLL

The local admission control entity within a client is responsible for
mapping these layer-3 session-establishment requests into layer-2
semantics.

The upper-layer entity makes a request, in generalised terms, to ISSLL
of the form:

   "May I reserve for traffic with <traffic characteristic> with
   <performance requirements> from <here> to <there> and how should I
   label it?"

   <traffic characteristic> = Sender Tspec
                            (e.g. bandwidth, burstiness, MTU)
   <performance requirements> = FlowSpec
                            (e.g. latency, jitter bounds)
   <here> = IP address(es)
   <there> = IP address(es) - may be multicast
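
The generalised request above might be encoded as in the following
sketch; the type and field names are assumptions made for discussion
only, not a defined ISSLL API.

```python
from dataclasses import dataclass
from typing import List

# Illustrative encoding of the generalised ISSLL request.
@dataclass
class IssllRequest:
    sender_tspec: dict  # <traffic characteristic>: bandwidth, burst, MTU
    flowspec: dict      # <performance requirements>: latency, jitter
    here: str           # <here>: sender IP address
    there: List[str]    # <there>: receiver IP address(es)

    def is_multipoint(self) -> bool:
        # more than one receiver, or an IPv4 multicast group address
        return len(self.there) > 1 or any(
            224 <= int(a.split(".")[0]) <= 239 for a in self.there)

req = IssllRequest(
    sender_tspec={"rate_bps": 1_000_000, "burst_bytes": 1500,
                  "mtu": 1500},
    flowspec={"latency_ms": 20, "jitter_ms": 5},
    here="192.0.2.1",
    there=["224.0.18.5"],
)
```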

7.1.3 At the Layer-3 Sender

The ISSLL functionality in the sender is illustrated in Figure 4.

                    from IP     from RSVP
                  |    |            |            |
                  |  __V____     ___V___         |
                  | |       |   |       |        |
                  | | Addr  |<->|       |        | SBM signaling
                  | |mapping|   |Request|<------------------------>
                  | |_______|   |Module |        |
                  |  ___|___    |       |        |
                  | |       |<->|       |        |
                  | |  802  |   |_______|        |
                  | | header|     / | |          |
                  | |_______|    /  | |          |
                  |    |        /   | |   _____  |
                  |    | +-----/    | +->|Band-| |
                  |  __V_V_    _____V__  |width| |
                  | |      |  |        | |Alloc| |
                  | |Class-|  | Packet | |_____| |
                  | | ifier|==>Schedulr|======================>
                  | |______|  |________|         |  data

                 Figure 4 - ISSLL in End-station Sender

The functions of the Requester Module may be summarised as:

- maps the endpoints of the conversation to layer-2 addresses in the
LAN, so that the client can figure out what traffic is really going
where (probably makes reference to the ARP protocol cache for unicast,
or uses an algorithmic mapping for multicast destinations).

- communicates with any local Bandwidth Allocator module for local
admission control decisions

- formats a SBM request to the network with the mapped addresses and
filter/flow specs

- receives response from the network and reports the YES/NO admission
control answer back to the upper layer entity, along with any negotiated
modifications to the session parameters.

- saves any returned user_priority to be associated with this session in
a "802 header" table: this will be used when adding layer-2 header
before sending any future data packet belonging to this session. This
table might, for example, be indexed by the RSVP flow identifier.
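
The "802 header" table mentioned above might look like the following
sketch, where the table is keyed by a hypothetical RSVP-style flow
identifier and unknown flows default to best effort; the names are
illustrative.

```python
# Illustrative sketch of the sender's "802 header" table: the RM records
# the user_priority returned by admission control, keyed by a flow
# identifier, and the transmit path consults it per packet.

class Header802Table:
    DEFAULT_PRIORITY = 0            # best effort if the flow is unknown

    def __init__(self):
        self._table = {}            # flow_id -> user_priority

    def bind(self, flow_id, user_priority):
        """Save the priority negotiated for this session."""
        self._table[flow_id] = user_priority

    def priority_for(self, flow_id):
        """Looked up when adding the layer-2 header to a data packet."""
        return self._table.get(flow_id, self.DEFAULT_PRIORITY)

tbl = Header802Table()
# hypothetical flow id: (src addr, dst addr, dst port)
tbl.bind(("192.0.2.1", "192.0.2.9", 5004), 5)
```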

The Bandwidth Allocator (BA) component is only present when a
distributed BA model is implemented. When present, its functions can be
summarised as:

- applies local admission control on outgoing link bandwidth and driver
queueing resources.

7.1.4 At the Layer-3 Receiver

The ISSLL functionality in the receiver is simpler. It is summarised
below and is illustrated by Figure 5.

The Requester Module:

- handles any received SBM protocol indications.

- communicates with any local BA for local admission control decisions

- passes indications up to RSVP if OK.

- accepts confirmations from RSVP and relays them back via SBM signaling
towards the requester.

- may program a receive classifier and scheduler, if any is used, to
identify traffic classes of received packets and accord them appropriate
treatment e.g. reserve some buffers for particular traffic classes.

- programs the receiver to strip any 802 header information from
received packets.

The Bandwidth Allocator, present only in a distributed implementation:

- applies local admission control to see if a request can be supported
with appropriate local receive resources.

                     to RSVP       to IP
                       ^            ^
                  |    |            |           |
                  |  __|____        |           |
                  | |       |       |           |
 SBM signaling    | |Request|    ___|___        |
<-----------------> |Module |   | Strip |       |
                  | |_______|   |802 hdr|       |
                  |    |   \    |_______|       |
                  |  __v___ \       ^           |
                  | | Band- |\      |           |
                  | |  width| \     |           |
                  | | Alloc |  \    |           |
                  | |_______|   \   |           |
                  |  ______     v___|____       |
                  | |Class-|   | Packet  |      |
===================>| ifier|==>|Scheduler|      |
     data         | |______|   |_________|      |

                Figure 5 - ISSLL in End-station Receiver

7.2 Switch Model

7.2.1 Centralised BA

Where a centralised Bandwidth Allocator model is implemented, switches
do not take part in the admission control process: all admission control
is implemented by a central BA, e.g. a "Subnet Bandwidth Manager" (SBM)
as described in [14]. Note that this centralised BA may actually be
co-located with a switch, but its functions would not necessarily then
be closely tied to the switch's forwarding functions, as is the case
with the distributed BA described below.

7.2.2 Distributed BA

The model of layer-2 switch behaviour described here uses the
terminology of the SBM protocol as an example of an admission control
protocol: the model is equally applicable when other mechanisms e.g.
static configuration, network management are in use for admission
control. We define the following entities within the switch:

* Local admission control - one of these on each port accounts for the
available bandwidth on the link attached to that port. For half-duplex
links, this involves taking account of the resources allocated to both
transmit and receive flows. For full-duplex links, the input port
accountant's task is trivial.

* Input SBM module - one instance on each port; performs the "network"
side of the signaling protocol for peering with clients or other
switches. Also holds knowledge of the mappings of int-serv classes to
user_priority values.

* SBM propagation - relays requests that have passed admission control
at the input port to the relevant output ports' SBM modules. This will
require access to the switch's forwarding table (layer-2 "routing table"
cf. RSVP model) and port spanning-tree states.

* Output SBM module - forwards requests to the next layer-2 or -3
network hop.

* Classifier, Queueing and Scheduler - these functions are basically as
described by the Forwarding Process of IEEE 802.1p (see section 3.7 of
[3]). The Classifier module identifies the relevant QoS information from
incoming packets and uses this, together with the normal bridge
forwarding database, to decide to which output queue of which output
port to enqueue the packet. Different types of switches will use
different techniques for flow identification - see section 8.1 for
details of a taxonomy of switch types. In Class I switches, this
information is the "regenerated user_priority" parameter which has
already been decoded by the receiving MAC service and potentially
re-mapped by the 802.1p forwarding process (see description in section
3.7.3 of [3]). This does
not preclude more sophisticated classification rules which may be
applied in more complex Class III switches e.g. matching on individual
int-serv flows.

The Queueing and Scheduler module holds the output queues for ports and
provides the algorithm for servicing the queues for transmission onto
the output link in order to provide the promised int-serv service.
Switches will implement one or more output queues per port and all will
implement at least a basic strict priority dequeueing algorithm as their
default, in accordance with 802.1p.
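
The default dequeueing behaviour described above, strict priority
between per-port output queues, can be sketched as follows; the queue
count and example traffic are illustrative.

```python
from collections import deque

# Sketch of a basic strict-priority output port: the highest-priority
# non-empty queue is always serviced first.

class StrictPriorityPort:
    def __init__(self, num_queues=2):
        # queue 0 is lowest priority; a higher index is higher priority
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, traffic_class, frame):
        self.queues[traffic_class].append(frame)

    def dequeue(self):
        """Service the highest-priority non-empty queue, if any."""
        for q in reversed(self.queues):
            if q:
                return q.popleft()
        return None

port = StrictPriorityPort()
port.enqueue(0, "bulk-1")       # best effort traffic
port.enqueue(1, "voice-1")      # time-critical traffic
port.enqueue(0, "bulk-2")
```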

* Ingress traffic class mapper and policing - as described in 802.1p
section 3.7. This optional module may check whether the data within
each traffic class conforms to the patterns currently agreed: switches
may police this and discard or re-map packets. The default behaviour is
to pass things through unchanged.

* Egress traffic class mapper - as described in 802.1p section 3.7.
This optional module may apply re-mapping of traffic classes, e.g. on a
per-output-port basis. The default behaviour is to pass things through
unchanged.

These are shown by the following diagram which is a superset of the IEEE
802.1D bridge model:

                  |  _____     ______     ______  |
 SBM signaling    | |     |   |      |   |      | | SBM signaling
<------------------>| IN  |<->| SBM  |<->| OUT  |<---------------->
                  | | SBM |   | prop.|   | SBM  | |
                  | |_____|   |______|   |______| |
                  |  / |          ^     /     |   |
    ______________| /  |          |     |     |   |_____________
   | \             / __V__        |     |   __V__             / |
   |   \      ____/ |Local|       |     |  |Local|          /   |
   |     \   /      |Admis|       |     |  |Admis|        /     |
   |       \/       |Cntrl|       |     |  |Cntrl|      /       |
   |  _____V \      |_____|       |     |  |_____|    / _____   |
   | |traff |  \               ___|__   V_______    /  |egrss|  |
   | |class |    \            |Filter| |Queue & | /    |traff|  |
   | |map & |=====|==========>|Data- |=| Packet |=|===>|class|  |
   | |police|     |           |  base| |Schedule| |    |map  |  |
   | |______|     |           |______| |________| |    |_____|  |
data in |                                                |data out
========+                                                +========>
                      Figure 6 - ISSLL in Switches

7.3 Admission Control

On reception of an admission control request, a switch performs the
following actions, again using SBM as an example. The behaviour differs
depending on whether the "Designated SBM" for this segment is within
this switch or not - see [14] for a more detailed specification of the
DSBM/SBM actions:

* if the ingress SBM is the "Designated SBM" for this link/segment, it
translates any received user_priority or else selects a layer-2 traffic
class which appears compatible with the request and whose use does not
violate any administrative policies in force. In effect, it matches up
the requested service with those available in each of the user_priority
classes and chooses the "best" one. It ensures that, if this reservation
is successful, the selected value is passed back to the client.

* ingress DSBM observes the current state of allocation of resources on
the input port/link and then determines whether the new resource
allocation from the mapped traffic class would be excessive. The request
is passed to the reservation propagator if accepted so far.

* if the ingress SBM is not the "Designated SBM" for this link/segment,
then it passes the request on directly to the reservation propagator.

* reservation propagator relays the request to the bandwidth accountants
on each of the switch's outbound links to which this reservation would
apply (implied interface to routing/forwarding database).

* egress bandwidth accountant observes the current state of allocation
of queueing resources on its outbound port and bandwidth on the link
itself and determines whether the new allocation would be excessive.
Note that this is only the local decision of this switch hop: each
further layer-2 hop through the network gets a chance to veto the
request as it passes along.

* the request, if accepted by this switch, is then passed on down the
line on each output link selected. Any user_priority described in the
forwarded request must be translated according to any egress mapping.

* if accepted, the switch must notify the client of the user_priority to
use for packets belonging to this flow.  Note that this is only a
"provisional YES" - we assume an optimistic approach here, and switches
further along the path can still say "NO".

* if this switch wishes to reject the request, it can do so by notifying
the original client (by means of its layer-2 address).
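
The sequence of checks above, for the case where the ingress SBM is the
Designated SBM, might be sketched end to end as follows. The data
structures, the single-rate accounting, and the omission of rollback
when a later check fails are all simplifying assumptions made for
illustration.

```python
# Illustrative sketch of per-switch admission control; releasing
# earlier reservations on a later failure is omitted for brevity.

def reserve(link, rate_bps):
    """Bandwidth accountant: commit rate_bps on a link if it fits."""
    if link["allocated"] + rate_bps > link["capacity"]:
        return False
    link["allocated"] += rate_bps
    return True

def handle_request(switch, in_port, request):
    """Returns (admitted, user_priority). A True result is only a
    'provisional YES': later switches may still say no."""
    # 1. select a traffic class compatible with the requested service
    user_priority = switch["class_map"].get(request["service"])
    if user_priority is None:
        return (False, None)
    # 2. ingress admission control on the input port/link
    if not reserve(switch["links"][in_port], request["rate_bps"]):
        return (False, None)
    # 3. reservation propagator: forwarding table gives output ports
    out_ports = switch["fdb"].get(request["dst_l2"], [])
    # 4. egress admission control on each outbound link selected
    for port in out_ports:
        if not reserve(switch["links"][port], request["rate_bps"]):
            return (False, None)
    return (True, user_priority)
```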

7.4 QoS Signaling

The mechanisms described in this document make use of a signaling
protocol for devices to communicate their admission control requests
across the network: the service definitions to be provided by such a
protocol, e.g. [14], are described below. We illustrate the primitives
and information that need to be exchanged with such a signaling protocol
entity; in all these examples, appropriate delete/cleanup mechanisms
will also have to be provided for when sessions are torn down.

7.4.1 Client service definitions

The following interfaces can be identified from Figure 4 and Figure 5:

* SBM <-> Address mapping

 This is a simple lookup function which may cause ARP protocol
interactions, may be just a lookup of an existing ARP cache entry or may

be an algorithmic mapping. The layer-2 addresses are needed by SBM for
inclusion in its signaling messages to/from switches which avoids the
switches having to perform the mapping and, hence, have knowledge of
layer-3 information for the complete subnet:

 l2_addr = map_address( ip_addr )
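
For example, map_address might behave as in the following sketch:
unicast addresses are resolved from an assumed ARP cache (a dict here),
while IPv4 multicast groups use the standard algorithmic mapping of the
low-order 23 bits of the group address into the 01-00-5e MAC prefix.

```python
# Illustrative sketch of map_address. The ARP cache parameter is an
# assumption; the multicast MAC mapping is the standard IPv4 one.

def map_address(ip_addr, arp_cache):
    octets = [int(x) for x in ip_addr.split(".")]
    if 224 <= octets[0] <= 239:            # IPv4 multicast group
        # low-order 23 bits of the group address into 01-00-5e-xx-xx-xx
        return "01-00-5e-%02x-%02x-%02x" % (
            octets[1] & 0x7F, octets[2], octets[3])
    return arp_cache[ip_addr]              # unicast: existing ARP entry
```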

* SBM <-> Session/802 header

This is for notifying the transmit path of how to add layer-2 header
information e.g. user_priority values to the traffic of each outgoing
flow: the transmit path will provide the user_priority value when it
requests a MAC-layer transmit operation for each packet (user_priority
is one of the parameters passed in the packet transmit primitive defined
by the IEEE 802 service model):

bind_l2_header( flow_id, user_priority )

* SBM <-> Classifier/Scheduler

This is for notifying the transmit classifier/scheduler of any
additional layer-2 information associated with scheduling the
transmission of a flow's packets. This primitive may be unused in some
implementations, or it may be used, for example, to provide information
to a transmit scheduler that is performing per-traffic_class scheduling
in addition to the per-flow scheduling required by int-serv. The
l2_header may be a pattern (additional to the FilterSpec) to be used to
identify the flow's packets:

bind_l2schedulerinfo( flow_id, l2_header, traffic_class )

* SBM <-> Local Admission Control

For applying local admission control for a session e.g. is there enough
transmit bandwidth still uncommitted for this potential new session? Are
there sufficient receive buffers? This should commit the necessary
resources if OK: it will be necessary to release these resources at a
later stage if the session setup process fails. This call would be made
by a segment's Designated SBM for example:

status = admit_l2session( flow_id, Tspec, FlowSpec )

* SBM <-> RSVP - this is outlined above in section 7.1.2 and fully
described in [14].

* Management Interfaces

Some or all of the modules described by this model will also require
configuration management: it is expected that details of the manageable
objects will be specified by future work in the ISSLL WG.

7.4.2 Switch service definitions

The following interfaces are identified from Figure 6:

* SBM <-> Classifier

This is for notifying the receive classifier of how to match up incoming
layer-2 information with the associated traffic class; it may in some
cases consist of a set of read-only default mappings:

bind_l2classifierinfo( flow_id, l2_header, traffic_class )

* SBM <-> Queue and Packet Scheduler

This is for notifying the transmit scheduler of additional layer-2
information associated with a given traffic class (it may be unused in
some cases - see discussion in previous section):

bind_l2schedulerinfo( flow_id, l2_header, traffic_class )

* SBM <-> Local Admission Control

 As for host above.

* SBM <-> Traffic Class Map and Police

 Optional configuration of any user_priority remapping that might be
implemented on ingress to and egress from the ports of a switch (note
that, for Class I switches, it is likely that these mappings will have
to be consistent across all ports):

 bind_l2ingressprimap( inport, in_user_pri, internal_priority )
 bind_l2egressprimap( outport, internal_priority, out_user_pri )

 Optional configuration of any layer-2 policing function to be applied
on a per-class basis to traffic matching the l2_header. If the switch is
capable of per-flow policing then existing int-serv/RSVP models will
provide a service definition for that configuration:

 bind_l2policing( flow_id, l2_header, Tspec, FlowSpec )
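
The two remapping primitives above might be realised as a pair of
tables consulted on ingress and egress, with unmapped values passed
through unchanged (matching the default behaviour described earlier);
the layout is an illustrative assumption.

```python
# Illustrative realisation of the user_priority remapping primitives.
# A Class I switch would likely share one table across all ports.

class PriorityMapper:
    def __init__(self):
        self.ingress = {}  # (inport, in_user_pri) -> internal_priority
        self.egress = {}   # (outport, internal_priority) -> out_user_pri

    def bind_l2ingressprimap(self, inport, in_user_pri, internal_pri):
        self.ingress[(inport, in_user_pri)] = internal_pri

    def bind_l2egressprimap(self, outport, internal_pri, out_user_pri):
        self.egress[(outport, internal_pri)] = out_user_pri

    def remap(self, inport, outport, user_pri):
        """Regenerate priority on ingress, re-mark on egress; defaults
        pass the value through unchanged."""
        internal = self.ingress.get((inport, user_pri), user_pri)
        return self.egress.get((outport, internal), internal)

pm = PriorityMapper()
pm.bind_l2ingressprimap(1, 6, 5)  # e.g. demote incoming 6 to internal 5
pm.bind_l2egressprimap(2, 5, 4)   # e.g. transmit internal 5 as 4
```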

* SBM <-> Filtering Database

SBM propagation rules need access to the layer-2 forwarding database to

determine where to forward SBM messages (analogous to the RSRR
interface in RSVP routers):

output_portlist = lookup_l2dest( l2_addr )
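
A sketch of lookup_l2dest, treating the filtering database as a simple
dictionary and flooding unknown destinations on spanning tree
forwarding ports, much as the bridge data path itself would; the
parameter names are illustrative assumptions.

```python
# Illustrative lookup_l2dest: pick output ports for an SBM message,
# never echoing it back out of the arrival port.

def lookup_l2dest(fdb, forwarding_ports, l2_addr, in_port):
    """fdb maps known layer-2 addresses to port lists; unknown
    destinations are flooded on all spanning tree forwarding ports."""
    ports = fdb.get(l2_addr, forwarding_ports)    # unknown -> flood
    return [p for p in ports
            if p != in_port and p in forwarding_ports]
```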

* Management Interfaces

Some or all of the modules described by this model will also require
configuration management: it is expected that details of the manageable
objects will be specified by future work in the ISSLL WG.

8 Implementation Issues

As stated earlier, the Integrated Services working group has defined
various service classes offering varying degrees of QoS guarantees.
Initial effort will concentrate on enabling the Controlled Load [6] and
Guaranteed Service classes [7].  The Controlled Load service provides a
loose guarantee, informally stated as "the same as best effort would be
on an unloaded network".  The Guaranteed Service provides an upper-bound
on the transit delay of any packet.  The extent to which these services
can be supported at the link layer will depend on many factors including
the topology and technology used.  Some of the mapping issues are
discussed below in light of the emerging link layer standards and the
functions supported by higher layer protocols.  Considering the
limitations of some of the topologies under consideration, it may not be
possible to satisfy all the requirements for Integrated Services on a
given topology.  In such cases, it is useful to consider providing
support for an approximation of the service which may suffice in most
practical instances.  For example, it may not be feasible to provide
policing/shaping at each network element (bridge/switch) as required by
the Controlled Load specification.  But if this task is left to the end
stations, a reasonably good approximation to the service can be
achieved.

8.1 Switch characteristics

For the sake of illustration, we divide layer-2 bridges/switches into
several categories, based on the level of sophistication of their QoS
and software protocol capabilities: these categories are not intended to
represent all possible implementation choices but, instead, to aid
discussion of what QoS capabilities can be expected from a network made
of these devices (the basic "class 0" device is included for
completeness but cannot really provide useful integrated service).

Class 0

   - 802.1D MAC bridging
   - single queue per output port, no separation of traffic classes
   - Spanning-Tree to remove topology loops (single active path)

Class I
   - 802.1p priority queueing between traffic classes.
   - No multicast heterogeneity.
   - 802.1p GARP/GMRP pruning of individual multicast addresses.

Class II As (I) plus:
   - can map received user_priority on a per-input-port basis to some
   internal set of canonical values.
   - can map internal canonical values onto transmitted user_priority on
   a per-output-port basis, giving some limited form of multicast
   heterogeneity.
   - maybe implements IGMP snooping for pruning.

Class III As (II) plus:
   - per-flow classification
   - maybe per-flow policing and/or reshaping
   - more complex transmit scheduling (probably not per-flow)

8.2 Queueing

Connectionless packet-based networks in general, and LAN-switched
networks in particular, work today because of scaling choices in network
provisioning. Consciously or (more usually) unconsciously, enough excess
bandwidth and buffering is provisioned in the network to absorb the
traffic sourced by higher-layer protocols or cause their transmission
windows to run out, on a statistical basis, so that the network is only
overloaded for a short duration and the average expected loading is less
than 60% (usually much less).

With the advent of time-critical traffic such over-provisioning has
become far less easy to achieve. Time critical frames may find
themselves queued for annoyingly long periods of time behind temporary
bursts of file transfer traffic, particularly at network bottleneck
points, e.g. at the 100 Mb/s to 10 Mb/s transition that might occur
between the riser to the wiring closet and the final link to the user
from a desktop switch. In this case, however, if it is known (guaranteed
by application design, merely expected on the basis of statistics, or
just that this is all that the network guarantees to support) that the
time critical traffic is a small fraction of the total bandwidth, it
suffices to give it strict priority over the "normal" traffic. The worst
case delay experienced by the time critical traffic is roughly the
maximum transmission time of a maximum length non-time-critical frame -
less than a millisecond for 10 Mb/s Ethernet, and well below an end-to-
end budget based on human perception times.
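
To illustrate this bound (the constant and function below are purely
illustrative, not drawn from any standard), the worst case delay can be
computed from the link speed and maximum frame length:

```python
# Illustrative sketch: the worst-case delay added by strict priority
# queueing is roughly one maximum-length lower-priority frame time.
# The constant and function names are assumptions for illustration.

MAX_802_3_FRAME_BYTES = 1518  # maximum untagged Ethernet/802.3 frame

def max_frame_time_us(link_bps: float,
                      frame_bytes: int = MAX_802_3_FRAME_BYTES) -> float:
    """Transmission time of one maximum-length frame, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

print(max_frame_time_us(10e6))   # 10 Mb/s Ethernet: ~1214 us (~1.2 ms)
print(max_frame_time_us(100e6))  # 100 Mb/s: ~121 us
```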

When more than one "priority" service is to be offered by a network
element e.g. it supports Controlled-Load as well as Guaranteed Service,
the queuing discipline becomes more complex. In order to provide the
required isolation between the service classes, it will probably be
necessary to queue them separately. There is then an issue of how to
service the queues - a combination of admission control and more
intelligent queueing disciplines e.g. weighted fair queuing, may be
required in such cases. As with the service specifications themselves,
it is not the place of this document to specify queuing algorithms,
merely to observe that the external behaviour must meet the services'
requirements.
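
A minimal sketch of strict-priority servicing between separately queued
classes follows; the class names and service order are illustrative
only, since this document deliberately does not specify any queuing
discipline.

```python
from collections import deque

# Illustrative sketch only: strict-priority servicing between
# separately queued traffic classes (class 0 = highest priority).
# This is not a recommended or mandated queuing discipline.
class StrictPriorityScheduler:
    def __init__(self, num_classes: int):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, traffic_class: int, frame: str) -> None:
        self.queues[traffic_class].append(frame)

    def dequeue(self):
        # Serve the highest-priority non-empty queue first.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

sched = StrictPriorityScheduler(3)
sched.enqueue(2, "best-effort frame")
sched.enqueue(0, "guaranteed frame")
print(sched.dequeue())  # -> "guaranteed frame"
```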

8.3 Mapping of Services to Link Level Priority

The number of traffic classes supported and access methods of the
technology under consideration will determine how many and what services
may be supported.  Native token ring/802.5, for instance, supports eight
priority levels which may be mapped to one or more traffic classes.
Ethernet/802.3 has no support for signaling priorities within frames.
However, the IEEE 802 standards committee has recently developed a new
standard for bridges/switches related to multimedia traffic expediting
and dynamic multicast filtering [3]. A packet format for carrying a User
Priority field on all IEEE 802 media types is now defined in [4]. These
standards allow for up to eight traffic classes on all media.  The User
Priority bits carried in the frame are mapped to a particular traffic
class within a bridge/switch.  The User Priority is signaled on an end-
to-end basis, unless overridden by bridge/switch management.  The
traffic class that is used by a flow should depend on the quality of
service desired and whether the reservation is successful or not.
Therefore, a sender should use the User Priority value which maps to the
best effort traffic class until told otherwise by the BM.  The BM will,
upon successful completion of resource reservation, specify the User
Priority to be used by the sender for that session's data.  An
accompanying memo [13] addresses the issue of mapping the various
Integrated Services to appropriate traffic classes.
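
The sender behaviour described above can be sketched as follows; the
class and method names are assumptions for illustration only and do not
correspond to any defined BM protocol elements.

```python
# Illustrative sketch of sender user_priority selection driven by the
# Bandwidth Manager (BM).  Names are assumptions, not protocol elements.
BEST_EFFORT_USER_PRIORITY = 0

class SenderSession:
    def __init__(self) -> None:
        # Until the BM grants a reservation, data is sent best effort.
        self.user_priority = BEST_EFFORT_USER_PRIORITY

    def on_bm_reservation_success(self, assigned_priority: int) -> None:
        # On success, the BM specifies the user_priority the sender
        # should use for this session's data.
        self.user_priority = assigned_priority

session = SenderSession()
print(session.user_priority)          # 0: best effort before reservation
session.on_bm_reservation_success(5)
print(session.user_priority)          # 5: value specified by the BM
```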

8.4 Re-mapping of non-conformant aggregated flows

One other topic under discussion in the int-serv context is how to
handle the traffic for data flows from sources that are exceeding their
currently agreed traffic contract with the network. An approach that
shows some promise is to treat such traffic with "somewhat less than
best effort" service in order to protect traffic that is normally given
"best effort" service from having to back off (such traffic is often

Ghanwani et al.             Expires May 1998           [Page 28]

INTERNET DRAFT    Framework for Int-Serv over IEEE 802     November 1997

"adaptive" using TCP or other congestion control algorithms and it would
be unfair to penalise it due to badly behaved traffic from reserved
flows which are often set up by non-adaptive applications).

One solution here might be to assign normal best effort traffic to one
user_priority and to label excess non-conformant traffic as a "lower"
user_priority although the re-ordering problems that might arise from
doing this may make this solution undesirable, particularly if the flows
are using TCP: for this reason the controlled load service recommends
dropping excess traffic, rather than re-mapping to a lower priority.
This topic is further discussed below.

8.5 Override of incoming user_priority

In some cases, a network administrator may not trust the user_priority
values contained in packets from a source and may wish to map these into
some more suitable set of values. Alternatively, due perhaps to
equipment limitations or transition periods, values may need to be
mapped to/from different regions of a network.

Some switches may implement such a function on input that maps received
user_priority into some internal set of values (this table is known in
802.1p as the "user_priority regeneration table"). These values can then
be mapped using the output table described above onto outgoing
user_priority values: these same mappings must also be used when
applying admission control to requests that use the user_priority values
(see e.g. [14]).  More sophisticated approaches may also be envisioned
where a device polices traffic flows and adjusts their onward
user_priority based on their conformance to the admitted traffic flow
specification.

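The two mapping stages described above can be sketched as table
lookups; the table contents here are purely illustrative and would in
practice be configured by management.

```python
# Illustrative sketch of 802.1p-style user_priority handling: an input
# "user_priority regeneration table" followed by a per-output-port
# mapping.  All table values are examples only.
NUM_PRIORITIES = 8

# Input port regeneration: here an untrusted port demotes received
# values above 4 to best effort (0).
regeneration_table = {p: (p if p <= 4 else 0) for p in range(NUM_PRIORITIES)}

# Output port mapping: identity in this example.
output_table = {p: p for p in range(NUM_PRIORITIES)}

def transmitted_priority(received: int) -> int:
    internal = regeneration_table[received]
    return output_table[internal]

print(transmitted_priority(6))  # demoted to 0 by the untrusted port
print(transmitted_priority(3))  # forwarded unchanged as 3
```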

8.6 Support for Different Reservation Styles

              +-----+       +-----+       +-----+
              | S1  |       | S2  |       | S3  |
              +-----+       +-----+       +-----+
                 |             |             |
                 |             v             |
                 |          +-----+          |
                 +--------->| SW  |<---------+
                             |   |
                        +----+   +----+
                        |             |
                        v             v
                     +-----+       +-----+
                     | R1  |       | R2  |
                     +-----+       +-----+

               Figure 7 - Illustration of filter styles.

In the figure above, SW is a bridge/switch in the link layer domain. S1,
S2, S3, R1 and R2 are end stations which are members of a group
associated with the same RSVP flow.  S1, S2 and S3 are upstream end
stations.  R1 and R2 are the downstream end-stations which receive
traffic from all the senders.  RSVP allows receivers R1 and R2 to
specify reservations which can apply to: (a) one specific sender only
(fixed filter); (b) any of two or more explicitly specified senders
(shared explicit filter); and (c) any sender in the group (shared
wildcard filter).  Support for the fixed filter style is
straightforward; a separate reservation is made for the traffic from
each of the senders.  However, support for the other two filter styles
has implications regarding policing; i.e. the merged flow from the
different senders must be policed so that it conforms to the traffic
parameters specified in the filter's RSpec. This scenario is further
complicated if the services requested by R1 and R2 are different.
Therefore, in the absence of policing within bridges/switches, it may be
possible to support only fixed filter reservations at the link layer.

8.7 Supporting Receiver Heterogeneity

At layer-3, the int-serv model allows heterogeneous multicast flows
where different branches of a tree can have different types of
reservations for a given multicast destination. It also supports the
notion that trees may have some branches with reserved flows and some
using best effort (default) service. If we were to treat a layer-2
subnet as a single "network element", as defined in [8], then all of the
branches of the distribution tree that lie within the subnet could be
assumed to require the same QoS treatment and be treated as an atomic
unit as regards admission control, etc.  With this assumption, the model
and protocols already defined by int-serv and RSVP already provide
sufficient support for multicast heterogeneity. Note, however, that an
admission control request may well be rejected because just one link in
the subnet has reached its traffic limit and that this will lead to
rejection of the request for the whole subnet.

                           +-----+
                           |  S  |
                           +-----+
                              |
                              v
              +-----+      +-----+      +-----+
              | R1  |<-----| SW  |----->| R2  |
              +-----+      +-----+      +-----+

              Figure 8 - Example of receiver heterogeneity

As an example, consider Figure 8, where SW is a Layer 2 device
(bridge/switch) participating in resource reservation, S is the
upstream source end
station and R1 and R2 are downstream end station receivers.  R1 would
like to make a reservation for the flow while R2 would like to receive
the flow using best effort service.  S sends RSVP PATH messages which
are multicast to both R1 and R2.  R1 sends an RSVP RESV message to S
requesting the reservation of resources.

If the reservation is successful at Layer 2, the frames addressed to the
group will be categorized in the traffic class corresponding to the
service requested by R1.  At SW, there must be some mechanism which
forwards the packet providing service corresponding to the reserved
traffic class at the interface to R1 while using the best effort traffic
class at the interface to R2.  This may involve changing the contents of
the frame itself, or ignoring the frame priority at the interface to R2.

Another possibility for supporting heterogeneous receivers would be to
have separate groups with distinct MAC addresses, one for each class of
service.  By default, a receiver would join the "best effort" group
where the flow is classified as best effort.  If the receiver makes a
reservation successfully, it can be transferred to the group for the
class of service desired.  The dynamic multicast filtering capabilities
of bridges and switches implementing the emerging IEEE 802.1p standard
would be a very useful feature in such a scenario.  A given flow would
be transmitted only on those segments which are on the path between the
sender and the receivers of that flow.  The obvious disadvantage of such
an approach is that the sender needs to send out multiple copies of the
same packet corresponding to each class of service desired, thus
potentially duplicating the traffic on a portion of the distribution
tree.

The above approaches would provide very sub-optimal utilisation of
resources given the size and complexity of the layer-2 subnets
envisioned by this document. Therefore, it is desirable to support the
ability of layer-2 switches to apply QoS differently on different egress
branches of a tree that divides at that switch: this is discussed in the
following paragraphs.

IEEE 802.1D and 802.1p specify a basic model for multicast whereby a
switch performs multicast routing decisions based on the destination
address: this would produce a list of output ports to which the packet
should be forwarded. In its default mode, such a switch would use the
user_priority value in received packets (or a value regenerated on a
per-input-port basis in the absence of an explicit value) to enqueue the
packets at each output port. All of the classes of switch identified
above can support this operation.

If a switch is selecting per-port output queues based only on the
incoming user_priority, as described by 802.1p, it must treat all
branches of all multicast sessions within that user_priority class with
the same queuing mechanism: no heterogeneity is then possible and this
could well lead to the failure of an admission control request for the
whole multicast session due to a single link being at its maximum
allocation, as described above. Note that, in the layer-2 case as
distinct from the layer-3 case with RSVP/int-serv, the option of having
some receivers getting the session with the requested QoS and some
getting it best effort does not exist as the Class I switches are unable
to re-map the user_priority on a per-link basis: this could well become
an issue with heavy use of dynamic multicast sessions. If a switch were
to implement a separate user_priority mapping at each output port, as
described under "Class II switch" above, then some limited form of
receiver heterogeneity can be supported e.g. forwarding of traffic as
user_priority 4 on one branch where receivers have performed admission
control reservations and as user_priority 0 on one where they have not.
We assume that per-user_priority queuing without taking account of input
or output ports is the minimum standard functionality for switches in a
LAN environment (Class I switch, as defined above) but that more
functional layer-2 or even layer-3 switches (a.k.a. routers) can be used
if even more flexible forms of heterogeneity are considered necessary to
achieve more efficient resource utilisation: note that the behaviour of
layer-3 switches in this context is already well standardised by IETF.
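
The per-output-port behaviour of a Class II switch in this example can
be sketched as follows; the port numbers and table contents are
illustrative only.

```python
# Illustrative sketch: per-output-port user_priority re-mapping on a
# Class II switch, giving limited receiver heterogeneity.  Internal
# class 4 traffic is sent as user_priority 4 on a branch whose
# receivers made reservations and as 0 (best effort) elsewhere.
output_priority_map = {
    1: {4: 4},  # port 1: receivers performed admission control
    2: {4: 0},  # port 2: best-effort receivers only
}

def egress_priority(port: int, internal_class: int) -> int:
    # Fall back to the internal value if no explicit mapping exists.
    return output_priority_map[port].get(internal_class, internal_class)

print(egress_priority(1, 4))  # -> 4 on the reserved branch
print(egress_priority(2, 4))  # -> 0 on the best-effort branch
```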

9 Network Topology Scenarios


As stated earlier, this memo is concerned with specifying a framework
for supporting Integrated Services in LAN technologies such as
Ethernet/IEEE 802.3, token ring/IEEE 802.5 and FDDI. The extent to which
service guarantees can be provided by a network depends to a large degree
on the ability to provide the key functions of flow identification and
scheduling in addition to admission control and policing.  This section
discusses some of the capabilities of these LAN technologies and
provides a taxonomy of possible topologies emphasizing the capabilities
of each with regard to supporting the above functions.  For the
technologies considered here, the basic topology of a LAN may be shared,
switched half duplex or switched full duplex.  In the shared topology,
multiple senders share a single segment.  Contention for media access is
resolved using protocols such as CSMA/CD in Ethernet and token passing
in token ring and FDDI. Switched half duplex is essentially a shared
topology with the restriction that there are only two transmitters
contending for resources on any segment.  Finally, in a switched full
duplex topology, a full bandwidth path is available to the transmitter
at each end of the link at all times.  Therefore, in this topology,
there is no need for any access control mechanism such as CSMA/CD or
token passing as there is no contention between the transmitters -
obviously, this topology provides the best QoS capabilities.  Another
important element in the discussion of topologies is the presence or
absence of support for multiple traffic classes: these were discussed
earlier in section 4.1.  Depending on the basic topology used and the
ability to support traffic classes, we identify six scenarios as
follows:
   1.      Shared topology without traffic classes
   2.      Shared topology with traffic classes
   3.      Switched half duplex topology without traffic classes
   4.      Switched half duplex topology with traffic classes
   5.      Switched full duplex topology without traffic classes
   6.      Switched full duplex topology with traffic classes

There is also the possibility of hybrid topologies where two or more of
the above coexist.  For instance, it is possible that within a single
subnet, there are some switches which support traffic classes and some
which do not.  If the flow in question traverses both kinds of switches
in the network, the least common denominator will prevail.  In other
words, as far as that flow is concerned, the network is of the type
corresponding to the least capable topology that is traversed.  In the
following sections, we present these scenarios in further detail for
some of the different IEEE 802 network types with discussion of their
abilities to support the Integrated Service classes.

9.1 Full-duplex switched networks

We have up to now ignored the MAC access protocol. On a full-duplex
switched LAN (of either Ethernet or Token-Ring types - the MAC algorithm
is, by definition, unimportant) this can be factored in to the
characterisation parameters advertised by the device since the access
latency is well controlled (jitter = one largest packet time). Some
example characteristics (approximate):

        Type        Speed             Max Pkt   Max Access
                                       Length    Latency

        Ethernet         10Mbps         1.2ms     1.2ms
                        100Mbps         120us     120us
                          1Gbps          12us      12us
        Token-Ring        4Mbps           9ms       9ms
                         16Mbps           9ms       9ms
        FDDI            100Mbps         360us     8.4ms
        Demand-Priority 100Mbps         120us     253us

          Table 1 - Full-duplex switched media access latency

These delays should also be considered in the context of speed-of-
light delays of e.g. ~400ns for typical 100m UTP links and ~7us for
typical 2km multimode fibre links.

Therefore we see Full-Duplex switched network topologies as offering
good QoS capabilities for both Controlled Load and Guaranteed Service
when supported by suitable queueing strategies in the switch nodes.

9.2 Shared-media Ethernet networks

We have not mentioned the difficulty of dealing with allocation on a
single shared CSMA/CD segment: as soon as any CSMA/CD algorithm is
introduced then the ability to provide any form of Guaranteed Service is
seriously compromised in the absence of any tight coupling between the
multiple senders on the link. There are a number of reasons for not
offering a better solution for this issue.

Firstly, we do not believe this is a truly solvable problem: it would
seem to require a new MAC protocol. There have been proposals for
enhancements to the MAC layer protocols e.g.  BLAM and enhanced flow-
control in IEEE 802.3; IEEE 802.1 has examined research showing
disappointing simulation results for performance guarantees on shared
CSMA/CD Ethernet without MAC enhancements. However, any solution
involving a new "software MAC" running above the traditional 802.3 MAC
or other proprietary MAC protocols is clearly outside the scope of the
work of the ISSLL WG and this document. Secondly, we are not convinced
that it is really an interesting problem. While not everyone in the
world is buying desktop switches today and there will be end stations
living on repeated segments for some time to come, the number of
switches is going up and the number of stations on repeated segments is
going down. This trend is proceeding to the point that we may be happy
with a solution which assumes that any network conversation requiring
resource reservations will take place through at least one switch (be it
layer-2 or layer-3). Put another way, the easiest QoS upgrade to a
layer-2 network is to install segment switching: only when this has been
done is it worthwhile to investigate more complex solutions involving
admission control.

Thirdly, in the core of the network (as opposed to at the edges), there
does not seem to be wide deployment of repeated segments as opposed to
switched solutions. There may be special circumstances in the future
(e.g. Gigabit buffered repeaters) but these have differing
characteristics to existing CSMA/CD repeaters anyway.

        Type             Speed        Max Pkt   Max Access
                                       Length    Latency

        Ethernet        10Mbps         1.2ms  unbounded
                       100Mbps         120us  unbounded
                         1Gbps          12us  unbounded

             Table 2 - Shared Ethernet media access latency

9.3 Half-duplex switched Ethernet networks

Many of the same arguments for sub-optimal support of Guaranteed Service
apply to half-duplex switched Ethernet as to shared media: in essence,
this topology is a medium that *is* shared between at least two senders
contending for each packet transmission opportunity. Unless these are
tightly coupled and cooperative then there is always the chance that the
best-effort traffic of one will interfere with the important traffic of
the other. Such coupling would seem to need some form of modifications
to the MAC protocol (see above).

Notwithstanding the above, half-duplex switched topologies do seem to
offer the chance to provide Controlled Load service: with the knowledge
that there are only a small limited number (e.g. two) of potential
senders that are both using prioritisation for their CL traffic (with
admission control for those CL flows based on the knowledge of the
number of potential senders) over best effort, the media access
characteristics, whilst not deterministic in the true mathematical
sense, are somewhat predictable. This is probably a close enough
approximation to CL to be useful.

        Type        Speed             Max Pkt   Max Access
                                       Length    Latency

        Ethernet        10Mbps         1.2ms  unbounded
                       100Mbps         120us  unbounded
                         1Gbps          12us  unbounded

      Table 3 - Half-duplex switched Ethernet media access latency

9.4 Half-duplex and shared Token Ring networks

In a shared Token Ring network, the network access time for high
priority traffic at any station is bounded and is given by (N+1)*THTmax,
where N is the number of stations sending high priority traffic and
THTmax is the maximum token holding time [14].  This assumes that
network adapters have priority queues so that reservation of the token
is done for traffic with the highest priority currently queued in the
adapter.  It is easy to see that access times can be improved by
reducing N or THTmax.  The recommended default for THTmax is 10 ms [6].
N is an integer from 2 to 256 for a shared ring and 2 for a switched
half duplex topology. A similar analysis applies for FDDI. Using default
values gives:

        Type        Speed               Max Pkt   Max Access
                                         Length    Latency

        Token-Ring  4/16Mbps shared         9ms    2570ms
                    4/16Mbps switched       9ms      30ms
        FDDI        100Mbps               360us       8ms

    Table 4 - Half-duplex and shared Token-Ring media access latency

Given that access time is bounded, it is possible to provide an upper
bound for end-to-end delays as required by Guaranteed Service assuming
that traffic of this class uses the highest priority allowable for user
traffic.  The actual number of stations that send traffic mapped into
the same traffic class as GS may vary over time but, from an admission
control standpoint, this value is needed a priori.  The admission
control entity must therefore use a fixed value for N, which may be the
total number of stations on the ring or some lower value if it is
desired to keep the offered delay guarantees smaller. If the value of N
used is lower than the total number of stations on the ring, admission
control must ensure that the number of stations sending high priority
traffic never exceeds this number. This approach allows admission
control to estimate worst case access delays assuming that all of the N
stations are sending high priority data even though, in most cases, this
will mean that delays are significantly overestimated.
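
The bound above can be computed directly; the following sketch (the
function name and defaults are illustrative) reproduces the token ring
figures of Table 4 using the recommended default THTmax of 10 ms:

```python
# Illustrative sketch of the (N + 1) * THTmax access latency bound for
# high priority traffic on token ring; names and defaults are
# assumptions, with THTmax defaulting to the recommended 10 ms.
def max_access_latency_ms(n_stations: int,
                          tht_max_ms: float = 10.0) -> float:
    """Worst-case high-priority token access time, (N + 1) * THTmax."""
    return (n_stations + 1) * tht_max_ms

print(max_access_latency_ms(256))  # shared ring, N = 256 -> 2570.0 ms
print(max_access_latency_ms(2))    # switched half duplex, N = 2 -> 30.0 ms
```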

Assuming that Controlled Load flows use a traffic class lower than that
used by GS, no upper-bound on access latency can be provided for CL
flows.  However, CL flows will receive better service than best effort
flows.

Note that, on many existing shared token rings, bridges will transmit
frames using an Access Priority (see section 4.3) value of 4
irrespective of the user_priority carried in the frame control field of
the frame.
Therefore, existing bridges would need to be reconfigured or modified
before the above access time bounds can actually be used.

9.5 Half-duplex and shared Demand-Priority networks

In 802.12 networks, communication between end-nodes and hubs and between
the hubs themselves is based on the exchange of link control signals.
These signals are used to control the shared medium access. If a hub,
for example, receives a high-priority request while another hub is in
the process of serving normal-priority requests, then the service of the
latter hub can effectively be pre-empted in order to serve the high-
priority request first. After the network has processed all high-
priority requests, it resumes the normal-priority service at the point
in the network at which it was interrupted.

The time needed to preempt normal-priority network service (the high-
priority network access time) is bounded: the bound depends on the
physical layer and on the topology of the shared network. The physical
layer has a significant impact when operating in half-duplex mode, as
used e.g. across unshielded twisted-pair cabling (UTP) links, because
link control signals cannot be exchanged while a packet is transmitted
over the link. Therefore the network topology has to be considered
since, in larger shared networks, the link control signals must
potentially traverse several links (and hubs) before they can reach the
hub which possesses the network control. This may delay the preemption
of the normal priority service and hence increase the upper bound that
may be guaranteed.

Upper bounds on the high-priority access time are given below for a UTP
physical layer and a cable length of 100 m between all end-nodes and
hubs using a maximum propagation delay of 570ns as defined in [15].
These values consider the worst case signaling overhead and assume the
transmission of maximum-sized normal-priority data packets while the
normal-priority service is being pre-empted.

        Type            Speed                  Max Pkt   Max Access
                                                Length    Latency

        Demand Priority 100Mbps, 802.3pkt, UTP   120us     253us
                                 802.5pkt, UTP   360us     733us

   Table 5 - Half-duplex switched Demand-Priority UTP access latency


Shared 802.12 topologies can be classified using the hub cascading level
"N". The simplest topology is the single hub network (N = 1). For a UTP
physical layer, a maximum cascading level of N = 5 is supported by the
standard. Large shared networks with many hundreds of nodes can,
however, already be built with a level 2 topology. The bandwidth
manager could be
informed about the actual cascading level by using network management
mechanisms and use this information in its admission control algorithms.

        Type            Speed             Max Pkt  Max Access Topology
                                           Length   Latency

        Demand Priority 100Mbps, 802.3pkt  120us     262us      N=1
                                           120us     554us      N=2
                                           120us     878us      N=3
                                           120us     1.24ms     N=4
                                           120us     1.63ms     N=5

        Demand Priority 100Mbps, 802.5pkt  360us     722us      N=1
                                           360us     1.41ms     N=2
                                           360us     2.32ms     N=3
                                           360us     3.16ms     N=4
                                           360us     4.03ms     N=5

          Table 6 - Shared Demand-Priority UTP access latency

In contrast to UTP, the fibre-optic physical layer operates in dual
simplex mode.  Upper bounds for the high-priority access time are given
below for 2 km multimode fibre links with a propagation delay of 10 us.

        Type            Speed                  Max Pkt   Max Access
                                                Length    Latency

        Demand Priority 100Mbps,802.3pkt,Fibre   120us     139us
                                802.5pkt,Fibre   360us     379us

  Table 7 - Half-duplex switched Demand-Priority Fibre access latency

For shared-media with distances of 2km between all end-nodes and hubs,
the 802.12 standard allows a maximum cascading level of 2. Higher levels
of cascaded topologies are supported but require a reduction of the
distances [15].

        Type            Speed             Max Pkt  Max Access Topology
                                           Length   Latency

        Demand Priority 100Mbps,802.3pkt    120us     160us     N=1
                                            120us     202us     N=2

        Demand Priority 100Mbps,802.5pkt    360us     400us     N=1
                                            360us     682us     N=2

         Table 8 - Shared Demand-Priority Fibre access latency

The bounded access delay and deterministic network access allow the
support of service commitments required for Guaranteed Service and
Controlled Load, even on shared-media topologies. The support of just
two priority levels in 802.12, however, limits the number of services
that can simultaneously be implemented across the network.

10 Justification

An obvious comment is that this whole model is too complex: it
duplicates much of what RSVP already does, so why do we think we can do
better by reinventing the solution to this problem at layer-2?

The key observation is that a number of simple layer-2 scenarios cover a
considerable proportion of the real QoS problems that will occur, and a
solution that addresses nearly all of these problems at significantly
lower cost is beneficial. Full RSVP/int-serv with per-flow queueing in
strategically positioned high-function switches or routers may be needed
to solve every case completely, but devices implementing the
architecture described in this document allow a significantly simpler
network.

11 Summary

This document has specified a framework for providing Integrated
Services over shared and switched LAN technologies.  The ability to
provide QoS guarantees necessitates some form of admission control and
resource management.  The requirements and goals of a resource
management scheme for subnets have been identified and discussed. We
refer to the entire resource management scheme as a Bandwidth Manager.
Architectural considerations were discussed and examples were provided
to illustrate possible implementations of a Bandwidth Manager. Some of
the issues involved in mapping the services from higher layers to the
link layer have also been discussed. Accompanying memos from the ISSLL
working group address service mapping issues [13] and provide a protocol
specification for the Bandwidth Manager protocol [14] based on the
requirements and goals discussed in this document.

12 References

Ghanwani et al.             Expires May 1998           [Page 39]

INTERNET DRAFT    Framework for Int-Serv over IEEE 802     November 1997

[1] "IEEE Standards for Local and Metropolitan Area Networks: Overview
        and Architecture", ANSI/IEEE Std 802.1.

[2] ISO/IEC 10038, ANSI/IEEE Std 802.1D-1993 "MAC Bridges"

[3] ISO/IEC 15802-3 "Information technology - Telecommunications and
        information exchange between systems - Local and metropolitan
        networks - Common specifications - Part 3: Media Access Control
        Bridges" (current draft available as IEEE P802.1p/D8)

[4] "IEEE Standards for Local and Metropolitan Area Networks: Draft
        Standard for Virtual Bridged Local Area Networks", P802.1Q/D7,
        October 1997.

[5] R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin, "Resource
        Reservation Protocol (RSVP) - Version 1 Functional
        Specification", RFC 2205, September 1997.

[6] J. Wroclawski, "Specification of the Controlled Load Network Element
        Service", RFC 2211, September 1997.

[7] S. Shenker, C. Partridge and R. Guerin, "Specification of Guaranteed
        Quality of Service", RFC 2212, September 1997.

[8] R. Braden, D. Clark and S. Shenker, "Integrated Services in the
        Architecture: An Overview" RFC 1633, June 1994.

[9] J. Wroclawski, "The Use of RSVP with IETF Integrated Services",
        RFC 2210, September 1997.

[10] S. Shenker and J. Wroclawski, "Network Element Service
        Specification Template", RFC 2216, September 1997.

[11] S. Shenker and J. Wroclawski, "General Characterization Parameters
        for Integrated Service Network Elements", RFC 2215, September
        1997.

[12] L. Delgrossi and L. Berger (Editors), "Internet Stream Protocol
        Version 2 (ST2) Protocol Specification - Version ST2+", RFC
        1819, August 1995.

[13] M. Seaman, A. Smith and E. Crawley, "Integrated Service Mappings
        on IEEE 802 Networks", Internet Draft, November 1997.

[14] D. Hoffman et al., "SBM (Subnet Bandwidth Manager): A Proposal for
        Admission Control over Ethernet", Internet Draft, November 1997.

[15] "Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
        Method and Physical Layer Specifications", ANSI/IEEE Std
        802.3-1985.

[16] "Token-Ring Access Method and Physical Layer Specifications",
        ANSI/IEEE Std 802.5-1995.

[17] "A Standard for the Transmission of IP Datagrams over IEEE 802
        Networks", RFC 1042, February 1988.

[18] C. Bisdikian, B. V. Patel, F. Schaffa, and M. Willebeek-LeMair,
        "The Use of Priorities on Token-Ring Networks for Multimedia
        Traffic", IEEE Network, Nov/Dec 1995.

[19] "Demand Priority Access Method, Physical Layer and Repeater
        Specification for 100Mbit/s", IEEE Std. 802.12-1995.

[20] "Fiber Distributed Data Interface MAC",
        ANSI Std. X3.139-1987.

13 Security Considerations

Implementation of the model described in this memo creates no known new
avenues for malicious attack on the network infrastructure, although
readers are referred to Section 2.8 of the RSVP specification [5] for a
discussion of the impact of the use of admission control signaling
protocols on network security.

14 Acknowledgements

Much of the work presented in this document has benefited greatly from
discussions held at the meetings of the Integrated Services over
Specific Link Layers (ISSLL) working group.  In particular, we would
like to thank Eric Crawley, Don Hoffman and Raj Yavatkar.

Authors' Addresses

        Anoop Ghanwani
        IBM Corporation
        P.O.Box 12195
        Research Triangle Park, NC 27709
        +1 (919) 254-0260

        J. Wayne Pace
        IBM Corporation
        P. O. Box 12195
        Research Triangle Park, NC 27709


        +1 (919) 254-4930

        Vijay Srinivasan
        IBM Corporation
        P. O. Box 12195
        Research Triangle Park, NC 27709
        +1 (919) 254-2730

        Andrew Smith
        Extreme Networks
        10460 Bandley Drive
        Cupertino CA 95014
        +1 (408) 863 2821

        Mick Seaman
        3Com Corp.
        5400 Bayfront Plaza
        Santa Clara CA 95052-8145
        +1 (408) 764 5000
