INTERNET-DRAFT S. Carter
T. Hodgkinson
Expires April, 1999 A. O'Neill,
D. Mortimore.
BT Laboratories
November 1998
A Bounded-delay service for the Internet
<draft-carter-bounded-delay-00.txt>
Status of this Memo
This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
To view the entire list of current Internet-Drafts, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), ftp.nordu.net (Northern
Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific
Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast).
Abstract
This document outlines a proposed new service class for the Internet
which is intended to provide, for some small proportion of the total
traffic on each link, a quantifiable low-delay service having
essentially no packet-loss. We call this proposed service Bounded-
delay (BD). The approach is consistent with the architecture and
philosophy of diff-serv, and could potentially make use of the
proposed EF PHB. BD is similar in many respects to Premium service,
which has been described previously [Two-bit], except that BD is
intended specifically to allow control of end-to-end delay in a
quantifiable manner.
1 Introduction
This draft proposes a new service, which we call bounded-delay (BD).
Bounded-delay service conforms to the architectural principles of
diff-serv [Arch], and allows each micro-flow carried to be granted a
combination of guaranteed bandwidth, essentially zero packet-loss,
S. Carter et al. [Page 1]

Internet Draft A Bounded-delay service for the Internet November 1998
and a quantifiable, low end-to-end delay. BD is similar in many
respects to Premium service, the only significant difference being
that its delay performance is both quantified and more tightly
controlled. We believe this to be an important requirement.
Currently we envisage two possible applications of BD:
a)To provide diff-serv domains, or confederations of such domains,
with a quantifiable "edge-to-edge" delay. This can be used, for
example, to support end-to-end services between leaf-domains that
utilise int-serv and RSVP.
b)As an end-to-end service, with forwarding at each node based on
the diff-serv model, and with dynamic access controlled by an
entity such as a Bandwidth Broker.
We envisage that BD should offer a "pragmatic" bound, which takes
some account of traffic statistics, network topology, and link-
speeds. This is in contrast to a strict "worst-case" bound, such as
used by int-serv "Guaranteed" service, which we believe would be
unreasonably pessimistic in this context. Too pessimistic a
formulation of delay bound generally translates into an unnecessarily
low limit on the volume of traffic that can be granted a usefully low
delay-bound [Efficient].
2 Overview of bounded-delay service
This section describes the essential features of Bounded-delay
service, most of which are common also to Premium service. The two
variants of BD service, i.e. the end-to-end and the edge-to-edge
forms, are described separately. In each case, either static (network
management) or dynamic (signalling) set-up is possible.
2.1 End-to-end BD service
The essential features are as follows:
(i)Hosts must constrain their traffic to a peak-rate agreed
(i.e. reserved) with the network. For applications which are
inherently bursty, the host has the choice of reserving a
peak-rate equal to the application's peak-rate, or reserving
some lesser rate, shaping to this, and incurring host queuing-
delay as a result. Several possibilities exist for shaping,
including a token-bucket shaper where the bucket-depth is made
equal to the maximum BD packet size. Hosts must also mark the
DSCP (diff-serv code-point) of BD packets with an appropriate
PHB (per-hop behaviour).
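One of the shaping options named above, a token-bucket shaper whose
bucket-depth equals the maximum BD packet size, can be sketched as
follows. This is a minimal illustration only; the class and parameter
names are ours and not part of any proposed API:

```python
# Token-bucket shaper for peak-rate conformance: the bucket depth
# equals the maximum BD packet size, so at most one maximum-size
# packet can leave back-to-back, and sustained throughput is limited
# to the reserved peak-rate.
class TokenBucketShaper:
    def __init__(self, peak_rate_bps, max_packet_bytes):
        self.rate = peak_rate_bps / 8.0   # token fill rate, bytes/s
        self.depth = max_packet_bytes     # bucket depth, bytes
        self.tokens = max_packet_bytes    # start with a full bucket
        self.last = 0.0                   # time of last update, seconds

    def earliest_send(self, now, packet_bytes):
        """Return the earliest time this packet may leave the host."""
        # Refill tokens for the elapsed interval, capped at the depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return now                    # conformant: send immediately
        # Otherwise the packet waits until enough tokens accumulate;
        # this wait is the host queuing-delay mentioned above.
        wait = (packet_bytes - self.tokens) / self.rate
        self.tokens = 0.0
        self.last = now + wait
        return now + wait
```

With a 256 kb/s reservation and 200-byte packets, a second back-to-back
packet is held for one packet-time (6.25 ms) at the reserved rate.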
(ii)Traffic admission control operates to ensure the peak-rate
sum of individual micro-flows never exceeds a prescribed level
on any link in the end-to-end path. Additionally, at certain
nodes (essentially those with lower-speed output links) further
admission constraints are applied, and this is the important
distinction between BD and Premium service. Typically admission
control may be applied at micro-flow granularity in leaf-
domains and at coarser granularity elsewhere. The control of
resources for aggregate traffic is not specifically addressed
in this draft, but we believe there are no major requirements
specific to BD, and mechanisms similar to those already under
discussion, such as Bandwidth Brokers [Two-bit] and RSVP
aggregation [Aggregate], are potentially suitable.
(iii)Policing is used to enforce admission control, and would
usually be applied at the first-hop router. Policing (and in
some cases shaping) of BD aggregates may be applied
additionally at domain boundaries.
(iv)Routers support a PHB which places BD traffic into a FIFO
queue which is treated by some kind of preferential forwarding,
such as Priority Queuing. Requirements for this are discussed
later.
2.2 Edge-to-edge BD service
An obvious application example is where a diff-serv domain
operating BD service is used to support interconnection of int-
serv/RSVP leaf-domains, i.e. BD service operates between two diff-
serv Boundary routers[RSVP, Boundary]. In this case the Boundary
router rather than the host performs shaping, and possibly
marking. For int-serv Guaranteed service, such shaping would
typically be carried out at the reserved rate, R, contained in the
receiver's RSpec [Guaranteed]. In addition, border routers will
be required to update the ADSPEC values within incoming RSVP PATH
messages before forwarding these to the int-serv region. This is
to support the feature of Guaranteed service whereby the "C" and
"D" terms in the ADSPEC provide hosts with accumulated delay
parameters along the entire end-to-end path. The delay for an
entire BD region can be accounted for by a single contribution to
the accumulated "D" value.
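The boundary-router ADSPEC update just described can be sketched as
follows. The field names are illustrative only, not the RSVP wire
encoding:

```python
# Minimal sketch (ours) of the ADSPEC update performed at a boundary
# router: Guaranteed service accumulates "C" and "D" error terms along
# the path, and a whole BD region contributes a single addition to the
# accumulated "D" value.
def update_adspec(adspec, bd_region_delay_us):
    """Return a copy of the ADSPEC with the BD region's delay ceiling
    added to the rate-independent delay term 'D' (microseconds)."""
    updated = dict(adspec)
    updated['D'] += bd_region_delay_us
    return updated
```

For example, a BD region advertising the 256 us ceiling of the section
3.4 type (iii) example would add 256 to the incoming "D" value.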
2.3 PHB needed to support BD service, and relationship to EF
At a high-level, the only difference between Premium and BD is
that BD depends on the application of more stringent admission
rules at certain nodes. These differences occur at the service
level, however, and have no impact on the fundamental forwarding
behaviour required of routers. Therefore, the Expedited
Forwarding [EF] PHB has all the right characteristics for
supporting BD, although, as we later point out, the choice of
scheduling implementation may affect delay bound formulation on
lower-speed links, leading to some preferred choices. Thus,
(perhaps unrealistically) if BD were the only service that used EF
forwarding, no new PHB definition would be required. However, if
it were required to support Premium and BD simultaneously on the
same network, then separate PHBs are likely to be required, though
it is quite possible that the distinction need only be maintained
in the "edges" of the network, rather than the backbone.
We make no proposal for a new PHB at this stage, but will do so if
sufficient interest is shown. Such a proposal would not merely
consist of a suggestion for another EF-like PHB with a new code-
point but would also need discussion of its relationship to EF for
the case when both were supported simultaneously.
3 Formulation of Queuing Delay bounds
This section describes an approach to controlling and quantifying
end-to-end delay. The principle is to formulate individual per-node
delay bounds which lead directly to the admission control rules that
need to be applied if a target delay is not to be exceeded at a
particular node. Where flows are sufficiently well-aggregated,
statistical formulations for per-node delay are used.
Such bounds are not absolute, but take the form of specifying a
delay-bound which may only be exceeded with some specified, low
probability. However, a particular path may often include some nodes
for which a worst-case formulation is necessary. The overall end-to-
end delay is then determined by combining the per-node delays along
the path. Various combination rules are possible, ranging from simple
addition to more complex statistical methods.
3.1 A simple single-node "worst-case" bound
This section describes how a worst-case bound may be set, together
with corresponding admission control rules. For the purposes of
explanation we need to distinguish between only two classes of
traffic, BD and non-BD, and for convenience we refer to the
latter as "best-effort" (BE), though it should be understood that
in practice this might include various other categories of
traffic, supported using a range of PHBs. The discussion is based
on non pre-emptive Priority Queuing as the scheduling discipline,
though the results can be readily adapted for other disciplines.
We also assume that the implementation is "ideal", i.e. no account
is taken of implementation-specific delays, or of link-layer
bandwidth overheads.
Consider a router with multiple input ports and a particular
output port, P, together with the following traffic conditions:
a)There are N active BD flows destined for port P, and each
arrives on a separate input port.
b)Each BD flow is peak-rate shaped.
c)The sum of peak-rates, pk_sum, of all flows destined for P is
less than R, the link-speed of P.
d)BE traffic destined for P may be present on any input port with
no special constraints.
e)The maximum packet sizes of BD and BE traffic are L_BD and L_BE
respectively.
The worst-case delay, T_WC for BD packets at node P is simply:
T_WC = (N.L_BD + L_BE)/R. (Eq. 1)
This value is simply based on the time to service the last packet
in a maximum-size backlog of BD packets, and this worst-case event
is caused by the arrival of a maximum-size BE packet while the BD
buffer is empty, followed by the immediate and simultaneous
arrival of packets from every BD session. Condition (c) is
sufficient to ensure that this backlog cannot grow further because
the service-rate is faster than the maximum possible rate of new
arrivals.
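As an illustrative check (ours, not part of the proposal), Eq. 1 can
be evaluated directly, with packet sizes in bytes and the link-speed
in bits per second:

```python
def worst_case_delay(n_flows, l_bd_bytes, l_be_bytes, link_rate_bps):
    """Worst-case BD queuing delay at one node (Eq. 1), in seconds:
    T_WC = (N.L_BD + L_BE)/R.  The backlog is one maximum-size BE
    packet that has just entered service, plus N maximum-size BD
    packets arriving simultaneously, one per input port."""
    return (n_flows * l_bd_bytes + l_be_bytes) * 8 / link_rate_bps
```

With the type (i) values of section 3.4 (N = 3, L_BD = 200 bytes,
L_BE = 1000 bytes, R = 512 kb/s) this yields the 25 ms bound used
there.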
This simple example is useful because it represents an upper
delay-bound that can never be exceeded, but the more usual case is
the one where instead of all N flows arriving on N separate ports,
they are concentrated onto fewer ports (say, M). In this case,
although the number of flows carried is the same, the maximum-size
backlog is reduced, because the most concentrated possible "clump"
of packets destined for the same output port is now more
dispersed in time. The arrival of a worst-case "clump" is
illustrated in figure 1, which shows a router with 4 input ports
and 4 output ports. Packets corresponding to 4 flows per port,
giving a total of 16 flows, are shown in their arrival order at
each input port, where "1"s indicate packets intended for port P,
and "0"s represent packets destined for other ports.
>>>>>>>>>>
0 0 0 0 0 1 1 1 1 0 0------| |------ P
| |
0 0 0 0 0 1 1 1 1 0 0------| |------
| |
0 0 0 0 0 1 1 1 1 0 0------| |------
| |
0 0 0 0 0 1 1 1 1 0 0------| |------
| |
>>>>>>>>>>
Figure 1: Illustrating worst-case packet-clumping at a router, for
packets destined for port P.
Equations can be derived to cover such more general cases. For the
particular case where N = 16, M = 4 (and with equal link-speeds on
input and output):
T_WC = (13.L_BD + L_BE)/R. (Eq. 2)
which is slightly smaller than the delay-bound given by Eq. 1,
i.e. (16.L_BD + L_BE)/R. A still smaller bound can be formulated
if the flows are spread unequally across the input ports because
the worst-case clump is further dispersed.
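One generalisation consistent with both stated cases is the following
sketch; it is our reconstruction, assuming equal input and output
link-speeds, N a multiple of M, and delay measured from the arrival of
the packet's first bit:

```python
def worst_case_delay_m_ports(n_flows, m_ports, l_bd, l_be, rate_bps):
    """Worst-case delay with N flows spread equally over M input
    ports.  Each port serialises its own clump of N/M packets, so by
    the time the last packet's first bit arrives the queue has
    already drained by N/M - 1 packets, leaving a backlog of
    N - N/M + 1 packets.  M = N reduces to Eq. 1, and N = 16, M = 4
    gives the 13-packet backlog of Eq. 2."""
    assert n_flows % m_ports == 0, "assumes flows divide evenly"
    backlog = n_flows - n_flows // m_ports + 1
    return (backlog * l_bd + l_be) * 8 / rate_bps
```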
3.2 "Worst-case" admission control rules
A worst-case bound can be used to provide admission control rules.
Either Eq. (1) or Eq. (2) can be re-arranged to give the number of
flows that can be admitted in order to meet a specific value of
T_WC corresponding to a pre-determined delay-bound limit. In
addition to meeting this condition it is also necessary to ensure
that the sum of the peak-bandwidths of all flows is less than the
link-speed, R. A node with a "worst-case" delay bound therefore
requires two conditions to be satisfied, i.e. both the total
number of sessions and the peak-rate sum must be controlled.
In practice it is likely that peak-bandwidth would usually be
limited to substantially below the line-rate, both in order to
guarantee bandwidth for other services and to ensure low-delay for
bounded-delay traffic at subsequent nodes. We use the term
BD_load_factor to describe the relative level of BD traffic. When
BD traffic is scheduled using priority queuing, BD_load_factor is
simply equal to the peak-rate sum of BD traffic divided by R, the
link-speed.
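The dual test of section 3.2 can be sketched as follows (an
illustration under our naming, not a normative algorithm):

```python
def admit_worst_case(n_current, pk_sum_bps, new_peak_bps, l_bd, l_be,
                     rate_bps, delay_target_s, bd_load_factor):
    """Dual admission test for a "worst-case" node: the new flow is
    accepted only if (a) the peak-rate sum stays within
    BD_load_factor * R, and (b) the worst-case delay of Eq. 1,
    evaluated with one more flow, stays within the target."""
    rate_ok = pk_sum_bps + new_peak_bps <= bd_load_factor * rate_bps
    t_wc = ((n_current + 1) * l_bd + l_be) * 8 / rate_bps
    return rate_ok and t_wc <= delay_target_s
```

With the type (i) example values of section 3.4 (512 kb/s link,
BD_load_factor = 0.5, 25 ms target), a fourth flow fails the delay
test even though it passes the peak-rate sum test.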
3.3 Applicability of worst-case and statistical queuing delay bounds
We believe that in addition to the worst-case delay bound just
described, at least two other formulations of queuing delay bound
may be appropriate, depending on the conditions at a node. This
leads to three different regimes, with admission criteria as
follows:
i)On low-speed links, where an acceptable delay bound
corresponds to a relatively small number of "packet-times", a
worst-case bound is appropriate, and admission control is based
on a combination of a "peak-rate sum test" and a "worst-case
delay test". A packet-time is the time taken for the BD queue
to serve a maximum-size packet. Where Priority queuing is used,
this is simply L_BD/R.
ii)For somewhat faster link-speeds, admission control may be
based on a combination of peak-rate sum test and a statistical
delay bound test. The latter is based on admitting flows until
the probability of exceeding a specified delay reaches a
certain (small) value.
iii)For still higher link speeds, where the packet-time is
sufficiently small compared with the required delay bound,
statistical considerations may lead to the conclusion that only
the "peak-rate sum test" need be applied. On high-speed
backbone links delay bounds may well be achieved which are
significantly less than speed-of-light propagation delays. This
also represents the regime where "Premium" service achieves
very low queuing delays, and broadly represents the conditions
under which BD and Premium traffic may share the same PHB.
3.4 Analysis and simulation results
By way of justification for the above, these conclusions were
reached following both analysis and simulation of queuing
statistics at a node. Results were obtained over a large range of
numbers of simultaneous incoming flows, where
each flow had been peak-rate shaped, and also had the same maximum
packet-size and nominal rate. For each different analysis or
simulation, queuing statistics were obtained taking account of
variations in the relative timing alignments between flows, and
for various different relative levels of BD traffic. Where
tractable, results were obtained analytically, but numerical
statistical modelling techniques were also used. Some graphical
results are included in the pdf version of this draft, which can
be found at ftp://ftp.labs.bt.com/Internet-Research/BDelay.pdf;
figures 2-4 below refer to these.
Figure 2 shows results where only BD traffic is present at the
node. This traffic is placed in a FIFO queue and served at the
line-rate. For these results, rates are set so that the peak-rate
sum is always exactly 50% of the line-rate. The horizontal axis
represents a target delay bound (either statistical or worst-case)
and is normalised in units of packet-time (i.e. L_BD/R), while the
vertical axis represents the maximum number of sessions that may
be admitted as a function of the chosen delay bound. The dashed
line represents the worst-case admission criterion, derived from
eq (1), while the solid line represents a statistical admission
criterion, based on a 1E-6 probability of any particular packet
exceeding the chosen bound. The behaviour of the solid line
illustrates the three regimes referred to above: In the first
regime, the "statistical" curve initially tracks the "worst-case"
curve. In the second regime, it becomes steeper: the number of
sessions that can be admitted increases more sharply with the
delay bound. This is the regime where a dual admission test is
still required, but some statistical advantage over the worst-case
admission begins to emerge. Finally, the curve tends to a vertical
line, and this is the last regime where there is no limit on the
number of flows that can be admitted, so admission control may now
be based on peak-bandwidth alone. The three regimes are
illustrated by shading.
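The kind of statistical modelling described above can be illustrated
with a toy Monte Carlo version of the model (our simplification:
identical packet-sizes and rates, uniformly random relative phase, a
single run; this is not the analysis used to produce the figures):

```python
import random

def max_queue_delay(n_flows, load_factor, packets_per_flow=50, seed=1):
    """Toy version of the section 3.4 model: n_flows identical
    peak-rate-shaped flows with random phase feed one FIFO served at
    the line-rate.  Time is in BD packet-times (L_BD/R); an aggregate
    load of load_factor means each flow sends one packet every
    n_flows/load_factor packet-times.  Returns the largest queuing
    delay observed, in packet-times."""
    random.seed(seed)
    period = n_flows / load_factor
    arrivals = sorted(phase + k * period
                      for phase in [random.uniform(0.0, period)
                                    for _ in range(n_flows)]
                      for k in range(packets_per_flow))
    worst = busy_until = 0.0
    for t in arrivals:
        start = max(t, busy_until)       # wait if the server is busy
        busy_until = start + 1.0         # one packet-time of service
        worst = max(worst, busy_until - t)
    return worst
```

For 16 flows at a BD_load_factor of 0.5, the observed maximum delay
always lies between one packet-time (a packet's own service) and the
16 packet-times of the worst-case clump.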
Figure 3 shows similar results, but includes additional curves.
These represent peak-rate sums of BD traffic equal to exactly
20%, 50% and 80% of the link-rate (i.e. BD_load_factor = 0.2, 0.5,
and 0.8), and show how the progression between the three
connection-admission regimes depends on this factor. In each case
the worst-case admission criterion stays the same.
Figure 4 has additional results superimposed for when other
traffic (denoted BE) is also present. This was modelled using non
pre-emptive priority queuing, assuming an infinite backlog of BE
traffic at the node, and where the ratio of BE to BD packet-sizes
is five. Essentially, the additional delay statistics of the BE
traffic cause a translation of the original curves along the x-
axis.
Numerical examples of appropriate conditions for each of the three
types of admission strategy, derived from the above, are now
included for illustrative purposes. They should not be taken as
any form of recommendation, since in practice more conservative
rules both in terms of numbers of flows and peak-rate sum may be
advisable for reasons discussed later. In each case MTUs for BD
and BE packets are chosen as 200 bytes and 1000 bytes respectively,
and the examples are based on a conservative node model where each
flow is treated as if it arrives on an independent input port:
Example of node type (i):
Link-speed is 512kb/s, => BD and BE packet-times are 3.13ms
and 15.6ms respectively. Admission control rules are configured
to allow flows to be admitted up to a peak-rate sum of 256kb/s
(i.e. BD_load_factor < 0.5), while not exceeding a worst-case
node delay of 25ms. As a result, up to 3 BD flows with
maximum-sized packets can be admitted with a total bandwidth of
256kb/s. (3 x 3.13ms + 15.6ms = 25ms). If the link operates
with (layer-2) pre-emption, then up to 8 BD flows can be
admitted within the same total bandwidth limit.
Example of node type (ii):
Link-speed is 2Mb/s, => BD and BE packet-times are 0.8ms and
4ms respectively. Admission control rules are configured to
allow flows to be admitted up to a peak-rate sum of 1Mb/s,
while not exceeding a probability of greater than 1E-6 of
exceeding a node delay of 10ms. As a result, up to about 32 BD
flows with maximum-sized packets can be admitted.
Example of node type (iii):
Link-speed is 100Mb/s, => BD and BE packet-times are 16us and
80us respectively. Admission control rules are configured to
allow flows to be admitted up to a peak-rate sum of 50Mb/s. The
above analysis indicates less than 1E-6 probability of
exceeding 256us, assuming all flows are composed of maximum-
sized packets. To put this value into context, this is
equivalent in delay terms to about 50km of optical fibre.
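The arithmetic of these examples can be reproduced directly (a
checking sketch, ours):

```python
def packet_time(size_bytes, rate_bps):
    """Serialisation time of one packet, in seconds."""
    return size_bytes * 8 / rate_bps

def max_flows_worst_case(delay_target_s, l_bd, l_be, rate_bps):
    """Largest N satisfying Eq. 1: (N.L_BD + L_BE)/R <= target."""
    return int((delay_target_s * rate_bps / 8 - l_be) // l_bd)

# Type (i):   512 kb/s link, 25 ms worst-case target -> 3 flows
# Type (iii): 100 Mb/s link -> 16 us BD packet-time
```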
3.5 Discussion
The above results are not completely general because they were
obtained using fixed packet-sizes and flows of equal rates.
However, they do serve the purposes of illustrating how different
admission control rules can be used at different points within a
network, and they also give an indication of potentially
achievable values of delay-bound. Nevertheless, we believe that
using a constant packet-size equal to the maximum allowed value
and sharing the peak-bandwidth equally between all flows, does
represent the most stringent traffic condition, so is useful in
this respect. It is likely, however, that statistical analysis
based on the actual conditions at any particular node may
potentially yield lower bounds than our current results.
Delay clearly depends on the relative level of BD traffic
(reducing BD_load_factor also reduces the delay-bound in the
second and third regimes), and restricting the value of this
factor is also desirable for other reasons described in section
3.6. It may therefore be reasonable to limit the level of BD
traffic on most links in the network to, say, no more than a few
tens of percent. There is also
a clear dependence on the maximum packet-size, and this provides
good reason to restrict the maximum BD packet-size to a smaller
value than that allowed for other types of traffic.
The overall end-to-end path is likely to contain a mix of the
three different admission control strategies, with the location of
the worst-case admission rule most likely being at the edges of
the network. Formulating overall end-to-end delay from such a mix
of different types of per-node delay bound could in principle be
carried out in a number of ways, including some based on complex
statistical methods. However, given the trend toward ever-faster
backbone links, queuing delay in such nodes may well be
insignificant when compared with speed-of-light propagation delays
across the backbone. Should this be the case, when specifying the
delay of an end-to-end path it might then be reasonable to ignore
backbone queuing delay, while perhaps using simple addition to
combine queuing delays in other parts of the network.
3.6 Effect of flow interactions
Both the worst-case and the statistical bound for node delay are
based on the assumption that arriving flows are peak-rate shaped.
It is well-known, however, that individual traffic flows tend to
lose their original shaping as they propagate through a network. In other
words, intra-flow bunching tends to occur as a result of
differences in the queuing time experienced by successive packets
in a flow, corresponding to transitory increases in peak-rate
above the original ingress rate. The potential danger of this is
that coincident excessive peak-rate transients in a collection of
flows arriving at a node might breach the bounds calculated under
the assumption of uniform peak-rate arrivals.
The effect of this is a topic for further research, and the
influence of BD_load_factor, network size and topology, and the
possible role of traffic shaping are all factors requiring
investigation. However, our initial conclusions are that in
networks with a well-structured topology, and where the
BD_load_factor is substantially less than one, it should be
possible to place upper limits on these effects.
3.7 Effect of correlated sources
Statistical bounds depend on assumptions about the degree of
correlation between the different traffic sources, while the
examples above assume zero correlation. Consideration should be
given to the possibility of correlation between host packet
transmission times occurring either through the natural behaviour
of applications (e.g. some types of large-scale multicast
applications, or large multi-point sessions using unicast
transmission) or through a deliberate and co-ordinated attack
involving a number of hosts. This potential problem requires
further analysis, but we believe it can be controlled through
judicious use of traffic-shaping.
4 Effect of scheduling discipline used for BD
In this discussion we assume that admission control mechanisms are
sufficient to ensure that levels of BD traffic are maintained below a
prescribed peak-bandwidth level on every link. This allows nodes to
operate with "un-policed" scheduling algorithms (such as priority
queuing), without risk of denying service to other traffic types.
Previous discussion of delay bounds in this draft has assumed use of
priority queuing, although in agreement with the EF-draft[EF], we
believe a range of scheduling algorithms may be suitable for
implementing the required forwarding behaviour. Nevertheless, as
demonstrated by the valuable simulation results contained in the EF-
draft, significant differences do exist in the timeliness of various
schemes. For this reason, we propose a simple characterisation that
allows such differences to be expressed, and which may be applied to
quantify exactly the effect of different implementations on "worst-
case" per-node delay.
We define a rate, S, as the average servicing rate granted to BD.
This is the hypothetical average bandwidth available to BD traffic,
though as noted above, admission control constraints will generally
restrict actual traffic levels to substantially below this. In
addition, we define a delay term, D. Following a fresh backlog of BD
packets, this represents the time-lag between the service that would
have been received had a dedicated server of rate S been provided,
and the minimum service that can actually be guaranteed. Taken
together, S and D constitute a service-curve; service curves are
usually applied in the formulation of guarantees for individual
micro-flows, but can also be applied to a behaviour aggregate.
We now give some examples of S and D for a few well-known scheduling
algorithms (for ideal implementations):
i)Pre-emptive priority queuing:
S = R, and D = 0, where R = link-speed.
ii)Non pre-emptive priority queuing:
S = R, and D = L_max/R, where L_max is the maximum packet-size of
all traffic types.
iii)Weighted-fair Queuing (WFQ):
S = R_share, and D = L_max/R, where BD is assigned to one
particular queue with a weight equivalent to a bandwidth share,
R_share.
iv)Deficit round-robin (DRR):
S = R_share, D ~ L_sum/R, where "R_share" is relatively large
(i.e. greater than R divided by the number of queues), and L_sum
is the sum of maximum packet-sizes for each other supported queue.
These expressions may be applied directly to the worst-case delay
bound formulations of eq (1) and eq (2), by substituting the value of
S for R, and D for L_max/R. Similarly, the more general expression
for the "packet-time" defined in section 3.3, and which partly
determines whether a statistical or worst-case bound is appropriate,
is now L_BD divided by S, rather than divided by R, while
BD_load_factor, defined in 3.2, is now the peak-rate sum divided by
S. The application of service curves to statistical bounds is still
under study, but at this point we believe that "D" is often a close
if not exact indication of the relative differences in statistical
delay bound.
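The substitution just described can be sketched numerically (our
illustration of the (S, D) characterisation, for the ideal schedulers
listed above):

```python
def service_curve(discipline, r_bps, l_max_bytes,
                  r_share_bps=None, l_sum_bytes=None):
    """(S, D) pairs for the ideal schedulers listed in section 4."""
    if discipline == "priority_preemptive":
        return r_bps, 0.0
    if discipline == "priority":                 # non pre-emptive
        return r_bps, l_max_bytes * 8 / r_bps
    if discipline == "wfq":
        return r_share_bps, l_max_bytes * 8 / r_bps
    if discipline == "drr":
        return r_share_bps, l_sum_bytes * 8 / r_bps
    raise ValueError(discipline)

def worst_case_delay_sc(n_flows, l_bd_bytes, s_bps, d_s):
    """Eq. 1 with S substituted for R and D for L_max/R:
    T_WC = N.L_BD/S + D."""
    return n_flows * l_bd_bytes * 8 / s_bps + d_s
```

This reproduces the type (i) example of section 3.4: a 25 ms
worst-case bound admits 3 flows under non pre-emptive priority
queuing, but 8 flows when D collapses to zero under pre-emption.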
To conclude this section, we believe that for nodes with faster
output links, a range of scheduling algorithms are capable of
yielding values of D that may be judged acceptably small or
insignificant, so allowing any of these algorithms to be considered.
However, for nodes on lower-speed links, it may be deemed necessary
to specify an algorithm that closely emulates priority queuing. The
service curve formulation clearly exposes differences between, say,
true non pre-emptive priority queuing and deficit round-robin where
a large "weight" has been applied to the BD queue.
5 Discovery of end-to-end delay-bound
We believe that because low delay bounds are possible with BD
service, discovery of actual values may not be required by many
applications. If this information were required though, nodes or
possibly entire domains could advertise the delay characteristics
they supported for the BD service. For example, a network-provider
could make known a "ceiling" value for any path within a domain,
which may be sufficient where the hosts within a session are attached
to this same domain. This model could be expanded to multiple domains
using new path-explicit "delay-discovery" mechanisms. Alternatively,
if RSVP signalling were used [RSVP], perhaps for setting up aggregate
BD virtual trunks, this already provides a mechanism to discover the
accumulated delay-bound along an end-to-end path.
6 Host Architecture
Assuming that the host (sender) wishes to send packets simultaneously
in both BD class and other classes, the host output interface
architecture could be based on a queue structure similar to that of a
network-node output interface. When an application that is inherently
bursty makes use of the BD class, the host can choose the peak-rate
contract with the network in order to control the local buffering
required, and the resulting local queuing-delay incurred. This is
rather like the manner in which a receiver using int-serv Guaranteed
service chooses the "R" parameter contained within the RSpec, so that
lower delay can be achieved at the (presumed) expense of a higher-
bandwidth reservation. Choice of packet-size may also be used to
influence the delay on outgoing and destination access links, although
this may have little effect on queuing delay elsewhere.
7 Security Issues
One potential security issue relating to correlated application
behaviour has been highlighted in section 3.7. Other security
issues relate mainly to the invocation process and to the subsequent
policing of admitted flows, neither of which has been discussed in
any detail in this document.
8 Conclusions
This document describes a bounded-delay service. This is based on
FIFO queuing at each node, with strict dimensioning and admission
controls used to bound the queuing delay, and is in many ways similar
to Premium service, except that it offers specification and tighter
control of delay in return for more stringent admission control at
certain nodes. Early analysis and simulation results give indications
of performance and admission metrics, but experiments or trials will
be needed to establish these with greater confidence.
9 Declaration of Intellectual Property Rights
Certain aspects of this draft are subject to patent applications by
British Telecommunications plc.
10 References
[EF] Van Jacobson et al., "An Expedited Forwarding PHB", Internet
Draft, <draft-ietf-diffserv-phb-ef-00.txt>, August 1998.
[DSField] Nichols et al., "Definition of the Differentiated Services
Field (DS Byte) in the IPv4 and IPv6 Headers", Internet draft,
<draft-ietf-diffserv-header-00.txt>, May 1998.
[Two-bit] Nichols et al. "A Two-bit Differentiated Services Architecture
for the Internet", Internet draft, <draft-nichols-diff-svc-arch-00.txt>,
Nov'97.
[Guaranteed] Shenker et al., "Specification of Guaranteed Quality of
Service", Internet RFC 2212, Sept 1997.
[RSVP] Bernet et al., "A Framework for Use of RSVP with Diff-serv",
Internet draft, <draft-ietf-diffserv-rsvp-00.txt>, June 1998.
[Boundary] Bernet et al., "Requirements of diff-serv boundary routers",
Internet draft, <draft-ietf-diffserv-rsvp-00.txt>, July 1998.
[Arch] Black et al., "An Architecture for Differentiated Services",
Internet draft, <draft-ietf-diffserv-arch-02.txt>, October 1998.
[Efficient] Georgiadis et al., "Efficient Support of Delay and Rate
Guarantees in an Internet", Computer Communication Review, vol. 26, no.
4, October 1996.
[Aggregate] Guerin et al., "Aggregating RSVP-based QoS Requests",
Internet draft, <draft-guerin-aggreg-rsvp-00.txt>, November 1997.
11 Authors' Addresses
Simon Carter
B29 Room 136,
BT Laboratories,
Martlesham Heath,
Ipswich, IP5 3RE
England
Phone: +44 1473 646990
e-mail: simon.carter@bt-sys.bt.co.uk
Terry Hodgkinson
B29 Room 129,
BT Laboratories,
Martlesham Heath,
Ipswich, IP5 3RE
England
Phone: +44 1473 645313
e-mail: terry.hodgkinson@bt-sys.bt.co.uk
Alan O'Neill
Internet Transport Group
B29 Room 129
BT Laboratories
Martlesham Heath
IPSWICH
Suffolk IP5 3RE
England
Phone: +44 1473 646459
Fax: +44 1473 640709
e-mail: alan.oneill@bt-sys.bt.co.uk
David Mortimore
B29 Room 136, BT Labs,
Martlesham Heath,
Ipswich, IP5 3RE
England
+44 1473 642575
e-mail: david.mortimore@bt-sys.bt.co.uk