Network Working Group                                       C. Partridge
Request for Comments: 1257         Swedish Institute of Computer Science
                                                          September 1991

   Isochronous Applications Do Not Require Jitter-Controlled Networks

Status of this Memo

   This memo provides information for the Internet community.  It does
   not specify an Internet standard.  Distribution of this memo is
   unlimited.

Abstract

   This memo argues that jitter control is not required for networks to
   support isochronous applications.  A network that provides bandwidth
   and bounds delay is sufficient.  The implications for gigabit
   internetworking protocols are briefly considered.

Introduction

   An oft-stated goal of many of the ongoing gigabit networking research
   projects is to make it possible to support high bandwidth isochronous
   applications.  An isochronous application is an application which
   must generate or process regular amounts of data at fixed intervals.
   Examples of such applications include telephones, which send and
   receive voice samples at regular intervals, and fixed rate video-
   codecs, which generate data at regular intervals and which must
   receive data at regular intervals.

   One of the properties of isochronous applications like voice and
   video data streams is that their users may be sensitive to the
   variation in interarrival times between data delivered to the final
   output device.  This variation is called "jitter" when it is very
   small (less than 10 Hz) and "wander" when it is somewhat larger
   (less than one day).  For convenience, this memo will use the term
   jitter for both jitter and wander.
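
   To make this concrete, jitter can be computed from the arrival
   times observed at the output device.  The sketch below (in Python,
   offered purely as an illustration; it appears in no standard) takes
   a list of arrival timestamps and reports the peak deviation of the
   interarrival gaps from their mean:

      def interarrival_jitter(arrivals):
          # arrivals: data arrival times in seconds, in order
          gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
          mean = sum(gaps) / len(gaps)
          # report the peak deviation from the mean gap; other
          # definitions (e.g., the variance) would serve as well
          return max(abs(g - mean) for g in gaps)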

   A couple of examples help illustrate the sensitivity of applications
   to jitter.  Consider a user watching a video at her workstation.  If
   the screen is not updated regularly every 30th of a second or faster,
   the user will notice a flickering in the image.  Similarly, if voice
   samples are not delivered at regular intervals, voice output may
   sound distorted.  Thus the user is sensitive to the interarrival time
   of data at the output device.

   Observe that if two users are conferring with each other from their
   workstations, then beyond sensitivity to interarrival times, the
   users will also be sensitive to end-to-end delay.  Consider the
   difference between conferencing over a satellite link, where each
   hop adds roughly a quarter second of delay, and a terrestrial link.
   Furthermore, for the data to be able to arrive in
   time, there must be sufficient bandwidth.  Bandwidth requirements are
   particularly important for video: HDTV, even after compression,
   currently requires bandwidth in excess of 100 Mbits/second.

   Because multimedia applications are sensitive to jitter, bandwidth
   and delay, it has been suggested that the networks that carry
   multimedia traffic must be able to allocate and control jitter,
   bandwidth and delay [1,2].

   This memo argues that a network which simply controls bandwidth and
   delay is sufficient to support networked multimedia applications.
   Jitter control is not required.

Isochrony without Jitter Control

   The key argument of this memo is that an isochronous service can be
   provided by simply bounding the maximum delay through the network.

   To prove this argument, consider the following scenario.

   The network is able to bound the maximum transit delay on a channel
   between sender and receiver and at least the receiver knows what the
   bound is.  (These assumptions come directly from our assertion that
   the network can bound delay).  The term "channel" is used to mean
   some amount of bandwidth delivered over some path between sender and
   receiver.

   Now imagine an operating system in which applications can be
   scheduled to be active at regular intervals. Further assume that the
   receiving application has buffer space equal to the channel bandwidth
   times the maximum interarrival variance.  (Observe that the maximum
   interarrival variance is always known - in the worst case, the
   receiver can assume the maximum variance equals the maximum delay).
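
   As a worked example (the numbers here are illustrative, not taken
   from the memo): a 64 kbit/s voice channel with a worst-case
   interarrival variance of 200 ms needs 64,000 x 0.2 = 12,800 bits,
   or 1,600 bytes, of buffer space.

      import math

      def buffer_bytes(bandwidth_bps, max_variance_s):
          # buffer space = channel bandwidth times the maximum
          # interarrival variance, expressed in whole bytes
          return math.ceil(bandwidth_bps * max_variance_s / 8)

      buffer_bytes(64000, 0.200)    # -> 1600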

   Now consider a situation in which the sender of the isochronous data
   timestamps each piece of data when it is generated, using a universal
   time source, and then sends the data to the receiver.  The receiver
   reads each piece of data in as soon as it is received and places the
   timestamped data into its buffer space.  The receiver processes each
   piece of data only at the time equal to the data's timestamp plus the
   maximum transit delay.
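
   A minimal sketch of such a receiver follows (again in Python and
   again only illustrative; MAX_DELAY stands for the delay bound the
   network has advertised, and the timestamps are assumed to come from
   the universal time source described above):

      import heapq, itertools, time

      MAX_DELAY = 0.150            # assumed delay bound, in seconds
      _seq = itertools.count()     # tie-breaker for equal timestamps
      buffer = []                  # min-heap of (playout time, data)

      def receive(timestamp, data):
          # read data in as soon as it arrives; schedule it for
          # processing at its generation time plus the delay bound
          heapq.heappush(buffer, (timestamp + MAX_DELAY, next(_seq),
                                  data))

      def playout_loop(process):
          # hold every piece of data until exactly timestamp +
          # MAX_DELAY, so the calls to process() are as evenly
          # spaced as the data was at the sender
          while buffer:
              playout_time, _, data = buffer[0]
              delay = playout_time - time.time()
              if delay > 0:
                  time.sleep(delay)
              heapq.heappop(buffer)
              process(data)

   Because every piece of data waits until exactly its timestamp plus
   the maximum transit delay, the output is isochronous no matter how
   unevenly the data arrived within the bound.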

   I argue that the receiver is processing data isochronously, and thus
   we have shown that a network need not be isochronous to support
   isochronous applications.
