Network Working Group                                           Kutscher
Internet-Draft                                                       Ott
Expires: October 18, 2001                                        Bormann
                                                TZI, Universitaet Bremen
                                                          April 19, 2001

             Session Description and Capability Negotiation

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on October 18, 2001.

Copyright Notice

   Copyright (C) The Internet Society (2001). All Rights Reserved.


Abstract

   This document defines a language for describing multimedia sessions
   with respect to configuration parameters and capabilities of end
   systems.

   This document is a product of the Multiparty Multimedia Session
   Control (MMUSIC) working group of the Internet Engineering Task
   Force. Comments are solicited and should be addressed to the working
   group's mailing list at confctrl@isi.edu and/or the authors.

Document Revision

   $Revision: 1.8 $

Kutscher, et. al.       Expires October 18, 2001                [Page 1]

Internet-Draft                   SDPng                        April 2001

Table of Contents

   1.    Introduction . . . . . . . . . . . . . . . . . . . . . . . .  3
   2.    Terminology and System Model . . . . . . . . . . . . . . . .  5
   3.    SDPng  . . . . . . . . . . . . . . . . . . . . . . . . . . .  8
   3.1   Conceptual Outline . . . . . . . . . . . . . . . . . . . . .  8
   3.1.1 Definitions  . . . . . . . . . . . . . . . . . . . . . . . .  8
   3.1.2 Components & Configurations  . . . . . . . . . . . . . . . . 10
   3.1.3 Constraints  . . . . . . . . . . . . . . . . . . . . . . . . 11
   3.1.4 Session  . . . . . . . . . . . . . . . . . . . . . . . . . . 12
   3.2   Syntax Proposal  . . . . . . . . . . . . . . . . . . . . . . 12
   3.3   External Definition Packages . . . . . . . . . . . . . . . . 14
   3.3.1 Profile Definitions  . . . . . . . . . . . . . . . . . . . . 15
   3.3.2 Library Definitions  . . . . . . . . . . . . . . . . . . . . 15
   3.4   Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . 17
   4.    Open Issues  . . . . . . . . . . . . . . . . . . . . . . . . 18
         References . . . . . . . . . . . . . . . . . . . . . . . . . 19
         Authors' Addresses . . . . . . . . . . . . . . . . . . . . . 19
         Full Copyright Statement . . . . . . . . . . . . . . . . . . 21


1. Introduction

   Multiparty multimedia conferencing is one application that requires
   the dynamic interchange of end system capabilities and the
   negotiation of a parameter set that is appropriate for all sending
   and receiving end systems in a conference. For some applications,
   e.g. for loosely coupled conferences, it may be sufficient to simply
   have session parameters be fixed by the initiator of a conference.
   In such a scenario no negotiation is required because only those
   participants with media tools that support the predefined settings
   can join a media session and/or a conference.

   This approach is applicable for conferences that are announced some
   time ahead of the actual start date of the conference. Potential
   participants can check the availability of media tools in advance
   and tools like session directories can configure media tools on
   startup. This procedure however fails to work for conferences
   initiated spontaneously like Internet phone calls or ad-hoc
   multiparty conferences. Fixed settings for parameters like media
   types, their encoding etc. can easily inhibit the initiation of
   conferences, for example in situations where a caller insists on a
   fixed audio encoding that is not available at the callee's end
   system.

   To allow for spontaneous conferences, the process of defining a
   conference's parameter set must therefore be performed either at
   conference start (for closed conferences) or potentially even
   repeatedly every time a new participant joins an active
   conference. The latter approach may not be appropriate for every
   type of conference without applying certain policies: For
   conferences with TV-broadcast or lecture characteristics (one main
   active source) it is usually not desired to re-negotiate parameters
   every time a new participant with an exotic configuration joins
   because it may inconvenience existing participants or even exclude
   the main source from media sessions. Conferences with equal "rights"
   for all participants that are open to new participants, on the other
   hand, would need a different model of dynamic capability
   negotiation; an example is a telephone call that is extended to a
   three-party conference at some point during the session.

   SDP [1] allows the specification of multimedia sessions (i.e.
   conferences; "session" as used here is not to be confused with "RTP
   session") by providing general information about the session as a
   whole and specifications for all the media streams (RTP sessions and
   others) to be used to exchange information within the multimedia
   session.

   Currently, media descriptions in SDP are used for two purposes:

   o  to describe session parameters for announcements and invitations
      (the original purpose of SDP)

   o  to describe the capabilities of a system (and possibly provide a
      choice between a number of alternatives). Note that SDP was not
      designed to facilitate this.

   A distinction between these two "sets of semantics" is only made
   implicitly.

   In the following we first introduce a model for session description
   and capability negotiation and define some terms that are later used
   to express some requirements. Note that this list of requirements is
   possibly incomplete. The purpose of this document is to initiate the
   development of a session description and capability negotiation
   language.


2. Terminology and System Model

   Any (computer) system has, at any given time, a number of rather
   fixed hardware as well as software resources. These resources
   ultimately define the limitations on what can be captured,
   displayed, rendered, replayed, etc. with this particular device. We
   term the features enabled and restricted by these resources "system
   capabilities".

      Example: System capabilities may include: a limitation of the
      screen resolution for true color by the graphics board; available
      audio hardware or software may offer only certain media encodings
      (e.g. G.711 and G.723.1 but not GSM); and CPU processing power
      and quality of implementation may constrain the possible video
      encoding algorithms.

   In multiparty multimedia conferences, participants employ different
   "components" in conducting the conference.

      Example: In lecture multicast conferences one component might be
      the voice transmission for the lecturer, another the transmission
      of video pictures showing the lecturer and the third the
      transmission of presentation material.

   Depending on system capabilities, user preferences and other
   technical and political constraints, different configurations can be
   chosen to accomplish the "deployment" of these components.

   Each component can be characterized at least by (a) its intended use
   (i.e. the function it shall provide) and (b) one or more possible
   ways to realize this function. Each way of realizing a particular
   function is referred to as a "configuration".

      Example: A conference component's intended use may be to make
      transparencies of a presentation visible to the audience on the
      Mbone. This can be achieved either by a video camera capturing
      the image and transmitting a video stream via some video tool or
      by loading a copy of the slides into a distributed electronic
      whiteboard. For each of these cases, additional parameters may
      exist, variations of which lead to additional configurations (see
      below).

   Two configurations are considered different regardless of whether
   they employ entirely different mechanisms and protocols (as in the
   previous example) or whether they use the same mechanisms and differ
   only in a single parameter.

      Example: In case of video transmission, a JPEG-based still image
      protocol may be used, H.261 encoded CIF images could be sent as
      could H.261 encoded QCIF images. All three cases constitute
      different configurations. Of course there are many more detailed
      protocol parameters.

   Each component's configurations are limited by the participating
   system's capabilities. In addition, the intended use of a component
   may constrain the possible configurations further to a subset
   suitable for the particular component's purpose.

      Example: In a system for highly interactive audio communication
      the component responsible for audio may decide not to use the
      available G.723.1 audio codec to avoid the additional latency but
      only use G.711. This would be reflected in this component only
      showing configurations based upon G.711. Still, multiple
      configurations are possible, e.g. depending on the use of A-law
      or u-Law, packetization and redundancy parameters, etc.

   In this system model, we distinguish two types of configurations:

   o  potential configurations
      (a set of any number of configurations per component) indicating
      a system's functional capabilities as constrained by the intended
      use of the various components;

   o  actual configurations
      (exactly one per instance of a component) reflecting the mode of
      operation of this component's particular instantiation.

      Example: The potential configuration of the aforementioned video
      component may indicate support for JPEG, H.261/CIF, and
      H.261/QCIF. A particular instantiation for a video conference may
      use the actual configuration of H.261/CIF for exchanging video
      streams.

   In summary, the key terms of this model are:

   o  A multimedia session (streaming or conference) consists of one or
      more conference components for multimedia "interaction".

   o  A component describes a particular type of interaction (e.g.
      audio conversation, slide presentation) that can be realized by
      means of different applications (possibly using different
      protocols).

   o  A configuration is a set of parameters that are required to
      implement a certain variation (realization) of a certain
      component. There are actual and potential configurations.

      *  Potential configurations describe possible configurations that
         are supported by an end system.


      *  An actual configuration is an "instantiation" of one of the
         potential configurations, i.e. a decision how to realize a
         certain component.

      In less abstract words, potential configurations describe what a
      system can do ("capabilities") and actual configurations describe
      how a system is configured to operate at a certain point in time
      (media stream spec).

   To decide on a certain actual configuration, a negotiation process
   needs to take place between the involved peers:

   1.  to determine which potential configuration(s) they have in
       common, and

   2.  to select one of this shared set of common potential
       configurations to be used for information exchange (e.g. based
       upon preferences, external constraints, etc.).
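   The two steps above can be sketched in code. The following is a
   minimal, illustrative sketch only: the data model (configurations as
   frozensets of attribute/value pairs) and the preference ranking are
   assumptions of this example, not something the draft prescribes.

   ```python
   # Sketch of the two-step negotiation: intersect the peers' potential
   # configurations, then pick one deterministically by a local
   # preference order. The representation is illustrative only.

   def negotiate(local_potential, remote_potential, prefer):
       """Return the preferred configuration shared by both peers, or None."""
       # Step 1: determine the potential configurations both peers support.
       common = set(local_potential) & set(remote_potential)
       if not common:
           return None
       # Step 2: select one deterministically, based on a preference ranking.
       return min(common, key=prefer)

   local = {
       frozenset({("encoding", "PCMU"), ("rate", "8000")}),
       frozenset({("encoding", "L16"), ("rate", "44100")}),
   }
   remote = {
       frozenset({("encoding", "PCMU"), ("rate", "8000")}),
       frozenset({("encoding", "GSM"), ("rate", "8000")}),
   }
   # Prefer L16 over PCMU over anything else (an arbitrary local policy).
   ranking = {"L16": 0, "PCMU": 1}
   chosen = negotiate(local, remote,
                      prefer=lambda c: ranking.get(dict(c)["encoding"], 99))
   print(dict(chosen))   # the only common configuration: PCMU at 8000 Hz
   ```

   Because both peers evaluate the same intersection with a
   deterministic tie-break, they arrive at the same answer without
   extra round trips.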

   In SAP [9]-based session announcements on the Mbone, for which SDP
   was originally developed, the negotiation procedure is non-existent.
   Instead, the announcement contains the media stream descriptions
   sent out (i.e. the actual configurations), which implicitly describe
   what a receiver must understand to participate.

   In point-to-point scenarios, the negotiation procedure is typically
   carried out implicitly: each party informs the other about what it
   can receive and the respective sender chooses from this set a
   configuration that it can transmit.

   Capability negotiation must not only work for 2-party conferences
   but is also required for multi-party conferences. Especially for the
   latter case it is required that the process of determining the
   subset of allowable potential configurations is deterministic to
   reduce the number of required round trips before a session can be
   established.

   In the following, we elaborate on requirements for an SDPng
   specification, subdivided into general requirements and requirements
   for session descriptions, potential and actual configurations as
   well as negotiation rules.


3. SDPng

   This section outlines a proposed solution for describing
   capabilities that meets most of the above requirements. Note that at
   this early point in time not all of the details are completely
   filled in; rather, the focus is on the concepts of such a capability
   description and negotiation language.

3.1 Conceptual Outline

   Our concept for the description language follows the system model
   introduced in the beginning of this document. We use a rather
   abstract language to avoid, as far as possible, misinterpretations
   due to differing intuitive understandings of terms.

   Our concept of a capability description language addresses various
   pieces of a full description of system and application capabilities
   in four separate "sections":

      Definitions (elementary and compound)

      Potential or Actual Configurations

      Constraints

      Session attributes

3.1.1 Definitions

   The definition section specifies a number of basic abstractions that
   are later referenced to avoid repetitions in more complex
   specifications and allow for a concise representation. Definition
   elements are labelled with an identifier by which they may be
   referenced. They may be elementary or compound (i.e. combinations of
   elementary entities). Examples of definitions in that section
   include (but are not limited to) codec definitions, redundancy
   schemes, transport mechanisms and payload formats.

   Elementary definition elements do not reference other elements. Each
   elementary entity only consists of one or more attributes and their
   values. Default values specified in the definition section may be
   overridden in descriptions for potential (and later actual)
   configurations. The concrete mechanisms for overriding definitions
   are still to be defined.

   For the moment, elementary elements are defined for media types
   (i.e. codecs) and for media transports. For each transport and for
   each codec to be used, the respective attributes need to be defined.
   This definition may either be provided within the "Definitions"
   section itself or in an external document (similar to the
   audio-video profile or an IANA registry that defines payload types
   and media stream identifiers).

   Examples for elementary definitions:

   <audio-codec name="audio-basic" encoding="PCMU"
                sampling_rate="8000" channels="1"/>

   <audio-codec name="audio-L16-mono" encoding="L16"
                sampling_rate="44100" channels="1"/>

   The element type "audio-codec" is used in these examples to define
   audio codec configurations. The configuration parameters are given
   as attribute values.
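   Since these definitions are plain XML, they can be consumed with any
   XML parser. A hypothetical sketch using Python's
   xml.etree.ElementTree (the element and attribute names are taken
   from the example above; nothing about this parsing code is
   normative):

   ```python
   import xml.etree.ElementTree as ET

   # Parse the example elementary definitions into a dictionary keyed
   # by the "name" attribute, so later sections can reference them by
   # label. The enclosing <definitions> wrapper is an assumption made
   # only to obtain a single XML root element.
   DEFS = """
   <definitions>
     <audio-codec name="audio-basic" encoding="PCMU"
                  sampling_rate="8000" channels="1"/>
     <audio-codec name="audio-L16-mono" encoding="L16"
                  sampling_rate="44100" channels="1"/>
   </definitions>
   """

   root = ET.fromstring(DEFS)
   codecs = {el.get("name"): dict(el.attrib)
             for el in root.findall("audio-codec")}
   print(codecs["audio-basic"]["encoding"])   # PCMU
   ```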

   Compound elements combine a number of elementary and/or other
   compound elements for more complex descriptions. This mechanism can
   be used for simple standard configurations such as G.711 over
   RTP/AVP as well as to express more complex coding schemes including
   e.g. FEC schemes, redundancy coding, and layered coding. Again, such
   definitions may be standardized and externalized so that there is no
   need to repeat them in every specification.

   An example for the definition of an audio-redundancy format:

   <audio-red name="red-pcm-gsm-fec">
     <use ref="audio-basic"/> <use ref="audio-gsm"/> <use ref="parityfec"/>
   </audio-red>

   In this example, the element type "audio-red" is used to define a
   redundant audio configuration that is labelled "red-pcm-gsm-fec" for
   later referencing. In the definition itself, the element type "use"
   is used to reference other definitions.

   Definitions may have default values specified along with them for
   each attribute. Some of these default values may be overridden so
   that a codec definition can easily be re-used in a different context
   (e.g. by specifying a different sampling rate) without the need for
   a large number of base specifications.
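   Overriding defaults can be thought of as a simple attribute merge:
   start from the referenced definition and overlay the attributes
   given at the point of use. The merge rule below is an assumption of
   this sketch; the draft explicitly leaves the concrete override
   mechanism open.

   ```python
   # Sketch of default-overriding semantics: a referencing element
   # starts from the referenced definition's attributes and overlays
   # its own. This merge rule is an assumption, not part of the draft.

   BASE_DEFS = {
       "audio-L16-mono": {"encoding": "L16", "sampling_rate": "44100",
                          "channels": "1"},
   }

   def instantiate(ref, **overrides):
       """Re-use a base definition with some attributes overridden."""
       merged = dict(BASE_DEFS[ref])   # copy the defaults
       merged.update(overrides)        # overlay the per-use values
       return merged

   # Re-use the L16 codec at a different sampling rate without writing
   # a new base specification.
   cfg = instantiate("audio-L16-mono", sampling_rate="32000")
   print(cfg)
   ```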

   This approach allows commonly used definitions, simple as well as
   more complex ones, to be made available in an extensible set of
   reference documents. Section 3.3 specifies the mechanisms for
   external references.

   Note: For negotiation between endpoints, it may be helpful to define
   two modes of operation: explicit and implicit. Implicit
   specifications may refer to externally defined entities to minimize
   traffic volume, explicit specifications would list all external
   definitions used in a description in the "Definitions" section.
   Again, please see Section 3.3 for a complete discussion of external
   references.

3.1.2 Components & Configurations

   The "Configurations" section contains all the components that
   constitute the multimedia conference (IP telephone call, multiplayer
   gaming session etc.). For each of these components, the potential
   and, later, the actual configurations are given. Potential
   configurations are used during capability exchange and/or
   negotiation; actual configurations are used to configure media
   streams after negotiation or in session announcements (e.g. via
   SAP). A potential and the actual configuration of a component may be
   identical.

   Each component is labelled with an identifier so that it can be
   referenced, e.g. to associate semantics with a particular media
   stream. For such a component, any number of configurations may be
   given with each configuration describing an alternate way to realize
   the functionality of the respective component.

   Each configuration (potential as well as actual) is labelled with an
   identifier. A configuration combines one or more (elementary and/or
   compound) entities from the "Definitions" section to describe a
   potential or an actual configuration. Within the specification of
   the configuration, default values from the referenced entities may
   be overridden.

     <component name="audio1" media="audio">
       <alt name="AVP-audio-0">
         <rtp transport="udp-ip" format="audio-basic">
           <addr type="mc">
             <ipv4></ipv4> <port>30000</port>
           </addr>
         </rtp>
       </alt>

       <alt name="AVP-audio-11">
         <rtp transport="udp-ip" format="audio-L16-mono">
           <addr type="mc">
             <ipv4></ipv4> <port>30000</port>
           </addr>
         </rtp>
       </alt>
     </component>

   For example, an IP telephone call may require just a single
   component "audio1" with two possible ways of implementing it. The
   first of the two corresponding configurations ("AVP-audio-0") uses
   the "audio-basic" (PCMU) definition without modification; the other
   ("AVP-audio-11") uses linear 16-bit encoding. Typically, transport
   address parameters such as the port number would also be provided.
   In this example, this information is given by the "addr" element.

   During/after the negotiation phase, an actual configuration is
   chosen out of a number of alternative potential configurations. The
   actual configuration may refer to the potential configuration just
   by its name, possibly allowing for some parameter modifications.
   Alternatively, the full actual configuration may be given.
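   Resolving an actual configuration that refers to a potential one by
   its label then amounts to a lookup plus any permitted parameter
   modifications. A hypothetical sketch (the data shapes and the
   modified port value are illustrative only):

   ```python
   # Sketch: resolve an actual configuration given by reference to one
   # of the component's potential configurations ("alt" elements),
   # optionally applying parameter modifications.
   POTENTIAL = {
       "AVP-audio-0":  {"format": "audio-basic",    "port": 30000},
       "AVP-audio-11": {"format": "audio-L16-mono", "port": 30000},
   }

   def resolve_actual(ref, modifications=None):
       """Look up a potential configuration by name and apply changes."""
       if ref not in POTENTIAL:
           raise ValueError("actual configuration must match a potential one")
       actual = dict(POTENTIAL[ref])
       actual.update(modifications or {})
       return actual

   # The negotiated answer picks AVP-audio-11 but moves the media port.
   print(resolve_actual("AVP-audio-11", {"port": 40000}))
   ```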

3.1.3 Constraints

   Definitions specify media, transport, and other capabilities,
   whereas configurations indicate which combinations of these could be
   used to provide the desired functionality in a certain setting.

   There may, however, be further constraints within a system (such as
   CPU cycles, available DSPs, dedicated hardware, etc.) that limit
   which of these configurations can be instantiated in parallel (and
   how many instances of these may exist). We deliberately do not
   couple this aspect of system resource limitations to the various
   application semantics as the constraints exist across application
   boundaries. Also, in many cases, expressing such constraints is
   simply not necessary (as many uses of the current SDP show), so
   additional overhead can be avoided where this is not needed.

   Therefore, we introduce a "Constraints" section to contain these
   additional limitations. Constraints refer to potential
   configurations and to entity definitions and use simple logic to
   express mutual exclusion, to limit the number of instantiations, and
   to allow only certain combinations. The following example shows the
   definition of a constraint that restricts the maximum number of
   instantiations of two alternatives (that would have to be defined in
   the configuration section before) when they are used in parallel:

       <use ref="AVP-audio-11" max="5"/> <use ref="AVP-video-32" max="1"/>

   As the example shows, constraints are defined by specifying limits
   on simultaneous instantiations of alternatives. They are not defined
   by expressing abstract end system resources, such as CPU speed or
   memory size.


   By default, the "Constraints" section is empty (or missing) which
   means that no further restrictions apply.
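   The semantics of such "max" limits can be sketched as a simple
   admission check over the instances a system intends to run in
   parallel. The representation below is an assumption of this sketch,
   not part of the draft:

   ```python
   from collections import Counter

   # Sketch of checking "max" constraints: count how many instances of
   # each alternative would run in parallel and compare against the
   # declared limits. An empty constraint set admits everything, which
   # matches the default of an empty "Constraints" section.
   LIMITS = {"AVP-audio-11": 5, "AVP-video-32": 1}

   def admissible(instances, limits=LIMITS):
       """True if the planned parallel instances respect all max limits."""
       counts = Counter(instances)
       return all(n <= limits.get(alt, float("inf"))
                  for alt, n in counts.items())

   print(admissible(["AVP-audio-11"] * 3 + ["AVP-video-32"]))   # True
   print(admissible(["AVP-video-32", "AVP-video-32"]))          # False
   ```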

3.1.4 Session

   The "Session" section is used to describe general meta-information
   parameters of the communication relationship to be invoked or
   modified. It contains most (if not all) of the general parameters of
   SDP (and thus will easily be usable with SAP for session
   announcements).

   In addition to the session description parameters, the "Session"
   section also ties the various components to certain semantics. If,
   in current SDP, two audio streams are specified (possibly even
   using the same codecs), there is little way to differentiate
   between their uses (e.g. live audio from an event broadcast vs. the
   commentary from the TV studio).

   This section also allows tying together different media streams or
   providing a more elaborate description of alternatives (e.g.
   subtitles or not, which language, etc.).

     <subject>SDPng test</subject>
     <about>A test conference</about>
     <info name="audio1" function="speaker">
       Audio stream for the different speakers
     </info>

   Further uses are envisaged but need to be defined in future versions
   of this document.

3.2 Syntax Proposal

   In order to allow session descriptions to be validated and to allow
   for structured extensibility, it is proposed to rely on a syntax
   framework that provides concepts as well as concrete procedures for
   document validation and for extending the set of allowed syntax
   elements.

   SGML/XML technologies allow for the preparation of Document Type
   Definitions (DTDs) that can define the allowed content models for
   the elements of conforming documents. Documents can be formally
   validated against a given DTD to check their conformance and
   correctness. For XML, mechanisms have been defined that allow for
   structured extensibility of a model of allowed syntax: XML Namespace
   and XML Schema.


   XML Schema mechanisms allow constraining the allowed document
   content, e.g. for documents that contain structured data. They also
   provide the possibility that document instances conform to several
   XML Schema definitions at the same time, while allowing Schema
   validators to check the conformance of these documents.

   Extensions of the session description language, say to express the
   parameters of a new media type, would require the
   creation of a corresponding XML schema definition that contains the
   specification of element types that can be used to describe
   configurations of components for the new media type. Session
   description documents have to reference the non-standard Schema
   module, thus enabling parsers and validators to identify the
   elements of the new extension module and to either ignore them (if
   they are not supported) or to consider them for processing the
   session/capability description.
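   The intended behaviour of parsing what is known and skipping
   namespaced extension elements can be sketched with a
   namespace-aware parse. The namespace URI and the "video-3d" element
   below are invented for illustration; they are not part of any
   registry:

   ```python
   import xml.etree.ElementTree as ET

   # Sketch: a baseline parser walks a description, handles elements it
   # knows, and silently skips elements from extension namespaces it
   # does not support. The namespace URI and "video-3d" element are
   # hypothetical.
   DOC = """
   <component xmlns:x="http://example.org/sdpng-ext"
              name="audio1" media="audio">
     <alt name="AVP-audio-0"/>
     <x:video-3d name="fancy" depth="16"/>
   </component>
   """

   KNOWN = {"alt"}   # element types the baseline implementation supports

   root = ET.fromstring(DOC)
   supported = [el.get("name") for el in root if el.tag in KNOWN]
   skipped = [el.tag for el in root if el.tag not in KNOWN]
   print(supported)   # ['AVP-audio-0']
   print(skipped)     # ['{http://example.org/sdpng-ext}video-3d']
   ```

   A validator, by contrast, would additionally fetch the referenced
   Schema modules; the point above is that an endpoint can process the
   document without doing so.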

   It is important to note that the functionality of validating
   capability and session description documents is not necessarily
   required to generate or process them. For example, endpoints could
   be configured to understand only those parts of description
   documents that conform to the baseline specification and simply
   ignore extensions they cannot support. The usage of XML and XML
   Schema is thus rather motivated by the need to allow extensions to
   be defined and added to the language in a structured way that does
   not preclude applications from identifying and processing the
   extension elements they might support. The baseline specification of
   XML Schema definitions and profiles must be well-defined and
   targeted to the set of parameters that are relevant for the
   protocols and algorithms of the Internet Multimedia Conferencing
   Architecture, i.e. transport over RTP/UDP/IP, the audio/video
   profile of RFC 1890, etc.

   The example below shows how the definition of codecs,
   transport-variants and configuration of components could be
   realized. Please note that this is not a complete example and that
   identifiers have been chosen arbitrarily.

     <audio-codec name="audio-basic" encoding="PCMU"
                  sampling_rate="8000" channels="1"/>

     <audio-codec name="audio-L16-mono" encoding="L16"
                  sampling_rate="44100" channels="1"/>

     <fec name="parityfec"/>

     <audio-red name="red-pcm-gsm-fec">
       <use ref="audio-basic"/> <use ref="audio-gsm"/>
       <use ref="parityfec"/>
     </audio-red>

     <component name="audio1" media="audio">
       <alt name="AVP-audio-0">
         <rtp transport="udp-ip" format="audio-basic">
           <addr type="mc">
             <ipv4></ipv4> <port>30000</port>
           </addr>
         </rtp>
       </alt>

       <alt name="AVP-audio-11">
         <rtp transport="udp-ip" format="audio-L16-mono">
           <addr type="mc">
             <ipv4></ipv4> <port>30000</port>
           </addr>
         </rtp>
       </alt>
     </component>

     <use ref="AVP-audio-11" max="5"/> <use ref="AVP-video-32" max="1"/>

     <subject>SDPng test</subject>
     <about>A test conference</about>
     <info name="audio1" function="speaker">
       Audio stream for the different speakers
     </info>

   The example also does not include specifications of XML Schema
   definitions or references to such definitions. These will be
   provided in a future version of this draft.

   A real-world capability description would likely be shorter than the
   example presented here because the codec and transport definitions
   can be factored out into profile definition documents that would
   only be referenced in capability description documents.

3.3 External Definition Packages


3.3.1 Profile Definitions

   In order to allow for extensibility, it must be possible to define
   extensions to the basic SDPng configuration options.

   For example, if some application requires the use of a new esoteric
   transport protocol, endpoints must be able to describe their
   configuration with respect to the parameters of that transport
   protocol. The mandatory and optional parameters that can be
   configured and negotiated when using the transport protocol will be
   specified in a definition document. Such a definition document is
   called a "profile".

   A profile contains rules that specify how SDPng is used to describe
   conferences or endsystem capabilities with respect to the parameters
   of the profile. The concrete properties of the profile definitions
   mechanism are still to be defined.

   An example of such a profile would be the RTP profile that defines
   how to specify RTP parameters. Another example would be an audio
   codec profile that defines how to specify audio codec parameters.

   An SDPng document can reference profiles and provide concrete
   definitions, for example the definition for the GSM audio codec.
   (This would be done in the "Definitions" section of an SDPng
   document.) An SDPng document that references a profile and provides
   concrete definitions of configurations can be validated against the
   profile definition.

3.3.2 Library Definitions

   While profile definitions specify the allowed parameters for a
   given profile, SDPng definition sections refer to profile
   definitions and define concrete configurations based on a specific
   profile.

   In order for such definitions to be imported into SDPng documents,
   there will be the notion of "SDPng libraries". A library is a set of
   definitions that conforms to a certain profile definition (or to
   more than one profile definition -- this needs to be defined).

   The purpose of the library concept is to allow certain common
   definitions to be factored out so that not every SDPng document has
   to include the basic definitions, for example the PCMU codec
   definition. SDP [1] uses a similar concept by relying on the well
   known static payload types (defined in RFC1890 [3]) that are also
   just referenced but never defined in SDP documents.
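   Operationally, the library mechanism boils down to a
   name-to-definitions mapping consulted after a document's own
   "Definitions" section. A hypothetical sketch; the library name
   "rtp-av-profile" and its contents are invented for illustration:

   ```python
   # Sketch of library resolution: definition names referenced but not
   # defined in the document itself are looked up in the libraries the
   # document has declared. Library names and contents are hypothetical.
   LIBRARIES = {
       "rtp-av-profile": {     # a hypothetical well-known library
           "audio-basic": {"encoding": "PCMU", "sampling_rate": "8000"},
           "audio-gsm":   {"encoding": "GSM",  "sampling_rate": "8000"},
       },
   }

   def lookup(name, local_defs, declared_libs):
       """Resolve a definition name: local section first, then libraries."""
       if name in local_defs:
           return local_defs[name]
       for lib in declared_libs:
           if name in LIBRARIES.get(lib, {}):
               return LIBRARIES[lib][name]
       raise KeyError("undefined reference: " + name)

   doc_defs = {"audio-L16-mono": {"encoding": "L16",
                                  "sampling_rate": "44100"}}
   print(lookup("audio-basic", doc_defs, ["rtp-av-profile"])["encoding"])
   ```

   This mirrors how SDP relies on the static payload type table: the
   definition lives outside the document and is merely referenced.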

   An SDPng document that references definitions from an external
   library has to declare the use of the external library. The external
   library, being a set of configuration definitions for a given
   profile, again needs to declare the use of the profile that it is
   conformant to.

   There are different possibilities for how profile definitions and
   libraries can be used in SDPng documents:

   o  In an SDPng document a profile definition can be referenced and
      all the configuration definitions are provided within the
      document itself. The SDPng document is then self-contained with
      respect to the definitions it uses.

   o  In an SDPng document the use of an external library can be
      declared. The library references a profile definition and the
      SDPng document references the library. There are two
      alternatives for how external libraries can be referenced:

      by name: Referencing libraries by name implies the use of a
         registration authority with which definitions and reference
         names can be registered. It is conceivable that the most
         common SDPng definitions will be registered that way and that
         there will be a baseline set of definitions that minimal
         implementations must understand. Secondly, a registration
         procedure will be defined that allows vendors to register
         frequently used definitions with a registration authority
         (e.g., IANA) and to declare the use of registered definition
         packages in conforming SDPng documents. Of course, care
         should be taken not to make the external references too
         complex and thus require too much a priori knowledge in a
         protocol engine implementing SDPng. Relying on this mechanism
         in general is also problematic because it impedes
         extensibility: it requires implementors to provide support
         for new extensions in their products before they can
         interoperate. Registration is also not useful for spontaneous
         or experimental extensions that are defined in an SDPng
         library.

      by address: An alternative to referencing libraries by name is
         to declare the use of an external library by providing an
         address, i.e., a URL, that specifies where the library can be
         obtained. While this allows arbitrary third-party libraries
         to extend the basic SDPng set of configuration options in
         many ways, there are problems if the referenced libraries
         cannot be accessed by all communication partners.

   o  Because of these problematic properties of external libraries,
      the final SDPng specification will have to provide a set of
      recommendations on the circumstances under which the different
      mechanisms for externalizing definitions should be used.
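
   The two referencing alternatives described above might look as
   follows in an SDPng document. Again, the syntax is purely
   hypothetical (Section 4 lists the referencing syntax as an open
   issue), and the library names and URL are invented:

```xml
<sdpng>
   <!-- by name: a library registered with, e.g., IANA -->
   <use library="rfc1890-audio"/>
   <!-- by address: a third-party library fetched from a URL -->
   <use href="http://www.example.com/sdpng/vendor-codecs.xml"/>
</sdpng>
```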


3.4 Mappings

   In particular, a mapping to SDP needs to be defined that allows
   final session descriptions (i.e., the result of a capability
   negotiation process) to be translated to SDP documents. In
   principle, this can be done in a rather schematic fashion.
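
   For example, a negotiated configuration for PCMU audio over RTP
   (shown here in invented, hypothetical SDPng syntax) could be
   translated schematically into the corresponding SDP [1] lines:

```xml
<!-- hypothetical SDPng result of a negotiation -->
<cfg>
   <audio:codec name="pcmu" encoding="PCMU" sampling="8000"/>
   <rtp:session addr="224.2.0.1" port="9456" payload-type="0"/>
</cfg>
<!-- schematic SDP translation:
       c=IN IP4 224.2.0.1
       m=audio 9456 RTP/AVP 0
       a=rtpmap:0 PCMU/8000                                       -->
```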

   Furthermore, to accommodate SIP-H.323 gateways, a mapping from SDPng
   to H.245 needs to be specified at some point.


4. Open Issues


      Syntax for referencing profiles and libraries

      Registry (reuse of SDP mechanisms and names etc.)


References

   [1]  Handley, M. and V. Jacobson, "SDP: Session Description
        Protocol", RFC 2327, April 1998.

   [2]  Schulzrinne, H., Casner, S., Frederick, R. and V. Jacobson,
        "RTP: A Transport Protocol for Real-Time Applications", RFC
        1889, January 1996.

   [3]  Schulzrinne, H., "RTP Profile for Audio and Video Conferences
        with Minimal Control", RFC 1890, January 1996.

   [4]  Perkins, C., Kouvelas, I., Hodson, O., Hardman, V., Handley,
        M., Bolot, J., Vega-Garcia, A. and S. Fosse-Parisis, "RTP
        Payload for Redundant Audio Data", RFC 2198, September 1997.

   [5]  Klyne, G., "A Syntax for Describing Media Feature Sets", RFC
        2533, March 1999.

   [6]  Klyne, G., "Protocol-independent Content Negotiation
        Framework", RFC 2703, September 1999.

   [7]  Rosenberg, J. and H. Schulzrinne, "An RTP Payload Format for
        Generic Forward Error Correction", RFC 2733, December 1999.

   [8]  Perkins, C. and O. Hodson, "Options for Repair of Streaming
        Media", RFC 2354, June 1998.

   [9]  Handley, M., Perkins, C. and E. Whelan, "Session Announcement
        Protocol", RFC 2974, October 2000.

Authors' Addresses

   Dirk Kutscher
   TZI, Universitaet Bremen
   Bibliothekstr. 1
   Bremen  28359

   Phone: +49.421.218-7595
   Fax:   +49.421.218-7000
   EMail: dku@tzi.uni-bremen.de


   Joerg Ott
   TZI, Universitaet Bremen
   Bibliothekstr. 1
   Bremen  28359

   Phone: +49.421.201-7028
   Fax:   +49.421.218-7000
   EMail: jo@tzi.uni-bremen.de

   Carsten Bormann
   TZI, Universitaet Bremen
   Bibliothekstr. 1
   Bremen  28359

   Phone: +49.421.218-7024
   Fax:   +49.421.218-7000
   EMail: cabo@tzi.org


Full Copyright Statement

   Copyright (C) The Internet Society (2001). All Rights Reserved.

   This document and translations of it may be copied and furnished to
   others, and derivative works that comment on or otherwise explain it
   or assist in its implementation may be prepared, copied, published
   and distributed, in whole or in part, without restriction of any
   kind, provided that the above copyright notice and this paragraph
   are included on all such copies and derivative works. However, this
   document itself may not be modified in any way, such as by removing
   the copyright notice or references to the Internet Society or other
   Internet organizations, except as needed for the purpose of
   developing Internet standards in which case the procedures for
   copyrights defined in the Internet Standards process must be
   followed, or as required to translate it into languages other than
   English.
   The limited permissions granted above are perpetual and will not be
   revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on an
   "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
   TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
   BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
   HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

   Funding for the RFC editor function is currently provided by the
   Internet Society.
