INTERNET-DRAFT                                              John Lazzaro
March 1, 2003                                             John Wawrzynek
Expires: September 1, 2003                                   UC Berkeley


 An Implementation Guide to the MIDI Wire Protocol Packetization (MWPP)

           <draft-lazzaro-avt-mwpp-coding-guidelines-02.txt>


Status of this Memo

This document is an Internet-Draft and is subject to all provisions of
Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other groups
may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

                                Abstract

     This memo offers non-normative implementation guidance for the MIDI
     Wire Protocol Packetization (MWPP), an RTP packetization for the
     MIDI command language. In the main body of the memo, we discuss one
     MWPP application in detail: an interactive, two-party, single-
     stream session over unicast UDP transport that uses RTCP. In the
     Appendices, we discuss specialized implementation issues: MWPP
     without RTCP, MWPP with TCP, multi-stream sessions, multi-party
     sessions, and content streaming.











Lazzaro/Wawrzynek                                               [Page 1]


INTERNET-DRAFT                                             1 March 2003


0. Change Log for <draft-lazzaro-avt-mwpp-coding-guidelines-02.txt>


This document update maintains compatibility with the -06.txt update to
the normative document.

Changes in this document:

  o References to "checkpoint management policy" are now references
    to "sending policy", in keeping with -06.txt terminology change.

  o The use of MWPP parameters on the RTSP Transport line has been
    deleted in Appendix C, in accordance with [6].

  o The use of a new -06.txt feature that supports the use of the
    timestamp of the last command in the MIDI Command Section as
    a proxy for the sending time has been added to Section 4.1.



                           Table of Contents


1. Introduction  . . . . . . . . . . . . . . . . . . . . . . . . . .   5
2. Session Management: Starting MWPP Sessions  . . . . . . . . . . .   6
3. Session Management: Session Housekeeping  . . . . . . . . . . . .  12
4. Sending MWPP Streams: General Considerations  . . . . . . . . . .  13
     4.1 Queuing and Coding Incoming MIDI Data . . . . . . . . . . .  14
     4.2 Sending MWPP Packets with Empty MIDI Lists  . . . . . . . .  15
     4.3 Bandwidth Management and Congestion Control . . . . . . . .  16
5. Sending MWPP Streams: The Recovery Journal  . . . . . . . . . . .  18
     5.1 Initializing the RJSS . . . . . . . . . . . . . . . . . . .  21
     5.2 Traversing the RJSS . . . . . . . . . . . . . . . . . . . .  21
     5.3 Updating the RJSS . . . . . . . . . . . . . . . . . . . . .  22
     5.4 Trimming the RJSS . . . . . . . . . . . . . . . . . . . . .  23
     5.5 Implementation Notes  . . . . . . . . . . . . . . . . . . .  24
6. Receiving MWPP Streams: General Considerations  . . . . . . . . .  25
     6.1 The NMP Receiver Design . . . . . . . . . . . . . . . . . .  26
     6.2 Receiver Design Issues  . . . . . . . . . . . . . . . . . .  28
7. Receiving MWPP Streams: The Recovery Journal  . . . . . . . . . .  29
     7.1 Chapter W: MIDI Pitch Wheel (0xE) . . . . . . . . . . . . .  32
     7.2 Chapter N: MIDI NoteOn (0x9) and NoteOff (0x8)  . . . . . .  33
     7.3 Chapter C: MIDI Control Change (0xB)  . . . . . . . . . . .  35
     7.4 Chapter P: MIDI Program Change (0xC)  . . . . . . . . . . .  36
8. Congestion Control  . . . . . . . . . . . . . . . . . . . . . . .  38
9. Security Considerations . . . . . . . . . . . . . . . . . . . . .  38
10. Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . .  38
Appendix A. Content Streaming with MWPP  . . . . . . . . . . . . . .  39
     A.1 Session Management  . . . . . . . . . . . . . . . . . . . .  39
     A.2 Baseline Algorithms . . . . . . . . . . . . . . . . . . . .  41
     A.3 Packet Replacement Streams  . . . . . . . . . . . . . . . .  43
Appendix B. Multi-party MWPP Sessions  . . . . . . . . . . . . . . .  45
     B.1 Session Management (simulated multicast)  . . . . . . . . .  45
     B.2 Session Management (true multicast) . . . . . . . . . . . .  48
     B.3 Sender Issues . . . . . . . . . . . . . . . . . . . . . . .  49
     B.4 Receiver Issues . . . . . . . . . . . . . . . . . . . . . .  51
     B.5 Scaling Issues  . . . . . . . . . . . . . . . . . . . . . .  52
Appendix C. MWPP and Reliable Transport  . . . . . . . . . . . . . .  54
     C.1 Session Management  . . . . . . . . . . . . . . . . . . . .  55
     C.2 Sending and Receiving . . . . . . . . . . . . . . . . . . .  57
     C.3 RTSP Interleaving . . . . . . . . . . . . . . . . . . . . .  57
Appendix D. Using MWPP without RTCP  . . . . . . . . . . . . . . . .  59
     D.1 Session Management  . . . . . . . . . . . . . . . . . . . .  59
     D.2 Sender Issues . . . . . . . . . . . . . . . . . . . . . . .  61
     D.3 Receiver Issues . . . . . . . . . . . . . . . . . . . . . .  62
Appendix E. Multi-stream MWPP Sessions . . . . . . . . . . . . . . .  64
     E.1 Session Scenarios . . . . . . . . . . . . . . . . . . . . .  64
     E.2 Synchronization Issues  . . . . . . . . . . . . . . . . . .  71
     E.3 Name Space Issues . . . . . . . . . . . . . . . . . . . . .  73
Appendix F. References . . . . . . . . . . . . . . . . . . . . . . .  75
     F.1 Normative References  . . . . . . . . . . . . . . . . . . .  75
     F.2 Informative References  . . . . . . . . . . . . . . . . . .  76
Appendix G. Author Addresses . . . . . . . . . . . . . . . . . . . .  77

1. Introduction

The MIDI Wire Protocol Packetization (MWPP, [1]) is a general-purpose
RTP/AVP [2,3] packetization for the MIDI [4] command language.

[1] normatively defines the MWPP RTP bitfield syntax, and also defines
the Session Description Protocol (SDP, [5]) parameters that may be used
to customize MWPP session behavior. However, [1] does not define
algorithms for sending and receiving MWPP streams. Implementors are free
to use any sending or receiving algorithm that conforms to the normative
text in [1].

In this memo, we offer advice on how to implement sending, receiving,
and session management algorithms for MWPP. Unlike [1], this memo is not
normative.

The application space for MWPP is diverse, and may be categorized in the
following ways:

  o Interactive or streaming. Interactive applications (such as the
    remote operation of musical instruments) require low end-to-end
    latency, preferably near the underlying network latency. Streaming
    applications (such as the incremental delivery of MIDI files)
    trade off higher latency for better fidelity and efficiency.

  o Two-party or multi-party. Two-party MWPP applications have two
    session participants; multi-party MWPP applications have more
    than two participants. Multi-party applications map efficiently
    to multicast transport, but may also use multiple unicast flows.

  o Transport. MWPP streams may use unreliable transport (such as
    unicast or multicast UDP) or reliable transport (such as TCP).

  o Single-stream or multi-stream. Simple MWPP sessions use one
    RTP stream to convey a single MIDI name space (16 voice channels
    + systems). Multi-stream sessions use several RTP streams to
    convey more than 16 voice channels. Multi-stream sessions are also
    used to split a MIDI name space across different transport types.

  o RTCP or no RTCP. The RTP standard [2] defines a backchannel
    protocol, the RTP Control Protocol (RTCP). MWPP RTP streams
    work best if paired with an RTCP stream, but MWPP may be used
    without RTCP.

In the main body of this memo, we describe an interactive, two-party,
single-stream session over unicast UDP transport that uses RTCP.
Sections 2 and 3 cover session management; Sections 4 and 5 cover
sending MWPP streams; Sections 6 and 7 cover receiving MWPP streams.

The main text is written with a specific application in mind: network
musical performance over wide-area networks. As defined in [13], a
network musical performance occurs when a group of musicians, located at
different physical locations, interact over a network to perform as they
would if located in the same room.

However, the methods we describe in the main text are also applicable to
local-area network (LAN) applications, such as the remote control of
musical instruments. The main text includes several discussions of LAN
issues, such as LAN receiver design guidance in Section 6.

In the Appendices of this memo, we discuss implementation issues for
other session types. For example, Appendix A describes implementation
issues in content streaming. Each Appendix covers session management,
sender design, and receiver design.

This memo is limited in scope, in that it assumes that all session
participants have access to the SDP session description(s) that describe
the session. We do not discuss the creation, negotiation, or
distribution of session descriptions, apart from a discussion of the
Real Time Streaming Protocol (RTSP, [6]) in Appendix A. We anticipate
that other memos will define frameworks for session description issues
for MWPP, and that these memos will include implementation guidance.


2. Session Management: Starting MWPP Sessions

In this section, we discuss how interactive MWPP applications start
sessions. We limit our discussion to two-party sessions over unicast UDP
transport that use RTCP. In the Appendices, we discuss startup issues
for other types of sessions.

We assume that the two parties have agreed on a session configuration,
embodied by a pair of Session Description Protocol (SDP, [5]) session
descriptions. One session description (Figure 1) defines how the first
party wishes to receive its stream; the other session description
(Figure 2) defines how the second party wishes to receive its stream.
Even if one party is not receiving an RTP stream (indicated by the SDP
attribute sendonly [5]), the party still defines a session description,
in order to describe how it receives its RTCP stream.

v=0
o=first 2520644554 2838152170 IN IP4 first.example.net
s=Example
t=0 0
c=IN IP4 192.0.2.94
m=audio 16112 RTP/AVP 96
a=rtpmap:96 mwpp/44100

         Figure 1 -- Session description for first participant.


v=0
o=second 2520644554 2838152170 IN IP4 second.example.net
s=Example
t=0 0
c=IN IP4 192.0.2.105
m=audio 5004 RTP/AVP 101
a=rtpmap:101 mwpp/44100

         Figure 2 -- Session description for second participant.




The session description in Figure 1 codes that the first party intends
to receive an MWPP RTP stream on IP4 number 192.0.2.94 (coded in the c=
line) at UDP port 16112 (coded in the m= line). Implicit in the SDP m=
line syntax [5] is that the first party also intends to receive an RTCP
stream on 192.0.2.94 at UDP port 16113 (16112 + 1). The receiver expects
that the PTYPE field of each RTP header in the received stream will be
set to 96 (coded in the m= and a= lines).

Likewise, the session description in Figure 2 codes that the second
party intends to receive an MWPP RTP stream on IP4 number 192.0.2.105 at
UDP port 5004, and also intends to receive an RTCP stream on 192.0.2.105
at UDP port 5005 (5004 + 1). The second party expects that the PTYPE
field of each RTP header in the received stream will be set to 101.

The session descriptions do not use the SDP parameter render (Appendix
A.5 of [1]) to indicate the rendering method for the MIDI stream. If
render were in use, the parties would use this information to set up the
appropriate rendering algorithms for the MIDI stream.

We now show example code that implements the actions the parties take
during the session. The code is written in C, and uses the sockets API
and other POSIX systems calls. We show code for the first party (the
second party takes a symmetric set of actions).

Figure 3 shows how the first party initializes a pair of socket
descriptors (rtp_fd and rtcp_fd) to send and receive UDP packets. The
code sets up the descriptors to listen to ports 16112 and 16113 on the
IP4 network connection for 192.0.2.94.  Note that the code assumes a
single-homed machine. The ERROR_RETURN macro is used to flag fatal setup
errors (this macro is not defined in Figure 3).

After the code in Figure 3 runs, the first party may check for new RTP
or RTCP packets by calling recv() on rtp_fd or rtcp_fd. By default, a
recv() call on these socket descriptors blocks until a packet arrives.
Figure 4 shows how to configure these sockets as non-blocking, so that
recv() calls may be done in time-critical code without fear of I/O
blocking. Figure 5 shows how to use recv() to check a non-blocking
socket for new packets.

The first party also uses rtp_fd and rtcp_fd to send RTP and RTCP
packets to the second party. In Figure 6, we show how to initialize
socket structures that address the second party. In Figure 7, we show
how to use one of these structures in a sendto() call to send an RTP
packet to the second party.

Note that the code shown in Figures 3-7 assumes a clear network path
between the participants. The code may not work if firewalls or Network
Address Translation (NAT) devices are present in the network path. See
[15] for standardized methods for overcoming network obstacles.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

  int rtp_fd, rtcp_fd;       /* socket descriptors */
  struct sockaddr_in addr;   /* for bind address   */

  /*********************************/
  /* create the socket descriptors */
  /*********************************/

  if ((rtp_fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
    ERROR_RETURN("Couldn't create Internet RTP socket");

  if ((rtcp_fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
    ERROR_RETURN("Couldn't create Internet RTCP socket");


  /**********************************/
  /* bind the RTP socket descriptor */
  /**********************************/

  memset(&(addr.sin_zero), 0, 8);
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(16112); /* port 16112, from SDP */

  if (bind(rtp_fd, (struct sockaddr *)&addr,
        sizeof(struct sockaddr)) < 0)
     ERROR_RETURN("Couldn't bind Internet RTP socket");


  /***********************************/
  /* bind the RTCP socket descriptor */
  /***********************************/

  memset(&(addr.sin_zero), 0, 8);
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(16113); /* port 16113, from SDP */

  if (bind(rtcp_fd, (struct sockaddr *)&addr,
        sizeof(struct sockaddr)) < 0)
      ERROR_RETURN("Couldn't bind Internet RTCP socket");


        Figure 3 -- Setup code for listening for RTP/RTCP packets.

#include <unistd.h>
#include <fcntl.h>

int one = 1;

  /*******************************************************/
  /* set non-blocking status, shield spurious ICMP errno */
  /*******************************************************/

  if (fcntl(rtp_fd, F_SETFL, O_NONBLOCK))
    ERROR_RETURN("Couldn't unblock Internet RTP socket");

  if (fcntl(rtcp_fd, F_SETFL, O_NONBLOCK))
    ERROR_RETURN("Couldn't unblock Internet RTCP socket");

  if (setsockopt(rtp_fd,  SOL_SOCKET, SO_BSDCOMPAT,
              &one, sizeof(one)))
    ERROR_RETURN("Couldn't shield RTP socket");

  if (setsockopt(rtcp_fd,  SOL_SOCKET, SO_BSDCOMPAT,
              &one, sizeof(one)))
    ERROR_RETURN("Couldn't shield RTCP socket");


    Figure 4 -- Code to set socket descriptors to be non-blocking.



#include <errno.h>
#define UDPMAXSIZE 1472     /* based on Ethernet MTU of 1500 */

unsigned char packet[UDPMAXSIZE+1];
int len;


 while ((len = recv(rtp_fd, packet, UDPMAXSIZE + 1, 0)) > 0)
  {
    /* process packet[], be cautious if (len == UDPMAXSIZE + 1) */
  }

 if ((len == 0) || (errno != EAGAIN))
  {
    /* while() may have exited in an unexpected way */
  }


        Figure 5 -- Code to check rtp_fd for new RTP packets.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdlib.h>       /* for calloc() */

struct sockaddr_in * rtp_addr;      /* RTP destination IP/port  */
struct sockaddr_in * rtcp_addr;     /* RTCP destination IP/port */


  /* set RTP address, as coded in Figure 2's SDP */

  rtp_addr = calloc(1, sizeof(struct sockaddr_in));
  rtp_addr->sin_family = AF_INET;
  rtp_addr->sin_port = htons(5004);
  rtp_addr->sin_addr.s_addr = inet_addr("192.0.2.105");

  /* set RTCP address, as coded in Figure 2's SDP */

  rtcp_addr = calloc(1, sizeof(struct sockaddr_in));
  rtcp_addr->sin_family = AF_INET;
  rtcp_addr->sin_port = htons(5005);   /* 5004 + 1 */
  rtcp_addr->sin_addr.s_addr = rtp_addr->sin_addr.s_addr;


    Figure 6 -- Initializing destination addresses for RTP and RTCP.






unsigned char packet[UDPMAXSIZE];  /* RTP packet to send   */
int size;                          /* length of RTP packet */


  /* first fill packet[] and set size ... then: */

  if (sendto(rtp_fd, packet, size, 0, (struct sockaddr *)rtp_addr,
          sizeof(struct sockaddr))  == -1)
    {
      /*
       * try again later if errno == EAGAIN or EINTR
       *
       * other errno values --> an operational error
       */
    }


           Figure 7 -- Using sendto() to send an RTP packet.

3. Session Management: Session Housekeeping

After the two-party interactive session is set up, the parties begin to
send and receive MWPP RTP packets. In Sections 4-7, we discuss MWPP RTP
sending and receiving algorithms.  In this section, we describe session
"housekeeping" tasks that the participants also perform.

One housekeeping function is the maintenance of the 32-bit SSRC value
that uniquely identifies each party. Section 8 of [2] describes SSRC
issues in detail.
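
As an illustration, the collision-avoidance aspect of SSRC maintenance
may be sketched in C. The table size, function names, and use of rand()
below are our own illustrative assumptions, not constructs from [1] or
[2]; a production sender would draw its SSRC from a stronger entropy
source, as the appendices of [2] suggest.

```c
#include <stdint.h>
#include <stdlib.h>

#define MAX_PARTIES 16                    /* arbitrary table size     */
static uint32_t known_ssrc[MAX_PARTIES];  /* SSRCs seen this session  */
static int      known_count = 0;

static int ssrc_in_use(uint32_t ssrc)
{
  int i;
  for (i = 0; i < known_count; i++)
    if (known_ssrc[i] == ssrc)
      return 1;
  return 0;
}

/* Pick a fresh 32-bit SSRC, re-rolling on collision, in the spirit
 * of Section 8 of [2].  rand() stands in for a real entropy source. */
uint32_t choose_ssrc(void)
{
  uint32_t ssrc;
  do {
    ssrc = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
  } while (ssrc_in_use(ssrc));
  if (known_count < MAX_PARTIES)
    known_ssrc[known_count++] = ssrc;   /* remember our own choice */
  return ssrc;
}
```

A party that detects another participant using its SSRC value would
re-run choose_ssrc() and resynchronize, per the rules in [2].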

Another housekeeping function is the sending and receiving of RTCP. MWPP
uses the standard techniques for sending and receiving RTCP, which are
described in Section 6 of [2]. However, MWPP defines the sampling
instant of an RTP packet in an unusual way (Section 2.1 of [1]),
affecting the calculation of RTCP reception statistics.

Another housekeeping function concerns security. As detailed in the
Security Considerations section of [1], per-packet authentication is
strongly recommended for use with MWPP, because the acceptance of rogue
MWPP packets may lead to the execution of arbitrary MIDI commands. [16]
describes a standard for authenticating RTP and RTCP packets. To
simplify the presentation of sending and receiving algorithms in this
memo, our examples do not authenticate packets.

A final housekeeping function concerns the termination of an MWPP RTP
session. In our two-party example, the session terminates upon the exit
of one of the participants. A clean termination may require active
effort by a receiver, as a MIDI stream stopped at an arbitrary point may
cause stuck notes and other indefinite artifacts in the MIDI renderer.
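
For example, a receiver that detects a termination may silence its
renderer by issuing All Sound Off (controller 120) and All Notes Off
(controller 123) Control Change commands on every voice channel. The
sketch below is illustrative; the function name is our own.

```c
#include <stddef.h>

/* Fill buf[] with MIDI commands that quiet a renderer left in an
 * arbitrary state: All Sound Off (controller 120) and All Notes Off
 * (controller 123) on all 16 voice channels.  Returns the number of
 * bytes written; buf must hold at least 96 bytes (16 channels x 2
 * commands x 3 bytes). */
size_t midi_panic(unsigned char *buf)
{
  size_t n = 0;
  int ch;
  for (ch = 0; ch < 16; ch++) {
    buf[n++] = 0xB0 | ch;   /* Control Change, channel ch */
    buf[n++] = 120;         /* All Sound Off              */
    buf[n++] = 0;
    buf[n++] = 0xB0 | ch;
    buf[n++] = 123;         /* All Notes Off              */
    buf[n++] = 0;
  }
  return n;
}
```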

The exit of a party may be signalled in several ways. Session management
tools may offer a reliable signal for termination (such as the SIP BYE
method [14]). The (unreliable) RTCP BYE packet [2] may also signal the
exit of a party.  Receivers may also sense the lack of RTCP activity and
timeout a party, or may use transport methods to detect an exit.

4. Sending MWPP Streams: General Considerations

In this section we discuss sender implementation issues, for a two-party
interactive session. The session represents a network musical
performance between two players over a wide-area network.

An interactive MWPP sender is a real-time, data-driven entity. On an
ongoing basis, the sender checks to see if the local player has generated
new MIDI data. At any time, the sender may transmit a new MWPP RTP
packet to the remote player, for the reasons described below:

  1. New MIDI data has been generated by the local player, and the
     sender decides it is time to issue a packet coding the data.

  2. The local player has not generated new MIDI data, but the
     sender decides too much time has elapsed since the last
     RTP packet transmission. The sender transmits a packet in
     order to relay updated header and recovery journal data.

In both cases, the sender generates a packet that consists of an RTP
header, a MIDI Command section, and a recovery journal. In the first
case, the MIDI list of the MIDI Command section codes the new MIDI data.
In the second case, the MIDI list is empty. Figure 8 shows the 5 steps a
sender takes to issue a packet.


 Algorithm for Sending an MWPP Packet:

  1. Generate the RTP header for the new packet. See Section 2.1
     of [1] for details.

  2. Generate the MIDI Command section for the new packet. See
     Section 3 of [1] for details.

  3. Generate the recovery journal for the new packet. We discuss
     this process in Section 5.2. The generation algorithm examines
     the Recovery Journal Sending Structure (RJSS), a stateful
     coding of a history of the stream.

  4. Send the new packet to the receiver.

  5. Update the RJSS to include the data coded in the MIDI Command
     section of the packet sent in step 4. We discuss the update
     procedure in Section 5.3.


   Figure 8 -- A 5 step algorithm for sending an MWPP RTP packet.
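
The control flow of Figure 8 may be sketched in C. The five step
functions below are hypothetical stand-ins for encoders that follow the
cited sections of [1]; here each stub merely records its execution
order, to make the sequencing constraints explicit.

```c
/* Record the order in which the five steps of Figure 8 execute. */
static int order[5];
static int step_count = 0;
static void run_step(int step) { order[step_count++] = step; }

static void gen_rtp_header(void)       { run_step(1); }
static void gen_command_section(void)  { run_step(2); }
static void gen_recovery_journal(void) { run_step(3); } /* reads the RJSS  */
static void send_packet(void)          { run_step(4); } /* sendto(), Fig 7 */
static void update_rjss(void)          { run_step(5); } /* commits step 2  */

void send_mwpp_packet(void)
{
  gen_rtp_header();
  gen_command_section();
  gen_recovery_journal();
  send_packet();
  update_rjss();   /* legal at any point after step 3 completes */
}
```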

The algorithm shown in Figure 8 corresponds to the code fragment for
sending RTP packets shown in Figure 7 of Section 2. Steps 1, 2, and 3
occur before the sendto() call in the code fragment. Step 4 corresponds
to the sendto() call itself. Step 5 may occur once Step 3 completes.

In the sections that follow, we discuss specific sender implementation
issues in detail.

4.1 Queuing and Coding Incoming MIDI Data

In this section, we describe how a sender decides when to transmit a new
RTP packet. We also discuss sender timestamp coding issues.

Simple senders transmit a new MWPP RTP packet as soon as the local
player generates a complete MIDI command. The system described in [13]
uses this algorithm. This algorithm has zero sender queuing latency, as
the sender never delays the transmission of a new MIDI command.

In a relative sense, this algorithm uses bandwidth inefficiently, as it
does not amortize the overhead of an MWPP RTP packet over several MIDI
commands. This inefficiency may be acceptable for sparse MIDI data
streams (see Appendix A.4 of [13]). More sophisticated sending
algorithms [17] improve efficiency by coding small groups of MIDI
commands into a single RTP packet, at the expense of non-zero sender
queuing latency.
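
A grouping sender might bound its queuing latency with a test of the
form sketched below. The constants and names are illustrative
assumptions, not values taken from [1] or [17].

```c
#include <stdint.h>

#define QUEUE_MAX   8   /* flush when this many commands accumulate */
#define MAX_HOLD_MS 5   /* or when the oldest command is this stale */

typedef struct {
  int      count;       /* MIDI commands currently queued      */
  uint64_t oldest_ms;   /* arrival time of the first command   */
} cmd_queue;

/* Returns 1 if the queued commands should be coded into an MWPP
 * packet now, trading a little latency for packet efficiency. */
int flush_due(const cmd_queue *q, uint64_t now_ms)
{
  if (q->count == 0)
    return 0;
  if (q->count >= QUEUE_MAX)
    return 1;
  return (now_ms - q->oldest_ms) >= MAX_HOLD_MS;
}
```

A sender using this test has a worst-case sender queuing latency of
MAX_HOLD_MS; setting MAX_HOLD_MS to zero recovers the simple
send-immediately algorithm of [13].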

Senders assign a timestamp value to each MIDI command in the stream.
The default timestamp semantics are defined in Section 3 of [1]. The SDP
parameters tsmode, linerate, octpos, and mperiod (Appendix C.2 of [1])
may be used to customize timestamp semantics during session setup.

Senders may code the timestamp values for MIDI commands in two ways. The
most efficient method is to set the RTP timestamp of the packet to the
timestamp of the first command in the MIDI list. In this method, the Z
bit of the MIDI command section header (Figure 2 of [1]) is set to 0.
The RTP timestamps of the stream increment at a non-uniform rate.

In some applications, senders may wish to generate a stream whose RTP
timestamps increment at a uniform rate (perhaps to improve the
performance of header compression [18]). To code the timestamp of the
first command in the MIDI list, the sender uses the optional delta time
field. The Z bit of the MIDI command section header is set to 1.

Finally, as we discuss in Section 6, interactive receivers may model the
network latency, and use the model to optimize their rendering
performance. By necessity, models use the timestamp of the last command
coded in the MIDI list as a proxy for the sending time. To facilitate
this coding technique, Section 2.2 of [1] permits the last command in
the MIDI list to be null. If the MIDI list is empty, the RTP timestamp
serves as the proxy.

To the extent possible, interactive senders should maintain a constant
relationship between this proxy and the actual sending time. To the
receiver, variance in this relationship is indistinguishable from
network jitter.


4.2 Sending MWPP Packets with Empty MIDI Lists

As we described in the preamble of Section 4, interactive senders may
decide to transmit MWPP RTP packets with empty MIDI lists. Senders
generate "empty packets" in two contexts: as "keep-alive" packets during
periods of no MIDI activity, and as "guard" packets to improve the
performance of the recovery journal system. In this section, we discuss
implementation issues for empty packets.

In an interactive session, musicians might refrain from generating MIDI
data for extended periods of time (seconds or even minutes). If an MWPP
RTP stream followed the dynamics of a silent MIDI source, and stopped
sending RTP packets for an extended period, system behavior might be
degraded in the following ways:

  o  Receivers may misinterpret the silent stream as a dropped
     network connection.

  o  Network middleboxes (such as Network Address Translators)
     may "time-out" the silent stream and drop the port and IP
     association state.

  o  The receiver's model of network performance may fall out
     of date.

Senders avoid these problems by sending "keep-alive" MWPP packets during
periods of network inactivity. Keep-alive packets have empty MIDI lists.
Session participants may specify the frequency of keep-alive packets
during session configuration with the SDP parameter maxptime (Appendix
C.3 of [1]). As a point of reference, the system described in [13] sends
a keep-alive packet if no RTP packet has been sent for 30 seconds.
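
The keep-alive decision reduces to an elapsed-time test. In the sketch
below, the 30-second constant of [13] stands in for an interval that a
real implementation would derive from session configuration; the
function name is our own.

```c
#include <stdint.h>

#define KEEPALIVE_SEC 30   /* interval used by the system in [13] */

/* Returns 1 if an empty-list "keep-alive" MWPP packet should be
 * sent, given the time the last RTP packet was sent. */
int keepalive_due(uint64_t last_send_sec, uint64_t now_sec)
{
  return (now_sec - last_send_sec) >= KEEPALIVE_SEC;
}
```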

Senders may also send empty MWPP packets to improve the performance of
the recovery journal system. As we describe in Section 6, the recovery
process begins when a receiver detects a break in the RTP sequence
number pattern of the stream. The receiver uses the recovery journal of
the break packet to guide corrective rendering actions, such as ending
stuck notes and updating out-of-date controller values.

Consider the situation where the local player produces a MIDI NoteOff
command (which the sender promptly transmits in an MWPP packet), but
then 5 seconds pass before the player produces another MIDI command
(which the sender transmits in a second MWPP packet). If the MWPP packet
coding the NoteOff is lost, the receiver will not be aware of the packet
loss incident for 5 seconds, and the rendered MIDI performance will
contain a note that sounds for 5 seconds too long.

To handle this situation, senders may transmit empty MWPP packets to
"guard" the stream during silent sections. The guard packet algorithm
defined in Section 7.3 of [13], as applied to the situation described
above, would send a guard packet after 100 ms of player inactivity, and
would send a second guard packet 100 ms later. Subsequent guard packets
would be sent with an exponential backoff, with a limiting period of 1
second. Guard packet transmissions would cease once MIDI activity
resumes, or once RTCP receiver reports indicate that the receiver is up
to date.
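
The backoff schedule described above may be coded as follows;
next_guard_interval_ms() is our own name for the delay that precedes
the n-th guard packet of an inactivity period.

```c
/* Delay, in milliseconds, before the n-th guard packet (n >= 1)
 * since MIDI activity stopped, per the schedule of Section 7.3 of
 * [13]: 100 ms, 100 ms, then doubling to a 1 second ceiling. */
unsigned next_guard_interval_ms(unsigned n)
{
  unsigned ms = 100;
  unsigned i;
  if (n <= 2)
    return 100;
  for (i = 2; i < n && ms < 1000; i++) {
    ms *= 2;          /* exponential backoff     */
    if (ms > 1000)
      ms = 1000;      /* limiting period of 1 s  */
  }
  return ms;
}
```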

We view the perceptual quality of guard packet sending algorithms as a
quality of implementation factor for MWPP applications. Sophisticated
implementations may tailor the guard packet sending rate to the nature
of the MIDI commands recently sent in the stream, to minimize the
perceptual impact of moderate packet loss.

As an example of this sort of specialization, the guard packet algorithm
described in [13] protects against the transient artifacts that occur
when NoteOn MIDI commands are lost. The algorithm sends a guard packet 1
ms after an MWPP packet whose MIDI list contains a NoteOn command. The Y
bit in Chapter N note logs (Appendix A.4 of [1]) supports this use of
guard packets.

Bandwidth management and congestion control are key issues in guard
packet algorithms. We discuss these issues in the next section.


4.3 Bandwidth Management and Congestion Control

Senders may control the instantaneous sending rate of an MWPP stream in
a variety of ways. In this section, we describe the mechanics of MWPP
rate control, in the contexts of congestion control and bandwidth
management.

RTP implementations have a responsibility to implement congestion
control mechanisms to protect the network infrastructure (see Section 10
of [2]). In general, senders implement congestion control by monitoring
packet loss via RTCP receiver reports, and reducing the stream sending
rate if packet loss is excessive. Section 6.4.4 of [2] provides guidance
for using the RTCP receiver report fields for congestion control.

Bandwidth management is a second use for MWPP sending rate control.  An
SDP session description may optionally include a bandwidth line (b=, as
defined in Section 6 of [5]) to specify the maximum bandwidth an RTP
stream may use. If an MWPP session description includes a bandwidth
line, senders control the instantaneous sending rate of the stream so
that the maximum bandwidth is not exceeded.

Interactive MWPP senders have a variety of methods to control the
instantaneous sending rate:

  o As described in Section 4.1, MWPP senders may pack several
    MIDI commands into a single MWPP packet, thereby reducing
    instantaneous stream bandwidth at the expense of increasing
    sender queuing latency.

  o Guard packet algorithms (Section 4.2) may be designed in
    a parametric way, so that the tradeoff between artifact
    reduction and stream bandwidth may be tuned dynamically.

  o The recovery journal size may be reduced, by adapting the
    techniques described in Section 5 of this memo and in
    Section 4.1 of [1]. Note that in all cases, the recovery
    journal sender must conform to the mandate defined in
    Section 4 of [1].

  o The incoming MIDI stream may be modified, to reduce the
    number of MIDI commands without significantly altering the
    MIDI performance. Lossy "MIDI filtering" algorithms are well
    developed in the MIDI community, and may be directly applied
    to MWPP rate management.

MWPP senders incorporate these rate control methods into feedback
loops to implement congestion control and bandwidth management.
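As one illustrative form such a feedback loop might take, a token-bucket limiter can enforce the cap given by a b= line: the bucket refills at the session bandwidth rate, and a packet is sent only when enough tokens have accumulated (otherwise the sender queues commands, per Section 4.1). The structure and names below are assumptions for the sketch, not part of MWPP.

```c
#include <stdint.h>

typedef struct bwlimit {
  double tokens;   /* octets currently available to send */
  double rate;     /* refill rate, octets per second     */
  double burst;    /* bucket capacity, octets            */
} bwlimit;

/* elapsed is seconds since the previous call; returns 1 if a
 * packet of "size" octets may be sent now, 0 if it must wait. */
int bwlimit_check(bwlimit *b, double elapsed, int size)
{
  b->tokens += b->rate * elapsed;
  if (b->tokens > b->burst)
    b->tokens = b->burst;

  if (b->tokens >= size) {
    b->tokens -= size;
    return 1;
  }
  return 0;
}
```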


















Lazzaro/Wawrzynek                                              [Page 17]


INTERNET-DRAFT                                             1 March 2003


5. Sending MWPP Streams: The Recovery Journal

In this section, we describe how senders implement the recovery
journal system. We begin by describing the Recovery Journal Sending
Structure (RJSS). Senders use the RJSS to generate the recovery
journal section for MWPP RTP packets.

The RJSS is a hierarchical representation of the checkpoint history of
the stream. The checkpoint history holds the MIDI commands that are at
risk to packet loss (see Appendix A.1 of [1] for a precise definition
of the checkpoint history). The layout of the RJSS mirrors the
hierarchical structure of the recovery journal bitfields.

Figure 9 shows an RJSS implementation for a simple MWPP sender. The
sender transmits most voice command types, but does not transmit
system commands.  The leaf level of the hierarchy (the jsend_chapter
structures) corresponds to channel chapters (Appendices A.2-7 in [1]).
The second level of the hierarchy (jsend_channel) corresponds to the
channel journal header (Figure 8 in [1]).  The top level of the
hierarchy (jsend_journal) corresponds to the recovery journal header
(Figure 7 in [1]).

Each level in the RJSS may code several items:

  1. The current contents of the recovery journal bitfield for
     the level (jheader[], cheader[], and the chapter bitfields).

  2. A seqnum variable. Seqnum codes the extended RTP sequence
     number of the most recent packet that added information to the
     checkpoint history, at the level or at any level below it. A
     seqnum variable is set to zero if the checkpoint history
     contains no information at the level or at any level below it.

  3. Ancillary variables used by the sending algorithm.

In the sections that follow, we describe the tasks a sender performs to
manage the recovery journal system.














Lazzaro/Wawrzynek                                              [Page 18]


INTERNET-DRAFT                                             1 March 2003


  typedef unsigned char  uint8;      /* must be 1 octet  */
  typedef unsigned short uint16;     /* must be 2 octets */
  typedef unsigned long  uint32;     /* must be 4 octets */

  /***********************************************************/
  /* leaf level of hierarchy: Chapter W, Appendix A.3 of [1] */
  /***********************************************************/

  typedef struct jsend_chapterw {  /* Pitch Wheel (0xE) */

   uint8  chapterw[2]; /* bitfield (Figure A.3.1, [1])   */
   uint32 seqnum;      /* extended sequence number, or 0 */

  } jsend_chapterw;

  /***********************************************************/
  /* leaf level of hierarchy: Chapter N, Appendix A.4 of [1] */
  /***********************************************************/

  typedef struct jsend_chaptern { /* Note commands (0x8, 0x9) */

   uint8  chaptern[272];  /* bitfield (Figure A.4.1, [1])   */
   uint16 size;           /* actual size of chaptern[]      */
   uint32 seqnum;         /* extended sequence number, or 0 */

   uint32 note_seqnum[128];  /* most recent note seqnum, or 0 */
   uint32 note_tstamp[128];  /* NoteOn execution timestamp    */
   uint8  note_state[128];   /* NoteOn velocity, 0 -> NoteOff */

  } jsend_chaptern;

  /***********************************************************/
  /* leaf level of hierarchy: Chapter C, Appendix A.7 of [1] */
  /***********************************************************/

  typedef struct jsend_chapterc {     /* Control Change (0xB) */

   uint8  chapterc[257];    /* bitfield (Figure A.7.1, [1])   */
   uint16 size;             /* actual size of chapterc[]      */
   uint32 seqnum;           /* extended sequence number, or 0 */

   uint8  control_state[128];     /* per-number control state */
   uint32 control_seqnum[128];    /* most recent seqnum, or 0 */

  } jsend_chapterc;


   Figure 9 -- Recovery Journal Sending Structure (part 1)



Lazzaro/Wawrzynek                                              [Page 19]


INTERNET-DRAFT                                             1 March 2003


  /***********************************************************/
  /* leaf level of hierarchy: Chapter P, Appendix A.2 of [1] */
  /***********************************************************/

  typedef struct jsend_chapterp { /* MIDI Program Change (0xC) */

   uint8  chapterp[3]; /* bitfield (Figure A.2.1, [1])   */
   uint32 seqnum;      /* extended sequence number, or 0 */

  } jsend_chapterp;

  /***************************************************/
  /* second-level of hierarchy, for channel journals */
  /***************************************************/

  typedef struct jsend_channel {

   uint8  cheader[3]; /* header bitfield (Figure 8, [1]) */
   uint32 seqnum;     /* extended sequence number, or 0  */

   jsend_chapterp chapterp;           /* chapter P info  */
   jsend_chapterw chapterw;           /* chapter W info  */
   jsend_chaptern chaptern;           /* chapter N info  */
   jsend_chapterc chapterc;           /* chapter C info  */

  } jsend_channel;

  /*******************************************************/
  /* top level of hierarchy, for recovery journal header */
  /*******************************************************/

   typedef struct jsend_journal {

   uint8 jheader[3]; /* header bitfield (Figure 7, [1])  */
                     /* Note: Empty journal has a header */

   uint32 seqnum;    /* extended sequence number, or 0   */
                     /* seqnum = 0 codes empty journal   */

   jsend_channel channels[16];  /* channel journal state */
                                /* index is MIDI channel */

   } jsend_journal;



  Figure 9 (continued) -- Recovery Journal Sending Structure




Lazzaro/Wawrzynek                                              [Page 20]


INTERNET-DRAFT                                             1 March 2003


5.1 Initializing the RJSS

At the start of a stream, the sender initializes the RJSS.  All seqnum
variables are set to zero, including all elements of note_seqnum[] and
control_seqnum[].

The sender initializes jheader[] to form a recovery journal header that
codes an empty journal. The S bit of the header is set to 1, and the A,
Y, R, and TOTCHAN header fields are set to zero. The checkpoint packet
sequence number field is set to the sequence number of the upcoming
first RTP packet (per Appendix A.1 of [1]).

In jsend_chaptern, elements of note_tstamp[] and note_state[] are set to
zero. In jsend_chapterc, control_state[] is initialized to the default
value for each controller number, in the format of the chosen tool type
(as defined in Appendix A.7 in [1]).
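A minimal sketch of this initialization, using a reduced top-level structure rather than the full Figure 9 jsend_journal. The header layout is an assumption based on Figure 7 of [1]: the S bit is the top bit of octet 0 (so writing 0x80 also zeroes the other flag fields), and octets 1-2 hold the 16-bit checkpoint packet sequence number in network byte order.

```c
#include <stdint.h>
#include <string.h>

typedef struct jsend_journal_top {
  uint8_t  jheader[3];   /* recovery journal header bitfield */
  uint32_t seqnum;       /* 0 codes an empty journal         */
} jsend_journal_top;

void rjss_init(jsend_journal_top *j, uint16_t first_seqnum)
{
  memset(j, 0, sizeof(*j));   /* all seqnums start at zero */

  j->jheader[0] = 0x80;                            /* S = 1   */
  j->jheader[1] = (uint8_t)(first_seqnum >> 8);    /* high    */
  j->jheader[2] = (uint8_t)(first_seqnum & 0xFF);  /* low     */
}
```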


5.2 Traversing the RJSS

Whenever an MWPP RTP packet is created (Step 3 in the algorithm defined
in Figure 8), the sender traverses the RJSS to create the recovery
journal for the packet. The traversal begins at the top level of the
RJSS. The sender copies jheader[] into the packet, and then sets the S
bit of jheader[] to 1.

The traversal continues depth-first, visiting every jsend_channel whose
seqnum variable is non-zero. The sender copies the cheader[] array into
the packet, and then sets the S bit of cheader[] to 1.  After each
cheader[] copy, the sender visits each leaf-level chapter, in order of
its appearance in the chapter journal Table of Contents (first P, then
W, then N, then C, as shown in Figure 8 of [1]).

If a chapter has a non-zero seqnum, the sender copies the chapter
bitfield array into the packet, and then sets the S bit of the RJSS
array to 1. For chaptern[], the B bit is also set to 1. For the
variable-length chapters (chaptern[] and chapterc[]), the sender checks
the size variable to determine the bitfield length.

Before copying chaptern[], the sender updates the Y bit of each note log
to code the onset of the associated NoteOn command (Figure A.4.3 in
[1]). To determine the Y bit value, the sender checks the note_tstamp[]
array for note timing information.
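One step of this traversal (copy a bitfield into the packet, then set the stored S bit) might be sketched as below. The helper is illustrative, and assumes the S bit is the top bit of the first octet of each bitfield.

```c
#include <stdint.h>
#include <string.h>

/* Copies len octets of bitfield into out, then sets the stored
 * S bit so the next packet's journal is marked unchanged-since-
 * last-send; returns the number of octets written. */
int copy_and_mark(uint8_t *out, uint8_t *bitfield, int len)
{
  memcpy(out, bitfield, len);   /* packet sees the old S bit */
  bitfield[0] |= 0x80;          /* stored copy now has S = 1 */
  return len;
}
```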








Lazzaro/Wawrzynek                                              [Page 21]


INTERNET-DRAFT                                             1 March 2003


5.3 Updating the RJSS

After an MWPP RTP packet is sent, the sender updates the RJSS to refresh
the checkpoint history (Step 5 in the sending algorithm defined in
Figure 8). For each command in the MIDI list of the sent packet, the
sender performs the update procedure we describe below.

The update procedure begins at the leaf level. The sender generates a
new bitfield array for the chapter associated with the MIDI command,
using the chapter-specific semantics defined in Appendix A of [1].  For
the fixed-length chapterp[] or chapterw[], the sender operates directly
on the bitfields. For the variable-length chaptern[] or chapterc[], the
sender uses a two-step update algorithm:

  1. The sender updates the state arrays for the command note number
     (Chapter N) or controller number (Chapter C). These arrays, in
     jsend_chaptern or jsend_chapterc in Figure 9, code the packet
     extended sequence number (note_seqnum[] and control_seqnum[]),
     the command execution timestamp (note_tstamp[]), and information
     from the command data field (note_state[] or control_state[]).

  2. The sender generates the chaptern[] or chapterc[] bitfields, by
     looping through the state arrays. If the note_seqnum[] or
     control_seqnum[] value for an array index is non-zero, the
     sender examines the associated note_state[] or control_state[]
     array element, and codes data from the element into the bitfield.
     After the looping completes, the sender sets the chapter size
     variable to code the final bitfield length.

In addition, the sender clears the S bit of the chapterp[], chapterw[],
or chapterc[] bitfield. For chaptern[], the sender clears the S bit or
the B bit of the bitfield, as described in Appendix A.4 of [1].

Next, the sender refreshes the upper levels of the RJSS hierarchy. At
the second level, the sender updates the cheader[] bitfield of the
channel associated with the command. The sender sets the S bit of
cheader[] to 0. If the new command forced the addition of a new chapter
or channel journal, the sender may also update other cheader[] fields.
At the top level, the sender updates the jheader[] bitfield in a
similar manner.

Finally, the sender updates the seqnum variables associated with the
changed bitfield arrays. The sender sets the seqnum variables to the
extended sequence number of the packet.
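For the fixed-length Chapter W case, the update might be sketched as below. The bitfield layout is an assumed reading of Figure A.3.1 of [1]: octet 0 is S | FIRST, octet 1 is R | SECOND, where FIRST and SECOND are the data octets of the Pitch Wheel command. The update clears the S bit, since the chapter now holds new information.

```c
#include <stdint.h>

typedef struct jsend_chw {
  uint8_t  chapterw[2];
  uint32_t seqnum;
} jsend_chw;

void update_chapterw(jsend_chw *w, uint8_t first, uint8_t second,
                     uint32_t ext_seqnum)
{
  w->chapterw[0] = first & 0x7F;   /* S = 0, FIRST data octet   */
  w->chapterw[1] = second & 0x7F;  /* R = 0, SECOND data octet  */
  w->seqnum = ext_seqnum;          /* packet coding the command */
}
```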







Lazzaro/Wawrzynek                                              [Page 22]


INTERNET-DRAFT                                             1 March 2003


5.4 Trimming the RJSS

At regular intervals, receivers send RTCP receiver reports to the sender
(as described in Section 6.4.2 of [2]). These reports include the
extended highest sequence number received (EHSNR) field. This field
codes the highest sequence number that the receiver has observed from
the sender, extended to disambiguate sequence number rollover.

When the sender receives an RTCP receiver report, it runs the RJSS
trimming algorithm. The trimming algorithm uses the EHSNR to trim away
parts of the RJSS, and thus reduce the size of recovery journals sent in
subsequent RTP packets.

The algorithm (as applied to a two-party session) relies on the
following observation: if the EHSNR indicates that a packet with
sequence number K has been received, MIDI commands sent in packets with
sequence numbers I <= K may be removed from the RJSS without violating
the recovery journal mandate defined in Section 4 of [1].

To begin the trimming algorithm, the sender extracts the EHSNR field
from the receiver report, and adjusts the EHSNR to reflect the sequence
number extension prefix of the sender. Then, the sender compares the
adjusted EHSNR value with seqnum fields at each level of the RJSS,
starting at the top level.

Levels whose seqnum is less than or equal to the adjusted EHSNR are
trimmed, by setting the seqnum to zero. If necessary, the jheader[] and
cheader[] arrays above the trimmed level are adjusted to match the new
journal layout. The checkpoint packet sequence number field of jheader[]
is updated to match the EHSNR.

At the leaf level, the sender trims the size of the variable-length
chaptern[] and chapterc[] bitfields. The sender loops through the
note_seqnum[] or control_seqnum[] array, and clears elements whose value
is less than or equal to the adjusted EHSNR. The sender then creates a
new chaptern[] or chapterc[] bitfield, and updates the LENGTH field of
the associated cheader[] bitfield.

Note that the trimming algorithm does not add information to the
checkpoint history. As a consequence, the trimming algorithm does not
clear the S bit (and for chaptern[], the B bit) of any recovery journal
bitfield. As a second consequence, the trimming algorithm does not set
RJSS seqnum variables to the EHSNR value.
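The leaf-level seqnum sweep might be sketched as follows; the helper name and return value are illustrative. Any per-number seqnum at or below the adjusted EHSNR is cleared, removing that note or controller number from the checkpoint history; the caller then rebuilds the chapter bitfield from the entries that remain.

```c
#include <stdint.h>

/* Returns the count of entries still at risk to packet loss. */
int trim_seqnums(uint32_t *seqnum, int n, uint32_t ehsnr)
{
  int i, remaining = 0;

  for (i = 0; i < n; i++) {
    if (seqnum[i] != 0 && seqnum[i] <= ehsnr)
      seqnum[i] = 0;     /* receiver has seen this command */
    if (seqnum[i] != 0)
      remaining++;       /* still unprotected against loss */
  }
  return remaining;
}
```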








Lazzaro/Wawrzynek                                              [Page 23]


INTERNET-DRAFT                                             1 March 2003


5.5 Implementation Notes

For clarity, the recovery journal sender implementation we describe has
been simplified in several ways. In this section, we discuss the
improvements that would be found in a complete, efficient sender
implementation suitable for use in a production system.

In a production implementation, the sending structure shown in Figure 9
would be modified to cover the full recovery journal syntax.  Chapter
journal structures would be added for the missing channel and system
chapters defined in Appendices A and B of [1].

An efficient implementation would use enhanced versions of the
traversing, updating, and trimming algorithms presented in Sections
5.2-4. In particular, the Chapter N and Chapter C algorithms would use
more sophisticated RJSS data structures, in order to avoid looping
through all 128 note or controller numbers. The recovery journal sender
implemented in [19] includes enhancements of this type.

Finally, a production sender implementation would probably implement
algorithms that support a variety of MWPP application domains (two-party
topologies and multi-party topologies, RTCP and no-RTCP, etc). In the
Appendices of this memo, we discuss recovery journal sender issues for
application domains beyond the two-party example system described above.



























Lazzaro/Wawrzynek                                              [Page 24]


INTERNET-DRAFT                                             1 March 2003


6. Receiving MWPP Streams: General Considerations

In this section, we discuss MWPP receiver implementation issues, in the
context of the interactive session introduced in Section 2.

To begin, we imagine that an ideal network carries the RTP stream.
Packets are never lost or reordered, and the end-to-end latency is
constant. In addition, we assume that all MIDI commands coded in the
MIDI list of a packet share the same command execution timestamp (as
defined in Section 3 of [1]), and that the default semantics for command
timestamps are in effect.

Under these conditions, a simple algorithm may be used to render a high-
quality performance. Upon the receipt of an RTP packet, the receiver
immediately executes the commands coded in the MIDI command section of
the payload. Commands are executed in order of their appearance in the
MIDI list. The command timestamps are ignored.

Unfortunately, this simple algorithm breaks down once we relax our
assumptions about the network and the MIDI list:

  1. If we permit lost and reordered packets to occur in the
     network, the algorithm may produce unrecoverable rendering
     artifacts, violating the mandate defined in Section 4 of [1].

  2. If we permit the network to exhibit variable latency, the
     algorithm modulates the network jitter onto the rendered
     MIDI command stream.

  3. If we permit a MIDI list to code commands with different
     timestamps, the algorithm adds temporal jitter to the
     rendered performance, as it ignores MIDI list timestamps.

In this section, we discuss interactive receiver design techniques under
these relaxed assumptions (see Appendix A for a discussion of content
streaming receiver design).

Interactive receiver design is not a "one size fits all" endeavor.
Applications often target specific types of network environments, and
receiver algorithms are crafted to work well on those networks. In the
sections below, we describe a complete receiver design for high-
performance WAN networks (Section 6.1) and discuss design issues for
other types of networks (Section 6.2).








Lazzaro/Wawrzynek                                              [Page 25]


INTERNET-DRAFT                                             1 March 2003


6.1 The Network Musical Performance (NMP) Receiver Design

In this section, we describe the MWPP receiver implemented in the
Network Musical Performance (NMP) system described in [13] and implemented
in [19].

The NMP system is an interactive musical performance application that
uses an early prototype version of MWPP. Musicians located at different
sites interact over the network to perform as they would if located in
the same room, using MIDI controllers as instruments. NMP is designed
for use between university sites within the State of California in the
USA, using the CalREN2 network.

In an NMP session, network artifacts may affect how a musician hears the
MIDI performances of remote players. However, the network does not
affect how a musician hears his own performance. In this way, NMP
differs from MWPP LAN applications. In LAN work, a musician usually
hears his own MIDI performance via the network link.

Several aspects of CalREN2 network behavior (as measured in the 2001
timeframe and documented in [13]) guided the NMP system design:

  o  The median symmetric latency (1/2 the round-trip time)
     of packets sent between network sites is comparable to the
     acoustic latency between two musicians located in the same
     room. For example, the latency between Berkeley and Stanford
     is 2.1 ms, corresponding to an acoustic distance of 2.4 feet
     (0.72 meters). These campuses are 40 miles (64 km) apart.

  o  For most times of day, the nominal temporal jitter is
     quite short (for Berkeley-Stanford, the standard deviation
     of the round-trip time was under 200 microseconds).

  o  For most times of day, a few percent (0-4%) of the packets
     sent arrive significantly late (> 40 ms), probably due
     to a queuing transient somewhere in the network path.
     More rarely (< 0.1%), a packet is lost during the transient.

  o  At predictable times during the day (before lunchtime,
     at the end of the workday, etc), network performance
     deteriorates (10-20% late packets) in a manner that makes
     the network unsuitable for low-latency interactive use.

  o  CalREN2 has deeply over-provisioned bandwidth, relative to
     MIDI bandwidth usage.






Lazzaro/Wawrzynek                                              [Page 26]


INTERNET-DRAFT                                             1 March 2003


The NMP sender freely uses network bandwidth to improve the performance
experience. As soon as a musician generates a MIDI command, an RTP
packet coding the command is sent to the other players. This sending
algorithm reduces latency at the cost of bandwidth. In addition, guard
packets (described in Section 4.2) are sent at frequent intervals, to
minimize the impact of packet loss.

The NMP receiver maintains a model of the stream, and uses this model as
the basis of its resiliency system. Upon the receipt of an MWPP packet,
the receiver predicts the RTP sequence number and the RTP timestamp
(with error bars) of the packet. Under normal network conditions, about
95% of received packets fit the predictions [13]. In this common case,
the receiver immediately executes the MIDI command coded in the packet.
Note that the NMP receiver does not use a playout buffer; the design is
optimized for lowest latency at the expense of command jitter.

Occasionally, an incoming packet fits the sequence number prediction but
falls outside the timestamp prediction error bars (see Appendix B of
[13] for timestamp model details). In most cases, the receiver still
executes the MIDI command coded in the packet. An important exception is
MIDI NoteOn commands with non-zero velocity: the receiver discards these
commands. By discarding late commands that sound notes, the receiver
prevents "straggler notes" from disturbing a performance. By executing
all other late MIDI commands, the receiver quiets "soft stuck notes"
immediately, and updates all other MIDI state in an acceptable way.
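The late-command policy described above might be sketched as below; the helper is illustrative and assumes 3-byte voice commands.

```c
#include <stdint.h>

/* Returns 1 if a late command should be executed, 0 if not. */
int execute_late_command(uint8_t status, uint8_t data2)
{
  if ((status & 0xF0) == 0x90 && data2 != 0)
    return 0;   /* late note-sounding command: discard */
  return 1;     /* all other late commands: execute    */
}
```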

More rarely, an incoming packet does not fit the sequence number
prediction.  The receiver keeps track of the highest sequence number
received in the stream, and predicts that an incoming packet will have a
sequence number one greater than this value. If the sequence number of
an incoming packet is greater than the prediction, a packet loss has
occurred. If the sequence number of the received packet is less than the
prediction, the packet has been received out of order. All sequence
number calculations are modulo 2^16, and use standard methods (described
in [2]) to avoid tracking errors during rollover.
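The modulo-2^16 classification might be sketched as below. Casting the sequence number difference to a signed 16-bit value yields the standard rollover-safe comparison, so the test remains correct when the sequence number wraps past 0xFFFF.

```c
#include <stdint.h>

enum pkt_class { PKT_IN_ORDER, PKT_LOSS_GAP, PKT_REORDERED };

enum pkt_class classify(uint16_t highest_seen, uint16_t incoming)
{
  uint16_t predicted = (uint16_t)(highest_seen + 1);
  int16_t  delta = (int16_t)(uint16_t)(incoming - predicted);

  if (delta == 0)
    return PKT_IN_ORDER;   /* matches the prediction        */
  if (delta > 0)
    return PKT_LOSS_GAP;   /* gap: packet loss has occurred */
  return PKT_REORDERED;    /* behind: received out of order */
}
```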

If a packet loss has occurred, the receiver examines the journal section
of the received packet, and uses it to gracefully recover from the loss
episode. We describe this recovery procedure in Section 7 of this memo.
The recovery process may result in the execution of one or more MIDI
commands. After executing the recovery commands, the receiver processes
the MIDI command encoded in the packet, using the timestamp model test
described above.








Lazzaro/Wawrzynek                                              [Page 27]


INTERNET-DRAFT                                             1 March 2003


If a packet is received out of order, the receiver ignores the packet.
The receiver takes this action because a packet received out of order is
always preceded by a packet that signalled a loss event. This loss event
triggered the recovery process, which may have executed recovery
commands. The MIDI command coded in the out-of-order packet might, if
executed, duplicate these recovery commands, and this duplication might
endanger the integrity of the stream. Thus, ignoring the out-of-order
packet is the safe approach.

6.2 Receiver Design Issues

The NMP receiver targets a network with a particular set of
characteristics: low nominal jitter, low packet loss, and occasional
outlier packets that arrive very late. In this section, we consider how
networks with different characteristics impact MWPP receiver design.

Networks with significant nominal jitter cannot use the buffer-free
receiver design described in Section 6.1. For example, the NMP system
performs poorly for musicians who use dial-up modem connections,
because the buffer-free receiver design modulates modem jitter onto the
performances. Receivers designed for high-jitter networks should use a
playout buffer. References [17] and [20] describe how to use playout
buffers in latency-critical applications. Appendix A.2 may also be of
interest, as it addresses MWPP-specific playout buffer issues.

Receivers intended for use on LANs face a different set of issues. A
dedicated LAN fabric built with modern hardware is in many ways a
predictable environment. The network problems addressed by the NMP
receiver design (packet loss and outlier late packets) might only occur
under extreme network overload conditions.

Systems designed for this environment may choose to configure streams
without the recovery journal system (Appendix C.1.1 of [1]).  Receivers
may also wish to forego, or simplify, the detection of outlier late
packets. Receivers should monitor the RTP sequence numbers of incoming
packets, to detect network unreliability.

However, in some respects, LAN applications may be more demanding than
WAN applications. In LAN applications, musicians may be receiving
performance feedback from audio that is rendered from the MWPP stream.
The tolerance a musician has for latency and jitter in this context may
be quite low.

To reduce the perceived jitter, receivers may use a small playout buffer
(in the range of 100 us to 2 ms). The buffer does add a small amount of
latency to the system, which may be annoying to some players. Receiver
designs should include buffer tuning parameters, to let musicians adjust
the tradeoff between latency and jitter.



Lazzaro/Wawrzynek                                              [Page 28]


INTERNET-DRAFT                                             1 March 2003


7. Receiving MWPP Streams: The Recovery Journal

In this section, we describe the recovery algorithm used by the NMP
receiver [13]. In most ways, the recovery techniques we describe are
generally applicable to interactive MWPP receiver design. However, a few
aspects of the design are specialized for the NMP system:

  o The recovery algorithm covers the subset of MIDI commands
    used by MPEG 4 Structured Audio [7]. Structured Audio does
    not use MIDI Systems (0xF) commands, and uses MIDI
    Control Change (0xB) commands in a simplified way.

  o The NMP system does not use a playout buffer, and so the
    recovery algorithm does not address interactions with a
    playout buffer.

In addition, to simplify the discussion, we omit receiver support for
the Poly Aftertouch (0xA) and Channel Aftertouch (0xD) voice commands.

At a high level, the receiver algorithm works as follows. Upon the
detection of a packet loss, the receiver examines the recovery journal
of the packet that ends the loss event. If necessary, the receiver
executes one or more MIDI commands to recover from the loss.

To prepare for recovery, a receiver maintains a data structure, the
Recovery Journal Receiver Structure (RJRS). The RJRS codes information
about the MIDI commands the receiver executes (both incoming stream
commands and self-generated recovery commands). At the start of the
stream, the RJRS is initialized to code that no commands have been
executed.  Immediately after executing a MIDI command, the receiver
updates the RJRS with information about the command.

We now describe the recovery algorithm in detail. We begin with two
definitions that classify loss events. These definitions assume that the
packet that ends the loss event has RTP sequence number I.

  o Single-packet loss. A single-packet loss occurs if the last
    packet received before the loss event (excluding out-of-order
    packets) has the sequence number I-2 (modulo 2^16).

  o Multi-packet loss. A multi-packet loss occurs if the last
    packet received before the loss event (excluding out-of-order
    packets) has a sequence number less than I-2 (modulo 2^16).
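Under these definitions, the classification reduces to a modulo-2^16 difference test; the sketch below is illustrative.

```c
#include <stdint.h>

/* i is the sequence number of the packet ending the loss event.
 * Returns 1 for a single-packet loss, 0 for a multi-packet loss. */
int single_packet_loss(uint16_t last_in_order, uint16_t i)
{
  return (uint16_t)(i - last_in_order) == 2;
}
```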








Lazzaro/Wawrzynek                                              [Page 29]


INTERNET-DRAFT                                             1 March 2003


Upon the detection of a packet loss, the recovery algorithm begins by
examining the recovery journal header (Figure 7 of [1]), to check for
several special cases:

  o If the header field A is 0, the recovery journal has no channel
    journals, and so no action is taken. Note that if this algorithm
    supported MIDI Systems commands, it would also examine the Y field.

  o If a single-packet loss has occurred, and the header S bit is
    1, the lost packet has a MIDI command section with an empty
    MIDI list. No action is taken.
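These two checks might be sketched as follows. Header bit extraction is left to the caller, since it depends on the exact Figure 7 layout in [1]; the helper is illustrative.

```c
/* Returns 0 if no recovery parsing is needed, 1 otherwise. */
int journal_needs_parse(int a_bit, int s_bit, int single_loss)
{
  if (a_bit == 0)
    return 0;   /* journal holds no channel journals        */
  if (single_loss && s_bit)
    return 0;   /* lost packet had an empty MIDI list       */
  return 1;     /* parse channel journals for inconsistency */
}
```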

If these checks fail, the recovery algorithm proceeds to parse the
recovery journal body. For each channel journal (Figure 8 in [1]) in the
recovery journal, the receiver compares the data in each chapter journal
(Appendix A of [1]) to the RJRS data for the chapter. If the data are
inconsistent, the algorithm infers that MIDI command(s) related to the
chapter journal have been lost. The recovery algorithm executes MIDI
commands to repair this loss, and updates the RJRS to reflect the
repair.

For multi-packet losses, the receiver parses each channel and chapter
journal and checks for inconsistency. For single-packet losses, journal
parsing is more efficient, as the receiver may skip channel and chapter
journals whose S bits are set to 1.

If the NMP recovery algorithm supported MIDI System commands, the
system chapters (Appendix B in [1]) of the system journal (Figure 9 in
[1]) would be compared to systems data stored in the RJRS. If the
recovery algorithm discovered inconsistency, MIDI System commands would
be executed to repair the loss.

In the sections that follow, we describe the recovery steps that are
specific to each chapter journal. We also describe how to update the
RJRS for the command types associated with the chapter journal. We cover
4 chapter journal types: W (Pitch Wheel, 0xE), N (Note, 0x8 and 0x9), C
(Control Change, 0xB) and P (Program Change, 0xC). Chapters are parsed
in the order of appearance in the Table of Contents of the channel
journal header (P, then W, then N, then C).

The sections below reference the C implementation of the RJRS shown in
Figure 10. This structure is hierarchical, reflecting the recovery
journal architecture. At the leaf level, specialized data structures
(jrec_chapterw, jrec_chaptern, jrec_chapterc, and jrec_chapterp) code
state variables for a single chapter journal type. A mid-level structure
(jrec_channel) represents a single MIDI channel, and a top-level
structure (jrec_stream) represents the entire MIDI stream.




Lazzaro/Wawrzynek                                              [Page 30]


INTERNET-DRAFT                                             1 March 2003


  typedef unsigned char  uint8;       /* must be 1 octet  */
  typedef unsigned short uint16;      /* must be 2 octets */
  typedef unsigned long  uint32;      /* must be 4 octets */


  /***********************************************************/
  /* leaf level of hierarchy: Chapter W, Appendix A.3 of [1] */
  /***********************************************************/

  typedef struct jrec_chapterw {   /* MIDI Pitch Wheel (0xE) */

   uint16 val;           /* most recent 14-bit wheel value   */

  } jrec_chapterw;


  /***********************************************************/
  /* leaf level of hierarchy: Chapter N, Appendix A.4 of [1] */
  /***********************************************************/

  typedef struct jrec_chaptern { /* Note commands (0x8, 0x9) */

   /* arrays of length 128 --> one for each MIDI Note number */

   uint32 time[128];    /* exec time of most recent NoteOn */
   uint32 extseq[128];  /* extended seqnum for that NoteOn */
   uint8  vel[128];     /* NoteOn velocity (0 for NoteOff) */

  } jrec_chaptern;


  /***********************************************************/
  /* leaf level of hierarchy: Chapter C, Appendix A.7 of [1] */
  /***********************************************************/

  typedef struct jrec_chapterc {     /* Control Change (0xB) */

   /* array of length 128 --> one for each controller number */

   uint8 value[128];   /* Chapter C value tool state */
   uint8 count[128];   /* Chapter C count tool state */
   uint8 toggle[128];  /* Chapter C toggle tool state */

  } jrec_chapterc;



   Figure 10 -- Recovery Journal Receiving Structure (part 1)



  /***********************************************************/
  /* leaf level of hierarchy: Chapter P, Appendix A.2 of [1] */
  /***********************************************************/

  typedef struct jrec_chapterp { /* MIDI Program Change (0xC) */

   uint8 prognum;       /* most recent 7-bit program value  */
   uint8 prognum_qual;  /* 1 once first 0xC command arrives */

   uint8 coarse;        /* most recent bank coarse value  */
   uint8 coarse_qual;   /* 1 once first 0xBn 0x00 arrives */

   uint8 fine;          /* most recent bank fine value    */
   uint8 fine_qual;     /* 1 once first 0xBn 0x20 arrives */

  } jrec_chapterp;



  /***************************************************/
  /* second-level of hierarchy, for MIDI channels    */
  /***************************************************/

  typedef struct jrec_channel {

   jrec_chapterw chapterw;  /* Pitch Wheel (0xE) info  */
   jrec_chaptern chaptern;  /* Note (0x8, 0x9) info  */
   jrec_chapterp chapterp;  /* Program Change (0xC) info  */
   jrec_chapterc chapterc;  /* Control Change (0xB) info  */

  } jrec_channel;



  /***********************************************/
  /* top level of hierarchy, for the MIDI stream */
  /***********************************************/

  typedef struct jrec_stream {

   jrec_channel channels[16];  /* index is MIDI channel */

  } jrec_stream;




  Figure 10 (continued) -- Recovery Journal Receiving Structure



7.1 Chapter W: MIDI Pitch Wheel (0xE)

Chapter W of the recovery journal protects against the loss of MIDI
Pitch Wheel (0xE) commands. A common use of the Pitch Wheel command is
to transmit the current position of a "pitch wheel" controller placed on
the side of MIDI piano controllers. Players use the pitch wheel to
dynamically alter the pitch of all depressed keys.

The NMP receiver maintains the jrec_chapterw structure (Figure 10) for
each voice channel in jrec_stream, to code pitch wheel state
information. In jrec_chapterw, val holds the 14-bit data value of the
most recent Pitch Wheel command that has arrived on a channel. At the
start of the stream, val is initialized to the default pitch wheel value
(0x2000).

The NMP receiver uses jrec_chapterw in its recovery algorithm. While
parsing the recovery journal, it may find a Chapter W (Appendix A.3 in
[1]) bitfield in a channel journal. This chapter codes the 14-bit data
value of the most recent MIDI Pitch Wheel command in the checkpoint
history. If the Chapter W and jrec_chapterw pitch wheel values do not
match, one or more commands have been lost.

To recover from this loss, the NMP receiver immediately executes a MIDI
Pitch Wheel command on the channel, using the data value coded in the
recovery journal. The receiver then updates the jrec_chapterw variables
to reflect the executed command.
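The Chapter W check and repair reduce to a few lines of C. The sketch
below follows the jrec_chapterw structure of Figure 10;
execute_pitchwheel() is a hypothetical rendering hook (here a stub that
records the command), not a function defined by MWPP.

```c
#include <assert.h>

typedef unsigned short uint16;

typedef struct jrec_chapterw {
   uint16 val;                 /* most recent 14-bit wheel value */
} jrec_chapterw;

/* Hypothetical rendering hook: a real receiver would issue a MIDI
   Pitch Wheel (0xE) command on the channel. This stub just records
   the command so the sketch is self-contained. */
static uint16 last_wheel[16];
static void execute_pitchwheel(int channel, uint16 wheel)
{
   last_wheel[channel] = wheel;
}

/* Compare the Chapter W value against receiver state, and recover if
   they differ. Returns 1 if a recovery command was executed. */
int chapterw_recover(jrec_chapterw *w, int channel, uint16 chapter_val)
{
   if (w->val == chapter_val)
      return 0;                              /* no loss detected     */
   execute_pitchwheel(channel, chapter_val); /* align the stream     */
   w->val = chapter_val;                     /* update journal state */
   return 1;
}
```

The same compare-execute-update pattern recurs in the other chapters;
Chapter W is simply its smallest instance.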


7.2 Chapter N: MIDI NoteOn (0x9) and NoteOff (0x8)

Chapter N of the recovery journal protects against the loss of MIDI
NoteOn (0x9) and NoteOff (0x8) commands. In this section, we consider
NoteOn commands with a velocity value of 0 to be NoteOff commands. If an
unprotected NoteOn command is lost, a note is skipped. If an unprotected
NoteOff command is lost, a note may sound indefinitely.

The NMP receiver maintains the jrec_chaptern structure (Figure 10) for
each voice channel in jrec_stream, to code note-related state
information. State is kept for each of the 128 note numbers on a
channel, using three arrays of length 128 (vel[], extseq[], and
time[]).
The elements of these arrays are initialized to zero at the start of a
stream.

The vel[n] array element holds information about the most recent note
command for note number n. If this command is a NoteOn command, vel[n]
holds the velocity data for the command. If this command is a NoteOff
command, vel[n] is set to 0. The time[n] and extseq[n] array elements
code information about the most recently executed NoteOn command.



The time[n] element holds the execution time of the command, referenced
to the local timebase of the receiver. The extseq[n] element holds the
RTP extended sequence number of the packet associated with the command.
For incoming stream commands, extseq[n] codes the packet of the
associated MIDI list. For recovery commands, extseq[n] codes the packet
of the associated recovery journal.

The NMP receiver uses the jrec_chaptern state information in its
recovery algorithm. The Chapter N recovery journal bitfield (Figure
A.4.1 in [1]) consists of two data structures: a bit array coding
recently-sent NoteOff commands that are vulnerable to packet loss, and a
note log list coding recently-sent NoteOn commands that are vulnerable
to packet loss.

Recovery processing begins with the NoteOff bit array. For each set bit
in the array, the receiver checks the corresponding vel[n] element in
jrec_chaptern. If vel[n] is non-zero, a NoteOff command, or a
NoteOff->NoteOn->NoteOff command sequence, has been lost. To recover
from this loss, the receiver immediately executes a NoteOff command for
the note number on the channel, and sets vel[n] to 0.
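A minimal sketch of this bit array pass follows, assuming the Chapter N
NoteOff bitfield has already been unpacked into a 128-entry flag array;
execute_noteoff() is a hypothetical rendering hook, not part of MWPP.

```c
#include <assert.h>

typedef unsigned char uint8;

/* Hypothetical rendering hook: a real receiver would issue a NoteOff
   command for note n on the channel. The stub counts commands so the
   sketch is testable. */
static int noteoffs_sent;
static void execute_noteoff(int channel, int n)
{
   (void)channel; (void)n;
   noteoffs_sent++;
}

/* offbits[n] is 1 if note n is set in the Chapter N NoteOff bit array
   (assumed already unpacked); vel[] is the jrec_chaptern velocity
   array of Figure 10. */
void chaptern_offbits_recover(int channel, const uint8 offbits[128],
                              uint8 vel[128])
{
   int n;
   for (n = 0; n < 128; n++) {
      if (offbits[n] && vel[n] != 0) {
         /* a NoteOff (or NoteOff->NoteOn->NoteOff) was lost */
         execute_noteoff(channel, n);
         vel[n] = 0;
      }
   }
}
```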

The receiver then parses the note log list. For each NoteOn log in the
list, the receiver checks the corresponding vel[n] element.

If vel[n] is zero, a NoteOn command, or a NoteOn->NoteOff->NoteOn
command sequence, has been lost. The receiver may execute the most
recent lost NoteOn (to play the note) or may take no action (to skip the
note), based on criteria we describe at the end of this section.
Whether the note is played or skipped, the receiver updates the vel[n],
time[n], and extseq[n] elements as if the NoteOn executed.

If vel[n] is non-zero, the receiver performs several checks to test if a
NoteOff->NoteOn sequence has been lost.

  o If vel[n] does not match the note log velocity, the note log
    must code a different NoteOn command, and thus a NoteOff->NoteOn
    sequence has been lost.

  o If extseq[n] is less than the (extended) checkpoint packet
    sequence number coded in the recovery journal header (Figure 7
    of [1]), the vel[n] NoteOn command is not in the checkpoint
    history, and thus a NoteOff->NoteOn sequence has been lost.

  o If the Y bit is set to 1, the NoteOn is musically "simultaneous"
    with the RTP timestamp of the packet. If time[n] codes a time value
    that is clearly not recent, a NoteOff->NoteOn sequence has been lost.





If these tests indicate a lost NoteOff->NoteOn sequence, the receiver
immediately executes a NoteOff command.  The receiver decides if the
most graceful action is to play or to skip the lost NoteOn, using the
criteria we describe at the end of this section. Whether or not the
receiver issues a NoteOn command, the vel[n], time[n], and extseq[n]
arrays are updated as if it did.
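The three tests can be collected into a single predicate. The sketch
below is illustrative only: the STALE_TIME threshold and the argument
layout are assumptions, not values defined by MWPP.

```c
#include <assert.h>

typedef unsigned char uint8;
typedef unsigned long uint32;

/* "clearly not recent", in local time units: an assumed threshold */
#define STALE_TIME 1000

/* vel_n, extseq_n, time_n are the jrec_chaptern entries for note n;
   log_vel and log_ybit come from the parsed note log; now is the
   local receiver time. Returns 1 if a NoteOff->NoteOn sequence has
   been lost. */
int noteoff_noteon_lost(uint8 vel_n, uint32 extseq_n, uint32 time_n,
                        uint8 log_vel, int log_ybit,
                        uint32 checkpoint_extseq, uint32 now)
{
   if (vel_n != log_vel)
      return 1;   /* note log codes a different NoteOn command    */
   if (extseq_n < checkpoint_extseq)
      return 1;   /* recorded NoteOn predates checkpoint history  */
   if (log_ybit && now - time_n > STALE_TIME)
      return 1;   /* "simultaneous" NoteOn, but state is stale    */
   return 0;
}
```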

Note that the tests above do not catch all lost NoteOff->NoteOn
commands. If a fast NoteOn->NoteOff->NoteOn sequence occurs on a note
number, with identical velocity values for both NoteOn commands, a lost
NoteOff->NoteOn does not result in the recovery algorithm generating a
NoteOff command. Instead, the first NoteOn continues to sound, to be
terminated by the future NoteOff command.  In practice, this (rare)
outcome is not musically objectionable.

Finally, we discuss how the receiver decides whether to play or to skip
a lost NoteOn command. The note log Y bit is set if the NoteOn is
"simultaneous" with the RTP timestamp of the packet holding the note
log. If Y is 0, the receiver does not execute a NoteOn command. If Y is
1, and if the packet has not arrived late, the receiver immediately
executes a NoteOn command for the note number, using the velocity coded
in the note log.


7.3 Chapter C: MIDI Control Change (0xB)

Chapter C (Appendix A.7 in [1]) protects against the loss of MIDI
Control Change commands.  A Control Change command alters the 7-bit
value of one of the 128 MIDI controllers.

Chapter C offers three tools for protecting a Control Change command:
the value tool (for graded controllers such as sliders), the toggle tool
(for on/off switches), and the count tool (for momentary-contact
switches). Senders choose a tool to encode recovery information for a
controller, and encode the tool type along with the data in the journal
(Figures A.7.2 and A.7.3 in [1]).

A few uses of Control Change commands are not solely protected by
Chapter C. The protection of controllers 0 and 32 (Bank Coarse and Bank
Fine) is shared between Chapter C and Chapter P (Section 7.4).

In addition, some controllers are used to implement a system for setting
secondary parameters (the Registered Parameter Number (RPN) and the Non-
Registered Parameter Number (NRPN) systems). Chapter M (Appendix A.8 of
[1]) protects the RPN and NRPN system. MPEG 4 Structured Audio [7] does
not use these systems, and so the NMP system does not use Chapter M.





The NMP receiver maintains the jrec_chapterc structure (Figure 10) for
each voice channel in jrec_stream, to code Control Change state
information. The value[] array holds the most recent data values for
each controller number. At the start of the stream, value[] is
initialized to the SA default controller data values specified in [7].

The count[] and toggle[] arrays hold the count tool and toggle tool
state values. At the start of a stream, these arrays are initialized to
zero. Whenever a Control Change command executes, the receiver updates
the
count[] and toggle[] state values, using the algorithms described in
Appendix A.7 of [1].

The NMP receiver uses the jrec_chapterc state information in its
recovery algorithm. The Chapter C bitfield consists of a list of
controller logs. Each log codes the controller number, the tool type,
and the state value for the tool.

For the log for controller number n, the receiver determines the tool
type in use (value, toggle, or count), and compares the data in the log
to the associated jrec_chapterc array element (value[n], toggle[n], or
count[n]). If the data do not match, one or more Control Change commands
have been lost.

The method the NMP receiver uses to recover from this loss depends on
the tool type and the controller number. For graded controllers
protected by the value tool, the receiver executes a Control Change
command using the new data value.

For the toggle and count tools, the recovery action is more complex.
For example, the Hold Pedal (64) controller is typically used as a
sustain pedal for piano-like sounds, and is typically coded using the
toggle tool. If Hold Pedal Control Change command(s) are lost, the NMP
receiver takes different actions depending on the starting and ending
state of the lost sequence, to ensure "ringing" piano notes are "damped"
to silence.

After recovering from the loss, the receiver updates the value[],
toggle[], and count[] arrays to reflect the Chapter C data and the
executed commands.
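For the value tool, the recovery step above reduces to a comparison and
a command. The sketch below handles one controller log;
execute_control_change() is a hypothetical rendering hook, and the
toggle and count tool actions are omitted.

```c
#include <assert.h>

typedef unsigned char uint8;

/* Hypothetical rendering hook: a real receiver would issue a Control
   Change (0xB) command. The stub records the command for the test. */
static int cc_number = -1, cc_value = -1;
static void execute_control_change(int channel, int number, int value)
{
   (void)channel;
   cc_number = number;
   cc_value = value;
}

/* Value-tool recovery for one Chapter C controller log: n is the
   controller number, log_value the 7-bit data in the log, value[]
   the jrec_chapterc value array. Returns 1 on recovery. */
int chapterc_value_recover(int channel, int n, uint8 log_value,
                           uint8 value[128])
{
   if (value[n] == log_value)
      return 0;                       /* state agrees: nothing lost */
   execute_control_change(channel, n, log_value);
   value[n] = log_value;              /* reflect executed command   */
   return 1;
}
```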


7.4 Chapter P: MIDI Program Change (0xC)

Chapter P of the recovery journal protects against the loss of MIDI
Program Change (0xC) commands. A common use for Program Change commands
is to select the timbre of a channel. The 7-bit data value of the
command selects one of 128 possible timbres. The binding of data values
to instrument timbres is managed by the rendering algorithm in use.



To increase the number of possible timbres, MIDI Control Change (0xB)
commands may be issued prior to the Program Change command, to select
which "bank" of programs is in use. The Bank Coarse (controller number
0) and Bank Fine (controller number 32) Control Change commands may be
used together, to specify the 14-bit bank number that subsequent Program
Change commands reference. Alternatively, the Bank Coarse controller
number may be used alone to specify a 7-bit bank number.

The NMP receiver maintains the jrec_chapterp structure (Figure 10) for
each voice channel in jrec_stream, to code Program Change state
information. The prognum variable of jrec_chapterp holds the data value
for the most recent Program Change command that has arrived on the
stream.

The coarse and fine variables of jrec_chapterp code the Bank Coarse and
Bank Fine Control Change data values that were in effect when that
Program Change command arrived. The prognum_qual, coarse_qual and
fine_qual variables are initialized to 0, and are set to 1 upon the
receipt of the first Program Change, Bank Coarse Control Change, and
Bank Fine Control Change command, respectively.

The NMP receiver uses jrec_chapterp in its recovery algorithm. While
parsing the recovery journal, it may find a Chapter P (Appendix A.2 in
[1]) bitfield in a channel journal. Fields in Chapter P code the data
value for the most recent Program Change command, and the coarse and
fine bank values in effect for that Program Change command (if any).

The receiver checks to see if these recovery journal fields match the
data stored in jrec_chapterp. If these checks fail, one or more Program
Change commands have been lost.

To recover from this loss, the receiver takes the following steps.  If
the C (coarse) or F (fine) bits in Chapter P are set (Figure A.2.1 in
[1]), Control Change bank command(s) have preceded the Program Change
command. The receiver compares the bank data coded by Chapter P with the
current bank data for the channel (coded in jrec_chapterc).

If the bank data do not agree, the receiver issues Control Change
command(s) to align the stream with Chapter P. The receiver then updates
the jrec_chapterp and jrec_chapterc variables to reflect the executed
command(s). Finally, the receiver issues a Program Change command that
reflects the data in Chapter P, and updates the prognum and prognum_qual
fields in jrec_chapterp.

Note that this method relies on Chapter P recovery to precede Chapter C
recovery during channel journal processing. This ordering ensures that
lost bank select Control Change commands that occur after a lost
Program Change command in a stream are handled correctly during Chapter
C parsing.
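A sketch of the Chapter P recovery steps follows, assuming mismatches
have already been detected. The send_*() hooks are hypothetical
stand-ins for the rendering system, and updates to the Chapter C bank
controller state are omitted for brevity.

```c
#include <assert.h>

typedef unsigned char uint8;

typedef struct jrec_chapterp {        /* as in Figure 10 */
   uint8 prognum, prognum_qual;
   uint8 coarse,  coarse_qual;
   uint8 fine,    fine_qual;
} jrec_chapterp;

/* Hypothetical rendering hooks, recording commands for the test. */
static int sent_coarse = -1, sent_fine = -1, sent_prognum = -1;
static void send_bank_coarse(int ch, uint8 v) { (void)ch; sent_coarse = v; }
static void send_bank_fine(int ch, uint8 v)   { (void)ch; sent_fine = v; }
static void send_program(int ch, uint8 v)     { (void)ch; sent_prognum = v; }

/* Recover from lost Program Change commands. cbit/fbit are the C and
   F flags of the Chapter P bitfield (Figure A.2.1 of [1]);
   pnum/pcoarse/pfine are its data fields. */
void chapterp_recover(int ch, jrec_chapterp *p, int cbit, int fbit,
                      uint8 pnum, uint8 pcoarse, uint8 pfine)
{
   if (cbit && (!p->coarse_qual || p->coarse != pcoarse)) {
      send_bank_coarse(ch, pcoarse);          /* align bank coarse */
      p->coarse = pcoarse; p->coarse_qual = 1;
   }
   if (fbit && (!p->fine_qual || p->fine != pfine)) {
      send_bank_fine(ch, pfine);              /* align bank fine   */
      p->fine = pfine; p->fine_qual = 1;
   }
   send_program(ch, pnum);                    /* align program     */
   p->prognum = pnum; p->prognum_qual = 1;
}
```

Note that the bank commands are issued before the Program Change,
matching the command ordering described in the text.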



8. Congestion Control

Congestion control issues for MWPP implementations are discussed in
detail in Section 4.3 of this memo. Also see Section 8 of [1].


9. Security Considerations

General security considerations for MWPP are discussed in detail in
Section 7 of [1]. Supplemental discussion on MWPP implementation
security issues is presented in Section 3 of this memo.


10. Acknowledgments

See the Acknowledgments section of [1].

Appendix A. Content Streaming with MWPP

In this Appendix, we show how to use a media server to distribute MIDI
performances to one or more clients. We refer to applications of this
type as content streaming applications. The content source may be a live
MIDI concert, a pre-recorded MIDI file, or a dynamically-generated MIDI
stream that slowly changes in response to client user activity (such as
clicks on web links).

Interactive and content-streaming MWPP applications differ in the role
of latency in the application. Interactive applications place the
network in the sensory-motor loop of a single musician, or in the
performance loop between several musicians. To optimize the user
experience, these applications run at or near the underlying latency of
the network. Receivers use a minimal playout buffer (or no playout
buffer at all), and rely on the specialized methods for lost and late
packet recovery described in the main text.

In comparison, clients and servers in content-streaming applications
interact at a relatively slow time constant. As a consequence, MWPP
clients may use a playout buffer to smooth network jitter, without
impacting the user response time. Clients may also use the playout
buffer in conjunction with generic forward-error correction (FEC, [12])
or packet retransmission [21] in order to replace lost packets.

In the sections below, we describe how to set up MWPP content streaming
sessions (Appendix A.1), discuss baseline client and server streaming
algorithms (Appendix A.2), and show how to enhance MWPP content
streaming with an ancillary packet replacement stream (Appendix A.3).


A.1 Content Streaming: Session Management

In this Appendix, we show how content-streaming servers and clients set
up MWPP streams. We assume the participants use the Real Time Streaming
Protocol (RTSP, [6]) to manage the session.

Like HTTP, RTSP identifies a media stream with a URL. For example, the
RTSP URL rtsp://cs.example.net/ode_to_joy may identify a MWPP stream of
a Beethoven performance. By default, an RTSP URL implies that an RTSP
server may be accepting TCP connections at port 554 of the host name
that follows the double slash in the RTSP URL. However, note that RTSP
may also use UDP (rtspu://) or TLS (rtsps://) transport.

In a typical use, a client initiates contact with the server. For a TCP
RTSP URL, the client opens a TCP connection to the RTSP server named in
the URL, and sends the server a series of RTSP messages to set up the
session.



v=0
o=server 2520644554 2838152170 IN IP4 server.example.net
s=Example
t=0 0
a=recvonly
m=audio 0 RTP/AVP 61
a=control:rtsp://cs.example.net/ode_to_joy/baseline
a=rtpmap: 61 mpeg4-generic/44100
a=fmtp: 61 streamtype=5; mode=mwpp; config=""; profile-level-id=76;
a=fmtp: 61 render=sasc; inline="e4"; compr=none;

         Figure A.1 -- RTSP session description



One RTSP method, DESCRIBE, returns the SDP session description
associated with the RTSP URL. Figure A.1 shows a session description
returned by RTSP for an example MWPP session. The session description
presents the session from the view of the client (note the recvonly
attribute).

Like the session descriptions for interactive applications shown in the
main text, the RTSP session description in Figure A.1 codes media
initialization information for the MWPP session. In this example, the
session uses an mpeg4-generic MWPP stream to specify a General MIDI
renderer.

However, in most cases, RTSP session descriptions do not code explicit
transport information in the session description (transport type,
network addresses, port numbers, etc). Instead, RTSP session
descriptions usually code RTSP URLs that may be used to negotiate
transport details for each stream. The URLs appear as control attributes
in the session description (for example, in Figure A.1
rtsp://cs.example.net/ode_to_joy/baseline).

Clients use the RTSP SETUP method to translate an RTSP control URL into
concrete transport information. This indirect approach supports flexible
transport setup, and is useful for working around network middleboxes
(such as NATs and firewalls).  To overcome stubborn network obstacles,
RTSP supports interleaving RTP and RTCP streams over the TCP connection
that carries RTSP message traffic.

Clients also use RTSP methods to start (the PLAY method) and stop (the
PAUSE method) media flow. The PLAY method supports parameters that
specify the starting point of the stream, and thus together with PAUSE
implements the full set of tape-deck remote control commands (rewind,
fast-forward, play, and pause). To end a session, the client uses the
TEARDOWN method.



Once the RTSP server replies to the SETUP method, the client sets up the
RTP and RTCP stream(s) for the session. For UDP media streams, the
client may use the interactive setup algorithms described in Sections 2
and 3 of the main text. For TCP media streams that establish separate
connections for media flow, the client may follow the interactive MWPP
TCP guidance in Appendix C.1 and C.2. For media streams that interleave
RTP and RTCP into the RTSP TCP connection, the client may follow the
guidance in Appendix C.3.


A.2 Content Streaming: Baseline Algorithms

In this Appendix, we describe MWPP sending and receiving algorithms for
content-streaming sessions. We focus on MWPP streaming over unicast UDP
transport.

A client in an MWPP content-streaming application implements an MWPP
receiver. Unlike the interactive receiver described in Sections 6 and 7
of the main text, a content-streaming receiver implements a playout
buffer. Below, we present a brief sketch of a simple receiver design, to
introduce the first-order design issues.

The heart of the receiver design is the playout buffer. As the heart
pumps blood, the playout buffer pumps packets. Architecturally, an MWPP
playout buffer is a queue of pointers to MWPP packets, ordered by
sequence number (lowest numbers at the front of the queue, modulo 2^32).

At the start of a stream, the queue is empty. As packets arrive on the
RTP port, the receiver places them at the back of the queue. If the
sequence number of a new packet indicates a loss event, the receiver
adds empty slots to the back of the queue for the lost packets, and
places the new packet behind the empty slots. If a packet arrives out of
order, the receiver places the packet into the empty queue slot reserved
for it.
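The queue discipline above can be sketched with a flat array indexed by
sequence number offset. This is a simplification: QSIZE is an
illustrative capacity, and extended sequence number wraparound (modulo
2^32) is not handled.

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long uint32;

#define QSIZE 256   /* illustrative capacity, not an MWPP value */

/* A minimal playout queue: slot i holds the packet with extended
   sequence number base_seq + i, or NULL for a loss-event hole. */
typedef struct playout_queue {
   const void *slot[QSIZE];
   uint32 base_seq;      /* seq number of the front slot   */
   uint32 high_seq;      /* highest seq number seen so far */
   int    started;
} playout_queue;

/* Insert an arriving packet. Slots for lost packets are left NULL,
   and are filled if the packets later arrive out of order. */
int queue_insert(playout_queue *q, uint32 seq, const void *pkt)
{
   if (!q->started) {
      q->base_seq = seq; q->high_seq = seq; q->started = 1;
   }
   if (seq < q->base_seq || seq - q->base_seq >= QSIZE)
      return -1;                     /* too old, or no room      */
   if (seq > q->high_seq)
      q->high_seq = seq;             /* slots between stay NULL  */
   q->slot[seq - q->base_seq] = pkt;
   return 0;
}
```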

The receiver renders audio by removing MWPP packets from the front of
the queue. At the start of a stream, the receiver does not start
rendering the queue immediately. Instead, the receiver waits until the
stream time (as determined by the RTP timestamps) held in the queue
matches the desired buffer latency. Receivers choose the buffer latency
to match the requirements of the application, balancing user interaction
response time (aided by low buffer latency) and the fidelity of the
rendered stream (aided by high buffer latency).

We now describe how the receiver removes packets from the front of the
queue. To simplify the explanation, we assume that the MIDI command
section of each packet holds one MIDI command, whose execution time is
coded by the RTP timestamp of the packet. In practice, well-encoded
packets
in content-streaming applications will hold several (or perhaps many)
MIDI commands. However, the extension of the rendering algorithm we
present to multi-command packets is straightforward.

To start rendering the queue, a receiver takes the first packet off the
queue, and initializes a variable time_pointer to the RTP timestamp
value of the packet. The receiver extracts the MIDI command from the
packet, and passes the command to the MIDI rendering system for
immediate execution.

At regular intervals thereafter (say, once every 250 microseconds), the
receiver increments time_pointer by the interval value, and checks if
the packet closest to the front of the queue has an RTP timestamp that
is less than or equal to time_pointer. If so, and if the packet is at
the very front of the queue, the receiver takes the packet off the
queue, and extracts the MIDI command from the packet.

However, if an empty slot is at the front of the queue, a packet loss
event has occurred. The receiver removes the empty queue slot(s),
removes the packet that follows the queue slots, and uses the recovery
journal techniques described in Section 7 of the main text to restore
stream integrity. Then, the receiver extracts the MIDI command from the
MIDI command section of the packet.

Finally, the receiver checks to see if the command timestamp of the
extracted MIDI command is reasonably close in time to time_pointer. If
so, the receiver passes the command to the rendering system for
immediate execution. If the timestamp check reveals a late command, the
receiver uses the heuristics described in Section 6.1 of the main text
to decide whether to execute or skip the command.
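One tick of this rendering loop can be sketched as follows, in the
one-command-per-packet model. The qpacket layout is an assumption,
execute_command() is a hypothetical rendering hook, and recovery
journal processing at a loss event is reduced to a comment.

```c
#include <assert.h>

typedef unsigned long uint32;

/* One queued packet in the single-command model of the text. */
typedef struct qpacket {
   uint32 rtp_timestamp;
   int    midi_command;    /* stand-in for the MIDI command section */
   int    present;         /* 0 marks an empty (loss-event) slot    */
} qpacket;

/* Hypothetical rendering hook; records commands for the test. */
static int executed[64], nexec;
static void execute_command(int cmd) { executed[nexec++] = cmd; }

/* Called once per tick after time_pointer is advanced. q points at
   the front of the queue; returns the number of slots consumed. */
int render_tick(qpacket *q, int navail, uint32 time_pointer)
{
   int used = 0;
   while (used < navail) {
      int next = used;
      while (next < navail && !q[next].present)
         next++;                     /* skip loss-event holes       */
      if (next == navail || q[next].rtp_timestamp > time_pointer)
         break;                      /* nothing more due this tick  */
      /* if next > used, holes preceded this packet: a loss event
         ended here, and a real receiver would process the packet's
         recovery journal (Section 7) before executing its command */
      execute_command(q[next].midi_command);
      used = next + 1;
   }
   return used;
}
```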

We now turn our attention to server design issues. A server in an MWPP
content-streaming application implements an MWPP sender. However, a
content-streaming MWPP sender differs in several ways from the
interactive sender described in Sections 4 and 5 in the main text.

Unlike interactive receivers, content-streaming receivers use a playout
buffer, and rely on RTP timestamps to schedule MIDI command execution.
Receivers with these characteristics work best if senders (1) generate
MWPP packets that code a constant interval of media time and (2)
transfer these packets at a relatively stable rate.

Senders may use the SDP ptime [5] parameter to indicate the approximate
duration of the MWPP packets they send. Note that the exact duration may
vary from packet to packet, due to the event-based nature of MIDI.

In choosing the packet duration, senders balance several issues.  A
longer duration improves efficiency, as header overhead is amortized
over longer time periods. However, a shorter duration reduces the
perceptual impact of single packet loss. In addition, a shorter duration
extends the range of possible playout buffer latencies to smaller
values.


A.3 Content Streaming: Packet Replacement Streams

In this Appendix, we describe how applications may replace lost packets
in an MWPP stream, by using redundant data sent on an ancillary stream
in the session. Applications use this technique to improve the fidelity
of a rendered performance, by avoiding the use of the recovery journal
system for minor loss events.

As described in Appendix A.2, the playout buffer of a receiver is
organized as a queue of pointers to incoming RTP packets. If a loss
event occurs, the receiver leaves empty slots in the queue for the lost
packets.

In this Appendix, we describe how receivers may use an ancillary stream
to fill the empty queue slots with replacement packets. For packet
replacement to be effective, empty slots must be filled before they
reach the front of the queue (and playout occurs). Our examples use
payload-independent tools [12] [21] for RTP packet replacement, as MWPP
does not define MIDI-specific redundancy tools.

One RTP tool for MWPP packet replacement is generic forward error
correction (FEC, [12]). [12] describes a feed-forward system that does
not use receiver feedback. In this system, an ancillary stream carries
an encoded redundant copy of the primary MWPP stream. If a packet loss
occurs on the primary stream, the receiver attempts to reconstruct the
lost packet by processing the ancillary stream.

Figure A.2 shows an MWPP session description that adds an FEC stream to
the MWPP primary stream. The RTP stream with payload type 62 carries the
FEC stream, using the ulpfec format defined in [12].  If a client wishes
to receive the FEC stream, it uses the RTSP URL
rtsp://cs.example.net/ode_to_joy/fec to set up the stream.



v=0
o=server 2520644554 2838152170 IN IP4 server.example.net
s=Example
t=0 0
a=recvonly
m=audio 0 RTP/AVP 61 62
a=control:rtsp://cs.example.net/ode_to_joy/baseline
a=rtpmap: 61 mpeg4-generic/44100
a=fmtp: 61 streamtype=5; mode=mwpp; config=""; profile-level-id=76;
a=fmtp: 61 render=sasc; inline="e4"; compr=none;
a=rtpmap: 62 ulpfec/44100
a=fmtp: 62 rtsp://cs.example.net/ode_to_joy/fec

   Figure A.2 -- MWPP session description with generic FEC



We refer the reader to [12] for detailed ulpfec implementation guidance.
Here, we note that ulpfec supports partial packet reconstruction. This
feature reduces the bandwidth of the FEC stream, but limits receivers to
reconstructing only the first N octets of a lost packet. In MWPP
sessions, this feature may permit receivers to reconstruct the RTP
header and MIDI command section, but not the recovery journal section,
of a lost MWPP packet.

However, a receiver may not always be able to use a packet that does not
contain a recovery journal. In particular, the recovery journal is a
vital part of an MWPP packet that ends a loss event, as the receiver
uses the journal to restore the integrity of the MIDI stream.

To close this section, we briefly describe a second way to replace lost
packets in an MWPP session. RTP defines an active tool for packet
replacement, called packet retransmission [21]. In one version of packet
retransmission, a receiver reports packet losses to the sender, using
special RTCP receiver reports. In reply, senders supply replacement
packets to the receiver, using an ancillary stream. See [21] for
implementation guidance for packet retransmission systems.


Appendix B. Multi-party MWPP Sessions

The interactive application described in the main text supports two-
party sessions. In this Appendix, we modify the application to support
sessions with more than two participants. We refer to these sessions as
multi-party sessions.

In a multi-party session, a party receives an RTP stream from each of
the other parties. The application we describe uses a multicast group
address to carry these RTP streams. If the session does not have access
to a multicast network, the application simulates multicast with a mesh
of unicast flows.

In this Appendix, we show how to set up multi-party sessions, for
simulated (Appendix B.1) and true (Appendix B.2) multicast scenarios.
We describe sender (Appendix B.3) and receiver (Appendix B.4)
modifications for multi-party sessions, and discuss scaling issues for
sessions with a large number of participants (Appendix B.5). Readers
should also consult Appendix A of [2] for a more detailed review of RTP
and RTCP algorithms for multicast sessions.


B.1 Multi-party MWPP: Session Management (simulated multicast)

In this section, we describe how to set up sessions that simulate
multicast transport with unicast flows. We begin with an explanation of
simulated multicast (rather than true multicast) to more clearly show
the link to the two-party unicast example described in the main text.
Appendix B.2 covers true multicast session setup.

In a simulated multicast session with N parties, each party sends its
RTP and RTCP streams to N-1 other parties.  Thus, a mesh of 2*N*(N-1)
unicast flows acts to simulate a true multicast network. If certain
conditions are met, N session descriptions may be used to define this N-
party session, just as two session descriptions define the two-party
session in the main text (Figures 1 and 2). However, in the general
case, an N-party session requires N*(N-1) session descriptions.

To show how N session descriptions may define an N-party session, we
consider how the N-th party joins an existing N-1 party session. The N-
th participant prepares a session description, which specifies the
unicast network address and port on which it accepts RTP streams. The N-
th party distributes this session description to the other parties,
perhaps using a SIP conference server [14]. In return, the N-th party
receives a copy of the session description for each of the other N-1
parties. In total, N unique session descriptions define the session.





Lazzaro/Wawrzynek                                              [Page 45]


INTERNET-DRAFT                                             1 March 2003


The SDP render parameter (Appendix C.5 of [1]) may be used to define the
rendering method for the session. If so, all session descriptions
specify the same MIDI renderer. For example, the renderer may specify a
library of SAOL instrument models (Appendix C.5.1 of [1]). A party
selects timbres from the library in-band, by sending MIDI Program Change
(0xC) commands.

We now discuss software implementation issues for simulated multicast
sessions. At the start of a session, the application chooses its
synchronization source identifier (ssrc), using the method described in
[2]. The application also prepares a single pair of socket descriptors
(rtp_fd and rtcp_fd) that it uses to send and receive MWPP streams.
Figures 3 and 4 in the main text show code for initializing rtp_fd and
rtcp_fd.

Whenever the application receives a session description from a new
party, it creates an address_info structure (Figure B.1) for the party.
The application initializes the rtp_addr and rtcp_addr address_info
fields to match the RTP and RTCP destination addresses coded in the
session description. Figure 6 in the main text shows code for
initializing rtp_addr and rtcp_addr.

The application maintains a list of active address_info structures.  To
send an RTP packet, the application sends a copy of the packet to the
rtp_addr field of every address_info structure in the list, using the
code shown in Figure 7 in the main text.
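The fan-out over the list might take the following rough shape in C.
This is a sketch, not the main-text code: the linked-list
representation, the next field, and the mwpp_send_all() name are our
own assumptions.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stddef.h>
#include <sys/socket.h>

/* address_info from Figure B.1, extended with a hypothetical
   "next" pointer so active parties form a linked list.          */
typedef struct address_info {
  struct sockaddr_in *rtp_addr;     /* where to send RTP stream  */
  struct sockaddr_in *rtcp_addr;    /* where to send RTCP stream */
  struct address_info *next;        /* next active party, or 0   */
} address_info;

/* Send one RTP packet to every party on the list (the per-party
   sendto() mirrors Figure 7 of the main text); returns how many
   parties the packet was successfully handed to.                */
int mwpp_send_all(int rtp_fd, const unsigned char *pkt, size_t len,
                  const address_info *list)
{
  int sent = 0;
  const address_info *p;

  for (p = list; p != NULL; p = p->next)
    if (sendto(rtp_fd, pkt, len, 0,
               (struct sockaddr *) p->rtp_addr,
               sizeof(struct sockaddr_in)) == (ssize_t) len)
      sent++;
  return sent;
}
```

The same loop, applied to rtcp_addr and rtcp_fd, serves the RTCP
stream.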

The application also maintains a second data structure about each active
party, the party_info structure (Figure B.2). Unlike address_info, a new
party_info structure is not created in response to a new session
description.  Instead, party_info structures are created dynamically in
response to the RTP stream, as we describe below.

The primary identifier for a party_info structure is the
synchronization source ID (ssrc) for this party. The ssrc is encoded
in the header of
RTP and RTCP packets sent by the party. The application stores active
party_info structures in a hash table that is indexed by the ssrc.  At
the start of a session, this table is empty.


  typedef struct address_info {

  struct sockaddr_in * rtp_addr;      /* where to send RTP stream  */
  struct sockaddr_in * rtcp_addr;     /* where to send RTCP stream */

  } address_info;


    Figure B.1 -- Addresses receivers maintain for an active party



Lazzaro/Wawrzynek                                              [Page 46]


INTERNET-DRAFT                                             1 March 2003


  typedef unsigned long  uint32;   /* must be 4 octets              */

  typedef struct party_info {

  uint32 ssrc;                     /* SSRC (synchronization source) */
  char * cname;                    /* RTCP canonical name           */

  /* How well is this party receiving our stream? */

  uint32 last_ehsnr;     /* most recent EHSNR (Appendix B.3), or 0  */

  /* How well are we receiving this party's stream? */

  uint32 hi_seq_ext;      /* highest received RTP seqnum (extended) */
  struct jrec_stream * jrecv[CSYS_MIDI_NUMCHAN];   /* see Figure 10 */

  /* other party-specific state (such as RTCP statistics) not shown */

  } party_info;


      Figure B.2 -- State receivers maintain on an active stream


To receive RTP packets, the application checks the rtp_fd socket
descriptor, using the code shown in Figure 5 of the main text.  If an
RTP packet has arrived, the application examines the SSRC field of the
RTP header, and uses it to locate the party_info structure for the
packet in the hash table.

If a party_info structure is not found, the application creates a new
party_info structure and adds it to the table.  The ssrc variable is set
to the SSRC header value, and the hi_seq_ext field is set to the RTP
sequence number header value. The new party is considered to be on
probation, until future RTP packets indicate correct RTP behavior (see
Appendix A of [2]).
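The lookup-or-create step might be sketched as follows. The table
size, the chaining scheme, and the function name are our own
assumptions, and the party_info structure is trimmed to the fields
this step touches; a real implementation would carry the full state of
Figure B.2 and the probation logic of Appendix A of [2].

```c
#include <stdlib.h>

typedef unsigned long uint32;      /* must be 4 octets          */

#define PARTY_HASH_SIZE 64         /* assumption: power of two  */

typedef struct party_info {
  uint32 ssrc;                     /* SSRC of this party        */
  uint32 hi_seq_ext;               /* highest extended seqnum   */
  int    probation;                /* nonzero until validated   */
  struct party_info *next;         /* hash-chain link           */
} party_info;

static party_info *party_hash[PARTY_HASH_SIZE]; /* starts empty */

/* Return the party_info for an SSRC, creating a new entry (on
   probation) if the SSRC has not been seen before.             */
party_info *party_lookup(uint32 ssrc, uint32 seq)
{
  unsigned idx = (unsigned) (ssrc & (PARTY_HASH_SIZE - 1));
  party_info *p;

  for (p = party_hash[idx]; p != NULL; p = p->next)
    if (p->ssrc == ssrc)
      return p;

  p = calloc(1, sizeof(party_info));
  if (p == NULL)
    return NULL;
  p->ssrc = ssrc;
  p->hi_seq_ext = seq;    /* RTP sequence number of the packet  */
  p->probation = 1;       /* validate per Appendix A of [2]     */
  p->next = party_hash[idx];
  party_hash[idx] = p;
  return p;
}
```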

Once a new party is added to a session, the application performs session
management tasks for the party, as described in Section 3 of the main
text.

The application transmits an RTCP stream to the parties in the
address_info list, using rtcp_fd to send RTCP packets to each rtcp_addr
address. The application also accepts RTCP packets on rtcp_fd, and uses
the RTCP SSRC header field to locate the correct party_info structure
for the packet. If no party_info exists for the SSRC value, the
application creates a party_info structure for the party, following the
methods described earlier in the section.



Lazzaro/Wawrzynek                                              [Page 47]


INTERNET-DRAFT                                             1 March 2003


The application also checks to see if a party has changed its ssrc
value. RTCP SDES CNAME (canonical name) packets are used to perform this
check, as described in [2]. Note that the cname string in party_info
codes the canonical name (username@address) of each party.

The application also checks for the exit of the party, as signalled by
RTCP BYE packets, session management transactions (such as SIP BYE
methods), or other means. If the party leaves a session, its party_info
structure is removed from the hash table, and the rendering of its MIDI
stream is gracefully ended.

The deletion of the address_info structure for a departed party is a
more complex issue. If a session management transaction ends the
session, the transaction contains information to identify the
address_info structure to delete.

Otherwise, the application must associate the ssrc value of the party with
its network address. In many cases, the source address of RTP and RTCP
packets from the party correlates with the fields of the address_info
structure. In other situations, data in the origin line (o=) of the
session description correlates with the canonical name of the party
(stored in the party_info cname variable).

Finally, we note that the association between party_info and
address_info structures may be of use throughout the session. For
example, RTCP receiver reports may be unicast to the interested party
instead of multicast to all parties, improving efficiency. Note that
this optimization affects the calculation of the RTCP transmission
interval [2].


B.2 Multi-party MWPP: Session Management (true multicast)

In this section, we describe how to set up a multi-party session that
uses a multicast group address. We refer to these sessions as true
multicast sessions.

A single session description is sufficient to define a true multicast
session that hosts an arbitrary number of parties. The format of the
session description is similar to the format of a two-party session
description (Figure 1 in the main text), except that the connection (c=)
line defines a multicast group address. [5] describes the SDP syntax for
multicast group addresses in detail.








Lazzaro/Wawrzynek                                              [Page 48]


INTERNET-DRAFT                                             1 March 2003


The session description may use the SDP render parameter (Appendix C.5
of [1]) to define the rendering method for the session. If so, all
parties use this MIDI rendering method. For example, the renderer may
specify a library of SAOL instrument models (Appendix C.5.1 of [1]). A
party selects timbres from the library in-band, by sending MIDI Program
Change (0xC) commands.

We now discuss software implementation issues for true multicast
sessions. Because multicast coding techniques vary by operating system,
we do not include code fragments in this section.

At the start of a session, the application chooses its synchronization
source ID (ssrc), using the method described in [2]. The application
also prepares a single pair of socket descriptors (rtp_fd and rtcp_fd)
that it uses to send and receive MWPP streams.

When an application joins a session (perhaps via a SIP conference server
[14]), it receives a session description. The application prepares data
structures to code the RTP (rtp_addr) and RTCP (rtcp_addr) multicast
group address and port information defined in the session description.
Once address preparation is complete, the application starts sending its
RTP stream on rtp_fd, using rtp_addr.

As in the simulated multicast case, the application maintains a
party_info structure (Figure B.2) for each party in the session. The
discussion of party_info hash table management in Appendix B.1 also
holds for true multicast sessions.

In an ongoing session, the application performs the session management
tasks described in Section 3 of the main text. The discussion of these
tasks for simulated multicast sessions (Appendix B.1) also holds for
true multicast sessions.

Finally, in some situations, an application may require a custom SDP
render parameter for each sender. In this case, N true multicast session
descriptions are necessary for an N party session. Each session
description defines the same multicast group address.


B.3 Multi-party MWPP: Sender Issues

In this section, we modify the sender implementation described in
Sections 4 and 5 to support multi-party sessions. We modify the recovery
journal trimming algorithm (Section 5.4) to handle RTCP receiver reports
from several parties. We also discuss how senders handle parties that
join a session mid-stream. Apart from these issues, the sender described
in the main text is compatible with multi-party sessions.




Lazzaro/Wawrzynek                                              [Page 49]


INTERNET-DRAFT                                             1 March 2003


Section 5.4 describes an algorithm for trimming the Recovery Journal
Sending Structure (RJSS) encoding of the checkpoint history. This
algorithm assumes a single receiver listens to the sent stream. To trim
the RJSS, the sender examines the RTCP receiver reports from the
receiver, and extracts the extended highest sequence number (EHSNR)
field from the report. The sender adjusts the EHSNR to reflect its own
sequence number prefix, and uses the adjusted EHSNR to trim irrelevant
data from the RJSS.

This trimming algorithm relies on the following observation: if the
EHSNR indicates that a packet with sequence number K has been received,
MIDI commands sent in packets with sequence numbers I <= K may be
removed from the RJSS without violating the recovery journal mandate
defined in Section 4 of [1].

This observation does not hold for multi-party sessions, as several
receivers may be listening to the stream. We modify this observation to
be valid for multi-party sessions, in the following way. We examine the
most recent EHSNR values reported by each receiver, and determine the
lowest adjusted EHSNR value. If this value indicates that a packet with
sequence number K has been received, MIDI commands sent in packets with
sequence numbers I <= K may be removed from the RJSS without violating
the recovery journal mandate defined in [1].

We now describe a multi-party RJSS trimming algorithm that is based on
the above observation. When a sender receives an RTCP receiver report,
it determines the EHSNR coded by the report, using the algorithm
described in Section 5.4. The sender also extracts the SSRC field of the
receiver report, and locates the party_info structure (Figure B.2)
associated with the ssrc.

If the adjusted EHSNR matches the last adjusted EHSNR value received for
this party (stored in last_ehsnr in party_info), the algorithm ends.
Otherwise, last_ehsnr in party_info is updated with the adjusted EHSNR.
If the first RTCP receiver report has not yet arrived for a new party,
the RJSS may not be trimmed, and the algorithm ends.

Otherwise, the sender loops through all party_info structures, and
locates the lowest last_ehsnr value. The sender uses this last_ehsnr
value to trim the RJSS, using the procedure described in Section 5.4.
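Assuming the active last_ehsnr values can be iterated (here, a simple
array; the function name is ours), the final scan might be sketched as
below. The sketch glosses over modular sequence-number comparison,
relying on the values already being extended to 32 bits.

```c
#include <stddef.h>

typedef unsigned long uint32;    /* must be 4 octets */

/* Find the lowest outstanding last_ehsnr over all parties.
   Returns 0 (no trimming possible) if the session has no
   parties or if any party has not yet sent its first RTCP
   receiver report (last_ehsnr == 0, per Figure B.2); else
   returns the value for the Section 5.4 trimming procedure. */
uint32 rjss_trim_point(const uint32 *last_ehsnr, size_t nparties)
{
  uint32 lowest = 0;
  size_t i;

  for (i = 0; i < nparties; i++) {
    if (last_ehsnr[i] == 0)
      return 0;                  /* a new party gates trimming */
    if (lowest == 0 || last_ehsnr[i] < lowest)
      lowest = last_ehsnr[i];
  }
  return lowest;
}
```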

Note that for multi-party sessions that use a true multicast network,
senders may not be aware that a new party has joined the session until
the first RTP or RTCP packet has arrived from the party, or until a
session management tool notifies the sender of the new party. During
this interval, the sender is not able to satisfy the recovery journal
mandate (Section 4 of [1]) for the new party. We discuss precautions
receivers should take during this interval in Section B.4.



Lazzaro/Wawrzynek                                              [Page 50]


INTERNET-DRAFT                                             1 March 2003


In addition, when a new party joins a session, the party needs to become
aware of the current state of the MIDI streams it receives. For example,
a Control Change (0xB) command for the channel volume controller (0x07)
may have been sent on a stream before the new party joined the session,
and the checkpoint history of the stream may no longer contain the
command.  Senders may bring a new receiver up to date in several ways,
depending on the type of multi-party session.

For simulated multicast sessions, senders may temporarily add state to
the recovery journal for the benefit of a new party. A sender adds this
state to the journal when it becomes aware of the new party, and
removes the state once it receives an RTCP receiver report from the
party.
This method works because a receiver is obligated to parse the recovery
journal of the first RTP packet received, and a sender in a simulated-
multicast session is able to ensure that this first packet contains the
temporary state.

However, in a true multicast session a new party may accept its first
RTP packet from a sender before the sender is aware of the new party. In
this case, the new party may never parse the temporary state data
encoded in the recovery journal for its benefit.

To solve this problem, senders insert commands into the MIDI command
stream to inform a new party of the current state of the stream. If the
new party has already joined the session, the new party sees the state
data in the command stream. If the new party joins the session late, the
new party sees the state data in the checkpoint history coded in the
recovery journal.
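As an illustration of the command-stream approach, a sender might
re-emit controller state as ordinary MIDI commands, so that late
joiners find it in the checkpoint history. The helper below is a
hypothetical sketch (name and interface are our own); it restates the
channel volume discussed above.

```c
#include <stddef.h>

/* Write a MIDI Control Change command restating the channel
   volume (controller 0x07) for the given channel (0..15) into
   buf; returns the number of octets written (3).              */
size_t midi_refresh_volume(unsigned char *buf, unsigned channel,
                           unsigned char volume)
{
  buf[0] = (unsigned char) (0xB0 | (channel & 0x0F)); /* 0xB status */
  buf[1] = 0x07;                 /* channel volume controller  */
  buf[2] = (unsigned char) (volume & 0x7F); /* 7-bit data      */
  return 3;
}
```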


B.4 Multi-party MWPP: Receiver Issues

In this section, we describe receiver implementation issues for multi-
party sessions. Upon receipt of an RTP packet, the receiver uses the RTP
SSRC header field to locate the party_info structure for the stream. The
party_info structure contains state variables for the received stream
(jrecv[] and hi_seq_ext) that are used in the receiver algorithms in
Sections 6 and 7 in the main text. In most respects, these two-party
algorithms are compatible with multi-party sessions.

However, one multi-party incompatibility does occur in true multicast
sessions. When a new party joins a multicast group, the party may begin
processing RTP packets from senders that are not aware that the new
party is listening. If packet loss occurs on these streams, the recovery
journal of the packet that ends the loss event may not cover the loss
experienced by the new party. This problem occurs because the sender has
not yet seen an RTCP receiver report from the new party.




Lazzaro/Wawrzynek                                              [Page 51]


INTERNET-DRAFT                                             1 March 2003


To handle this issue, a new party in a true multicast session should
handle packet loss events with caution, until it has positive evidence
that senders are aware of its presence.  This evidence may take the form
of RTCP receiver reports from the sender concerning the RTP stream sent
by the new party. Alternatively, this evidence may take the form of
information from a session management tool. Before such evidence is
available, the new party should implement the precautions described in
Appendix D.3, the Appendix that shows how receivers implement MWPP
without RTCP.


B.5 Multi-party MWPP: Scaling Issues

In this section, we discuss how multi-party MWPP sessions scale to large
numbers of participants. We consider sender and receiver scaling
separately, as some sessions may consist of a small number of
send/receive parties and a large number of receive-only parties.

The memory requirements for MWPP multi-party senders scale linearly with
the number of receivers listening to the stream. For a true multicast
implementation, a sender uses 8 octets per listener (the ssrc and
last_ehsnr party_info fields in Figure B.2). A simulated multicast
implementation uses more memory per listener, to store the network
addresses of each party (the rtp_addr and rtcp_addr address_info
fields in Figure B.1).

Sender processor requirements also scale linearly with the number of
listeners, for the simple sender algorithms we describe in this
Appendix. Linear algorithms occur in several places. For example, the
multi-party recovery journal trimming algorithm (Appendix B.3) uses a
linear search to locate the lowest outstanding last_ehsnr value. As a
second example, the all-to-all mesh method of multicast simulation
(Appendix B.1) is inherently linear in the number of parties. If
sub-linear scaling is required, these algorithms may be replaced with
more
efficient alternatives.

Another type of sender scaling concerns the RTP stream bandwidth.
Stream bandwidth is not constant with the number of receivers of the
stream, because the operation of the recovery journal trimming algorithm
is affected by the number of receivers. As described in [2], a receiver
sends RTCP receiver reports at a slower rate as the number of session
participants increases. As a result, the recovery journal is trimmed at
a slower rate, and so the RTP bandwidth increases. The rate of bandwidth
growth depends on the nature of the MIDI stream, as we discuss in
Appendix A.4 of [13].
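The slowdown can be seen in a simplified, deterministic form of the
RTCP interval rule of [2]. This sketch is our own reduction: it keeps
only the linear growth with membership and the 5-second minimum,
omitting the randomization and the sender/receiver bandwidth split
that [2] specifies.

```c
/* Simplified RTCP report interval, in seconds: it grows
   linearly with the number of session members, so receiver
   reports (and thus RJSS trimming) slow down as sessions
   grow. Randomization and the sender share are omitted.    */
double rtcp_interval(int members, double avg_rtcp_size_octets,
                     double rtcp_bw_octets_per_sec)
{
  double t = (double) members * avg_rtcp_size_octets
             / rtcp_bw_octets_per_sec;
  return (t < 5.0) ? 5.0 : t;   /* 5 s minimum interval */
}
```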






Lazzaro/Wawrzynek                                              [Page 52]


INTERNET-DRAFT                                             1 March 2003


Next, we discuss how MWPP receivers scale to handle a large number of
senders. True multicast sessions with a large number of parties may find
use in experimental musical performances and multi-player games.

The dominant type of receiver scaling is MIDI renderer scaling. As the
number of senders increases, so does the number of simultaneous notes
that sound. However, most MIDI rendering systems fail gracefully when
asked to render too many MIDI notes at once. For example, older notes
may be ended prematurely, or new note events may be selectively dropped
depending on a priority scheme. Network-related processing also scales
with the number of notes, not the number of senders.

The memory requirements for MWPP multi-party receivers scale linearly
with the number of senders. The jrecv[] array in party_info, which holds
state for recovery journal processing, dominates the memory footprint.
In practice, an implementation may be able to significantly reduce the
size of jrecv[], by using the SDP parameters defined in Appendix C.1 to
restrict the use of the journal.

































Lazzaro/Wawrzynek                                              [Page 53]


INTERNET-DRAFT                                             1 March 2003


Appendix C. MWPP and Reliable Transport

RTP and MWPP may be carried over a variety of transport protocols. In
the main text, we describe how to send MWPP streams over UDP, an
unreliable transport protocol. In this Appendix, we describe how to send
MWPP streams over reliable byte-stream protocols (such as TCP).

An MWPP application chooses a transport type based on several factors.
Factors that favor using UDP include:

  o  UDP may exhibit lower latency than TCP. If the packet loss on a
     network is non-negligible, head-of-line blocking degrades TCP
     latency performance relative to UDP. However, if a network link
     is nearly lossless, UDP and TCP exhibit the same latency.

  o  Multi-party MWPP applications may choose UDP in order to use
     multicast transport (Appendix B.2). MWPP applications that
     use TCP are limited to simulating multicast (Appendix B.1).

  o  UDP may be the only transport option in low-cost embedded
     environments.

Factors that favor using reliable transport (such as TCP) include:

  o  TCP MWPP applications may omit recovery journal support, thus
     saving development costs, because TCP is reliable. UDP MWPP
     requires recovery journal support when used on lossy networks.

  o  TCP may be able to pass through network middleboxes (such as
     firewalls). These middleboxes sometimes block all UDP traffic.

  o  The reliable nature of TCP supports archival applications. In
     an archival application, the receiver intends to save an exact
     record of the MIDI stream to long-term storage. If TCP latency
     issues are a concern, applications may send two copies of the
     stream: a UDP copy for real-time monitoring and a TCP copy for
     archiving. A two-stream approach may be simpler than adding
     archival support to a UDP stream via retransmission [21].

In this Appendix, we describe how to use TCP in two-party interactive
applications. We discuss how to define session descriptions that specify
TCP streams (Appendix C.1) and describe methods for sending and
receiving RTP streams (Appendix C.2). These two-party methods may also
be useful in multi-party sessions that simulate multicast with a mesh of
unicast flows (Appendix B.1).






Lazzaro/Wawrzynek                                              [Page 54]


INTERNET-DRAFT                                             1 March 2003


The interactive TCP techniques in Appendices C.1-2 may also be applied
to content streaming. However, the Real Time Streaming Protocol (RTSP),
a session management tool for streaming media, supports interleaving RTP
and RTCP streams in the RTSP TCP control stream. We discuss interleaving
MWPP over RTSP in Appendix C.3.


C.1 MWPP over TCP: Session Management

In this section, we show how to set up two-party interactive sessions
that use TCP. As in the two-party UDP example in the main text, two
session descriptions define a two-party TCP session. Figures C.1 and
C.2 show these session descriptions, which use the methods defined
in [9] and [10] for TCP session setup.

The session descriptions indicate that the first party (Figure C.1)
intends to initiate a TCP connection to port 16112 of 192.0.2.94 for the
RTP stream, and a TCP connection to port 16113 of 192.0.2.94 for the
RTCP stream. The second party intends to accept these TCP connections.
Once the connections are established, RTP and RTCP streams flow
bidirectionally through them.


v=0
o=first 2520644554 2838152170 IN IP4 first.example.net
s=Example
t=0 0
c=IN IP4 192.0.2.105
m=audio 9 TCP RTP/AVP 101
a=rtpmap:101 mwpp/44100
a=direction:active

       Figure C.1 -- TCP session description for first participant.


v=0
o=second 2520644554 2838152170 IN IP4 second.example.net
s=Example
t=0 0
c=IN IP4 192.0.2.94
m=audio 16112 TCP RTP/AVP 96
a=rtpmap:96 mwpp/44100
a=direction:passive

       Figure C.2 -- TCP session description for second participant.






Lazzaro/Wawrzynek                                              [Page 55]


INTERNET-DRAFT                                             1 March 2003


Note that the direction attribute (a=) lines code the role each party
plays in establishing a connection. Also note that the connection (c=)
and media (m=) lines in TCP sessions indicate the network address and
port on which a party accepts connections. A party that does not accept
connections places the discard port (9) in its media line.

This session description example shows one approach to setting up a two-
party TCP session. See [9] [10] for enhancements and alternatives to
this example, that may be more robust in the presence of network
middleboxes (such as firewalls and network address translators).

We now discuss software implementation issues for the session described
in Figures C.1 and C.2. Unlike the UDP example in the main text, session
setup for this TCP example is asymmetrical, as the first and second
parties play different roles in the TCP connections.

The first party initializes SOCK_STREAM socket descriptors for the RTP
(rtp_fd) and RTCP (rtcp_fd) sessions, using the socket() system call.
The first party also prepares sockaddr_in structures for the RTP
(192.0.2.94:16112) and RTCP (192.0.2.94:16113) network destinations for
the second party. Then, the first party establishes the RTP and RTCP
connections, using the connect() system call. Once the connections are
established, RTP and RTCP streams flow in both directions over rtp_fd
and rtcp_fd.

The second party initializes SOCK_STREAM socket descriptors for the RTP
(rtp_init) and RTCP (rtcp_init) sessions, using the socket() system call,
and binds these sockets to the RTP (192.0.2.94:16112) and RTCP
(192.0.2.94:16113) addresses listed in its own session description
(Figure C.2). Then, the second party calls listen() on rtp_init and
rtcp_init, and awaits connections from the first party.  Once the first
party responds, the second party uses accept() to assign the RTP stream
to the rtp_fd socket descriptor, and the RTCP stream to the rtcp_fd
socket descriptor. Once this assignment occurs, RTP and RTCP streams
flow in both directions over rtp_fd and rtcp_fd.
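The two roles can be sketched together in C. For brevity, this sketch
runs both parties in one process over the loopback interface, handles
only the RTP connection, binds to an ephemeral port rather than 16112,
and elides error cleanup; the function name is our own.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Establish one TCP connection in the pattern of Appendix C.1:
   the "second party" binds and listens, the "first party"
   connects, and the second party accepts. The two resulting
   rtp_fd descriptors are returned via the pointer arguments.  */
int mwpp_tcp_pair(int *first_rtp_fd, int *second_rtp_fd)
{
  struct sockaddr_in addr;
  socklen_t alen = sizeof(addr);
  int rtp_init, cfd, afd;

  /* second party: socket(), bind(), listen() */
  rtp_init = socket(AF_INET, SOCK_STREAM, 0);
  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  addr.sin_port = 0;            /* any port; real code: 16112 */
  if (bind(rtp_init, (struct sockaddr *) &addr, sizeof(addr)) < 0)
    return -1;
  if (listen(rtp_init, 1) < 0)
    return -1;
  getsockname(rtp_init, (struct sockaddr *) &addr, &alen);

  /* first party: socket(), connect() to the second party */
  cfd = socket(AF_INET, SOCK_STREAM, 0);
  if (connect(cfd, (struct sockaddr *) &addr, sizeof(addr)) < 0)
    return -1;

  /* second party: accept() yields its rtp_fd */
  afd = accept(rtp_init, NULL, NULL);
  if (afd < 0)
    return -1;
  close(rtp_init);

  *first_rtp_fd = cfd;
  *second_rtp_fd = afd;
  return 0;
}
```

The rtcp_init/rtcp_fd descriptors are set up the same way on the
second port.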

Once a session begins, the parties exchange RTCP traffic over rtcp_fd.
Each RTCP packet is preceded by a two-octet unsigned integer value, sent
in network byte order (big-endian), that specifies the number of octets
in the RTCP packet that follows. This framing method follows [10].
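On the sending side, the framing of [10] reduces to prepending a
16-bit big-endian length. The helper below is a sketch; its name and
buffer-based interface are our own assumptions.

```c
#include <stddef.h>
#include <string.h>

/* Frame one RTCP packet for a TCP stream: a two-octet length
   in network byte order, then the packet itself. Returns the
   total framed length, or 0 if the packet exceeds 65535
   octets (the largest length the prefix can code).           */
size_t frame_packet(unsigned char *out, const unsigned char *pkt,
                    size_t len)
{
  if (len > 0xFFFF)
    return 0;
  out[0] = (unsigned char) (len >> 8);   /* high octet first */
  out[1] = (unsigned char) (len & 0xFF);
  memcpy(out + 2, pkt, len);
  return len + 2;
}
```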

Apart from RTCP packet framing, the parties perform session housekeeping
duties using the methods described in Section 3 of the main text.








Lazzaro/Wawrzynek                                              [Page 56]


INTERNET-DRAFT                                             1 March 2003


C.2 MWPP over TCP: Sending and Receiving

In this section, we describe how parties send and receive MWPP RTP
packets over TCP connections. Each RTP packet sent over a TCP connection
is preceded by a two-octet unsigned integer value, in network byte
order, that declares the number of octets in the RTP packet that
follows. This framing method follows [10].
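On the receiving side, the reader must first collect the two-octet
length and then exactly that many octets, since TCP delivers a byte
stream with no packet boundaries. The sketch below (function names are
our own) loops over the short reads a TCP connection may deliver.

```c
#include <stddef.h>
#include <unistd.h>

/* Read exactly len octets from a stream descriptor, looping
   over the short reads a TCP byte stream may deliver.        */
static int read_exactly(int fd, unsigned char *buf, size_t len)
{
  size_t got = 0;
  ssize_t n;

  while (got < len) {
    n = read(fd, buf + got, len - got);
    if (n <= 0)
      return -1;             /* EOF or error mid-packet */
    got += (size_t) n;
  }
  return 0;
}

/* Read one framed RTP packet; returns its length, or -1 on
   error or if the packet would not fit in max octets.        */
long read_framed_packet(int fd, unsigned char *pkt, size_t max)
{
  unsigned char hdr[2];
  size_t len;

  if (read_exactly(fd, hdr, 2) < 0)
    return -1;
  len = ((size_t) hdr[0] << 8) | hdr[1];  /* big-endian length */
  if (len > max || read_exactly(fd, pkt, len) < 0)
    return -1;
  return (long) len;
}
```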

By default, TCP sessions do not use the journalling system. The j_sec
parameter overrides this default (Section C.1.1 of [1]). In the default
no-journal case, senders have a responsibility to send an RTP packet
stream in sequence-number order, without packet loss or reordering, and
receivers may assume a perfect packet stream.

In this default case, the recovery journal sending (Section 5) and
receiving (Section 7) algorithms described in the main text are
irrelevant. However, the core sending and receiving algorithms described
in Sections 4 and 6 are quite relevant to TCP sessions, as these
algorithms center on latency, bandwidth, and congestion issues.

Although TCP provides congestion control, interactive performance may
benefit if TCP senders use the congestion control methods described in
Section 4.3. Senders may use the interarrival jitter field [2] of RTCP
receiver reports to sense network congestion.

Finally, we note that if the j_sec parameter configures a TCP stream to
use the recovery journal, the RTP packet stream is not guaranteed to
arrive in sequence-number order [1]. This mode of operation may be used
to tunnel a UDP MWPP stream through a network barrier via TCP. In this
case, senders and receivers implement the recovery journal system as
described in Sections 5 and 7.


C.3 MWPP over TCP: RTSP Interleaving

The TCP methods used in the interactive example in Appendices C.1-2 may
also be used in content streaming applications. However, the Real Time
Streaming Protocol (RTSP, [6]) provides more convenient methods for TCP
streaming.

In normal RTSP usage, a receiver (or in RTSP terminology, a client) may
contact an RTSP server to engage in session control transactions.  Using
RTSP, a client may request a session description for a media stream,
whose format is coded with the Session Description Protocol.  However,
these session descriptions usually omit session transport details
(network addresses, ports, etc).





Lazzaro/Wawrzynek                                              [Page 57]


INTERNET-DRAFT                                             1 March 2003


Instead, RTSP clients and servers exchange transport details in the
Transport header lines of RTSP methods and responses. In most cases, the
net result of the transaction is identical to what would happen if the
transport information was carried in the session description.  Based on
the Transport header line data, servers and clients set up unicast or
multicast flows, and RTP and RTCP streams are carried on these flows.

However, this observation is not true for RTSP interleaved mode.
Interleaved mode, signalled by the use of the interleaved parameter on
the RTSP transport line, does not result in the creation or use of
separate transport connections for media. Instead, RTP and RTCP packets
are interleaved onto the TCP stream that carries the RTSP control
transactions. This method has the advantage that network middleboxes
cannot block the media, because it travels over the RTSP TCP connection
that already exists.

Section 10.13 of [6] describes RTSP interleaving in detail, including
packet framing methods. Here, we note one MWPP RTSP issue (as
normatively specified in [1]). If an MWPP stream is interleaved over a
TCP RTSP control stream, by default the MWPP payload does not use the
recovery journal. In this case, the server has the duty to send an RTP
packet stream in sequence-number order, without packet loss or
reordering, and the client may assume a perfect packet stream.

To set up an interleaved MWPP stream that uses the recovery journal, use
the j_sec parameter (as defined in Section C.1.1 of [1]) in the session
description. Note that if an interleaved MWPP stream uses the recovery
journal, the server is not obliged to send an RTP stream free from
packet loss or reordering events, and the client must be prepared to
handle these events.





















Lazzaro/Wawrzynek                                              [Page 58]


INTERNET-DRAFT                                             1 March 2003


Appendix D. Using MWPP without RTCP

MWPP works best with RTCP. MWPP implementations use RTCP in several
ways. MWPP senders use RTCP receiver reports as a feedback signal for
congestion control (Section 4.3). MWPP senders also use receiver reports
to trim the checkpoint history of the recovery journal (Section 5.4).
MWPP receivers use RTCP sender reports for multi-stream synchronization
(Appendix E).

However, MWPP does not require RTCP, and session descriptions may
specify MWPP streams that do not use RTCP. Embedded devices may choose
to only support MWPP without RTCP, to reduce memory requirements.

In this Appendix, we describe how MWPP senders and receivers perform
checkpoint history management, congestion control, and playback
synchronization without the use of RTCP. In the sections below, we
modify the application described in Sections 2-7 to work without RTCP.

D.1 MWPP without RTCP: Session Management

Section 2 shows how to start a two-party interactive session that uses
RTCP. Figures 1 and 2 show the session descriptions that define the
session. Figures D.1 and D.2 show modified versions of these session
descriptions that specify a session without RTCP. Figure D.1 defines
how the first party wishes to receive its stream; Figure D.2 defines how
the second party wishes to receive its stream.

The modified session descriptions disable RTCP by using bandwidth (b=)
lines to set the RTCP session bandwidth to zero [8]. The session
descriptions also use the SDP j_update parameter to define a sending
policy (Appendix C.1.2 of [1]) that is suitable for sessions that do not
use RTCP.

In the modified session descriptions, the first party (Figure D.1)
receives a stream that uses the anchor sending policy. In the anchor
policy, the checkpoint packet identity is fixed for the entire session.
This policy works well for streams that use a few MIDI command types
(Appendix A.4 of [13]). The ch_unused parameter specifies the MIDI
commands that the sender does not use (Appendix C.1.3 in [1]).

The second party (Figure D.2) receives a stream that uses the open-loop
sending policy. In this policy, the sender updates the checkpoint packet
at regular intervals, dropping older commands from the checkpoint
history. After a packet loss, receivers determine if the checkpoint
history covers the loss event, by using the checkpoint sequence number
coded in the recovery journal header (Figure 7 in [1]). If the loss is
not covered, the receiver executes MIDI commands to restore the
integrity of the stream.



Lazzaro/Wawrzynek                                              [Page 59]


INTERNET-DRAFT                                             1 March 2003


Note that for certain MIDI command types, receivers are not able to
recover from an uncovered loss event. For example, if a Control Change
(0xB) command for the channel volume controller (0x07) is prematurely
dropped from the checkpoint history, a receiver has no way to ascertain
the correct volume. To address this issue, the modified session
descriptions use the ch_anchor parameter to protect fragile chapters
(Appendix C.1.3 in [1]).  Open-loop senders never drop ch_anchor
chapters from the checkpoint history.

The session setup algorithms defined in Figures 3-7 may be used for
sessions that do not use RTCP, by deleting the code referencing the RTCP
socket descriptor rtcp_fd and the RTCP address rtcp_addr.  Once the
session begins, session housekeeping tasks are identical to those
described in Section 3, except that tasks related to RTCP are not
performed.


v=0
o=first 2520644554 2838152170 IN IP4 first.example.net
s=Example
t=0 0
c=IN IP4 192.0.2.94
m=audio 16112 RTP/AVP 96
a=rtpmap: 96 mwpp/44100
b=RS:0
b=RR:0
a=fmtp: 96 j_update=anchor; ch_unused=ATMDVQEX;


         Figure D.1 -- Session description for first participant.


v=0
o=second 2520644554 2838152170 IN IP4 second.example.net
s=Example
t=0 0
c=IN IP4 192.0.2.105
m=audio 5004 RTP/AVP 101
a=rtpmap: 101 mwpp/44100
b=RS:0
b=RR:0
a=fmtp: 101 j_update=open-loop; ch_unused=ATMDVQEX; ch_anchor=PWC;


         Figure D.2 -- Session description for second participant.






Lazzaro/Wawrzynek                                              [Page 60]


INTERNET-DRAFT                                             1 March 2003


D.2 MWPP without RTCP: Sender Issues

In this section, we modify the sender implementation described in
Sections 4 and 5 to support sessions that do not use RTCP. The
modifications affect congestion control, multi-stream synchronization,
and checkpoint history management.

Senders use RTCP receiver reports as feedback signals for congestion
control (Section 4.3). If RTCP is not in use, other congestion measures
may be available. For example, session management may take place via
peer-to-peer UDP SIP [14] transactions. In this case, the loss rate of
SIP response or ACK packets measures the combined congestion of the
forward and reverse paths. Note that this method is inferior to RTCP
receiver reports in several ways: SIP transactions may occur
infrequently and SIP proxies in the network path may degrade the loss
data.

RTCP also plays a role in multi-stream synchronization. RTCP sender
reports link the RTP timestamp clock to an absolute time clock.
Receivers may use this absolute reference to synchronize multiple MWPP
streams. However, MWPP also supports multi-stream synchronization
without RTCP, using the SDP zerosync parameter (Appendix C.4.2 of [1]).
The zerosync parameter defines sender behaviors for RTP timestamp
generation, as we discuss in detail in Appendix E.

Finally, sender modifications are necessary for checkpoint history
management. As the two parties defined in Figures D.1 and D.2 use
different checkpoint management policies, we describe two separate
modifications of the sender implementation.

The first party (Figure D.1) uses the anchor sending policy. In this
policy, the checkpoint packet identity is fixed for the entire session.
To implement this policy, we delete the trimming algorithm described in
Section 5.4 from the sender.

The second party (Figure D.2) uses the open-loop sending policy.  In
this policy, the sender updates the checkpoint packet at regular
intervals, dropping older commands from the checkpoint history.

To implement this policy, we replace the RTCP-oriented trimming
algorithm described in Section 5.4. The new algorithm implements a
control system to maintain the RTP stream bandwidth below a pre-defined
limit. Whenever the stream bandwidth exceeds the limit, the sender
reduces the size of the recovery journal.

To perform a trimming operation, the sender reduces the size of the
checkpoint history of the journal. The sender trims or deletes journal
chapters to match the shortened history, and updates the recovery



Lazzaro/Wawrzynek                                              [Page 61]


INTERNET-DRAFT                                             1 March 2003


journal header to code the new checkpoint packet. Chapters protected by
the ch_anchor parameter are never trimmed.

This algorithm is conservative, in that it aims to approximate the
behavior of the anchored checkpoint algorithm, subject to bandwidth
limits. Other open-loop trimming algorithms are possible. For example, a
trimming algorithm may aim to minimize the bandwidth of the stream, for
a given level of protection against uncovered packet loss.
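The trimming step described above may be sketched in C as follows.
The trim_state structure and the trim_step() function are our own
illustrative names, not an algorithm normatively defined in [1]; a
real sender would also trim or delete the unprotected journal
chapters after each step.

```c
#include <stdint.h>

/* One step of a conservative open-loop trimming policy.  The
 * trim_state structure and trim_step() are illustrative names;
 * [1] does not normatively define this algorithm.
 */
struct trim_state {
  double   bw_limit;    /* bandwidth budget, in octets per second */
  double   bw_current;  /* measured RTP stream bandwidth          */
  uint16_t checkpoint;  /* sequence number of checkpoint packet   */
  uint16_t newest;      /* sequence number of newest packet sent  */
};

/* Advance the checkpoint by one packet whenever the stream
 * bandwidth exceeds the limit, and return the (possibly new)
 * checkpoint sequence number.  The caller then trims or deletes
 * the unprotected journal chapters to match the shortened
 * history, leaving ch_anchor chapters intact, and codes the new
 * checkpoint packet in the recovery journal header.
 */
uint16_t trim_step(struct trim_state *t)
{
  if (t->bw_current > t->bw_limit && t->checkpoint != t->newest)
    t->checkpoint++;
  return t->checkpoint;
}
```

In this conservative design, the checkpoint advances one packet per
step, so the checkpoint history shrinks gradually until the stream
bandwidth drops below the limit.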


D.3 MWPP without RTCP: Receiver Issues

In this section, we modify the receiver described in Sections 6 and 7 to
support sessions that do not use RTCP. The modifications affect
algorithms for multi-stream synchronization and packet loss recovery.

Receivers use RTCP sender reports for multi-stream synchronization.
However, MWPP also supports multi-stream synchronization without RTCP,
using the SDP zerosync parameter (Appendix C.4.2 of [1]). The zerosync
parameter codes the relative timing of MWPP streams in a session.
Appendix E describes how receivers may use relative timing information
to synchronize multiple MWPP streams.

The packet loss recovery algorithm described in Section 7 is
incompatible
with the open-loop sending policy (Appendix C.1.2.3 of [1]). The open-
loop policy is often used in sessions that do not use RTCP, such as the
session defined in Figure D.2.

We now describe a modified version of the loss recovery algorithm that
fixes this incompatibility. In the modified algorithm, upon the
detection of a loss event, the receiver compares the checkpoint packet
sequence number coded in the recovery journal header to the highest RTP
sequence number previously seen in the stream. This comparison is
performed modulo 2^16, and uses standard methods (described in [2]) to
avoid tracking errors during rollover.

If the checkpoint packet sequence number is less than the highest RTP
sequence number, the recovery journal may not code complete recovery
information for the packet loss event. We refer to this condition as an
uncovered loss event.
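This comparison may be coded with the signed-difference method of
[2], as the C sketch below shows (the function name is illustrative):

```c
#include <stdint.h>

/* Return 1 for an uncovered loss event: the checkpoint sequence
 * number coded in the recovery journal header lies before the
 * highest RTP sequence number seen in the stream, modulo 2^16.
 * Casting the 16-bit difference to a signed type yields the
 * shortest modular distance, so the test remains correct across
 * sequence-number rollover (the standard method described in
 * [2]).
 */
int uncovered_loss(uint16_t checkpoint_seq, uint16_t highest_seq)
{
  return (int16_t)(highest_seq - checkpoint_seq) > 0;
}
```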

When an uncovered loss occurs, the chapter-specific recovery algorithms
use a modified recovery strategy that takes the incomplete nature of
the chapter data into account. Note that for the session defined in
Figure D.2, only Chapter N is vulnerable to uncovered losses, as the
ch_anchor parameter protects Chapters W, P, and C, and the ch_unused
parameter excludes all other chapters from the journal.




Lazzaro/Wawrzynek                                              [Page 62]


INTERNET-DRAFT                                             1 March 2003


In the case of an uncovered loss event, the Chapter N recovery procedure
described in Section 7.2 performs in the following way.  If data for a
note number appears in Chapter N, the algorithms described in Section
7.2 are executed as normal.

If data for a note number does not appear in Chapter N, and the vel[]
array in jrec_chaptern indicates the note is currently on, we assume a
NoteOff command was lost for the note number. Thus, we execute the
recovery procedure that would occur if the bit for the note number in
the journal NoteOff bit array were set, as described in Section 7.2.
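The uncovered-loss handling for Chapter N may be sketched in C as
follows. The jrec_chaptern layout shown here assumes only the vel[]
array mentioned above; the actual receiver structures are defined in
the main text, and the function and parameter names are our own.

```c
#include <stdint.h>

#define NUM_NOTES 128

/* Illustrative per-channel note state: vel[n] holds the velocity
 * of the most recent NoteOn for note n, or 0 if the note is off.
 * This sketch assumes only the vel[] array of jrec_chaptern.
 */
struct jrec_chaptern {
  uint8_t vel[NUM_NOTES];
};

/* Uncovered-loss handling for Chapter N.  For each note that the
 * chapter does not mention but that our state says is on, record
 * the note in silenced[] and mark it off; the caller executes the
 * Section 7.2 NoteOff recovery procedure for each recorded note.
 * Returns the number of notes recorded.
 */
int chapter_n_uncovered(struct jrec_chaptern *s,
                        const uint8_t chapter_has_note[NUM_NOTES],
                        uint8_t silenced[NUM_NOTES])
{
  int count = 0;
  for (int n = 0; n < NUM_NOTES; n++) {
    if (!chapter_has_note[n] && s->vel[n] != 0) {
      silenced[count++] = (uint8_t)n;  /* assume a lost NoteOff */
      s->vel[n] = 0;
    }
  }
  return count;
}
```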









































Lazzaro/Wawrzynek                                              [Page 63]


INTERNET-DRAFT                                             1 March 2003


Appendix E. Multi-stream MWPP Sessions

The session descriptions shown in the main text (Figures 1 and 2 in
Section 2) use one media (m=) line. This media line specifies the
delivery of a single MIDI name space (16 voice channels + systems).

In this Appendix, we show how to use several MWPP media lines in a
session description, and discuss why applications may wish to do so. We
use the term multi-stream session to describe sessions of this type.

Multi-stream session descriptions may use grouping lines [11] to specify
the synchronized playback of media streams.  For example, a session
description may group an audio and a video media stream, to specify a
lip-synced presentation. In a similar manner, grouping lines may serve
to synchronize MIDI flows in multi-stream MWPP sessions.

Multi-stream MWPP session descriptions may also specify:

  o Name space relationships. The SDP MWPP parameter midiport
    may be used to code name space relationships between MWPP
    streams. For example, one stream may code voice channels
    1-8 of a MIDI name space, and a second stream may code
    voice channels 9-16 of the same MIDI name space.

  o Synchronization mechanics. MWPP streams code discrete
    events, not a continuous media flow. The MWPP SDP parameter
    zerosync codes information to improve synchronization
    lock-in time for event streams.

In this Appendix, we describe common uses for multi-stream MWPP sessions
(Appendix E.1). We also discuss MWPP sender and receiver synchronization
(Appendix E.2) and name space (Appendix E.3) issues.


E.1 Multi-Stream Session Scenarios

In this section, we describe several applications of multi-stream
sessions, and show session descriptions for each application. We also
discuss how applications set up multi-stream sessions.

A simple form of multi-stream session specifies two or more independent
MWPP streams. If MWPP streams are independent, the MIDI name spaces of
the streams are unrelated, and the streams are rendered independently.
However, grouping lines may be used to specify synchronized rendering of
the streams (the LS grouping semantics [11]). Grouping lines may also be
used to specify that several MWPP streams represent the same data flow
(the FID grouping semantics [11]), and thus are copies of each other.




Lazzaro/Wawrzynek                                              [Page 64]


INTERNET-DRAFT                                             1 March 2003


Figures E.1 and E.2 show an independent multi-stream session. One
party (Figure E.1) is a musician who has two keyboard controllers in
her rig.
The session description maps each keyboard to a separate MWPP stream,
and uses the sendonly attribute to code that the controllers do not
accept MIDI input. The NTP timestamps coded in the RTCP sender reports
for the two streams share a common clock source.


v=0
o=first 2520644554 2838152170 IN IP4 first.example.net
s=Two keyboards driving independent synths.
t=0 0
c=IN IP4 192.0.2.105
a=sendonly
a=group: LS top bottom
m=audio 5004 RTP/AVP 96
i=Keyboard top
a=rtpmap: 96 mwpp/44100
a=mid:top
a=fmtp: 96 zerosync=18293
m=audio 5006 RTP/AVP 96
i=Keyboard bottom
a=rtpmap: 96 mwpp/44100
a=mid:bottom
a=fmtp: 96 zerosync=23893

  Figure E.1 -- Player definition for the two-keyboard example.


v=0
o=second 102902938 9837465 IN IP4 second.example.net
s=Two keyboards driving independent synths.
t=0 0
c=IN IP4 192.0.2.94
a=recvonly
a=group: LS top bottom
m=audio 16112 RTP/AVP 96
i=Synth for top keyboard
a=rtpmap: 96 mwpp/44100
a=mid:top
m=audio 16114 RTP/AVP 96
i=Synth for bottom keyboard
a=rtpmap: 96 mwpp/44100
a=mid:bottom

  Figure E.2 -- Synth definition for the two-keyboard example.





Lazzaro/Wawrzynek                                              [Page 65]


INTERNET-DRAFT                                             1 March 2003


The second party (Figure E.2) is a computer that runs two separate music
synthesizers, one for each stream. The recvonly attribute in the session
description of the second party codes that the synths do not send MIDI
back to the keyboards. The first party sends its MWPP streams to the
network addresses defined in the session description of the second
party.

In these session descriptions, the LS grouping parameter codes that the
streams are to be synchronized on playback. Apart from this
synchronization, rendering proceeds independently for the two streams.
To aid synchronization lock at the start of the session, the session
description for the musician uses the MWPP SDP parameter zerosync. In
Appendix E.2, we describe how senders and receivers use data coded by
the zerosync parameter.

Figures E.3 and E.4 show another independent multi-stream session.  In
this example, the first party (Figure E.3) is a musician with a single
keyboard controller in her rig. Identical copies of the MIDI commands
from the controller are sent over both streams. The grouping parameter
FID codes that both streams represent the same flow of data.


v=0
o=first 2520644554 2838152170 IN IP4 first.example.net
s=One keyboard sent to a synth and a recorder
t=0 0
c=IN IP4 192.0.2.105
a=sendonly
a=group: FID synth recorder
m=audio 5004 RTP/AVP 96
i=The synth stream
a=rtpmap: 96 mwpp/44100
a=mid:synth
m=audio 9 TCP RTP/AVP 96
i=The recorder stream
a=rtpmap: 96 mwpp/44100
a=mid:recorder
a=direction:active

   Figure E.3 -- Player definition for the one-keyboard example.











Lazzaro/Wawrzynek                                              [Page 66]


INTERNET-DRAFT                                             1 March 2003


v=0
o=second 2520644554 2838152170 IN IP4 second.example.net
s=One keyboard sent to a synth and a recorder
t=0 0
a=recvonly
a=group: FID synth recorder
m=audio 16112 RTP/AVP 96
i=The synth stream
c=IN IP4 192.0.2.94
a=rtpmap: 96 mwpp/44100
a=mid:synth
m=audio 16114 TCP RTP/AVP 96
i=The recorder stream
c=IN IP4 192.0.2.21
a=rtpmap: 96 mwpp/44100
a=mid:recorder
a=direction:passive

   Figure E.4 -- Target definition for the one-keyboard example.


The second party (Figure E.4) renders the first MWPP stream into audio,
and records the second MWPP stream to disk. The rendered stream uses UDP
transport, for lowest latency. The archived stream uses TCP transport,
to ensure an accurate recording. As the two applications run on
different machines, the two streams have different network addresses
(192.0.2.94 and 192.0.2.21).

We now show examples of sessions that do not use independent streams.
Instead, the streams in the session are in a relationship (Appendix C.4
of [1]). Streams in a relationship share a common MIDI name space. For
some MWPP renderers (such as sasc, defined in Appendix C.5 of [1]), the
streams in a relationship also share the same renderer instance (a
property defined in Appendix C.4.1 of [1]).

One type of MWPP relationship is the identity relationship. All streams
in an identity relationship target the same MIDI name space (16 voice
channels + systems). Two MWPP streams share an identity relationship if
the same value is assigned to the midiport parameter in each stream
description (Appendix C.4.1 of [1]).











Lazzaro/Wawrzynek                                              [Page 67]


INTERNET-DRAFT                                             1 March 2003


v=0
o=first 2520644554 2838152170 IN IP4 first.example.net
s=Keyboard and librarian controlling a synth.
t=0 0
c=IN IP4 192.0.2.105
a=group: LS keyboard librarian
m=audio 5004 RTP/AVP 96
i=The keyboard stream
a=rtpmap: 96 mwpp/44100
a=mid:keyboard
a=sendonly
a=fmtp: 96 midiport=12; zerosync=0;
a=fmtp: 96 ch_unused=X;
m=audio 9 TCP RTP/AVP 96
i=The librarian stream
a=rtpmap: 96 mwpp/44100
a=direction:active
a=mid:librarian
a=fmtp: 96 midiport=12; zerosync=0;
a=fmtp: 96 ch_unused=PWNATCMDVQE;
a=sendrecv

      Figure E.5 -- Player for the keyboard/librarian example.


v=0
o=second 2520644554 2838152170 IN IP4 second.example.net
s=Keyboard and librarian controlling a synth.
t=0 0
c=IN IP4 192.0.2.94
a=group: LS keyboard librarian
m=audio 16112 RTP/AVP 96
i=The keyboard stream
a=rtpmap: 96 mwpp/44100
a=mid:keyboard
a=recvonly
a=fmtp: 96 midiport=12;
m=audio 16114 TCP RTP/AVP 96
i=The librarian stream
a=rtpmap: 96 mwpp/44100
a=mid:librarian
a=direction:passive
a=fmtp: 96 midiport=12;
a=fmtp: 96 ch_unused=PWNATCMDVQE;
a=sendrecv

    Figure E.6 -- Synth for the keyboard/librarian example.




Lazzaro/Wawrzynek                                              [Page 68]


INTERNET-DRAFT                                             1 March 2003


Figures E.5 and E.6 define a session that contains an identity
relationship. In this example, the first party (Figure E.5) is a
musician, and the second party (Figure E.6) is a synthesizer.

The musician uses a MIDI controller keyboard to play the synthesizer.
Occasionally, the musician also uses a MIDI librarian program to update
the timbre memory of the synthesizer. The librarian generates and
receives MIDI System Exclusive commands that coordinate large data
transfers between the librarian and the synthesizer.

The session uses two streams, in order to optimize the network transport
for each type of data. The controller keyboard stream uses UDP
transport, for good latency performance. The librarian stream uses TCP
transport, for reliable transfer of timbre data.

The streams share an identity relationship, so that both the keyboard
and the librarian may issue commands in the MIDI name space of the
synthesizer. The midiport parameter in each stream establishes the
identity relationship. The ch_unused parameter in the librarian stream
specifies that the stream uses only MIDI System Exclusive commands.

The streams are synchronized using the LS grouping semantics and the
zerosync parameter, to ensure correct temporal ordering of MIDI events
in the two streams. The librarian stream uses the sendrecv attribute, as
the MIDI handshaking protocols used by the librarian generate
bidirectional traffic.

Our final example uses a session with an ordered relationship. Ordered
relationships accommodate applications that group MIDI streams into an
extended name space (32 voice channels, 48 voice channels, etc).

The first party (Figure E.7) is a sequencer program that is playing
back a 32-channel MIDI performance. Channels 1-16 are sent on the
first stream, and channels 17-32 are sent on the second stream. The
adjacent midiport values (5 and 6) for the streams establish the
ordering.

The second party (Figure E.8) is an MPEG 4 Structured Audio [7] renderer
(one of the sasc renderers). Structured Audio supports the concept of an
extended MIDI channel number space.  As normatively specified in
Appendix C.4.1 of [1], both streams in this ordered relationship target
a single instance of this sasc renderer.

We now discuss session management issues for multi-stream sessions.  At
the network layer, each stream in the session is an independent entity.
Multi-stream applications implement the session setup and management
algorithms described in Sections 2 and 3 for each stream.  Each stream
uses a unique pair of network ports, accessed by separate instances of
the rtp_fd and rtcp_fd socket descriptors (Section 2).
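Per-stream transport setup may be sketched in C as below. The
mwpp_stream structure, the helper names, and the even/odd port
convention are illustrative assumptions, and error handling is
minimal.

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative per-stream transport state: one rtp_fd/rtcp_fd
 * pair for each media line in the session description.
 */
struct mwpp_stream {
  int rtp_fd;
  int rtcp_fd;
};

/* Bind one UDP socket to the given local port; -1 on failure. */
static int bind_udp(int port)
{
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  struct sockaddr_in a;

  if (fd < 0)
    return -1;
  memset(&a, 0, sizeof(a));
  a.sin_family = AF_INET;
  a.sin_addr.s_addr = htonl(INADDR_ANY);
  a.sin_port = htons((unsigned short)port);
  if (bind(fd, (struct sockaddr *)&a, sizeof(a)) < 0) {
    close(fd);
    return -1;
  }
  return fd;
}

/* Open an even/odd RTP/RTCP port pair for each of nstreams
 * streams, starting at base_port.  Returns 0 on success.
 */
int open_streams(struct mwpp_stream *s, int nstreams, int base_port)
{
  for (int i = 0; i < nstreams; i++) {
    s[i].rtp_fd  = bind_udp(base_port + 2 * i);
    s[i].rtcp_fd = bind_udp(base_port + 2 * i + 1);
    if (s[i].rtp_fd < 0 || s[i].rtcp_fd < 0)
      return -1;
  }
  return 0;
}
```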



Lazzaro/Wawrzynek                                              [Page 69]


INTERNET-DRAFT                                             1 March 2003


v=0
o=first 2520644554 2838152170 IN IP4 first.example.net
s=32-channel sequencer driving a Structured Audio renderer
t=0 0
c=IN IP4 192.0.2.64
a=group: LS upper lower
m=audio 5004 RTP/AVP 61
a=rtpmap: 61 mpeg4-generic/44100
a=fmtp: 61 streamtype=5; mode=mwpp; config=""; profile-level-id=74;
a=fmtp: 61 midiport=5;zerosync=0;
a=mid:lower
a=sendonly
m=audio 5006 RTP/AVP 62
a=rtpmap: 62 mpeg4-generic/44100
a=fmtp: 62 streamtype=5; mode=mwpp; config=""; profile-level-id=74;
a=fmtp: 62 midiport=6;zerosync=0;
a=fmtp: 62 render=sasc; url="http://www.example.com/cardinal.sasc";
a=fmtp: 62 cid="azsldkaslkdjqpwojdkmsldkfpe";
a=mid:upper
a=sendonly

   Figure E.7 -- Sequencer for the Structured Audio example.



v=0
o=second 2520644554 2838152170 IN IP4 second.example.net
s=32-channel sequencer driving a Structured Audio renderer
t=0 0
c=IN IP4 192.0.2.69
a=group: LS upper lower
m=audio 10000 RTP/AVP 61
a=rtpmap: 61 mpeg4-generic/44100
a=fmtp: 61 streamtype=5; mode=mwpp; config=""; profile-level-id=74;
a=fmtp: 61 midiport=5;zerosync=0;
a=mid:lower
a=recvonly
m=audio 10002 RTP/AVP 62
a=rtpmap: 62 mpeg4-generic/44100
a=fmtp: 62 streamtype=5; mode=mwpp; config=""; profile-level-id=74;
a=fmtp: 62 midiport=6;zerosync=0;
a=fmtp: 62 render=sasc; url="http://www.example.com/cardinal.sasc";
a=fmtp: 62 cid="azsldkaslkdjqpwojdkmsldkfpe";
a=mid:upper
a=recvonly

   Figure E.8 -- Renderer for the Structured Audio example.




Lazzaro/Wawrzynek                                              [Page 70]


INTERNET-DRAFT                                             1 March 2003


E.2 Synchronization Issues

In this section, we discuss how senders and receivers synchronize
multiple MWPP streams. We begin with a review of RTP multi-stream
synchronization methods. We describe how to apply these methods to MWPP
streams, and discuss synchronization issues that are unique to MWPP.

A common RTP synchronization task is lip-syncing audio and video
streams. In a typical situation, a session begins with a multimedia
server sending audio and video RTP streams to a receiver. The first
packets sent in the audio and the video streams represent the same
moment in media time. The receiver assumes the first packet for both
streams arrived without loss, and begins to buffer the two streams.
Once the buffers are sufficiently full, audio and video playback begins.

As the session progresses, drift between the four clocks in the system
(audio capture, video capture, audio playback, video playback) results
in the audio and video falling out of sync. The receiver is unable to
use the RTP timestamps of the two streams to restore sync, because these
timestamps embody the capture drift. Even determining the numerical
relationship between the audio and video timestamps is not trivial, as
each RTP stream uses a random timestamp offset [2].

To solve this problem, the receiver uses the RTCP sender reports of the
two streams. These reports are sent at frequent intervals (for a two-
party session that follows the guidelines in [2], about once every 5
seconds). Each sender report codes a 64-bit Network Time Protocol wall-
clock timestamp together with its associated RTP timestamp for the
stream. By mapping the RTP timestamps for each stream to NTP absolute
time, the receiver is able to sync the audio and video stream to the
accuracy limits of the NTP clock sources.
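A receiver might implement this mapping as sketched below. The
sr_info structure is an illustrative assumption that caches the
relevant fields of the most recent sender report for one stream.

```c
#include <stdint.h>

/* Cached fields from the most recent RTCP sender report for one
 * stream (an illustrative structure).  The 64-bit NTP timestamp
 * holds seconds in the high 32 bits and the fraction of a second
 * in the low 32 bits.
 */
struct sr_info {
  uint32_t rtp_ts;  /* RTP timestamp from the sender report */
  uint64_t ntp_ts;  /* NTP timestamp from the sender report */
  uint32_t srate;   /* RTP clock rate, e.g. 44100           */
};

/* Map a packet's RTP timestamp to NTP wall-clock time, in
 * seconds.  The modulo 2^32 subtraction handles RTP timestamp
 * rollover; timestamps mapped this way may be compared across
 * streams to restore sync.
 */
double rtp_to_ntp_seconds(const struct sr_info *sr, uint32_t rtp_ts)
{
  int32_t delta = (int32_t)(rtp_ts - sr->rtp_ts);   /* RTP ticks */
  double  base  = (double)(uint32_t)(sr->ntp_ts >> 32) +
                  (double)(uint32_t)sr->ntp_ts / 4294967296.0;

  return base + (double)delta / (double)sr->srate;
}
```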

We now consider how to apply these synchronization methods to multi-
stream MWPP sessions. We begin by considering MWPP receivers that use a
playout buffer, such as the content streaming applications described in
Appendix A. Some types of interactive receivers also use playout buffers
(Section 6 of the main text).

For these receivers, the standard RTP synchronization methods work
well once RTCP reports for all streams have arrived. However, until
RTCP delivers the initial mappings of RTP timestamps to NTP timestamps
for the streams, the receiver may render the MWPP streams wildly out
of sync.








Lazzaro/Wawrzynek                                              [Page 71]


INTERNET-DRAFT                                             1 March 2003


Stream startup is a problem because the first packet in an MWPP stream
codes the first MIDI command in the stream. Unless the first MIDI
commands in all MWPP streams in a session happen at the same moment in
time, playout buffers that render the first packet of all streams
simultaneously will not produce a synchronized output.

The MWPP zerosync parameter, defined in Appendix C.4.2 of [1], codes
startup timing information in the session descriptions. Receivers may
use this information to synchronize the startup of multiple MWPP
streams. As normatively stated in [1], if the MWPP streams in an LS
grouping use the zerosync parameter, the srate values for all streams
MUST be identical.

The zerosync parameter may be used in two different ways. The first
use, shown in the session descriptions in Figures E.1 and E.2, encodes
the RTP timestamp offset of each stream in the zerosync parameter (in
the example, 18293 for the top stream and 23893 for the bottom
stream). By subtracting (modulo 2^32) the stream offset from the
RTP timestamp for each packet, the receiver may recover common stream
timestamps for use during the startup period. Note that if the session
description transport occurs in a secure manner, this use of zerosync
does not degrade RTP security.
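This first use of zerosync may be coded as a modulo 2^32 subtraction,
which in C is ordinary unsigned 32-bit arithmetic. The function names
below are illustrative.

```c
#include <stdint.h>

/* Recover a common startup timestamp from a packet's RTP
 * timestamp and the stream's zerosync offset from the session
 * description (18293 for "top" and 23893 for "bottom" in Figure
 * E.1).  Unsigned 32-bit subtraction is already modulo 2^32, so
 * timestamp rollover needs no special handling.
 */
uint32_t common_timestamp(uint32_t rtp_ts, uint32_t zerosync_offset)
{
  return rtp_ts - zerosync_offset;
}

/* Signed comparison of two common timestamps: negative if a is
 * earlier than b, modulo 2^32.
 */
int32_t common_ts_compare(uint32_t a, uint32_t b)
{
  return (int32_t)(a - b);
}
```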

A second use of zerosync, shown in the session descriptions in Figures
E.5 and E.6, sets the zerosync parameter for each stream to the special
value of 0. In this use of zerosync, all MWPP streams in the session
whose zerosync value is zero use the same RTP timestamp offset, and so
the RTP timestamps of the streams may be directly compared. This use of
zerosync weakens the security of an encrypted RTP stream, and should be
avoided in secure sessions.

Finally, we consider MWPP applications that do not use a playout buffer,
such as the simpler interactive receiver designs described in Section 6
of the main text.

In these applications, the temporal integrity of the performance is
based on the assumption that the underlying network has low nominal
jitter. These methods rely on careful implementations of sender and
receiver algorithms, to minimize the introduction of processing jitter
at the endpoints. Multi-stream versions of this architecture must be
doubly careful in this regard, to avoid adding temporal offsets or
jitter across streams.

Receivers that do not use a playout buffer use RTP timestamps to
identify packets that arrive late, as described in Section 6.1 of the
main text. However, these algorithms use timestamps in a differential
way, and so multi-stream versions of these receivers do not need to
synchronize the timestamps of the streams.
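The differential use of timestamps may be sketched in C as follows.
The late_state structure and the threshold handling are our own
assumptions, not the normative Section 6.1 algorithm.

```c
#include <stdint.h>

/* Differential lateness test (a sketch).  Each stream keeps the
 * smallest observed difference between the local arrival clock
 * (converted to RTP ticks) and the packet RTP timestamp as a
 * baseline.  A packet is late when its own difference exceeds
 * the baseline by more than a threshold.  Because only
 * differences within one stream are used, the timestamp offsets
 * of the streams never need to be aligned.
 */
struct late_state {
  int     have_base;
  int32_t base;       /* minimum (arrival - rtp_ts), in ticks */
};

int packet_is_late(struct late_state *st, uint32_t arrival_ticks,
                   uint32_t rtp_ts, int32_t threshold)
{
  int32_t d = (int32_t)(arrival_ticks - rtp_ts);

  if (!st->have_base || d < st->base) {
    st->base = d;
    st->have_base = 1;
  }
  return (d - st->base) > threshold;
}
```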



Lazzaro/Wawrzynek                                              [Page 72]


INTERNET-DRAFT                                             1 March 2003


E.3 Name Space Issues

In this Appendix, we discuss implementation issues for streams that
target the same MIDI name space (16 voice channels + systems). Streams
that target the same MIDI name space share an identity relationship.
Figures E.5 and E.6 of Appendix E.1 show an example session that
includes an identity relationship.

We focus on identity relationships because MWPP streams that target the
same MIDI name space run the risk of integrity loss. The other type of
stream relationship, the ordered relationship (Figures E.7 and E.8 of
Appendix E.1), poses no unusual implementation issues.

Identity relationships are prone to integrity loss, because an arbitrary
partitioning of a MIDI name space between several streams may circumvent
the recovery journal system. For example, consider an identity
relationship that placed all NoteOn (0x9) commands on one stream and all
NoteOff (0x8) commands on a second stream.

If the two streams are not perfectly synchronized, the NoteOff pattern
may slip ahead of the NoteOn pattern, and stuck notes may occur.  Packet
losses are also problematic in this partitioning scheme, as the recovery
journal mechanism for MIDI Note commands (Chapter N, as defined in
Appendix A.4 in [1]) assumes that all MIDI note commands for a channel
are present in the stream.

Reference [1] describes the multi-stream integrity issue, and defines
normative guidelines to prevent it from occurring (Appendix C.4.1 in
[1]). We summarize these guidelines below:

  o  Session participants MUST choose a MIDI name space partitioning
     that does not result in rendered performances that contain
     indefinite artifacts.

  o  If an artifact-free performance requires a specific temporal
     sequencing of commands across streams, senders MUST guarantee
     this sequencing.

  o  Receivers MUST maintain the structural integrity of the MIDI
     name space as it merges incoming streams. This requirement
     includes transaction-oriented MIDI commands, such as the
     Registered and Non-Registered Parameter MIDI Control Change
     (0xB) commands. In this case, receivers assume that a
     transaction occurs within a single stream.

The safest way to partition a MIDI name space is to place all commands
affecting a voice channel, including System Exclusive commands that are
associated with the channel, into one stream. In addition, Systems



Lazzaro/Wawrzynek                                              [Page 73]


INTERNET-DRAFT                                             1 March 2003


commands with related functionality, such as the MIDI sequencer
commands, should also be grouped together in a stream.

However, application requirements may conflict with these simple
partitioning rules, and a more nuanced approach may be required. For
example, a player may wish to route two MIDI controllers, such as a
keyboard controller (generating Note commands) and a continuous
controller (generating Control Change (0xB) commands), to the same
synthesizer. In situations of this nature, it is safe to split one MIDI
voice channel between streams that share a MIDI name space.

The MIDI librarian session in Figures E.5 and E.6 of Appendix E.1
shows another example of nuanced multi-stream partitioning. In this
session, bulk-data System Exclusive commands related to a voice
channel are sent on a separate stream from interactive voice channel
commands. This stream partitioning optimizes the network transport
type for real-time (sent on a UDP stream) and bulk-data (sent on a
TCP stream) MIDI commands.

This librarian example shows the rationale for the sender and receiver
responsibilities for multi-stream systems defined in [1]. Senders are
responsible for correct intra-stream sequencing, because (in this
example) careless sender RTP timestamps may place real-time MIDI
commands on the wrong side of a bulk-data transfer. Likewise, a careless
receiver implementation that did not respect MIDI merging semantics
might attempt to interleave commands from the real-time stream into an
ongoing bulk-data download.
























Lazzaro/Wawrzynek                                              [Page 74]


INTERNET-DRAFT                                             1 March 2003


Appendix F. References


F.1 Normative References

[1] John Lazzaro and John Wawrzynek. The MIDI Wire Protocol
Packetization (MWPP). draft-ietf-avt-mwpp-midi-rtp-06.txt.

[2] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson. RTP: A
transport protocol for real-time applications. Work in progress,
draft-ietf-avt-rtp-new-11.txt.

[3] H. Schulzrinne and S. Casner. RTP Profile for Audio and Video
Conferences with Minimal Control. Work in progress,
draft-ietf-avt-profile-new-12.txt.

[4] MIDI Manufacturers Association. The complete MIDI 1.0 detailed
specification, 1996. http://www.midi.org

[5] M. Handley, V. Jacobson and C. Perkins. SDP: Session Description
Protocol. Work in progress, draft-ietf-mmusic-sdp-new-10.txt.

[6] H. Schulzrinne, A. Rao, and R. Lanphier. Real Time Streaming
Protocol (RTSP). Work in progress,
draft-ietf-mmusic-rfc2326bis-00.txt.

[7] International Organization for Standardization. ISO 14496 (MPEG-4),
Part 3 (Audio), Subpart 5 (Structured Audio), 1999.

[8] S. Casner. SDP Bandwidth Modifiers for RTCP Bandwidth. Work in
progress, draft-ietf-avt-rtcp-bw-05.txt.

[9] D. Yon. Connection-Oriented Media Transport in SDP. Work in
progress, draft-ietf-mmusic-sdp-comedia-04.txt.

[10] The forthcoming I-D to bring back the old RTP RFC TCP framing
method. If a new framing method is chosen by the WG instead, the text
attached to this reference will change to describe the new framing
method.

[11] G. Camarillo, G. Eriksson, J. Holler, H. Schulzrinne. Grouping of
Media Lines in the Session Description Protocol (SDP).  RFC 3388.

[12] A. Li, F. Liu, J. Villasenor, J.H. Park, D.S. Park, Y.L. Lee,
J. Rosenberg, and H. Schulzrinne. An RTP Payload Format for Generic FEC
with Uneven Level Protection. Work in progress,
draft-ietf-avt-ulp-07.txt.

F.2 Informative References

[13] John Lazzaro and John Wawrzynek. A Case for Network Musical
Performance. The 11th International Workshop on Network and Operating
Systems Support for Digital Audio and Video (NOSSDAV 2001) June 25-26,
2001, Port Jefferson, New York.
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/nossdav01.pdf

[14] J. Rosenberg, H. Schulzrinne, G. Camarillo, A. Johnston,
J. Peterson, R. Sparks, M. Handley, and E. Schooler. SIP: Session
Initiation Protocol. Internet Engineering Task Force, RFC 3261.

[15] J. Rosenberg, R. Mahy, and S. Sen. NAT and Firewall Scenarios and
Solutions for SIP. Work in progress,
draft-ietf-sipping-nat-scenarios-00.txt.

[16] M. Baugher, D. McGrew, D. Oran, R. Blom, E. Carrara, M. Naslund,
and K. Norrman. The Secure Real-time Transport Protocol. Work in
progress, draft-ietf-avt-srtp-05.txt.

[17] Dominique Fober, Yann Orlarey, and Stephane Letz. Real Time
Musical Events Streaming over Internet. Proceedings of the
International Conference on WEB Delivering of Music 2001, pages 147-154.
http://www.grame.fr/~fober/RTESP-Wedel.pdf

[18] C. Bormann et al. Robust Header Compression (ROHC). Internet
Engineering Task Force, RFC 3095. Also see related work at
http://www.ietf.org/html.charters/rohc-charter.html.

[19] Sfront source code release, which includes a Linux networking
client that implements the MIDI RTP packetization.
http://www.cs.berkeley.edu/~lazzaro/sa/

[20] The SoundWire group,
http://ccrma-www.stanford.edu/groups/soundwire/

[21] Joerg Ott, Stephan Wenger, Noriyuki Sato, Carsten Burmeister, and
Jose Rey. Extended RTP Profile for RTCP-based Feedback (RTP/AVPF).
Work in progress, draft-ietf-avt-rtcp-feedback-04.txt.

Appendix G. Author Addresses

John Lazzaro (corresponding author)
UC Berkeley
CS Division
315 Soda Hall
Berkeley CA 94720-1776
Email: lazzaro@cs.berkeley.edu

John Wawrzynek
UC Berkeley
CS Division
631 Soda Hall
Berkeley CA 94720-1776
Email: johnw@cs.berkeley.edu
