PPSP                                                               Y. Gu
Internet-Draft                                                   N. Zong
Intended status: Standards Track                                  Huawei
Expires: September 12, 2011                                   Hui. Zhang
                                                       NEC Labs America.
                                                           Yunfei. Zhang
                                                            China Mobile
                                                                  J. Lei
                                                University of Goettingen
                                                      Gonzalo. Camarillo
                                                                Ericsson
                                                               Yong. Liu
                                                  Polytechnic University
                                                         Delfin. Montuno
                                                                Lei. Xie
                                                                  Huawei
                                                          March 11, 2011


                  Survey of P2P Streaming Applications
                       draft-ietf-ppsp-survey-01

Abstract

   This document presents a survey of popular Peer-to-Peer streaming
   applications on the Internet.  We focus on the architecture and the
   Peer Protocol/Tracker Signaling Protocol of each system, and study a
   selection of well-known P2P streaming systems, including Joost,
   PPLive, and other popular existing systems.  Through the survey, we
   summarize a common P2P streaming process model and the corresponding
   signaling process for P2P Streaming Protocol standardization.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."




Gu, et al.             Expires September 12, 2011               [Page 1]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   This Internet-Draft will expire on September 12, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.



































Gu, et al.             Expires September 12, 2011               [Page 2]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
   2.  Terminologies and concepts . . . . . . . . . . . . . . . . . .  4
   3.  Survey of P2P streaming systems  . . . . . . . . . . . . . .  5
     3.1.  Mesh-based P2P streaming systems . . . . . . . . . . . . .  5
       3.1.1.  Joost  . . . . . . . . . . . . . . . . . . . . . . . .  6
       3.1.2.  Octoshape  . . . . . . . . . . . . . . . . . . . . . .  9
       3.1.3.  PPLive . . . . . . . . . . . . . . . . . . . . . . . . 12
       3.1.4.  Zattoo . . . . . . . . . . . . . . . . . . . . . . . . 14
       3.1.5.  PPStream . . . . . . . . . . . . . . . . . . . . . . . 17
       3.1.6.  SopCast  . . . . . . . . . . . . . . . . . . . . . . . 18
       3.1.7.  TVants . . . . . . . . . . . . . . . . . . . . . . . . 19
     3.2.  Tree-based P2P streaming systems . . . . . . . . . . . . . 21
       3.2.1.  PeerCast . . . . . . . . . . . . . . . . . . . . . . . 21
       3.2.2.  Conviva  . . . . . . . . . . . . . . . . . . . . . . . 23
     3.3.  Hybrid P2P streaming system  . . . . . . . . . . . . . . . 26
       3.3.1.  New Coolstreaming  . . . . . . . . . . . . . . . . . . 26
   4.  A common P2P Streaming Process Model . . . . . . . . . . . . . 29
   5.  Security Considerations  . . . . . . . . . . . . . . . . . . . 30
   6.  Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . 30
   7.  Informative References . . . . . . . . . . . . . . . . . . . . 30
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 33




























Gu, et al.             Expires September 12, 2011               [Page 3]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


1.  Introduction

   Toward standardizing the signaling protocols used in today's Peer-to-
   Peer (P2P) streaming applications, we surveyed several popular P2P
   streaming systems regarding their architectures and signaling
   protocols between peers, as well as between peers and trackers.  The
   studied P2P streaming systems, running worldwide or domestically,
   include PPLive, Joost, Cybersky-TV, and Octoshape.  This document
   does not intend to cover all design options of P2P streaming
   applications.  Instead, we choose a representative set of
   applications and focus on the respective signaling characteristics of
   each kind.  Through the survey, we generalize a common streaming
   process model from those P2P streaming systems, and summarize the
   companion signaling process as the base for P2P Streaming Protocol
   (PPSP) standardization.


2.  Terminologies and concepts

   Chunk: A chunk is a basic unit of partitioned streaming media, which
   is used by a peer for the purpose of storage, advertisement and
   exchange among peers [Sigcomm:P2P streaming].

   Content Distribution Network (CDN) node: A CDN node refers to a
   network entity that is usually deployed at the network edge to store
   content provided by the origin servers, and serves content to
   clients located nearby topologically.

   Live streaming: The scenario where all clients receive streaming
   content for the same ongoing event.  The lags between the play points
   of the clients and that of the streaming source are small.

   P2P cache: A P2P cache refers to a network entity that caches P2P
   traffic in the network, and either transparently or explicitly
   distributes content to other peers.

   P2P streaming protocols: P2P streaming protocols refer to the
   multiple protocols, such as streaming control, resource discovery,
   and streaming data transport, that are needed to build a P2P
   streaming system.

   Peer/PPSP peer: A peer/PPSP peer refers to a participant in a P2P
   streaming system.  The participant not only receives streaming
   content, but also stores and uploads streaming content to other
   participants.

   PPSP protocols: PPSP protocols refer to the key signaling protocols
   among various P2P streaming system components, including the tracker



Gu, et al.             Expires September 12, 2011               [Page 4]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   and peers.

   Swarm: A swarm refers to a group of clients (i.e. peers) sharing the
   same content (e.g. video/audio program, digital file, etc.) at a
   given time.

   Tracker/PPSP tracker: A tracker/PPSP tracker refers to a directory
   service which maintains the lists of peers/PPSP peers storing chunks
   for a specific channel or streaming file, and answers queries from
   peers/PPSP peers.

   Video-on-demand (VoD): A kind of application that allows users to
   select and watch video content on demand.


3.  Survey of P2P streaming systems

   In this section, we summarize some existing P2P streaming systems.
   The construction techniques used in these systems can be largely
   classified into two categories, tree-based and mesh-based
   structures, along with a hybrid of the two.

   Tree-based structure: Group members self-organize into a tree
   structure, based on which group management and data delivery are
   performed.  Such a structure, with push-based content delivery, has
   low maintenance cost, good scalability, and low delay in retrieving
   the content (associated with startup delay), and can be easily
   implemented.  However, it may result in low bandwidth utilization
   and reduced reliability.

   Mesh-based structure: In contrast to a tree-based structure, a mesh
   maintains multiple links between nodes.  Thus, the reliability of
   data transmission is relatively high, and the multiple links also
   result in high bandwidth utilization.  Nevertheless, the cost of
   maintaining such a mesh is much larger than that of a tree, and
   pull-based content delivery leads to high overhead associated with
   each video block transmission, in particular the delay in retrieving
   the content.

   Hybrid structure: A hybrid structure combines tree-based and mesh-
   based structures, and pull-based and push-based content delivery,
   to exploit the advantages of both.  It has reliability as high as
   that of a mesh-based structure, lower delay than a mesh-based
   structure, lower overhead associated with each video block
   transmission, but topology maintenance cost as high as that of a
   mesh-based structure.

3.1.  Mesh-based P2P streaming systems






Gu, et al.             Expires September 12, 2011               [Page 5]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


3.1.1.  Joost

   Joost announced last year that it was giving up P2P technology in
   its desktop version, though it introduced a Flash version for
   browsers and an iPhone application.  The key reason why Joost shut
   down its desktop version was probably legal issues with the media
   content it provided.  However, as one of the most popular P2P VoD
   applications of the past years, it is worthwhile to understand how
   Joost works.  Peer management and data transmission in Joost rely
   mainly on a mesh-based structure.

   The three key components of Joost are servers, super nodes, and
   peers.  There are five types of servers: Tracker server, Version
   server, Backend server, Content server, and Graphics server.  The
   architecture of the Joost system is shown in Figure 1.

   First, we introduce the functionalities of Joost's key components
   through three basic phases.  Then we will discuss the Peer protocol
   and Tracker protocol of Joost.

   Installation: The Backend server is involved in the installation
   phase.  It provides the peer with an initial channel list in a
   SQLite file.  No other parameters, such as local cache, node ID, or
   listening port, are configured in this file.

   Bootstrapping: For a newcomer, the Tracker server provides several
   super node addresses and possibly some content server addresses.
   The peer then contacts the Version server for the latest software
   version.  Later, the peer starts to connect to some super nodes to
   obtain the list of other available peers and begins streaming video
   content.  Different from Skype [skype], super nodes in Joost only
   deal with control and peer management traffic; they do not relay or
   forward any media data.

   Channel switching: Super nodes are responsible for redirecting
   clients to content servers or peers.

   Peers communicate with servers over HTTP/HTTPS, and with super
   nodes and other peers over UDP.

   Tracker Protocol: Because super nodes here are responsible for
   providing the peerlist and content servers to peers, the protocol
   used between the Tracker server and peers is rather simple.  Peers
   get the addresses of super nodes and content servers from the
   Tracker server over HTTP.  After that, the Tracker server does not
   appear in any stage, e.g. channel switching or VoD interaction.  In
   fact, the protocol spoken between peers and super nodes is more
   like what we normally call a "Tracker Protocol".  It enables super
   nodes to check peer status and maintain peer lists for several, if
   not all, channels.  It provides



Gu, et al.             Expires September 12, 2011               [Page 6]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   the peerlist and content servers to peers.  Thus, in the rest of
   this section, when we mention the Tracker Protocol, we mean the one
   used between peers and super nodes.

   Peers communicate with super nodes using the Tracker Protocol in
   the following scenarios.

   1.  When a peer starts the Joost software, after installation and
   bootstrapping, the peer communicates with one or several super
   nodes to get a list of available peers and content servers.

   2.  For on-demand video functions, super nodes periodically exchange
   small UDP packets for peer management purposes.

   3.  When switching between channels, peers contact super nodes, and
   the latter help the peers find available peers from which to fetch
   the requested media data.

   Peer Protocol: The following observations are drawn mainly from
   [Joost-experiment], in which data-driven reverse-engineering
   experiments were performed.  We omit the analysis process and
   directly present the conclusions.  Media data in Joost is split
   into chunks and then encrypted.  Each chunk is packetized with
   about 5-10 seconds of video data.  After receiving a peer list from
   super nodes, a peer negotiates with some or, if necessary, all of
   the peers in the list to find out which chunks they have.  Then the
   peer decides from which peers to get the chunks.  No peer
   capability information is exchanged in the Peer Protocol.
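
   As a rough illustration, this negotiation can be sketched in Python
   as below.  The query interface and the stop-early strategy are our
   own invention, since Joost's encrypted wire protocol has never been
   published.

      def select_sources(peer_list, wanted_chunks):
          # Ask peers from the super-node-supplied list which chunks
          # they hold, stopping once every wanted chunk has a source.
          sources = {}
          for peer in peer_list:
              held = peer.query_chunks()  # "which chunks do you have?"
              for c in wanted_chunks:
                  if c in held and c not in sources:
                      sources[c] = peer
              if len(sources) == len(wanted_chunks):
                  break
          return sources                  # chunk -> chosen source peer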























Gu, et al.             Expires September 12, 2011               [Page 7]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


                   +---------------+       +-------------------+
                   | Version Server|       |   Tracker Server  |
                   +---------------+       +-------------------+
                             \                       |
                              \                      |
                               \                     | +---------------+
                                \                    | |Graphics Server|
                                 \                   | +---------------+
                                  \                  |     |
   +--------------+        +-------------+        +--------------+
   |Content Server|--------|    Peer1    |--------|Backend Server|
   +--------------+        +-------------+        +--------------+
                                     |
                                     |
                                     |
                                     |
                              +------------+       +---------+
                              | Super Node |-------|  Peer2  |
                              +------------+       +---------+

   Figure 1, Architecture of Joost system

   The following sections describe Joost QoS related features,
   extracted mostly from [Joost-experiment], [JO2-Moreira] and [JO7-
   Joost Network Architecture].

   For peer selection, a peer's Host Cache, which is refreshed
   periodically, stores a list of Joost super node IP addresses and
   ports.  The selection strategy is influenced by the number of peers
   accessing the same content.  Specifically, the number of candidate
   peers made available is proportional to the number of active peers.
   If there are only a few of them, a Joost content server is made
   available to assist in the data delivery.  Although there is no
   explicit consideration of peer heterogeneity in peer selection, low
   capacity peers tend to partner with low capacity peers.  Peers
   under the same NAT also tend to serve each other preferentially
   [JO2-Moreira].  Joost may consider geographic locality, but it has
   no AS-level awareness and does not exploit topological locality,
   which may affect the efficiency of video distribution.

   To maintain the overlay networks, super nodes probe clients, clients
   probe clients and super nodes, and super nodes communicate with
   other super nodes and servers.  To make up for inadequate bandwidth
   and to be scalable, Joost forms groups of Joost Server Islands, each
   island consisting of one streaming control server controlling ten
   streaming servers.  Moreover, the STUN protocol enables a client to
   discover whether it is behind a NAT or firewall and the type of the
   NAT or firewall.



Gu, et al.             Expires September 12, 2011               [Page 8]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   For data delivery, audio and video traffic are streamed separately
   to allow for multi-lingual programming.  Content comes mostly from
   peers and occasionally from content servers for "long-tail"
   content.  As peers are assumed to contribute in a best-effort
   manner, infrastructure is needed to make up for insufficient
   bandwidth, including in the asymmetric scenario.  However, super
   nodes are not part of the bandwidth supplying infrastructure, as
   they only relay control traffic but not data traffic to clients.
   To support the P2P media distribution services, Joost uses an
   agent-based peer-to-peer system called Anthill.  Joost also employs
   a Local Video Cache for later viewing and to avoid reloading, but
   it still requires authorization from a Joost server when accessing
   the video file at a later time.

   Joost provides large buffering and thus incurs a longer start-up
   delay for VoD traffic than for live media streaming traffic.  It
   affords more FEC to VoD traffic but gives higher delivery priority
   to live media streaming traffic.

   For Joost, load-balancing and fault-tolerance are shifted directly
   into the client, and all is done natively in the P2P code.

   To enhance the user viewing experience, Joost provides chat
   capability between viewers and program rating mechanisms for users.

3.1.2.  Octoshape

   CNN has been working with a P2P plug-in from Octoshape, a Denmark-
   based company, to broadcast its live streaming.  Octoshape helps
   CNN serve a peak of more than a million simultaneous viewers.  It
   has also provided several innovative delivery technologies, such as
   loss resilient transport, adaptive bit rate, adaptive path
   optimization and adaptive proximity delivery.  Figure 2 depicts the
   architecture of the Octoshape system.

   Octoshape maintains a mesh overlay topology.  Its overlay topology
   maintenance scheme is similar to that of P2P file-sharing
   applications such as BitTorrent.  There is no Tracker server in
   Octoshape, thus no Tracker Protocol is required.  Peers obtain the
   live stream from content servers and other peers over the Octoshape
   Protocol.  Several data streams are constructed from the live
   stream.  No two data streams are identical, and any K of the data
   streams suffice to reconstruct the original live stream.  The
   number K is determined by the original media playback rate and the
   playback rate of each data stream.  For example, a 400Kbit/s media
   stream is split into four 100Kbit/s data streams, giving K = 4.
   Data streams are constructed at the peers, instead of at the
   Broadcast server, which relieves the server of a large burden.
   The number of data streams constructed in a particular peer equals



Gu, et al.             Expires September 12, 2011               [Page 9]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   the number of peers downloading data from that peer, which is
   constrained by the peer's upload capacity.  To get the best
   performance, the upload capacity of a peer should be larger than
   the playback rate of the live stream.  If it is not, an artificial
   peer may be added to deliver extra bandwidth.
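
   The computation of K can be written down directly from the example
   above; the ceiling formula is our own reading, as Octoshape's
   actual coding scheme is proprietary.

      import math

      def substreams_needed(media_rate_kbps, stream_rate_kbps):
          # Any K distinct data streams suffice to reconstruct the
          # original stream; K follows from the playback rates, e.g. a
          # 400Kbit/s media split into 100Kbit/s streams gives K = 4.
          return math.ceil(media_rate_kbps / stream_rate_kbps)

      assert substreams_needed(400, 100) == 4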

   Each peer has an address book of other peers that are watching the
   same channel.  A standby list is set up based on the address book.
   The peer periodically probes the peers in the standby list to make
   sure that they are ready to take over if one of the current senders
   stops or gets congested [Octoshape].

   Peer Protocol: The live stream is first sent to a few peers in the
   network and then spread to the rest of the network.  When a peer
   joins a channel, it notifies all the other peers about its presence
   using the Peer Protocol, which prompts the others to add it to
   their address books.  Although [Octoshape] states that each peer
   records all the peers joining the channel, we suspect that not all
   the peers are recorded, considering that the notification traffic
   would be large and peers would be busy recording when a popular
   program starts in a channel and lots of peers switch to it.
   Possibly only some geographic or topological neighbors are
   notified, and the peer gets its address book from these nearby
   neighbors.

   The peer sends requests to some selected peers for the live stream,
   and the receivers answer OK or not according to their upload
   capacity.  The peer continues sending requests to peers until it
   finds enough peers to provide the data streams needed to
   reconstruct the original live stream.  The details of Octoshape
   have not been disclosed yet; we hope more specific information will
   become available.




















Gu, et al.             Expires September 12, 2011              [Page 10]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


            +------------+   +--------+
            |   Peer 1   |---| Peer 2 |
            +------------+   +--------+
                 |    \    /      |
                 |     \  /       |
                 |      \         |
                 |     / \        |
                 |    /   \       |
                 |  /      \      |
      +--------------+    +-------------+
      |     Peer 4   |----|    Peer3    |
      +--------------+    +-------------+

      *****************************************
                         |
                         |
                 +---------------+
                 | Content Server|
                 +---------------+

      Figure 2, Architecture of Octoshape system

   The following sections describe Octoshape QoS related features,
   extracted mostly from [OctoshapeWeb], [OC2-Alstrup] and [OC3-
   Alstrup].  As it is a closed system, the details of how the features
   are implemented are not available.

   To spread the burden of data distribution across several peers and
   thus limit the impact of peer loss, Octoshape splits a live stream
   into a number of smaller equal-sized sub-streams.  For example, a
   400kbit/s live stream is split and coded into 12 distinct 100kbit/s
   sub-streams.  Only a subset of these sub-streams needs to reach a
   user for it to reconstruct the original live stream.  The number of
   distinct sub-streams could be as many as the number of active
   peers.

   Therefore, even if the upload capacity of a peer is smaller than
   its download capacity, it is easier for it to contribute a sub-
   stream than a whole live stream.  An Octoshape peer can then
   receive from each neighboring peer at least one distinct
   sub-stream.  To make up for the bandwidth asymmetry, artificial end
   users are used to deliver additional bandwidth.  Multiple
   OctoServers are also available to guarantee no single point of
   failure [OC3-Alstrup].

   Octoshape keeps peers' availability information in an address book.
   Each peer keeps a periodically updated stand-by list and passes it
   along with its transmitted sub-stream.  With constant monitoring of
   the quality and consistency of each content source, the peer can



Gu, et al.             Expires September 12, 2011              [Page 11]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   switch partners, in case of bottleneck or congestion, to a better
   source.

   Octoshape allows the operator to control who should and should not
   receive a certain video signal due to copyright restrictions, to
   control access based in part on IP addresses, and to obtain real-
   time statistics during any live event.

   To optimize bandwidth utilization, Octoshape leverages computers
   within a network to minimize external bandwidth usage and to select
   the most reliable and "closest" source for each viewer.  It also
   chooses the best matching available codecs and players, and scales
   the bit rate up and down according to the available Internet
   connection.

   Octoshape [OctoshapeWeb] claims to have patented resiliency and
   throughput technologies to deliver quality streams to mobile and
   wireless edge networks.  This throughput optimization technology
   also cleans up latent and lossy network connections between the
   encoder and the distribution point, providing a stable, high-
   quality stream for distribution.  Octoshape also claims to be able
   to deliver true HD, 1280x720 30fps (720p) video over the Internet
   and to have advanced DVR functionalities, such as allowing users to
   move seamlessly forward and back through the streams with almost no
   waiting time.

3.1.3.  PPLive

   PPLive is one of the most popular P2P streaming applications in
   China.  It has two major communication protocols: a registration
   and peer discovery protocol, i.e. the Tracker Protocol, and a P2P
   chunk distribution protocol, i.e. the Peer Protocol.  Figure 3
   shows the architecture of PPLive.

   Tracker Protocol: First, a peer gets the channel list from the
   Channel server, in a way similar to that of Joost.  Then the peer
   chooses a channel and asks the Tracker server for the peerlist of
   this channel.

   Peer Protocol: The peer contacts the peers in its peerlist to get
   additional peerlists, which are aggregated with its existing list.
   Through this list, peers can maintain a mesh for peer management and
   data delivery.

   For the video-on-demand (VoD) operation, because different peers
   watch different parts of the channel, a peer buffers up to a few
   minutes' worth of chunks within a sliding window to share with the
   others.  Some of these chunks may be chunks that have been recently
   played; the remaining chunks are scheduled to be played in the



Gu, et al.             Expires September 12, 2011              [Page 12]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   next few minutes.  Peers upload chunks to each other.  To this end,
   peers send each other "buffer-map" messages; a buffer-map message
   indicates which chunks a peer currently has buffered and can share.
   The buffer-map message includes the offset (the ID of the first
   chunk), the length of the buffer map, and a string of zeroes and
   ones indicating which chunks are available (starting with the chunk
   designated by the offset).  PPLive transfers data over UDP.
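
   A minimal sketch of such a buffer-map exchange is given below.  The
   field names and the request computation are illustrative only;
   PPLive's actual wire format is proprietary.

      def encode_buffer_map(offset, availability):
          # offset: ID of the first chunk in the sliding window;
          # availability: one boolean per chunk in the window.
          bits = ''.join('1' if h else '0' for h in availability)
          return {'offset': offset, 'length': len(bits), 'map': bits}

      def chunks_to_request(local, remote):
          # Request the chunks the remote peer has but we do not.
          wanted = []
          for i, bit in enumerate(remote['map']):
              chunk_id = remote['offset'] + i
              j = chunk_id - local['offset']
              have = 0 <= j < local['length'] and local['map'][j] == '1'
              if bit == '1' and not have:
                  wanted.append(chunk_id)
          return wanted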

   Video Download Policy of PPLive:

      1 The top ten peers contribute a major part of the download
      traffic.  Meanwhile, the top peer session is quite short
      compared with the video session duration.  This suggests that
      PPLive gets video from only a few peers at any given time, and
      switches periodically from one peer to another;

      2 PPLive can send multiple chunk requests for different chunks
      to one peer at one time.

   PPLive maintains a constant peer list with a relatively small
   number of peers [P2PIPTV-measuring].

            +------------+    +--------+
            |   Peer 2   |----| Peer 3 |
            +------------+    +--------+
                     |          |
                     |          |
                    +--------------+
                    |    Peer 1    |
                    +--------------+
                            |
                            |
                            |
                    +---------------+
                    | Tracker Server|
                    +---------------+

      Figure 3, Architecture of PPLive system

   The following sections describe PPLive QoS related features,
   extracted mostly from [PL3-Hei], [PL5-Vu], [PL6-Liu], and [PL7-Liu].

   After obtaining an initial peer list from the member server, a peer
   periodically updates its peer list by querying both the member
   server and partner peers.  New peers are aggressively contacted at
   a fixed rate.  In selecting peers as partners, a peer considers
   their upload bandwidth and, in part, their location information
   [PL6-Horvath], selecting on an FCFS basis those that have responded
   [PL7-Liu].




Gu, et al.             Expires September 12, 2011              [Page 13]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   For data distribution, PPLive, a data-driven or mesh-pull scheme
   [PL3-Hei], divides the media content into small portions called
   chunks and uses TCP for video streaming.  Neighboring peers use a
   gossip-like protocol to exchange their buffer maps, which indicate
   the chunks available for sharing.  Peers obtain their missing
   chunks from one or more peers that have them.  Available chunks may
   also be downloaded from the original channel server.

   PPLive uses a double buffering mechanism consisting of a TV Engine
   and a Media Player for its stream reassembly and display [PL3-Hei].
   The TV Engine is responsible for downloading video chunks from the
   PPLive network and streaming the downloaded video to the Media
   Player, which in turn displays the content to the user, after each
   buffer is filled up to its respective predetermined threshold.
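
   The handoff between the two buffers can be sketched as follows; the
   thresholds and the TV Engine/Media Player interfaces are
   hypothetical.

      def double_buffer(tv_engine, player, t_engine, t_player):
          # The TV Engine fills its buffer from the PPLive network...
          while tv_engine.buffered() < t_engine:
              tv_engine.download_next_chunk()
          # ...then streams to the Media Player, which starts display
          # only after its own buffer crosses a second threshold.
          while player.buffered() < t_player:
              player.feed(tv_engine.next_chunk())
          player.start_display()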

   PPLive is observed to have a download scheduling policy that gives
   higher priority to rare chunks and to chunks closer to the playout
   deadline, and to use a sliding window mechanism to regulate the
   buffering of chunks.

   To utilize available peer resources, peers in one subscribed overlay
   may also be harnessed to support peers in other subscribed overlays
   [PL5-Vu].

3.1.4.  Zattoo

   Zattoo is a P2P live streaming system which serves over 3 million
   registered users in European countries [Zattoo].  The system
   delivers live streaming using a receiver-based, peer-division
   multiplexing scheme.  Zattoo reliably streams media among peers
   using a mesh structure.

   Figure 4 depicts the typical procedure for a single TV channel
   carried over the Zattoo network.  First, the Zattoo system
   broadcasts live TV, captured from satellites, onto the Internet.
   Each TV channel is delivered through a separate P2P network.















Gu, et al.             Expires September 12, 2011              [Page 14]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


      -------------------------------
      |   ------------------        |         --------
      |   |  Broadcast     |        |---------|Peer1 |-----------
      |   |  Servers       |        |         --------          |
      |   Administrative Servers    |                      -------------
      |   ------------------------  |                      | Super Node|
      |   | Authentication Server | |                      -------------
      |   | Rendezvous Server     | |                           |
      |   | Feedback Server       | |         --------          |
      |   | Other Servers         | |---------|Peer2 |----------|
      |   ------------------------| |         --------
      ------------------------------|
   Figure 4, Basic architecture of Zattoo system

   Tracker (Rendezvous Server) Protocol: In order to receive the
   signal of the requested channel, registered users are required to
   be authenticated through the Zattoo Authentication Server.  Upon
   authentication, users obtain a ticket with a specific lifetime.
   Then, users contact the Rendezvous Server with the ticket and the
   identity of the TV channel of interest.  In return, the Rendezvous
   Server sends back a list of joined peers carrying the channel.

   Peer Protocol: Similar to the aforementioned procedures in Joost
   and PPLive, a new Zattoo peer requests to join an existing peer
   from the peer list.  Depending on the availability of bandwidth,
   the requested peer decides how to multiplex a stream onto its set
   of neighboring peers.  When packets arrive at the peer, sub-streams
   are stored and reassembled to construct the full stream.

   Note that Zattoo relies on a Bandwidth Estimation Server to
   initially estimate the amount of available uplink bandwidth at a
   peer.  Once a peer starts to forward sub-streams to other peers, it
   receives QoS feedback from the receivers if the quality of a
   sub-stream drops below a threshold.

   The following sections describe Zattoo QoS related features,
   extracted mostly from [ZT1-Chang].

   For reliable data delivery, each live stream is partitioned into
   video segments.  Each video segment is coded for forward error
   correction with a Reed-Solomon error correcting code into n
   sub-stream packets, such that having obtained k correct packets of
   a segment is sufficient to reconstruct the remaining n-k packets of
   the same video segment.  To receive a video segment, each peer then
   specifies the sub-stream(s) of the video segment it would like to
   receive from the neighboring peers.
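
   The k-of-n recovery property can be stated compactly as below.  The
   example values of n and k are invented for illustration;
   [ZT1-Chang] does not give the coding parameters actually deployed.

      def segment_recoverable(received_packets, k):
          # With an (n, k) Reed-Solomon code, any k correct packets of
          # a segment suffice to reconstruct the remaining n - k.
          return len(set(received_packets)) >= k

      # e.g. n = 16, k = 12: up to 4 losses per segment are tolerated
      received = {0, 1, 2, 5, 6, 7, 8, 9, 12, 13, 14, 15}
      assert segment_recoverable(received, k=12)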

   Zattoo uses a Peer-Division Multiplexing (PDM) scheme for its data



Gu, et al.             Expires September 12, 2011              [Page 15]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   delivery topology setup.  In this scheme, each new peer
   independently executes the Search and Join Phases.  In the Search
   Phase, a peer queries the members of the peer list for sub-stream
   availability; in response, it receives additional prospective
   peers, sub-stream availability, quality indications, and sub-stream
   sequence numbers; it then selects partnering peers from among the
   responses, or quits after two failed search attempts.

   In the Join Phase, a joining peer, having selected the candidate
   peers, requests to partner with some of them, spreading the load
   among them and preferring topologically close-by peers, if these
   peers have less capacity or carry lower quality sub-streams.
   Barring departure or performance degradation of neighboring peers,
   the established connections persist, and the specified sub-stream
   packet of every segment continues to be forwarded without further
   per-packet handshaking between peers.

   To manage streams efficiently for incoming and outgoing
   destinations, each peer has a packet buffer, called the IOB (Input-
   Output Buffer).  The IOB is referenced by an input pointer, a
   repair pointer, and one or more output pointers, one for each
   forwarding destination such as the player, a file, or another peer.
   The input pointer points to the slot in the IOB where the next
   incoming packet with a sequence number higher than the highest
   sequence number received so far will be stored.  The repair pointer
   always points to one slot beyond the last packet received in order,
   and is used to regulate packet retransmission and adaptive PDM (to
   be described later).  A packet map and a forwarding discipline are
   associated with each output pointer to accommodate the different
   forwarding rates and regimes required by the destinations.  Note
   that retransmission requests are sent to random peers, not to
   partnering peers, and they are honoured only if the requested
   packets are still in the IOB and there is sufficient left-over
   capacity to transmit all the requested packets.  To avoid buffer
   overrun, a set of two buffers is used in the IOB instead of a
   circular buffer.
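
   A structural sketch of the IOB follows.  It is a simplification
   under our own naming; in particular, it collapses the set of two
   buffers into a single modular array.

      class IOB:
          def __init__(self, size):
              self.slots = [None] * size
              self.input_ptr = 0    # next slot past highest seq seen
              self.repair_ptr = 0   # one past last in-order packet;
                                    # drives retransmission and
                                    # adaptive PDM
              self.output_ptrs = {} # destination -> next seq to send

          def store(self, seq, packet):
              self.slots[seq % len(self.slots)] = packet
              self.input_ptr = max(self.input_ptr, seq + 1)
              # Advance past contiguously received packets.
              while (self.repair_ptr < self.input_ptr and
                     self.slots[self.repair_ptr % len(self.slots)]
                         is not None):
                  self.repair_ptr += 1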

   Zattoo uses an Adaptive Peer-Division Multiplexing scheme to handle
   longer term bandwidth fluctuations.  In this scheme, each peer
   determines how many sub-streams to transmit and when to switch
   partners.  Specifically, each peer continually estimates the amount
   of available uplink bandwidth, based initially on probe packets to
   the Zattoo Bandwidth Estimation Server and later on peer QoS
   feedback, using different algorithms depending on the underlying
   transport protocol.  A peer increases its estimated available
   uplink bandwidth, if the current estimate is below some threshold
   and there has been no bad quality feedback from neighboring peers
   for a period of time, according to an algorithm similar to how TCP
   maintains its congestion window size.  Each peer then admits
   neighbors based on the currently estimated available uplink



Gu, et al.             Expires September 12, 2011              [Page 16]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   bandwidth.  In case a new estimate indicates insufficient bandwidth
   to support the existing number of peer connections, connections are
   closed one at a time, preferably starting with the one requiring
   the least bandwidth.  On the other hand, if the loss rate of
   packets from a peer's neighbor reaches a certain threshold, the
   peer will attempt to shift the load of the degraded neighboring
   peer to other existing peers while looking for a replacement peer.
   When a replacement is found, the load is shifted to it and the
   degraded neighbor is dropped.  As expected, if a peer's neighbor is
   lost due to departure, the peer initiates the process to replace
   the lost peer.  To optimize the PDM configuration, a peer may
   occasionally initiate switching existing partnering peers to
   topologically closer peers.
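
   The estimate adaptation can be sketched as follows; the threshold
   and step values are invented for illustration, since [ZT1-Chang]
   does not publish the actual constants.

      def update_uplink_estimate(estimate, bad_feedback,
                                 threshold=1000, step=50):
          # Grow the uplink estimate additively while it is below the
          # threshold and no neighbor has reported bad quality for a
          # period of time, loosely analogous to TCP congestion
          # window growth.
          if estimate < threshold and not bad_feedback:
              estimate += step
          return estimate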

3.1.5.  PPStream

   The system architecture and working flows of PPStream are similar
   to those of PPLive.  PPStream transfers data mostly over TCP, and
   only occasionally over UDP.

   Video Download Policy of PPStream:

      1 The top ten peers do not contribute a large part of the
      download traffic.  This suggests that PPStream gets the video
      from many peers simultaneously, and that its peers have long
      session durations;

      2 PPStream does not send multiple chunk requests for different
      chunks to one peer at one time.

   PPStream maintains a constant peer list with a relatively large
   number of peers [P2PIPTV-measuring].

   The following sections describe PPStream QoS related features,
   extracted mostly from [PS3-Li], [PS4-Jia] and [PS5-Wei].

   PPStream is mainly mesh-based, but to some extent its data
   distribution topology is layered.  It uses geographic clustering to
   some extent, based on the geographic longitude and latitude of the
   IP addresses [PS4-Jia].

   To ensure data availability, some form of chunk retransmission
   request mechanism is used and the buffer map is shared at a high
   rate, although concurrent requests for the same data chunk are
   rare.  Each data chunk, identified by the play time offset encoded
   by the program source, is divided into 128 sub-chunks of 8KB each.
   The chunk id is used to ensure sequential ordering of received data
   chunks.

   The buffer map consists of one or more 128-bit flags denoting the



Gu, et al.             Expires September 12, 2011              [Page 17]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   availability of sub-chunks, together with a corresponding time
   offset.  Usually a buffer map contains only one data chunk at a
   time and is thus smaller than that of PPLive.  It also conveys the
   sending peer's playback status to the other peers, because as soon
   as a data chunk is played back, the chunk is deleted or replaced by
   the next data chunk [PS5-Wei].
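
   A sketch of how such a buffer map might be laid out is shown below;
   the field layout reflects our reading of [PS5-Wei] and is not a
   published format.

      SUB_CHUNK_SIZE = 8 * 1024    # 8KB sub-chunks
      SUB_CHUNKS_PER_CHUNK = 128   # one bit per sub-chunk

      def make_buffer_map(time_offset, have):
          # have: indices (0..127) of sub-chunks already received; the
          # map carries a single 128-bit flag plus the time offset of
          # the one data chunk it describes.
          flags = 0
          for i in have:
              flags |= 1 << i
          return {'offset': time_offset, 'flags': flags}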

   At the initiating stage, a peer can use up to 4 data chunks, while
   at a stabilized stage, a peer usually uses one data chunk.  In the
   transient stage, however, a peer uses a variable number of chunks.
   Although sub-chunks within each data chunk are fetched nearly at
   random, without using a rarest-first or greedy policy, the fetching
   pattern for one data chunk seems to repeat in the following data
   chunks [PS3-Li].  Moreover, high bandwidth PPStream peers tend to
   receive chunks earlier and contribute more than lower bandwidth
   peers.

3.1.6.  SopCast

   The system architecture and working flows of SopCast are similar to
   those of PPLive.  SopCast transfers data mainly over UDP, and
   occasionally over TCP.

   The top ten peers contribute about half of the total download
   traffic.  SopCast's download policy is similar to PPLive's policy
   in that it switches periodically between provider peers.  However,
   SopCast seems to always need more than one peer to get the video,
   while in PPLive a single peer could be the only video provider.

   SopCast's peer list can be as large as PPStream's peer list, but
   SopCast's peer list varies over time [P2PIPTV-measuring].

   The following sections describe SopCast QoS related features,
   extracted mostly from [SC1-Ali], [SC2-Ciullo], [SC4-Fallica], [SC5-
   Sentinelli], [SC6-Silverston], and [SC7-Tang].

   SopCast allows for software updates through a centralized web
   server (over HTTP) and makes the channel list available through
   another centralized server (also over HTTP).

   SopCast traffic is encoded, and SopCast TV content is divided into
   video chunks or blocks of equal size (10KB) [SC7-Tang].  Sixty
   percent of its traffic consists of signaling packets and 40% of
   actual video data packets [SC4-Fallica].  SopCast produces more
   signaling traffic than PPLive, PPStream, and TVAnts, whereas PPLive
   produces the least [SC6-Silverston].  Its traffic is also noted to
   have long-range dependency [SC6-Silverston], indicating that
   mitigating it with QoS mechanisms may be difficult.  [SC1-Ali]
   reported that the SopCast communication mechanism starts with UDP
   for the exchange of control messages among its peers using a
   gossip-like protocol and then moves



Gu, et al.             Expires September 12, 2011              [Page 18]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   to TCP for the transfer of video segments.  This use of TCP for
   data transfer seems to contradict other findings [SC4-Fallica,
   SC6-Silverston].

   To discover candidate peers, a peer requests a peer list from the
   tracker, or from neighboring peers using a gossip-like protocol.
   To retrieve content [SC4-Fallica], a new peer contacts peers
   selected randomly from the peer list it obtained from having
   queried the root servers (trackers).  The process of contacting
   peers slows down after the initial bootstrap phase [SC3-Horvath,
   SC2-Ciullo].  The number of peers a node typically connects to for
   download is about 2 to 5 [SC5-Sentinelli], and there is no observed
   preference for peers with shorter paths [SC2-Ciullo].  Partner
   peers periodically advertise content availability and exchange
   sought content.  In forming multiple parent and child
   relationships, a peer does not exploit peer location information
   [SC3-Horvath].  In general, parents are chosen solely based on
   performance; however, lower capacity nodes seem to choose parents
   that are closer, to improve performance and to compensate for their
   bandwidth constraints [SC1-Ali].  When needed, a peer can download
   video streams directly from the Source Provider, a node that
   broadcasts the entire video [SC7-Tang].  In the process of data
   exchange, there is no enforcement of tit-for-tat like mechanisms
   [SC2-Ciullo].

   Similar to PPLive, SopCast uses a double-buffering mechanism.  The
   SopCast buffer downloads video chunks from the network, stores
   them, and, upon exceeding a predetermined number of stored chunks,
   launches the Media Player.  The Media Player buffer then downloads
   video content from the local web server listening port and, upon
   receiving a sufficient amount of content, starts video playback.

3.1.7.  TVants

   The system architecture and working flows of TVants are similar to
   those of PPLive.  TVAnts is more balanced between TCP and UDP in
   data transmission.

   TVAnts' peer list is also large and varies over time [P2PIPTV-
   measuring].

   We extract the common main components and steps of PPLive,
   PPStream, SopCast and TVants, which are shown in Figure 5.





Gu, et al.             Expires September 12, 2011              [Page 19]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


                        +------------+
                        |   Tracker  |
                       /+------------+
                      /
                     /    +------+
                1,2/     /|Peer 1|
                  /     / +------+
                 /     /3,4,6
           +---------+/              +------+
           |New Peer |---------------|Peer 2|
           +---------+\     4,6      +------+
           |5  |       \
           |---|        \ +------+
                   3,4,6 \|Peer 3|
                          +------+

   Figure 5, Main components and steps of PPLive, PPStream, SopCast and TVants

   The main steps are:

      (1) A new peer registers with the tracker / distributed hash
      table (DHT) to join the peer group that shares the same channel
      / media content;

      (2) Tracker / DHT returns an initial peer list to the new peer;

      (3) The new peer harvests peer lists by gossiping (i.e.
      exchanging peer lists) with the peers in the initial peer list
      to aggregate more peers sharing the channel / media content;

      (4) The new peer randomly (or with some guidance) selects some
      peers from its peer list to connect to, and exchanges peer
      information (e.g. buffer map, peer status, etc.) with the
      connected peers to learn where to get what data;

      (5) The new peer decides what data should be requested in which
      order / priority using some scheduling algorithm and the peer
      information obtained in Step (4);

      (6) The new peer requests the data from some connected peers.
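
   These six steps can be condensed into the following sketch.  The
   tracker, peer, and scheduler interfaces are hypothetical and merely
   summarize the common flow; each system realizes the steps with its
   own proprietary protocol.

      def join_and_stream(tracker, channel, scheduler):
          # Steps (1)-(2): register and get an initial peer list.
          peers = set(tracker.register(channel))
          # Step (3): gossip to aggregate more peers.
          for p in list(peers):
              peers |= set(p.exchange_peer_list(peers))
          # Step (4): connect to a subset and learn who holds what.
          partners = scheduler.select_partners(peers)
          maps = {p: p.get_buffer_map() for p in partners}
          # Steps (5)-(6): schedule and request the data.
          for chunk_id, peer in scheduler.schedule(maps):
              peer.request_chunk(chunk_id)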

   The following sections describe TVAnts QoS related features,
   extracted mostly from [TV1-Alessandria], [TV2-Ciullo], and [TV3-
   Horvath].

   The TVAnts peer discovery mechanism is very greedy during the first
   part of a peer's life and stabilizes afterwards [TV2-Ciullo].




Gu, et al.             Expires September 12, 2011              [Page 20]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   For data delivery, peers exhibit mild preference to exchange data
   among themselves within the same Autonomous System, and also among
   peers in the same subnet.  TVAnts peers also exhibit some
   preference to download from closer peers.  According to
   [TV3-Horvath], TVAnts peers exploit location information and
   download mostly from high-bandwidth peers.  However, TVAnts does
   not seem to enforce any tit-for-tat mechanisms in data delivery.

   TVAnts [TV1-Alessandria] seems to be sensitive to network
   impairments such as changes in network capacity, packet loss, and
   delay.  Upon capacity loss, a peer will always seek more peers to
   download from.  In the process of trying to avoid bad paths and to
   select good peers from which to continue downloading data,
   aggressive and potentially harmful behavior for both the
   application and the network results when a bottleneck affects all
   potential peers.

   When limited access capacity is experienced, a peer reacts by
   increasing redundancy (with FEC or ARQ mechanisms) as if reacting
   to loss, and thus causes a higher download rate.  To recover from
   packet losses, some kind of ARQ mechanism is also used.  Although
   network conditions do impact video stream distribution, such as the
   network delay impacting the start-up phase, they seem to have
   little impact on the network topology discovery and maintenance
   process.

3.2.  Tree-based P2P streaming systems

3.2.1.  PeerCast

   PeerCast adopts a tree structure.  The architecture of PeerCast is
   shown in Figure 6.

   Peers in one channel construct the broadcast tree, and the
   Broadcast server is the root of the tree.  A Tracker can be
   implemented independently or merged into the Broadcast server.  The
   Tracker in a tree-based P2P streaming application selects the
   parent nodes for new peers joining the tree.  A Transfer node in
   the tree receives and transfers data simultaneously.

   Peer Protocol: The peer joins a channel and gets the broadcast
   server address.  First of all, the peer sends a request to the
   server, and the server answers OK or not according to its idle
   capacity.  If the broadcast server has enough idle capacity, it
   will include the peer in its child-list.  Otherwise, the broadcast
   server will choose at most eight nodes among its children and
   answer the peer with them.  The peer records these nodes and
   contacts them one by one, until it finds a node that can serve it.
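
   This join-and-redirect walk can be sketched as follows; the node
   interface is hypothetical.

      def join_tree(broadcast_server):
          # Walk down the tree until a node has spare capacity.
          candidates = [broadcast_server]
          while candidates:
              node = candidates.pop(0)
              reply = node.request_join()
              if reply.ok:        # node adds us to its child-list
                  return node
              # Otherwise the node answers with at most eight of its
              # children; record and try them in turn.
              candidates.extend(reply.children)
          raise RuntimeError('no node with spare capacity found')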

   Instead of the peer requesting the channel, a Transfer node



Gu, et al.             Expires September 12, 2011              [Page 21]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   pushes the live stream to its children, each of which can be a
   transfer node or a receiver.  A node in the tree notifies its
   parent of its status periodically, and the parent updates its
   child-list according to the received notifications.
               ------------------------------
               |            +---------+      |
               |            | Tracker |      |
               |            +---------+      |
               |                  |          |
               |                  |          |
               |   +---------------------+   |
               |   |   Broadcast server  |   |
               |   +---------------------+   |
               |------------------------------
                     /                     \
                    /                       \
                   /                         \
                  /                           \
            +---------+                  +---------+
            |Transfer1|                  |Transfer2|
            +---------+                  +---------+
             /      \                       /      \
            /        \                     /        \
           /          \                   /          \
      +---------+  +---------+     +---------+  +---------+
      |Receiver1|  |Receiver2|     |Receiver3|  |Receiver4|
      +---------+  +---------+     +---------+  +---------+

      Figure 6, Architecture of PeerCast system

   The following sections describe PeerCast QoS related features,
   extracted mostly from [CVV1-Zhang], [CVV4-Chu], [CVV5-Chu], and
   [CVV6-Chu].

   Each PeerCast node has a peering layer, which sits between the
   application layer and the transport layer.  The peering layer of
   each node coordinates with those of other nodes to establish and
   maintain a multicast tree.  Moreover, the peering layer also
   supports a simple, lightweight redirect primitive.  This primitive
   allows a peer p to direct another peer c, which is either opening a
   data-transfer session with p or already has a session established
   with p, to a target peer t with which to try to establish a
   data-transfer session.  Peer discovery starts at the root (source)
   or some selected sub-tree root and goes recursively down the tree
   structure.  When a peer leaves normally, it informs its parent,
   which then releases the peer, and it also redirects all its
   immediate children to find new parents starting at some target t.




Gu, et al.             Expires September 12, 2011              [Page 22]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   The peering layer allows for different policies of topology
   maintenance.  In choosing a parent from among the children of a
   given peer, a child can be chosen randomly, one at a time in some
   fixed order, or based on least access latency with respect to the
   choosing peer.  There are also many choices of peers at which to
   start and limit the search:

   o  all the descendants of a leaving peer have to start searching
      from the root [Root-All (RTA)];

   o  only the children of a leaving peer have to start searching from
      the root [Root (RT)];

   o  all the descendants of a leaving peer have to start searching
      from the parent of the leaving peer [Grandfather-All (GFA)];

   o  only the children of the leaving peer have to start searching
      from the parent of the leaving peer [Grandfather (GF)].

   A heart-beat mechanism at the peer is available to handle failed
   peers.  With this mechanism, a peer sends keep-alive messages to
   its parent and children.  If a parent peer detects that a child has
   skipped a specified number of heart-beats, it deems the child lost
   and tidies up.  Similarly, a child peer starts its search for a new
   parent once its current parent is deemed to have left.

   PeerCast also proposes, but has not evaluated, a number of
   algorithms that use some cost function to optimize the overlay.
   Some of them are described next.  If a parent is already saturated,
   a newly arrived peer replaces one of the children costlier than
   itself, and the replaced peer tries to reconnect somewhere else
   [Knock-Down].  A newly arrived peer replaces the target peer, and
   the target peer becomes its child [Join-Flip].  Unstable peers are
   pushed down to the bottom of the tree [Leaf-Sink].  An existing
   child and parent relationship is flipped [Maintain-Flip].

3.2.2.  Conviva

   Conviva [TM][conviva] is a real-time media control platform for
   Internet multimedia broadcasting.  For its early prototype, End
   System Multicast (ESM) [ESM04] is the underlying networking
   technology for organizing and maintaining an overlay broadcasting
   topology.  Next we present an overview of ESM.  ESM adopts a tree
   structure.  The architecture of ESM is shown in Figure 7.

   ESM has two versions of its protocol: one for smaller scale
   conferencing applications with multiple sources, and the other for
   larger scale broadcasting applications with a single source.  We
   focus on the latter version in this survey.

   ESM maintains a single tree for its overlay topology.  Its basic
   functional components comprise two parts: first, a bootstrap
   protocol, a parent selection algorithm, and a light-weight probing
   protocol for tree topology construction and maintenance; second, a
   separate control



Gu, et al.             Expires September 12, 2011              [Page 23]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   structure, decoupled from the tree, in which a gossip-like
   algorithm is used for each member to learn a small random subset of
   the group members; members also maintain their paths from the
   source.

   Upon joining, a node gets a subset of the group membership from the
   source (the root node); it then finds a parent using a parent
   selection algorithm.  The node applies light-weight probing
   heuristics to a subset of the members it knows, evaluates the
   remote nodes, and chooses a candidate parent.  It also uses the
   parent selection algorithm to deal with performance degradation due
   to node and network churn.

   ESM supports NATs.  It allows NATed hosts to be parents of public
   hosts, and public hosts can be parents of all hosts, including
   NATed hosts, as children.
               ------------------------------
               |            +---------+      |
               |            | Tracker |      |
               |            +---------+      |
               |                  |          |
               |                  |          |
               |   +---------------------+   |
               |   |    Broadcast server |   |
               |   +---------------------+   |
               |------------------------------
                     /                     \
                    /                       \
                   /                         \
                  /                           \
            +---------+                   +---------+
            |  Peer1   |                  |  Peer2  |
            +---------+                   +---------+
             /      \                       /      \
            /        \                     /        \
           /          \                   /          \
      +---------+  +---------+     +---------+  +---------+
      |  Peer3  |  |  Peer4  |     |  Peer5  |  |  Peer6  |
      +---------+  +---------+     +---------+  +---------+

      Figure 7, Architecture of ESM system

   The following paragraphs describe ESM QoS-related features,
   extracted mostly from [CVV1-Zhang], [CVV4-Chu], [CVV5-Chu], and
   [CVV6-Chu], as the details of Conviva itself are not publicly
   available.

   ESM constructs the multicast tree in a two-step process.  It first
   constructs a mesh of the participating peers, where the mesh has
   the following properties:




Gu, et al.             Expires September 12, 2011              [Page 24]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   o  The shortest path delay between any pair of peers in the mesh is
      at most K times the unicast delay between them, where K is a small
      constant.

   o  Each peer has a limited number of neighbors in the mesh, which
      does not exceed a given (per-member) bound chosen to reflect the
      bandwidth of the peer's connection to the Internet.

   It then constructs a (reverse) shortest-path spanning tree of the
   mesh, with the root being the source.
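
   ESM computes this tree in a distributed fashion (see the distance
   vector discussion below); purely for illustration, the sketch
   below derives the same shortest-path tree centrally with Dijkstra's
   algorithm over measured link latencies.

      import heapq

      def shortest_path_tree(mesh, source):
          """mesh: {node: {neighbor: latency}}.  Returns a
          {node: parent} map describing the shortest-path spanning
          tree rooted at `source`."""
          dist = {source: 0.0}
          parent = {source: None}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue
              for v, w in mesh[u].items():
                  if d + w < dist.get(v, float("inf")):
                      dist[v] = d + w
                      parent[v] = u
                      heapq.heappush(heap, (d + w, v))
          return parent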

   Therefore a peer participates in two types of topology management: a
   control structure in which peers make sure they are always connected
   in a mesh and a data delivery structure in which peers make sure data
   gets delivered to them in a tree structure.

   To stay connected, each peer maintains communication with a small
   number of random neighbors and a complete list of members through a
   gossip-like algorithm.  When a new node joins, it gets a list of
   group members from the source.  To look for a parent, it sends
   probe requests to a subset of the group members it obtained;
   evaluates them with respect to delay to the source, application
   throughput, and link bandwidth; and then chooses from among them a
   candidate parent that is not a descendant and is not saturated.  In
   addition to RTT probes and 1-Kbyte transfers used to detect
   bottleneck bandwidth, the performance history of previously chosen
   parents is also considered.  The peer avoids probing hosts with low
   bottleneck bandwidth.
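
   A rough sketch of such a parent evaluation is given below; the
   candidate fields and the scoring formula are assumptions for
   illustration, as ESM's exact utility function is not fully
   specified in the cited papers.

      def choose_parent(candidates, descendants, history):
          """Pick a candidate parent from probe results.  Each
          candidate is a dict with assumed keys: 'id',
          'delay_to_source', 'throughput', 'bottleneck_bw', and
          'saturated'."""
          def score(c):
              past = history.get(c["id"], 1.0)  # 0..1 history factor
              return (c["throughput"] * c["bottleneck_bw"] * past
                      / (1.0 + c["delay_to_source"]))

          eligible = [c for c in candidates
                      if c["id"] not in descendants  # avoid loops
                      and not c["saturated"]]        # spare capacity
          return max(eligible, key=score) if eligible else None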

   When a peer leaves normally, it notifies its neighboring peers, and
   the neighboring peers propagate the departing peer's information.
   At the same time, the departing peer continues to forward packets
   for some time to minimize transient packet loss.  When a peer
   leaves due to failure, active peers detect the departure through
   the peer's non-responsiveness to their probe messages and then
   propagate the departed peer's information.  A departed peer list,
   flushed after a sufficient amount of time has passed, keeps track
   of leaving and failed peers.  The list enables refreshes from an
   active peer to be distinguished from those of a leaving/failed
   peer.
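
   The departed peer list can be sketched as follows; the retention
   period and all names are illustrative assumptions.

      import time

      FLUSH_AFTER = 120.0   # assumed retention period in seconds

      class DepartedList:
          """Remembers recently departed peers so that a stale
          refresh from a leaving/failed peer is not mistaken for one
          from an active member."""

          def __init__(self):
              self._departed = {}   # peer id -> departure time

          def mark_departed(self, peer_id):
              self._departed[peer_id] = time.time()

          def is_refresh_valid(self, peer_id):
              self._flush()
              return peer_id not in self._departed

          def _flush(self):
              cutoff = time.time() - FLUSH_AFTER
              stale = [p for p, t in self._departed.items()
                       if t < cutoff]
              for p in stale:
                  del self._departed[p]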

   Departing and failing peers can in some instances partition the
   mesh into two or more components.  A mesh repair algorithm detects
   such occurrences by noticing splits in the membership list, and
   attempts repair by adding virtual links from active members to
   non-active (unreachable) members, trying one non-active member at a
   time.

   To improve mesh/tree structural and operating quality, each peer



Gu, et al.             Expires September 12, 2011              [Page 25]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   randomly probes other members to add new links that yield a
   perceived gain in utility, and continually monitors its existing
   links to drop those whose perceived utility has fallen.  Parent
   switching occurs if a peer leaves or fails; if there is persistent
   congestion or a low-bandwidth condition; or if there is a better
   clustering configuration.  To make more public hosts available as
   parents for hosts behind NATs, public hosts preferentially choose
   NATs as parents.
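
   One maintenance round of this probe-and-prune behavior might look
   like the sketch below; the thresholds and the utility callbacks
   are assumptions, since the papers state the rule only
   qualitatively.

      import random

      def refine_mesh(peer, members, utility_gain, link_utility,
                      add_threshold=0.1, drop_threshold=0.05):
          """`peer.links` is assumed to be a set of neighbor ids.
          Probe one random non-neighbor and add a link if the
          estimated utility gain is large enough; drop any existing
          link whose utility has decayed."""
          outsiders = [m for m in members if m not in peer.links]
          if outsiders:
              candidate = random.choice(outsiders)
              if utility_gain(peer, candidate) > add_threshold:
                  peer.links.add(candidate)
          for neighbor in list(peer.links):
              if link_utility(peer, neighbor) < drop_threshold:
                  peer.links.discard(neighbor)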

   The data delivery structure, obtained by running a distance vector
   protocol on top of the mesh with the latency between neighbors as
   the routing metric, is maintained using various mechanisms.  Each
   peer maintains and keeps up to date the routing cost to every other
   member, together with the path that leads to that cost.  To ensure
   routing table stability, data continues to be forwarded along the
   old routes for sufficient time until the routing tables converge.
   That time is set to be larger than the cost of any path with a
   valid route, but smaller than infinite cost.  To make better use of
   the path bandwidth, streams of different bit-rates are forwarded
   according to the following priority scheme: audio has higher
   priority than video, and lower-quality video has higher priority
   than higher-quality video.  Moreover, stream bit-rates are adapted
   to the peers' performance capabilities.
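
   The priority scheme can be sketched with a simple priority queue;
   the class name and rank values below are illustrative.

      import heapq

      # Assumed ranks implementing the scheme above: audio before
      # video, lower-quality video before higher-quality video.
      PRIORITY = {"audio": 0, "video_low": 1, "video_high": 2}

      class StreamScheduler:
          """Forwards queued packets in stream-priority order."""

          def __init__(self):
              self._queue = []
              self._seq = 0   # tie-breaker keeps FIFO within a class

          def enqueue(self, stream_kind, packet):
              heapq.heappush(self._queue,
                             (PRIORITY[stream_kind], self._seq,
                              packet))
              self._seq += 1

          def next_packet(self):
              if not self._queue:
                  return None
              return heapq.heappop(self._queue)[2]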

3.3.  Hybrid P2P streaming system

3.3.1.  New Coolstreaming

   Coolstreaming, first released in summer 2004 with a mesh-based
   structure, arguably represented the first successful large-scale
   P2P live streaming system.  As the above analysis shows, a pure
   mesh has poor delay performance and high overhead associated with
   each video block transmission.  New Coolstreaming
   [NewCoolstreaming] therefore adopts a hybrid mesh and tree
   structure with a hybrid pull and push mechanism.  All peers are
   organized into a mesh-based topology, in a similar way to PPLive,
   to ensure high reliability.

   The content delivery mechanism is the most important part of New
   Coolstreaming.  Figure 8 shows the content delivery architecture.
   The video stream is divided into blocks of equal size, each of
   which is assigned a sequence number representing its playback order
   in the stream.  Each video stream is further divided into multiple
   sub-streams without any coding, so that each node can retrieve any
   sub-stream independently from a different parent node.  This
   reduces the impact on content delivery of a parent departure or
   failure.  The hybrid push and pull content delivery scheme works as
   follows:




Gu, et al.             Expires September 12, 2011              [Page 26]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   (1) A node first subscribes to a sub-stream by sending a single
   (pull) request, based on Buffer Map information, to one of its
   partners; the requested partner becomes its parent node for that
   sub-stream.  (The node can subscribe to more sub-streams from its
   partners in this way to obtain higher playback quality.)

   (2) The selected parent node then keeps pushing all subsequent
   blocks of the subscribed sub-stream to the requesting node.

   This not only reduces the overhead associated with each video block
   transfer but, more importantly, significantly reduces the delay
   involved in retrieving video content.
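
   The two steps can be sketched from the parent's side as follows;
   the message and field names are illustrative, not New
   Coolstreaming's wire format.

      class SubStreamParent:
          """Parent-side state for the push phase of the hybrid
          scheme."""

          def __init__(self, send):
              self.send = send        # callable(child, block)
              self.subscribers = {}   # sub-stream index -> children

          def on_subscribe(self, child, substream_idx):
              # Step (1): a single pull request subscribes the child
              # to one sub-stream.
              self.subscribers.setdefault(substream_idx,
                                          set()).add(child)

          def on_new_block(self, substream_idx, block):
              # Step (2): every subsequent block of the sub-stream is
              # pushed without further per-block requests.
              for child in self.subscribers.get(substream_idx, ()):
                  self.send(child, block)
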
                  +------------------------------+
                  |          +---------+         |
                  |          | Tracker |         |
                  |          +---------+         |
                  |               |              |
                  |               |              |
                  |  +----------------------+    |
                  |  |    Content server    |    |
                  |  +----------------------+    |
                  +------------------------------+
                        /                     \
                       /                       \
                      /                         \
                     /                           \
               +---------+                   +---------+
               |  Peer1  |                   |  Peer2  |
               +---------+                   +---------+
                /      \                       /      \
               /        \                     /        \
              /          \                   /          \
         +---------+  +---------+     +---------+  +---------+
         |  Peer2  |  |  Peer3  |     |  Peer1  |  |  Peer3  |
         +---------+  +---------+     +---------+  +---------+
              Figure 8, Content Delivery Architecture

   The following paragraphs describe Coolstreaming QoS-related
   features, extracted mostly from [CS1-Bo] and [CS2-Xie].

   The basic components of Coolstreaming are the source, bootstrap
   node, web server, log server, media servers, and peers.  Three
   basic modules in a peer help it maintain a partial view of the
   overlay (Membership Manager); establish and maintain partnerships
   with other peers, with which Buffer Maps indicating available video
   content are exchanged (Partnership Manager); and manage data
   delivery, retrieval, and playout (Stream Manager).

   In building the overlay topology, a newly arrived peer contacts the



Gu, et al.             Expires September 12, 2011              [Page 27]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   bootstrap node for a list of nodes and stores it in its own mCache.
   From the stored list, it selects nodes randomly to form
   partnerships and then parent-children relationships: a partnership
   between two nodes exists when only block availability information
   is exchanged between them, while a parent-children relationship
   exists when, in addition to being partners, video content is also
   exchanged.

   Video content is processed for ease of delivery, retrieval,
   storage, and playout.  To manage content delivery, a video stream
   is divided into blocks of equal size, each of which is assigned a
   sequence number representing its playback order in the stream.
   Each block is further divided into K sub-blocks, and the set of
   i-th sub-blocks of all blocks constitutes the i-th sub-stream of
   the video stream, where 1 <= i <= K.  To retrieve video content, a
   node receives at most K distinct sub-streams from its parent nodes.
   To store retrieved sub-streams, a node uses a double buffering
   scheme with a synchronization buffer and a cache buffer.  The
   synchronization buffer stores the received sub-blocks of each
   sub-stream according to the associated block sequence number of the
   video stream.  The cache buffer then picks up the sub-blocks
   according to the associated sub-stream index of each ordered block.
   To advertise the availability of the latest blocks of the different
   sub-streams in its buffer, a node uses a Buffer Map, represented by
   two vectors of K elements each.  Each entry of the first vector
   indicates the block sequence number of the latest received block of
   the corresponding sub-stream, and each bit of the second vector, if
   set, indicates that the corresponding sub-stream is being
   requested.
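
   The block/sub-stream division and the two Buffer Map vectors can
   be sketched as follows; K = 4 and the method names are
   illustrative assumptions.

      K = 4   # assumed number of sub-streams

      def split_block(block: bytes):
          """Divide one block into K equal sub-blocks; the i-th
          sub-block (1 <= i <= K) belongs to the i-th sub-stream.
          Assumes the block size is divisible by K."""
          size = len(block) // K
          return {i + 1: block[i * size:(i + 1) * size]
                  for i in range(K)}

      class BufferMap:
          """Two K-element vectors: the newest received block
          sequence number per sub-stream, and one bit per sub-stream
          marking which sub-streams are being requested."""

          def __init__(self):
              self.latest = [0] * K      # vector 1: newest block seq
              self.requested = [0] * K   # vector 2: request bits

          def on_subblock(self, substream_idx, block_seq):
              i = substream_idx - 1
              self.latest[i] = max(self.latest[i], block_seq)

          def request(self, substream_idx):
              self.requested[substream_idx - 1] = 1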

   For data delivery, a node uses a hybrid push and pull scheme with
   randomly selected partners.  A node that has requested one or more
   distinct sub-streams from a partner, as indicated in its Buffer
   Map, will continue to receive the sub-streams of all subsequent
   blocks from the same partner until changed conditions (such as the
   partner lagging or departing) cause it to do otherwise.  Moreover,
   users retrieve video indirectly from the source through a number of
   strategically located servers.

   To keep the parent-children relationships above a certain level of
   quality, each node constantly monitors the status of its on-going
   sub-stream receptions and re-selects parents according to
   sub-stream availability patterns.  Specifically, if a node observes
   that the block sequence number of the sub-stream from one parent is
   smaller than that of any of its other partners by a predetermined
   amount, the node concludes that the parent is lagging sufficiently
   far behind and needs to be replaced.  Furthermore, a node also
   evaluates the maximum and minimum block sequence numbers in its
   synchronization buffer to determine whether any parent is lagging
   behind the rest of its parents and thus also needs to be replaced.
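
   The two checks can be sketched as below; the threshold value is an
   illustrative assumption, as the papers only state that the gap
   must exceed a predetermined amount.

      LAG_THRESHOLD = 20   # assumed block-sequence-number gap

      def lagging_parents(latest_seq_by_parent, partner_max_seq):
          """`latest_seq_by_parent` maps parent id -> newest block
          sequence number received on its sub-stream;
          `partner_max_seq` is the highest sequence number advertised
          by any partner."""
          lagging = set()
          # Check 1: a parent far behind the best partner.
          for parent, seq in latest_seq_by_parent.items():
              if partner_max_seq - seq > LAG_THRESHOLD:
                  lagging.add(parent)
          # Check 2: the slowest parent far behind the fastest one,
          # judged from the spread in the synchronization buffer.
          seqs = latest_seq_by_parent
          if seqs and (max(seqs.values()) - min(seqs.values())
                       > LAG_THRESHOLD):
              lagging.add(min(seqs, key=seqs.get))
          return lagging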




Gu, et al.             Expires September 12, 2011              [Page 28]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


4.  A common P2P Streaming Process Model

   As shown in Figure 9, a common P2P streaming process can be
   summarized based on Section 3:

      1) When a peer wants to receive streaming content:

         1.1) Peer acquires a list of peers/parent nodes from the
         tracker.

         1.2) Peer exchanges its content availability with the peers on
         the obtained peer list, or requests to be adopted by the parent
         nodes.

         1.3) Peer identifies the peers with desired content, or the
         available parent node.

         1.4) Peer requests the content from the identified peers, or
         receives the content from its parent node.

      2) When a peer wants to share streaming content with others:

         2.1) Peer sends information to the tracker about the swarms it
         belongs to, plus streaming status and/or content availability.

                  +---------------------------------------------------------+
                  |   +--------------------------------+                    |
                  |   |              Tracker           |                    |
                  |   +--------------------------------+                    |
                  |        ^  |                    ^                        |
                  |        |  |                    |                        |
                  |  query |  | peer list/         |streaming Status/       |
                  |        |  | Parent nodes       |Content availability/   |
                  |        |  |                    |node capability         |
                  |        |  |                    |                        |
                  |        |  V                    |                        |
                  |   +-------------+         +------------+                |
                  |   |    Peer1    |<------->|  Peer 2    |                |
                  |   +-------------+ content/+------------+                |
                  |                   join requests                         |
                  +---------------------------------------------------------+
    Figure 9, A common P2P streaming process model

   The functionality of the tracker and of data transfer differs
   slightly between mesh-based and tree-based applications.  In
   mesh-based applications, such as Joost and PPLive, the tracker
   maintains lists of the peers storing chunks of a specific channel
   or streaming file.  It provides peer lists for peers to download
   from, as well as upload to,


Gu, et al.             Expires September 12, 2011              [Page 29]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   each other.  In tree-based applications, such as PeerCast and
   Conviva, the tracker directs new peers to candidate parent nodes,
   and data flows from parent to child only.
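
   To make the model concrete, the sketch below shows the two tracker
   interactions from Figure 9 as toy messages.  The message shapes
   and field names are stand-ins invented for illustration, not a
   defined PPSP wire format.

      def make_query(peer_id, swarm_id):
          # Step 1.1: ask the tracker for a peer list/parent nodes.
          return {"type": "QUERY", "peer": peer_id,
                  "swarm": swarm_id}

      def make_report(peer_id, swarms, status, availability):
          # Step 2.1: report swarms, streaming status and content
          # availability to the tracker.
          return {"type": "REPORT", "peer": peer_id,
                  "swarms": swarms, "status": status,
                  "availability": availability}

      class Tracker:
          """Toy tracker: keeps per-swarm peer lists (mesh-based
          case) or candidate parents (tree-based case)."""

          def __init__(self):
              self.swarms = {}   # swarm id -> set of peer ids

          def handle(self, msg):
              if msg["type"] == "REPORT":
                  for s in msg["swarms"]:
                      self.swarms.setdefault(s,
                                             set()).add(msg["peer"])
                  return {"type": "OK"}
              if msg["type"] == "QUERY":
                  peers = (self.swarms.get(msg["swarm"], set())
                           - {msg["peer"]})
                  return {"type": "PEERLIST",
                          "peers": sorted(peers)}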


5.  Security Considerations

   This document does not analyze security issues in detail.  It
   follows the security considerations in
   [draft-zhang-ppsp-problem-statement].


6.  Acknowledgments

   We would like to thank Jiang Xingfeng for providing good ideas for
   this document.


7.  Informative References

   [PPLive]   "www.pplive.com".

   [PPStream]
              "www.ppstream.com".

   [CNN]      "www.cnn.com".

   [OctoshapeWeb]
              "www.octoshape.com".

   [Joost-Experiment]
              Lei, Jun, et al., "An Experimental Analysis of Joost Peer-
              to-Peer VoD Service".

   [Sigcomm_P2P_Streaming]
              Huang, Yan, et al., "Challenges, Design and Analysis of a
              Large-scale P2P-VoD System", 2008.

   [Octoshape]
              Alstrup, Stephen, et al., "Introducing Octoshape-a new
              technology for large-scale streaming over the Internet".

   [Zattoo]   "http://zattoo.com/".

   [Conviva]  "http://www.rinera.com/".

   [ESM04]    Zhang, Hui., "End System Multicast,
              http://www.cs.cmu.edu/~hzhang/Talks/ESMPrinceton.pdf",
              May 2004.



Gu, et al.             Expires September 12, 2011              [Page 30]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   [Survey]   Liu, Yong, et al., "A survey on peer-to-peer video
              streaming systems", 2008.

   [draft-zhang-alto-traceroute-00]
              "www.ietf.org/internet-draft/
              draft-zhang-alto-traceroute-00.txt".

   [P2PStreamingSurvey]
              Zong, Ning, et al., "Survey of P2P Streaming", Nov. 2008.

   [P2PIPTV_measuring]
              Silverston, Thomas, et al., "Measuring P2P IPTV Systems".

   [Challenge]
              Li, Bo, et al., "Peer-to-Peer Live Video Streaming on the
              Internet: Issues, Existing Approaches, and Challenges",
              June 2007.

   [NewCoolstreaming]
              Li, Bo, et al., "Inside the New Coolstreaming:
              Principles,Measurements and Performance Implications",
              Apr. 2008.

   [JO2-Moreira]
              Moreira, J, et al., "IEEE Network Operations and
              Management Symposium", Apr. 2008.

   [JO7-Joost Network Architecture]
              "Joost Network Architecture,
              http://scaryideas.com/content/2362/".

   [OC2-Alstrup]
              Alstrup, S, et al., "Octoshape - a new technology for
              large-scale streaming over the Internet", 2005.

   [OC3-Alstrup]
              Alstrup, S, et al., "Grid live streaming to millions",
              2006.

   [PL3-Hei]  Hei, X, et al., "Insights into PPLive: A measurement study
              of a large-scale P2P IPTV system", May 2006.

   [PL5-Vu]   Vu, L, et al., "Understanding Overlay Characteristics of a
              Large-Scale Peer-to-Peer IPTV System", November 2010.

   [PL6-Horvath]
              Horvath, A, et al., "Dissecting PPLive, SopCast,
              TVAnts".




Gu, et al.             Expires September 12, 2011              [Page 31]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   [SC3-Horvath]
              Horvath, A, et al., "Dissecting PPLive, SopCast,
              TVAnts".

   [TV3-Horvath]
              Horvath, A, et al., "Dissecting PPLive, SopCast,
              TVAnts".

   [PL7-Liu]  Liu, Y, et al., "A Case Study of Traffic Locality in
              Internet P2P Live Streaming Systems".

   [PS3-Li]   Li, C, et al., "Measurement Based PPStream client behavior
              analysis", 2009.

   [PS4-Jia]  Jia, J, et al., "Characterizing PPStream across Internet",
              2007.

   [PS5-Wei]  Wei, T, et al., "Study of PPStream Based on Measurement",
              2008.

   [SC1-Ali]  Ali, S, et al., "Measurement of Commercial Peer-to-Peer
              Live Video Streaming", Aug 2006.

   [SC2-Ciullo]
              Ciullo, D, et al., "Network Awareness of P2P Live
              Streaming Applications: A Measurement Study", Aug 2010.

   [TV2]      Ciullo, D, et al., "Network Awareness of P2P Live
              Streaming Applications: A Measurement Study", Aug 2010.

   [SC4-Fallica]
              Fallica, B, et al., "On the Quality of Experience of
              SopCast", Aug 2008.

   [SC5-Sentinelli]
              Sentinelli, A, et al., "Will IPTV Ride the Peer-to-Peer
              Stream?", June 2007.

   [SC6-Silverston]
              Silverston, T, et al., "Traffic analysis of peer-to-peer
              IPTV communities", 2009.

   [SC7-Tang]
              Tang, S, et al., "Topology dynamics in a P2PTV network",
              2009.

   [TV1-Alessandria]
              Alessandria, E, et al., "P2P-TV Systems under Adverse
              Network Conditions: a Measurement Study", 2009.

   [ZT1-Chang]



Gu, et al.             Expires September 12, 2011              [Page 32]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


              Chang, H, et al., "Live streaming performance of the
              Zattoo network", 2009.

   [PC1-Deshpande]
              Deshpande, H, et al., "Streaming Live Media over a Peer-
              to-Peer Network", August 2001.

   [PC2-http]
              "http://arbor.ee.ntu.edu.tw/archive/p2p/p2p/showDoc2.pdf".

   [PC3-http]
              "http://ilpubs.stanford.edu:8090/863/".

   [CVV1-Zhang]
              Zhang, H, et al., "End System Multicast", May 2004.

   [CVV4-Chu]
              Chu, Y, et al., "A Case for End System Multicast",
              June 2000.

   [CVV5-Chu]
              Chu, Y, et al., "Early Experience with an Internet
              Broadcast System Based on Overlay Multicast", June 2004.

   [CVV6-Chu]
              Chu, Y, et al., "Narada is a self-organizing, overlay-
              based protocol for achieving multicast without network
              support", Aug 2001.

   [CS1-Bo]   Li, B, et al., "Inside the New Coolstreaming: Principles,
              Measurements and Performance Implications", 2008.

   [CS2-Xie]  Xie, S, et al., "Coolstreaming: Design, Theory, and
              Practice", 2007.


Authors' Addresses

   Gu Yingjie
   Huawei
   Baixia Road No. 91
   Nanjing, Jiangsu Province  210001
   P.R.China

   Phone: +86-25-56624760
   Fax:   +86-25-56624702
   Email: guyingjie@huawei.com




Gu, et al.             Expires September 12, 2011              [Page 33]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   Zong Ning
   Huawei
   Baixia Road No. 91
   Nanjing, Jiangsu Province  210001
   P.R.China

   Phone: +86-25-56624760
   Fax:   +86-25-56624702
   Email: zongning@huawei.com


   Hui Zhang
   NEC Labs America.

   Email: huizhang@nec-labs.com


   Zhang Yunfei
   China Mobile

   Email: zhangyunfei@chinamobile.com


   Lei Jun
   University of Goettingen

   Phone: +49 (551) 39172032
   Email: lei@cs.uni-goettingen.de


   Gonzalo Camarillo
   Ericsson

   Email: Gonzalo.Camarillo@ericsson.com


   Liu Yong
   Polytechnic University

   Email: yongliu@poly.edu


   Delfin Montuno
   Huawei

   Email: delfin.montuno@huawei.com





Gu, et al.             Expires September 12, 2011              [Page 34]


Internet-Draft    Survey of P2P Streaming Applications        March 2011


   Xie Lei
   Huawei

   Email: xielei57471@huawei.com















































Gu, et al.             Expires September 12, 2011              [Page 35]