Network Working Group                          Rahul Aggarwal (Editor)
Internet Draft                                 Juniper Networks
Expiration Date: February 2005


                  Multicast in BGP/MPLS VPNs and VPLS

              draft-raggarwa-l3vpn-mvpn-vpls-mcast-00.txt


Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as ``work in progress.''

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.


Abstract

   This document describes a solution framework for overcoming the
   limitations of existing Multicast VPN (MVPN) and VPLS multicast
   solutions.  It describes procedures for enhancing the scalability of
   multicast for BGP/MPLS VPNs. It also describes procedures for VPLS
   multicast that utilize multicast trees in the service provider (SP)
   network.  The procedures described here reduce the overhead of the
   PIM neighbor relationships that a PE router needs to maintain for
   BGP/MPLS VPNs. They also reduce the state (and the overhead of
   maintaining that state) in the SP network, by removing the need to
   maintain at least one dedicated multicast tree per VPN in the SP
   network.


Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].


1. Contributors


   Rahul Aggarwal
   Yakov Rekhter
   Anil Lohiya
   Tom Pusateri
   Lenny Giuliano
   Chaitanya Kodeboniya
   Juniper Networks



2. Terminology

   This document uses terminology described in [MVPN-PIM], [VPLS-BGP]
   and [VPLS-LDP].


3. Introduction

   [MVPN-PIM] describes the minimal set of procedures that are required
   to build multi-vendor inter-operable implementations of multicast for
   BGP/MPLS VPNs. However, the solution described in [MVPN-PIM] has
   undesirable scaling properties. [ROSEN] describes additional
   procedures for multicast in BGP/MPLS VPNs, and these too have
   undesirable scaling properties.

   [VPLS-BGP] and [VPLS-LDP] describe a solution for VPLS multicast that
   relies on ingress replication. This solution has certain limitations
   for some VPLS multicast traffic profiles.

   This document describes a solution framework to overcome the
   limitations of existing MVPN [MVPN-PIM, ROSEN] solutions. It also
   extends VPLS multicast to provide a solution that can utilize
   multicast trees in the SP network.


4. Existing Scalability Issues in BGP/MPLS MVPNs

   The solutions described in [MVPN-PIM] and [ROSEN] share three
   fundamental scalability issues.

4.1. PIM Neighbor Adjacencies Overhead

   The solution for unicast in BGP/MPLS VPNs [2547] requires a PE to
   maintain at most one BGP peering with every other PE in the network
   that is participating in BGP/MPLS VPNs. The use of Route Reflectors
   further reduces the number of BGP adjacencies maintained by a PE.

   On the other hand, for multicast in BGP/MPLS VPNs [MVPN-PIM, ROSEN],
   a PE has to maintain, for a particular MVPN, PIM neighbor adjacencies
   with every other PE that has a site in that MVPN. Thus, for a given
   PE-PE pair, multiple PIM adjacencies are required, one per MVPN that
   the two PEs have in common. This implies that the number of PIM
   neighbor adjacencies that a PE has to maintain is the product of the
   number of MVPNs the PE belongs to and the average number of sites in
   each of these MVPNs.

   For each such PIM neighbor adjacency the PE has to send and receive
   PIM Hello packets, which are transmitted periodically at a default
   interval of 30 seconds. For example, on a PE router with 1000 VPNs
   and 100 sites per VPN, a scenario that is not uncommon in L3VPN
   deployments today, the PE router would have to maintain 100,000 PIM
   neighbors. With a default Hello interval of 30 seconds, this results
   in an average of 3,333 Hellos per second.
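
   As a back-of-the-envelope check of this arithmetic, the following
   Python fragment simply restates the example above:

      # Back-of-the-envelope check of the PIM Hello overhead above.
      num_mvpns = 1000        # MVPNs configured on the PE
      sites_per_mvpn = 100    # average sites (remote PEs) per MVPN
      hello_interval = 30.0   # default PIM Hello interval, in seconds

      neighbors = num_mvpns * sites_per_mvpn       # 100,000 neighbors
      hellos_per_sec = neighbors / hello_interval  # ~3,333 Hellos/s
      print(neighbors, round(hellos_per_sec))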

   It is highly desirable to reduce the overhead due to PIM adjacencies
   that a PE router needs to maintain in support of multicast with
   BGP/MPLS VPNs.

4.2. Periodic PIM Join/Prune Messages

   PIM [PIM-SM] is a soft-state protocol: it requires PIM Join/Prune
   messages to be transmitted periodically, and hence each PE
   participating in MVPNs has to periodically refresh the PIM C-Join
   messages. It is desirable to reduce the overhead of these periodic
   PIM control messages. The overhead of PIM C-Join messages increases
   further when PIM Join suppression is disabled, and there is a need
   to disable PIM Join suppression, as described in section 6.5.2.
   This in turn further justifies the need to reduce the overhead of
   periodic PIM C-Join messages.


4.3. State in the SP Core

   Unicast in BGP/MPLS VPNs [2547] requires no per-VPN state in the SP
   core.  The core maintains state only for PE-to-PE transport tunnels.
   VPN routing information is maintained only by the PEs participating
   in the VPN service.

   On the other hand, [MVPN-PIM] specifies a solution that requires the
   SP core to maintain per-MVPN state. This is because, by default, an
   RP-rooted shared tree is set up in the SP core for each MVPN using
   PIM-SM. Based on configuration, receiver PEs may also switch to a
   source-rooted tree for a particular MVPN, which further increases
   the number of multicast trees in the SP core. [ROSEN] specifies the
   use of PIM-SSM for setting up SP multicast trees. The use of PIM-SSM
   instead of PIM-SM increases the amount of per-MVPN state maintained
   in the SP core. Use of Data MDTs as specified in [ROSEN] further
   increases the overhead resulting from this state.

   It is desirable to remove the need to maintain per-MVPN state in the
   SP core.


5. Existing Limitation of VPLS Multicast

   The VPLS multicast solutions described in [VPLS-BGP] and [VPLS-LDP]
   rely on ingress replication: the ingress PE replicates a multicast
   packet for each egress PE and sends it to that egress PE using a
   unicast tunnel.  With appropriate IGMP or PIM snooping it is possible
   to send the packet only to the PEs that have receivers for that
   traffic, rather than to all the PEs in the VPLS instance.

   This is a reasonable model when the bandwidth of the multicast
   traffic is low and/or the average number of replications performed
   on each outgoing interface for a particular customer VPLS multicast
   packet is small. If this is not the case, it is desirable to utilize
   multicast trees in the SP core to transmit VPLS multicast packets.
   Note that unicast packets that are flooded to each of the egress
   PEs, before the ingress PE performs learning for those unicast
   packets, will still use ingress replication.


6. MVPN Solution Framework

   This section describes the framework for the MVPN solution. This
   framework makes it possible to overcome the existing scalability
   limitations described in section 4.

6.1. PIM Neighbor Maintenance using BGP

   This document proposes the use of BGP for discovering and maintaining
   PIM neighbors in a given MVPN. All PE routers advertise their MVPN
   membership, i.e., the VRFs configured for multicast, to the other PE
   routers using BGP. This allows each PE router in the SP network to
   have a complete view of the MVPN membership of the other PE routers.
   A PE that belongs to an MVPN considers all the other PEs that
   advertise membership in that MVPN to be PIM neighbors for that MVPN.
   However, the PE does not have to perform PIM neighbor adjacency
   management, as PIM neighbor discovery is performed using BGP. This
   eliminates the PIM Hello processing required for maintaining the PIM
   neighbors.
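
   The following Python fragment is an illustrative sketch of this idea
   (the data structures are hypothetical; this document does not
   prescribe an implementation). The PIM neighbor set for an MVPN falls
   directly out of the BGP-learned membership, with no Hello exchange:

      # BGP-learned MVPN membership: MVPN -> set of advertising PEs.
      # (Hypothetical; populated from the NLRI of section 8.)
      mvpn_members = {
          "mvpn-red":  {"PE1", "PE2", "PE3"},
          "mvpn-blue": {"PE1", "PE3"},
      }

      def pim_neighbors(local_pe, mvpn):
          # All other PEs that advertised membership in this MVPN are
          # PIM neighbors; no adjacency management is needed.
          return mvpn_members[mvpn] - {local_pe}

      print(pim_neighbors("PE1", "mvpn-red"))   # {'PE2', 'PE3'}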

6.2. PIM Refresh Reduction

   As described in section 4.2, PIM is a soft-state protocol. To
   eliminate the need to periodically refresh PIM control messages,
   there is a need to build a refresh reduction mechanism into PIM. The
   detailed procedures for this will be specified later.

6.3. Separation of Customer Control Messages and Data Traffic

   BGP/MPLS VPN unicast [2547] maintains a separation between the
   exchange of customer routing information and the transmission of
   customer data, i.e., VPN unicast traffic. VPN routing information is
   exchanged using BGP, while VPN data traffic is encapsulated in PE-
   to-PE tunnels. This makes the exchange of VPN routing information
   agnostic of the unicast tunneling technology. This, in turn,
   provides the flexibility to support various tunneling technologies
   without impacting the procedures for the exchange of VPN routing
   information.

   [MVPN-PIM], on the other hand, uses Multicast Domain (MD) tunnels
   for sending both C-Join messages and C-Data traffic. This creates an
   undesirable dependency between the exchange of customer control
   information and the multicast transport technology.

   Procedures described in section 6.1 make the discovery and
   maintenance of PIM neighbors independent of the multicast transport
   technology in the SP network. The other piece is the exchange of
   customer multicast control information. This document proposes that a
   PE use a PE-to-PE tunnel to send the customer multicast control
   information to the upstream PE that is the PIM neighbor. The C-Join
   packets are encapsulated in an MPLS label before being encapsulated
   in the PE-to-PE tunnel. This label specifies the context of the
   C-Join, i.e., the MVPN the C-Join is intended for. Section 9
   specifies how this label is learned. The destination address of the
   C-Join is still the ALL-PIM-ROUTERS multicast group address. Thus a
   C-Join packet is tunneled to the PE that is the PIM neighbor for
   that packet. A beneficial side effect of this is that C-Join
   suppression is disabled. As described in section 6.5.2, it is
   desirable to disable C-Join suppression.
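
   The resulting encapsulation can be sketched as follows. The MPLS
   label stack entry layout is that of RFC 3032; the framing of the
   C-Join and the PE-to-PE tunnel encapsulation around it are
   deliberately left abstract, as this document does not fix them:

      import struct

      def mpls_label_entry(label, tc=0, s=1, ttl=255):
          # Pack one 4-octet MPLS label stack entry (RFC 3032):
          # 20-bit label | 3-bit TC | 1-bit bottom-of-stack | 8-bit TTL
          value = (label << 12) | (tc << 9) | (s << 8) | ttl
          return struct.pack("!I", value)

      def encapsulate_c_join(context_label, c_join_packet):
          # The context label identifies the target MVPN; the C-Join
          # itself is still addressed to ALL-PIM-ROUTERS (224.0.0.13).
          # The result is then carried inside the PE-to-PE tunnel.
          return mpls_label_entry(context_label) + c_join_packet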

6.4. Transport of Customer Multicast Data Packets

   This document describes two mechanisms to transport customer
   multicast data packets over the SP network. One is ingress
   replication and the other is the use of multicast trees in the SP
   network.

6.4.1. Ingress Replication

   In this mechanism the ingress PE replicates a customer multicast
   data packet of a particular group and sends a copy to each egress PE
   that is on the path to a receiver of that group. The packet is sent
   to an egress PE using a unicast tunnel. This has the advantage of
   operational simplicity, as the SP network doesn't need to run a
   multicast routing protocol. It also has the advantage of minimizing
   state in the SP network. With C-Join suppression disabled, it has
   the further advantage of sending the traffic only to the PEs that
   have receivers for that traffic. This is a reasonable model when the
   bandwidth of the multicast traffic is low and/or the number of
   replications performed by the ingress PE on each outgoing interface
   for a particular customer multicast data packet is small.
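
   A minimal sketch of ingress replication follows (all structures are
   hypothetical). Note that, with C-Join suppression disabled, the
   receiver set contains only the PEs that actually signaled interest:

      def ingress_replicate(packet, c_group, receivers, tunnels):
          # receivers: {c_group: set of egress PEs, from C-Joins}
          # tunnels:   {egress PE: function sending one unicast copy}
          for pe in receivers.get(c_group, ()):
              tunnels[pe](packet)   # one copy per interested egress PE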

6.4.2. Multicast Trees in the SP Network

   This mechanism uses multicast trees in the SP network for
   transporting customer multicast data packets. The MD trees described
   in [MVPN-PIM] are an example of such multicast trees. The use of
   multicast trees in the SP network can be beneficial when the
   bandwidth of the multicast traffic is high, or when it is desirable
   to optimize the number of copies of a multicast packet transmitted
   by the ingress. This comes at the cost of the operational overhead
   of building multicast trees in the SP core, and of the state
   maintained in the SP core. This document places no restrictions on
   the protocols used to build SP multicast trees.


6.5. Sharing a Single SP Multicast Tree across Multiple MVPNs

   This document describes procedures for sharing a single SP multicast
   tree across multiple MVPNs.

6.5.1. Aggregate Trees

   An Aggregate Tree is an SP multicast tree that can be shared across
   multiple MVPNs and is set up by discovering the egress PEs, i.e.,
   the leaves of the tree, using BGP.

   PIM neighbor discovery and maintenance using BGP allows a PE or an
   RP to learn the MVPN membership information of other PEs. This in
   turn allows the creation of one or more Aggregate Trees, where each
   Aggregate Tree is mapped to one or more MVPNs. The leaves of the
   Aggregate Tree are determined by the PEs that belong to the MVPNs
   that are mapped onto the Aggregate Tree. Aggregate Trees remove the
   need to maintain per-MVPN state in the SP core, as a single SP
   multicast tree can be used across multiple VPNs.
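
   As an illustrative sketch, the leaf set of an Aggregate Tree is the
   union of the BGP-learned memberships of the MVPNs mapped onto it:

      # Membership as learned via the procedures of section 6.1
      # (hypothetical values).
      mvpn_members = {"mvpn-a": {"PE1", "PE2"},
                      "mvpn-b": {"PE2", "PE3"}}

      def aggregate_tree_leaves(mapped_mvpns):
          leaves = set()
          for mvpn in mapped_mvpns:
              leaves |= mvpn_members[mvpn]   # union across mapped MVPNs
          return leaves

      print(aggregate_tree_leaves(["mvpn-a", "mvpn-b"]))
      # {'PE1', 'PE2', 'PE3'}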

   Note that, like the default MDTs described in [MVPN-PIM], Aggregate
   MDTs may result in a multicast data packet for a particular group
   being delivered to PE routers that do not have receivers for that
   multicast group.

6.5.2. Aggregate Data Trees

   An Aggregate Data Tree is an SP multicast tree that can be shared
   across multiple MVPNs and is set up by discovering the egress PEs,
   i.e., the leaves of the tree, using C-Join messages. The reason for
   having Aggregate Data Trees is to give a PE the ability to create
   separate SP multicast trees for high-bandwidth multicast groups.
   This allows traffic for these multicast groups to reach only those
   PE routers that have receivers for these groups, and avoids flooding
   the other PE routers in the MVPN. More than one such multicast group
   can be mapped onto the same SP multicast tree; the multicast groups
   that are mapped to an SP multicast tree may also belong to different
   MVPNs.

   The setting up of Aggregate Data Trees requires the ingress PE to
   know all the other PEs that have receivers for multicast groups that
   are mapped onto the Aggregate Data Trees. This is learned from the C-
   Joins received by the ingress PE. It requires that C-Join suppression
   be disabled. The procedures used for C-Join propagation as described
   in section 6.3 ensure that Join suppression is not enabled.

   Note that [ROSEN] describes a limited solution for building Data MDTs
   where a Data MDT cannot be shared across different VPNs.


6.5.3. Setting up Aggregate Trees and Aggregate Data Trees

   This document does not place any restrictions on the multicast
   technology used to set up Aggregate Trees or Aggregate Data Trees.

   When PIM is used to set up multicast trees in the SP core, an
   Aggregate Tree is termed an "Aggregate MDT" and an Aggregate Data
   Tree is termed an "Aggregate Data MDT". The Aggregate MDT may be a
   shared tree, rooted at the RP, or a shortest-path tree. An Aggregate
   Data MDT is rooted at the PE that is connected to the multicast
   traffic source. The root of the Aggregate MDT or the Aggregate Data
   MDT has to advertise the P-Group address it chose for the MDT to the
   PEs that are the leaves of the MDT. These PEs can then Join the MDT.
   The announcement of this address is done as part of the discovery
   procedures described in section 6.5.5.

6.5.4. Demultiplexing Aggregate Tree and Aggregate Data Tree Multicast
   Traffic

   Aggregate Trees and Aggregate Data Trees require a mechanism for the
   egress PEs to demultiplex the multicast traffic received over the
   tree, since traffic belonging to multiple MVPNs can be carried over
   the same tree. Hence there is a need to identify the MVPN the packet
   belongs to. This is done by using an inner label that corresponds to
   the multicast VRF for which the packet is intended. The ingress PE
   uses this label as the inner label while encapsulating a customer
   multicast data packet. Each of the egress PEs must be able to
   associate this inner label with the same MVPN and use it to
   demultiplex the traffic received over the Aggregate Tree or the
   Aggregate Data Tree. If downstream label assignment were used, this
   would require all the egress PEs in the MVPN to agree on a common
   label for the MVPN.

   We propose a solution that uses upstream label assignment by the
   ingress PE: the inner label is allocated by the ingress PE. Each
   egress PE has a separate label space for every Aggregate Tree or
   Aggregate Data Tree for which the egress PE is a leaf node. The
   inner VPN label allocated by the ingress PE is programmed in this
   label space by the egress PEs. Hence, when the egress PE receives a
   packet over an Aggregate Tree (or an Aggregate Data Tree), the
   Aggregate Tree Identifier (or Aggregate Data Tree Identifier)
   specifies the label space in which to perform the inner label
   lookup. An implementation may create a logical interface
   corresponding to an Aggregate Tree (or an Aggregate Data Tree). In
   that case the label space in which to look up the inner label is an
   interface-based label space, where the interface corresponds to the
   tree.

   When Aggregate MDTs (or Aggregate Data MDTs) are used, the root PE
   source address and the Aggregate MDT (or Aggregate Data MDT) P-group
   address identify the MDT. The label space corresponding to the MDT
   interface is the label space in which to perform the inner label
   lookup. A lookup in this label space identifies the multicast VRF in
   which the customer multicast lookup needs to be done.

   The ingress PE informs the egress PEs about the inner label as part
   of the discovery procedures described in the next section.
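
   The per-tree label spaces described above can be sketched as follows
   (illustrative Python; an implementation may equally realize them as
   interface-based label spaces):

      # Tree identifier -> {upstream-assigned inner label -> VRF/VSI}.
      label_spaces = {}

      def program_inner_label(tree_id, inner_label, vrf):
          # Called when the discovery procedures (section 6.5.5) bind
          # an MVPN (or <C-Source, C-Group>) and its label to a tree.
          label_spaces.setdefault(tree_id, {})[inner_label] = vrf

      def demux(tree_id, inner_label):
          # The tree identifier selects the label space; the inner
          # label selects the multicast VRF/VSI within it.
          return label_spaces[tree_id][inner_label]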

6.5.5. Aggregate Tree and Aggregate Data Tree Discovery

   Once a PE sets up an Aggregate Tree or an Aggregate Data Tree, it
   needs to announce the customer multicast groups being mapped to this
   tree to the other PEs in the network. This procedure is referred to
   as Aggregate Tree or Aggregate Data Tree discovery. For an Aggregate
   Tree, this discovery implies announcing all the MVPNs mapped to the
   Aggregate Tree. The inner label allocated by the ingress PE for each
   MVPN is included, along with the Aggregate Tree Identifier. For an
   Aggregate Data Tree, this discovery implies announcing all the
   specific <C-Source, C-Group> entries mapped to this tree, along with
   the Aggregate Data Tree Identifier. The inner label allocated for
   each <C-Source, C-Group> is included, along with the Aggregate Data
   Tree Identifier.

   The egress PE creates a logical interface corresponding to the
   Aggregate Tree or Aggregate Data Tree Identifier. This interface is
   the RPF interface for all the <C-Source, C-Group> entries mapped to
   that tree.  An Aggregate Tree, by definition, maps to all the
   <C-Source, C-Group> entries belonging to all the MVPNs associated
   with the Aggregate Tree. An Aggregate Data Tree maps to the specific
   <C-Source, C-Group> entries associated with it.

   When PIM is used to set up SP multicast trees, the egress PE also
   Joins the P-Group address corresponding to the Aggregate MDT or the
   Aggregate Data MDT. This results in the setup of the PIM SP tree.


7. VPLS Multicast

   This document proposes the use of SP multicast trees for VPLS
   multicast.  This gives an SP an option when ingress replication, as
   described in [VPLS-BGP] and [VPLS-LDP], is not the best fit for the
   customer multicast traffic profile.

   The Aggregate Trees and Aggregate Data Trees described in section 6
   can be used as SP multicast trees for VPLS multicast. No restriction
   is placed on the protocols used for building SP Aggregate Trees for
   VPLS. VPLS auto-discovery, as described in [VPLS-BGP], is used to
   map VPLS instances onto Aggregate Trees. IGMP and PIM snooping are
   required for mapping multicast groups to Aggregate Data Trees.
   Detailed procedures for this will be specified in the next revision.


8. BGP Advertisements

   The procedures described in this document use BGP for MVPN
   membership discovery, for Aggregate Tree discovery and for Aggregate
   Data Tree discovery. A new Subsequent Address Family Identifier
   (SAFI), called the MVPN SAFI, is defined. The format of the NLRI
   associated with this SAFI is as follows:

             +---------------------------------+
             |   Length (2 octets)             |
             +---------------------------------+
             |   MPLS Labels (variable)        |
              +---------------------------------+
             |    RD   (8 octets)              |
             +---------------------------------+
             |Multicast Source  (4 octets)     |
             +---------------------------------+
             |Multicast Group   (4 octets)     |
             +---------------------------------+


   The RD corresponds to the multicast-enabled VRF or the VPLS instance.
   The BGP next-hop advertised with this NLRI contains an IPv4 address,
   which is the same as the BGP next-hop advertised with the unicast
   VPN routes.

   When a PE distributes this NLRI via BGP, it must include a Route
   Target Extended Communities attribute. This RT must be an "Import RT"
   [2547] of each VRF in the MVPN or of each VSI in the VPLS.  The BGP
   distribution procedures used by [2547] will then ensure that the
   advertised information gets associated with the right VRFs or VSIs.
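
   The following fragment sketches one possible wire encoding of this
   NLRI. The diagram above does not pin down the label format or
   whether the Length field counts bits or octets; the sketch assumes
   RFC 3107 style 3-octet label entries and a length in octets, purely
   for illustration:

      import socket, struct

      def encode_mvpn_nlri(labels, rd, source, group):
          # labels: 20-bit MPLS label values; rd: 8 octets of RD.
          body = b""
          for i, label in enumerate(labels):
              s = 1 if i == len(labels) - 1 else 0   # bottom of stack
              body += struct.pack("!I", (label << 4) | s)[1:]
          body += rd                         # RD (8 octets)
          body += socket.inet_aton(source)   # Multicast Source (4)
          body += socket.inet_aton(group)    # Multicast Group (4)
          return struct.pack("!H", len(body)) + body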






   A new optional transitive attribute, called the
   Multicast_Tree_Attribute, is defined to signal the Aggregate Tree or
   the Aggregate Data Tree. This attribute is a TLV. Currently a single
   Tree Identifier type is defined:
     1. PIM MDT.

   When the type is set to PIM MDT, the attribute contains a PIM
   P-Multicast Group address.

   Hence the MP_REACH attribute identifies the set of VPN customers'
   multicast trees, the Multicast_Tree_Attribute identifies a
   particular SP tree (i.e., an Aggregate Tree or an Aggregate Data
   Tree), and the advertisement of both in a single BGP Update creates
   a binding/mapping between the SP tree (the Aggregate Tree) and the
   set of VPN customers' trees.
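
   An illustrative encoding of this attribute for the PIM MDT type is
   sketched below; the type and length field sizes are assumptions,
   since this document does not yet fix them:

      import socket, struct

      PIM_MDT = 1   # hypothetical type code for the PIM MDT identifier

      def multicast_tree_attribute(p_group):
          value = socket.inet_aton(p_group)   # P-Multicast Group addr
          return struct.pack("!BB", PIM_MDT, len(value)) + value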


9. MVPN Neighbor Discovery and Maintenance

   The BGP NLRI described in section 8 is used for MVPN neighbor
   discovery and maintenance. Each PE advertises its multicast VPN
   membership information using BGP. For the purpose of MVPN membership
   distribution, the NLRI contains the Route Distinguisher (RD), an
   MPLS label and the PE source address.  The group address is set to
   0. The RD corresponds to the multicast-enabled VRF.  The MPLS label
   is used by other PEs to send PIM Join/Prune messages to this PE; it
   identifies the multicast VRF for which the Join/Prune is intended.
   When ingress replication is used, this label must also be present
   for sending customer multicast traffic.
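
   As an illustration, a membership advertisement for one multicast-
   enabled VRF could carry the following (all values are hypothetical):

      # Hypothetical field values for a membership NLRI (section 8):
      membership_nlri = {
          "labels": [100016],     # label for C-Joins (and, with
                                  # ingress replication, for traffic)
          "rd":     "65000:1",    # RD of the multicast-enabled VRF
          "source": "192.0.2.1",  # advertising PE's address
          "group":  "0.0.0.0",    # group is 0 for membership
      }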

   When a PE distributes this NLRI via BGP, it must include a Route
   Target Extended Communities attribute. This RT must be an "Import RT"
   [2547] of each VRF in the MVPN. The BGP distribution procedures used
   by [2547] will then ensure that each PE learns the other PEs in the
   MVPN, and that this information gets associated with the right VRFs.
   This allows the MVPN PIM instance in a PE to discover all the PIM
   neighbors in that MVPN.

   The advertisement of the NLRI described above by a PE implies that
   the PIM module on that PE that deals with the MVPN corresponding to
   the NLRI is fully functional. When such a module becomes non-
   functional (for whatever reason), the PE MUST withdraw the
   advertisement.

   The neighbor discovery described here is applicable only to BGP/MPLS
   VPNs, and is not applicable to VPLS.


9.1. PIM Hello Options

   PIM Hellos allow PIM neighbors to exchange various optional
   capabilities.  The use of BGP for discovering and maintaining PIM
   neighbors may imply that some of these optional capabilities need to
   be supported in the BGP-based discovery procedures. Exchanging these
   capabilities via BGP will be described if and when the need for
   supporting these optional capabilities arises.


10. Aggregate MDT

   An Aggregate MDT can be created by an RP or an ingress PE. It
   results in the creation of an MD tree that can be shared by multiple
   MVPNs or VPLS instances. The MD group address associated with the
   Aggregate MDT is assigned by the router that creates the Aggregate
   MDT. This address, along with the source address of that router,
   forms the Aggregate MDT Identifier. Once the RP or an ingress PE
   maps one or more MVPNs or VPLS instances to an Aggregate MDT, it
   needs to advertise this mapping to the egress PEs that belong to
   these MVPNs or VPLS instances. This requires advertising one or more
   MVPNs/VPLS instances and the corresponding Aggregate MDT Identifier.
   The MVPNs or VPLS instances can be advertised using the BGP
   procedures described in section 8.  The Aggregate MDT Identifier is
   encoded using a TLV in the Multicast_Tree_Attribute. Each NLRI also
   encodes the upstream label assigned by the Aggregate MDT root for
   that MVPN or VPLS instance.

   This information allows the egress PE to associate an Aggregate MDT
   with one or more MVPNs or VPLS instances. The Aggregate MDT
   Identifier identifies the label space in which to look up the inner
   label. The inner label identifies the VRF or VSI in which to do the
   multicast lookup after a packet is received from the Aggregate MDT.
   The Aggregate MDT interface is used for the multicast RPF check for
   the customer packet.  On receipt of this information each egress PE
   can Join the Aggregate MDT. This results in the setup of the
   Aggregate MDT in the SP network.
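
   The following fragment sketches how an egress PE might react to such
   an advertisement (all names are hypothetical; pim_join stands in for
   issuing a real PIM Join toward the root, and label_spaces is the
   per-tree structure sketched in section 6.5.4):

      def pim_join(source, group):
          # Placeholder for issuing a PIM (S,G) Join toward the root.
          print("PIM Join (%s, %s)" % (source, group))

      def on_aggregate_mdt_advert(root, p_group, bindings,
                                  label_spaces):
          # bindings: list of (vrf_or_vsi, upstream_label) pairs taken
          # from the BGP Update.
          tree_id = (root, p_group)          # Aggregate MDT Identifier
          space = label_spaces.setdefault(tree_id, {})
          for vrf, label in bindings:
              space[label] = vrf             # program per-tree space
          pim_join(root, p_group)            # join the Aggregate MDT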


11. Aggregate Data MDT

   An Aggregate Data MDT is created by an ingress PE. It is created for
   one or more customer multicast groups that the PE wishes to move to
   a dedicated SP tree. These groups may belong to different MVPNs or
   VPLS instances. It may be desirable that the set of PEs that have
   receivers belonging to these groups be exactly the same; however,
   the procedures for setting up Aggregate Data MDTs do not require
   this.  The mapping of an Aggregate Data MDT Identifier to <C-Source,
   C-Group> entries requires the source PE to know the PE routers that
   have receivers in these groups. For MVPN this is learned using the
   C-Join information. For VPLS, IGMP snooping or PIM snooping is
   required at the source PE.

   The mapping of the Aggregate Data MDT Identifier to the <C-Source,
   C-Group> entries is advertised by the ingress PE to the egress PEs
   using the procedures described in section 8. The source address in
   the NLRI is set to the C-Source address and the group address is set
   to the C-Group address. The Aggregate Data MDT Identifier is encoded
   in the Multicast_Tree_Attribute. Each NLRI also encodes the upstream
   label assigned by the Aggregate Data MDT root for the MVPN or VPLS
   instance corresponding to the <C-Source, C-Group> encoded in the
   NLRI. A single BGP Update may carry multiple <C-Source, C-Group>
   addresses, as long as they all belong to the same VPN.

   This information allows the egress PE to associate an Aggregate Data
   MDT with one or more <C-Source, C-Group> entries. On receipt of this
   information each egress PE can Join the Aggregate Data MDT. This
   results in the setup of the Aggregate Data MDT in the SP network.
   The inner label is used to identify the VRF or VSI in which to do
   the multicast lookup after a packet is received from the Aggregate
   Data MDT. It is also needed for the multicast RPF check for MVPNs.
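
   A corresponding sketch for the Aggregate Data MDT case differs only
   in that the bindings are per <C-Source, C-Group> rather than per
   MVPN or VPLS instance (structures again hypothetical):

      def on_aggregate_data_mdt_advert(root, p_group, bindings,
                                       label_spaces, rpf_table):
          # bindings: list of ((c_source, c_group), vrf, label).
          tree_id = (root, p_group)   # Aggregate Data MDT Identifier
          space = label_spaces.setdefault(tree_id, {})
          for (c_source, c_group), vrf, label in bindings:
              space[label] = vrf
              # RPF for this <C-Source, C-Group> is via the tree.
              rpf_table[(c_source, c_group)] = tree_id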

   Note that the procedures for signaling Aggregate Data MDTs are the
   same as the procedures for signaling Aggregate MDTs described in
   section 10.


12. Data Forwarding

   The following diagram shows the progression of a packet as it enters
   and leaves the SP network when Aggregate MDTs or Aggregate Data MDTs
   are being used for multiple MVPNs or multiple VPLS instances.
   MPLS-in-GRE [MPLS-IP] encapsulation is used to encapsulate the
   customer multicast packets.


      Packets received        Packets in transit      Packets forwarded
      at ingress PE           in the service          by egress PEs
                              provider network

                              +---------------+
                              |  P-IP Header  |
                              +---------------+
                              |      GRE      |
                              +---------------+
                              | VPN Label     |
      ++=============++       ++=============++       ++=============++
      || C-IP Header ||       || C-IP Header ||       || C-IP Header ||
      ++=============++ >>>>> ++=============++ >>>>> ++=============++
      || C-Payload   ||       || C-Payload   ||       || C-Payload   ||
      ++=============++       ++=============++       ++=============++


   The P-IP header contains the Aggregate MDT (or Aggregate Data MDT)
   P-group address as the destination address and the root PE address
   as the source address. The receiver PE does a lookup on the P-IP
   header and determines the MPLS forwarding table in which to look up
   the inner MPLS label. This table is specific to the Aggregate MDT
   (or Aggregate Data MDT) label space. The inner label is unique
   within the context of the root of the MDT (as it is assigned by the
   root of the MDT, without any coordination with any other nodes);
   thus it is not unique across multiple roots.  So, to unambiguously
   identify a particular VPN, one has to know the label and the context
   within which that label is unique. The context is provided by the
   P-IP header.

   The P-IP header and the GRE header are stripped. The lookup of the
   resulting VPN MPLS label determines the VRF or the VSI in which the
   receiver PE needs to do the C-multicast data packet lookup. The PE
   then strips the inner MPLS label and sends the packet to the VRF/VSI
   for multicast data forwarding.
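
   The egress lookup chain described above can be summarized in the
   following illustrative sketch (names are hypothetical, not an
   implementation):

      def egress_forward(p_source, p_group, vpn_label, c_packet,
                         mdt_label_spaces, vrfs):
          # 1. The P-IP header (root address, P-group address) selects
          #    the label space specific to that MDT.
          space = mdt_label_spaces[(p_source, p_group)]
          # 2. The upstream-assigned VPN label, unique only within the
          #    context of the root, selects the VRF or VSI.
          vrf = vrfs[space[vpn_label]]
          # 3. With the P-IP/GRE headers and inner label stripped, do
          #    the C-multicast lookup in that VRF/VSI.
          vrf.multicast_forward(c_packet)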


13. Security Considerations

   Security considerations discussed in [2547], [MVPN-PIM], [VPLS-BGP]
   and [VPLS-LDP] apply to this document.


14. Acknowledgments

   TBD


15. Normative References

   [PIM-SM] B. Fenner, M. Handley, H. Holbrook, I. Kouvelas, "Protocol
   Independent Multicast - Sparse Mode (PIM-SM)", draft-ietf-pim-sm-
   v2-new-08.txt, October 2003.

   [2547] E. Rosen, Y. Rekhter, et al., "BGP/MPLS VPNs", draft-ietf-
   l3vpn-rfc2547bis-01.txt, September 2003.

   [MPLS-IP] T. Worster, Y. Rekhter, E. Rosen, "Encapsulating MPLS in
   IP or Generic Routing Encapsulation (GRE)", draft-ietf-mpls-in-ip-
   or-gre.

   [MVPN-PIM] R. Aggarwal, A. Lohiya, T. Pusateri, Y. Rekhter, "Base
   Specification for Multicast in MPLS/BGP VPNs", draft-raggarwa-
   l3vpn-2547-mvpn-00.txt.

   [RFC2119] S. Bradner, "Key words for use in RFCs to Indicate
   Requirement Levels", RFC 2119, March 1997.

   [RFC3107] Y. Rekhter, E. Rosen, "Carrying Label Information in
   BGP-4", RFC 3107.

   [VPLS-BGP] K. Kompella, Y. Rekhter, "Virtual Private LAN Service",
   draft-ietf-l2vpn-vpls-bgp-02.txt.

   [VPLS-LDP] M. Lasserre, V. Kompella, "Virtual Private LAN Services
   over MPLS", draft-ietf-l2vpn-vpls-ldp-03.txt.


16. Informative References

   [ROSEN] E. Rosen, Y. Cai, I. Wijnands, "Multicast in MPLS/BGP IP
   VPNs", draft-rosen-vpn-mcast-07.txt


17. Author Information

17.1. Editor Information


   Rahul Aggarwal
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: rahul@juniper.net


17.2. Contributor Information


   Yakov Rekhter
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: yakov@juniper.net

   Anil Lohiya
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: alohiya@juniper.net

   Tom Pusateri
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: pusateri@juniper.net

   Lenny Giuliano
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: lenny@juniper.net

   Chaitanya Kodeboniya
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: ck@juniper.net



18. Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at ietf-
   ipr@ietf.org.


19. Full Copyright Statement

   Copyright (C) The Internet Society (2004). This document is subject
   to the rights, licenses and restrictions contained in BCP 78 and
   except as set forth therein, the authors retain all their rights.

   This document and the information contained herein is provided on an
   "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
   TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
   BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
   HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.


20. Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.