BESS Working Group                                            A. Sajassi
Internet Draft                                                 S. Thoria
Category: Standards Track                                  N. Fazlollahi
                                                                   Cisco
                                                                A. Gupta
                                                            Avi Networks

Expires: January 2, 2018                                    July 2, 2017


     Seamless Multicast Interoperability between EVPN and MVPN PEs
          draft-sajassi-bess-evpn-mvpn-seamless-interop-00.txt

Abstract

   The Ethernet Virtual Private Network (EVPN) solution is becoming
   pervasive for Network Virtualization Overlay (NVO) services in data
   center (DC) networks and as the next-generation VPN service in
   service provider (SP) networks.

   As service providers transform the networks in their central offices
   (COs) toward next-generation data centers with Software Defined
   Networking (SDN) based fabrics and Network Function Virtualization
   (NFV), they want to be able to maintain their offered services,
   including multicast VPN (MVPN) service, between their existing
   networks and their new service provider data center (SPDC) networks
   seamlessly, without the use of gateway devices. They want such
   seamless interoperability between their new SPDCs and their existing
   networks in order to a) reduce cost, b) achieve optimum forwarding,
   and c) reduce provisioning. This document describes a unified
   solution based on RFC 6513 for seamless interoperability of multicast
   VPN between EVPN and MVPN PEs. Furthermore, it describes how the
   proposed solution can be used as a routed multicast solution in data
   centers with EVPN-IRB PEs per [EVPN-IRB].

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."



   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html


Copyright and License Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document. Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.


Table of Contents

   1.  Introduction
   2.  Requirements Language
   3.  Terminology
   4.  Requirements
     4.1. Optimum Forwarding
     4.2. Optimum Replication
     4.3. All-Active and Single-Active Multi-Homing
     4.4. Inter-AS Tree Stitching
     4.5. EVPN Service Interfaces
     4.6. Distributed Anycast Gateway
     4.7. Selective & Aggregate Selective Tunnels
     4.8. Tenants' (S,G) or (*,G) states
   5.  Solution
     5.1.  Operational Model for Homogeneous EVPN IRB NVEs
       5.1.1  Control Plane Operation
       5.1.2  Data Plane Operation
         5.1.2.1 Sender and Receiver in same MAC-VRF
         5.1.2.2 Sender and Receiver in different MAC-VRF
     5.2.  Operational Model for Heterogeneous EVPN IRB PEs
     5.3.  All-Active Multi-Homing
       5.3.1.  Source and receivers in same ES but on different subnets
       5.3.2.  Source and some receivers in same ES and on same subnet
     5.4.  Mobility for Tenant's sources and receivers
     5.5.  Single-Active Multi-Homing
   6.  DCs with only EVPN NVEs
     6.1 Setup of overlay multicast delivery
     6.2 Data plane considerations
   7.  Handling of different encapsulations
     7.1  MPLS Encapsulation
     7.2  VxLAN Encapsulation
     7.3  Other Encapsulation
   8.  DCI with MPLS in WAN and VxLAN in DCs
     8.1 Control plane inter-connect
     8.2 Data plane inter-connect
     8.3 Multi-homing among DCI gateways
   9.  Inter-AS Operation
   10.  Use Cases
     10.1  DCs with only IGMP/MLD hosts w/o tenant router
     10.2  DCs with a mix of IGMP/MLD hosts & multicast routers
           running PIM-SSM
     10.3  DCs with a mix of IGMP/MLD hosts & multicast routers
           running PIM-ASM
     10.4  DCs with a mix of IGMP/MLD hosts & multicast routers
           running PIM-Bidir
   11.  IANA Considerations
   12.  Security Considerations
   13.  Acknowledgements
   14.  References
     14.1.  Normative References
     14.2.  Informative References
   15.  Authors' Addresses

1.  Introduction

   The Ethernet Virtual Private Network (EVPN) solution is becoming
   pervasive for Network Virtualization Overlay (NVO) services in data
   center (DC) networks and as the next-generation VPN service in
   service provider (SP) networks.

   As service providers transform the networks in their central offices
   (COs) toward next-generation data centers with Software Defined
   Networking (SDN) based fabrics and Network Function Virtualization
   (NFV), they want to be able to maintain their offered services,
   including multicast VPN (MVPN) service, between their existing
   networks and their new service provider data center (SPDC) networks
   seamlessly, without the use of gateway devices. There are several
   reasons for having such seamless interoperability between their new
   DCs and their existing networks:

   - Lower Cost: gateway devices need very high scalability to handle
   the VPN services of their DCs, and as such need to handle a large
   number of VPN instances (in the tens or hundreds of thousands) and a
   very large number of routes (e.g., in the millions). For the same
   speeds and feeds, such high-scale gateway boxes are much more
   expensive than the TOR devices, which support a much smaller number
   of routes and VPN instances.

   - Optimum Forwarding: in a given CO, both EVPN PEs and MVPN PEs can
   be connected to the same network (e.g., the same IGP domain). In such
   scenarios, service providers want optimum forwarding among these PE
   devices without the use of gateway devices, because if gateway
   devices are used, then the multicast traffic between EVPN and MVPN
   PEs can no longer be forwarded optimally and, in some cases, may even
   get tromboned. Furthermore, when an SPDC network spans multiple LATAs
   (multiple geographic areas) and gateways are used between EVPN and
   MVPN PEs, then with respect to multicast traffic, only one GW can be
   the designated forwarder (DF) between the EVPN and MVPN PEs. Such
   scenarios not only result in non-optimum forwarding but can also
   result in tromboning of multicast traffic between the two LATAs when
   both the source and destination PEs are in the same LATA and the DF
   gateway is elected in a different LATA.

   - Less Provisioning: if gateways are used, then the operator needs to
   configure per-tenant information on them. In other words, for each
   tenant that is configured, one (or maybe two) additional touch points
   are needed.


   This document describes a unified solution based on [RFC6513] and
   [RFC6514] for seamless interoperability of multicast VPN between EVPN
   and MVPN PEs. Furthermore, it describes how the proposed solution can
   be used as a routed multicast solution for EVPN-only applications in
   data centers (e.g., routed multicast VPN only among EVPN PEs).


2.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" are to
   be interpreted as described in [RFC2119] only when they appear in all
   upper case.  They may also appear in lower or mixed case as English
   words, without any normative meaning.


3.  Terminology

   ARP: Address Resolution Protocol
   BEB: Backbone Edge Bridge
   B-MAC: Backbone MAC Address
   CE: Customer Edge
   C-MAC: Customer/Client MAC Address
   ES: Ethernet Segment
   ESI: Ethernet Segment Identifier
   IRB: Integrated Routing and Bridging
   LSP: Label Switched Path
   MP2MP: Multipoint to Multipoint
   MP2P: Multipoint to Point
   ND: Neighbor Discovery
   NA: Neighbor Advertisement
   P2MP: Point to Multipoint
   P2P: Point to Point
   PE: Provider Edge
   EVPN: Ethernet VPN
   EVI: EVPN Instance
   RT: Route Target

   Single-Active Redundancy Mode: When only a single PE, among a group
   of PEs attached to an Ethernet segment, is allowed to forward traffic
   to/from that Ethernet Segment, then the Ethernet segment is defined
   to be operating in Single-Active redundancy mode.

   All-Active Redundancy Mode: When all PEs attached to an Ethernet
   segment are allowed to forward traffic to/from that Ethernet Segment,
   then the Ethernet segment is defined to be operating in All-Active
   redundancy mode.


4.  Requirements

   This section describes the requirements specific to providing
   seamless multicast VPN service between MVPN- and EVPN-capable
   networks.


4.1. Optimum Forwarding

   The solution SHALL support optimum multicast forwarding between EVPN
   and MVPN PEs within a network. The network can be confined to a CO or
   it can span across multiple LATAs. The solution SHALL support optimum
   multicast forwarding with both ingress replication tunnels and P2MP
   tunnels.

4.2. Optimum Replication

   For EVPN PEs with IRB capability, the solution SHALL use only a
   single multicast tunnel among EVPN and MVPN PEs for IP multicast
   traffic. Multicast tunnels can be either ingress replication tunnels
   or P2MP tunnels. The solution MUST support optimum replication for
   both Intra-subnet and Inter-subnet IP multicast traffic:

   - Non-IP traffic SHALL be forwarded per the EVPN baseline [RFC7432]
   or [OVERLAY].

   - If a multicast VPN spans both intra- and inter-subnet traffic, then
   for ingress replication, regardless of whether the traffic is intra-
   or inter-subnet, only a single copy of the multicast traffic SHALL be
   sent from the source PE to each destination PE.

   - If a multicast VPN spans both intra- and inter-subnet traffic, then
   for P2MP tunnels, regardless of whether the traffic is intra- or
   inter-subnet, only a single copy of the multicast data SHALL be
   transmitted by the source PE. The source PE can be either an EVPN or
   an MVPN PE, and the receiving PEs can be a mix of EVPN and MVPN PEs -
   i.e., a multicast VPN can be spread across both EVPN and MVPN PEs.

4.3. All-Active and Single-Active Multi-Homing

   The solution MUST support multi-homing of source devices and
   receivers that are sitting in the same subnet (e.g., VLAN) and are
   multi-homed to EVPN PEs. The solution SHALL allow for both Single-
   Active and All-Active multi-homing. The solution MUST prevent loops
   during steady and transient states, just like the EVPN baseline
   solutions [RFC7432] and [OVERLAY], for all multi-homing types.

4.4. Inter-AS Tree Stitching

   The solution SHALL support multicast tree stitching when the tree
   spans across multiple Autonomous Systems.



4.5. EVPN Service Interfaces

   The solution MUST support all EVPN service interfaces listed in
   section 6 of [RFC7432]:

   - VLAN-based service interface
   - VLAN-bundle service interface
   - VLAN-aware bundle service interface

4.6. Distributed Anycast Gateway

   The solution SHALL support distributed anycast gateways for tenant
   workloads on NVE devices operating in EVPN-IRB mode.


4.7. Selective & Aggregate Selective Tunnels

   The solution SHALL support selective and aggregate selective P-
   tunnels as well as inclusive and aggregate inclusive P-tunnels. When
   selective tunnels are used, then multicast traffic SHOULD only be
   forwarded to the remote PEs that have receivers - i.e., if there are
   no receivers at a remote PE, the multicast traffic SHOULD NOT be
   forwarded to that PE, and if there are no receivers on any remote
   PE, then the multicast traffic SHOULD NOT be forwarded into the
   core.

4.8. Tenants' (S,G) or (*,G) states

   The solution SHOULD store (C-S,C-G) and (C-*,C-G) states only on PE
   devices that have interest in such states - i.e., PE devices that
   have sources and/or receivers interested in such multicast groups -
   hence reducing memory and processing requirements.


5.  Solution

   [EVPN-IRB] describes the operation of EVPN PEs in IRB mode for
   unicast traffic. The same EVPN PE model, where an IP-VRF is attached
   to one or more MAC-VRFs via virtual IRB interfaces, is also
   applicable here. However, there are some noticeable differences
   between the IRB mode of operation for unicast traffic described in
   [EVPN-IRB] and the one for multicast traffic described here. For
   unicast traffic, intra-subnet traffic is bridged within the MAC-VRF
   associated with that subnet (i.e., a lookup based on the MAC-DA is
   performed); whereas inter-subnet traffic is routed in the
   corresponding IP-VRF (i.e., a lookup based on the IP-DA is
   performed). A given tenant can have one or more IP-VRFs; however,
   without loss of generality, this document assumes one IP-VRF per
   tenant. For multicast traffic, intra-subnet traffic is bridged for
   non-IP traffic and Layer-2 switched for IP traffic. The difference
   between bridging and L2-switching of multicast traffic is that the
   former uses a MAC-DA lookup to forward the traffic, whereas the
   latter uses an IP-DA lookup to forward it, with the forwarding states
   built using IGMP/MLD snooping. Inter-subnet multicast traffic is
   always routed in the corresponding IP-VRF.

   This section describes a multicast VPN solution based on [MVPN] for
   EVPN PEs operating in IRB mode that want to perform seamless
   interoperability with their MVPN PE counterparts.

5.1.  Operational Model for Homogeneous EVPN IRB NVEs

   In this section, we consider the scenario where all EVPN PEs have IRB
   capability and operate in IRB mode for both unicast and multicast
   traffic (i.e., all EVPN PEs are homogeneous in terms of their
   capabilities and operational modes). In this scenario, the EVPN PEs
   terminate IGMP/MLD messages from tenant host devices or PIM messages
   from tenant routers on their IRB interfaces, thus avoiding sending
   these messages over the MPLS/IP core. A tenant virtual/physical
   router (e.g., CE) attached to an EVPN PE becomes a multicast routing
   adjacency of that PE, and the multicast routing protocol on the PE-CE
   link is presumed to be PIM-SM with both the ASM and the SSM service
   models per [RFC6513]. Furthermore, the PE uses the MVPN BGP protocol
   and procedures per [RFC6513] and [RFC6514]. With respect to the
   tenant PIM protocol, PIM-SM with the Any Source Multicast (ASM) mode,
   PIM-SM with the Source Specific Multicast (SSM) mode, and PIM
   Bidirectional (BIDIR) mode are all supported per [RFC6513]. Support
   of PIM-DM (Dense Mode) is excluded in this document per [RFC6513].

   The EVPN PEs use MVPN BGP routes [RFC6514] to convey tenant (S,G) or
   (*,G) states to other MVPN or EVPN PEs and to set up overlay trees
   (inclusive or selective) for a given MVPN. The leaves and roots of
   these overlay trees are composed of Provider Multicast Service
   Interfaces (PMSIs), which can be Inclusive-PMSIs (I-PMSIs) or
   Selective-PMSIs (S-PMSIs) per [RFC6513]. A given PMSI is associated
   with a single IP-VRF of an EVPN PE and/or an MVPN PE for that MVPN -
   e.g., an MVPN PMSI is never associated with a MAC-VRF of an EVPN PE.
   Overlay trees are instantiated by underlay provider tunnels
   (P-tunnels) - e.g., P2MP, MP2MP, or unicast tunnels per [RFC6513].
   When there is a many-to-one mapping of PMSIs to a P-tunnel (e.g.,
   mapping many S-PMSIs or many I-PMSIs to a single P-tunnel), the
   tunnel is referred to as an aggregate tunnel.

   Figure-1 below depicts a scenario where a tenant's MVPN spans across
   both EVPN and MVPN PEs, where all EVPN PEs have IRB capability. An
   EVPN PE (with IRB capability) can be modeled as an MVPN PE where the
   virtual IRB interface of the EVPN PE (the virtual interface between a
   MAC-VRF and the IP-VRF) can be considered as an attachment circuit
   (AC) of the MVPN PE. In other words, an EVPN PE can be modeled as a
   PE that consists of an MVPN PE whose ACs are replaced with IRB
   interfaces connecting each IP-VRF of the MVPN PE to a set of
   MAC-VRFs. Similar to an MVPN PE, where an attachment circuit serves
   as a routed multicast interface for the IP-VRF associated with an
   MVPN instance, an IRB interface serves as a routed multicast
   interface for the IP-VRF associated with the MVPN instance. Since
   EVPN PEs run the MVPN protocols (e.g., [RFC6513] and [RFC6514]), for
   all practical purposes they look just like MVPN PEs to other PE
   devices. Such modeling of EVPN PEs transforms the multicast VPN
   operation of EVPN PEs to that of [MVPN] and thus simplifies the
   interoperability between EVPN and MVPN PEs to running a single
   unified solution based on [MVPN].





                      EVPN PE1
                   +------------+
         Src1 +----|(MAC-VRF1)  |                   MVPN PE1
        Rcvr1 +----|      \     |    +---------+   +--------+
                   |    (IP-VRF)|----|         |---|(IP-VRF)|--- Rcvr5
                   |      /     |    |         |   +--------+
         Rcvr2 +---|(MAC-VRF2)  |    |         |
                   +------------+    |         |
                                     |  MPLS/  |
                      EVPN PE2       |  IP     |
                   +------------+    |         |
         Rcvr3 +---|(MAC-VRF1)  |    |         |    MVPN PE2
                   |       \    |    |         |   +--------+
                   |    (IP-VRF)|----|         |---|(IP-VRF)|--- Rcvr6
                   |       /    |    +---------+   +--------+
         Rcvr4 +---|(MAC-VRF3)  |
                   +------------+

                         Figure-1: Homogeneous EVPN NVEs


   Although modeling an EVPN PE as an MVPN PE conceptually simplifies
   the operation to that of a solution based on [MVPN], the following
   operational aspects of EVPN are impacted and need to be factored
   into the solution:

        1) All-Active multi-homing of IP multicast sources and receivers
        2) Mobility for Tenant's sources and receivers
        3) Unicast route advertisements for IP multicast source
        4) non-IP multicast traffic handling



   The first bullet, All-Active multi-homing of IP multicast sources
   and receivers, is described in section 5.3. The second bullet is
   described in section 5.4. The third and fourth bullets are described
   next.

   When an IP multicast source is attached to an EVPN PE, the unicast
   route for that IP multicast source needs to be advertised. This
   unicast route is advertised with the VRF Route Import extended
   community, which in turn is used as the Route Target for the Join
   (S,G) messages sent toward the source PE by the remote MVPN PEs. The
   EVPN PE advertises this unicast route using an EVPN route type 5, an
   IP-VPN unicast route, or both, along with the VRF Route Import
   extended community. When unicast routes are advertised by MVPN PEs,
   they are advertised as IP-VPN unicast routes along with the VRF
   Route Import extended community per [RFC6514].

   Link-local multicast traffic (e.g., addressed to 224.0.0.x in the
   case of IPv4), traffic of IP protocols such as OSPF, and non-IP
   multicast/broadcast traffic are sent per the EVPN [RFC7432] BUM
   procedures and do not get routed via the IP-VRF. Such BUM traffic is
   therefore limited to a given EVI/VLAN (i.e., a given subnet);
   whereas IP multicast traffic is locally switched for local
   interfaces attached to the same subnet and is routed for local
   interfaces attached to a different subnet or for forwarding traffic
   to other EVPN PEs (refer to section 5.1.2 for the data plane
   operation).
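
   The classification just described can be summarized with a short
   Python sketch (illustrative only; the predicate names and return
   values are hypothetical and not part of any protocol encoding):

      import ipaddress

      LINK_LOCAL_V4 = ipaddress.ip_network("224.0.0.0/24")

      def forwarding_path(is_ip, dst_ip=None):
          """Pick the forwarding path for a multicast/broadcast frame
          received by an EVPN-IRB PE (sketch of section 5.1, IPv4)."""
          if not is_ip:
              return "BUM"      # non-IP: EVPN [RFC7432] BUM procedures
          if ipaddress.ip_address(dst_ip) in LINK_LOCAL_V4:
              return "BUM"      # link-local: confined to the EVI/VLAN
          return "IP-VRF"       # user IP multicast: switched/routed via IRB

      assert forwarding_path(False) == "BUM"
      assert forwarding_path(True, "224.0.0.5") == "BUM"    # e.g. OSPF
      assert forwarding_path(True, "232.1.1.1") == "IP-VRF"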


5.1.1  Control Plane Operation

   Just like an MVPN PE, an EVPN PE runs a separate tenant multicast
   routing instance (VPN-specific) per MVPN instance, and the following
   tenant multicast routing instances are supported:

        - PIM Sparse Mode (PIM-SM) with the ASM service model
        - PIM Sparse Mode with the SSM service model
        - PIM Bidirectional Mode (BIDIR-PIM), which uses bidirectional
          tenant-trees to support the ASM service model

   A given tenant's PIM join messages, (C-*, C-G) or (C-S, C-G), are
   processed by the corresponding tenant multicast routing protocol and
   advertised over the MPLS/IP network using the Shared Tree Join route
   (route type 6) and the Source Tree Join route (route type 7),
   respectively, of the MCAST-VPN NLRI per [RFC6514].
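
   A minimal sketch of this mapping, with hypothetical helper names:

      def join_route_type(c_source):
          """Map a tenant join to its MCAST-VPN route type per
          [RFC6514]: (C-*,C-G) -> Shared Tree Join (type 6),
          (C-S,C-G) -> Source Tree Join (type 7)."""
          return 6 if c_source is None else 7

      assert join_route_type(None) == 6          # (C-*, C-G)
      assert join_route_type("10.0.1.5") == 7    # (C-S, C-G)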

   The following NLRIs from [RFC6514] SHOULD be used for forming
   underlay/core tunnels inside a data center (an illustrative sketch
   of these routes follows the list below):




      The Intra-AS I-PMSI A-D route is used to form the default tunnel
      (also called the inclusive tunnel) for a tenant VRF.  The tunnel
      attributes are indicated using the PMSI attribute carried with
      this route.

      The S-PMSI A-D route is used to form customer-flow-specific
      underlay tunnels.  This enables selective delivery of data to the
      PEs having active receivers and optimizes fabric bandwidth
      utilization.  The tunnel attributes are indicated using the PMSI
      attribute carried with this route.

      The Source Active A-D route is used by a source-connected PE to
      announce an active multicast source.  This enables PEs having
      active receivers for the flow to join the tunnels and switch to
      the Shortest Path Tree.
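
   The sketch below illustrates these three routes as plain data
   records; the field names are illustrative only and do not mirror
   the on-the-wire encoding of [RFC6514]:

      from dataclasses import dataclass
      from typing import Optional, Tuple

      @dataclass
      class PMSITunnelAttr:
          tunnel_type: str      # e.g. "P2MP mLDP", "Ingress Replication"
          label: int            # MPLS label (or VNI, see section 7)
          tunnel_id: str

      @dataclass
      class MVPNRoute:
          route_type: int       # 1: Intra-AS I-PMSI A-D, 3: S-PMSI A-D,
                                # 5: Source Active A-D
          rd: str
          flow: Optional[Tuple[str, str]]   # (C-S, C-G) when selective
          pmsi: Optional[PMSITunnelAttr]

      # Default (inclusive) tunnel for a tenant VRF:
      ipmsi = MVPNRoute(1, "65000:1", None,
                        PMSITunnelAttr("P2MP mLDP", 30001, "lsp-17"))
      # Selective tunnel for one customer flow:
      spmsi = MVPNRoute(3, "65000:1", ("10.0.1.5", "232.1.1.1"),
                        PMSITunnelAttr("Ingress Replication", 30002,
                                       "pe1-lo0"))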

   Each EVPN PE supporting a specific MVPN discovers the set of other
   PEs in its AS that are attached to sites of that MVPN using the
   Intra-AS I-PMSI A-D route (route type 1) per [RFC6514]. It can also
   discover the set of other ASes that have PEs attached to sites of
   that MVPN using the Inter-AS I-PMSI A-D route (route type 2) per
   [RFC6514]. After the discovery of the PEs that are attached to sites
   of the MVPN, an inclusive overlay tree (I-PMSI) can be set up for
   carrying the tenant multicast flows of that MVPN; however, this is
   not a requirement per [RFC6514], and it is possible to adopt a
   policy in which all tenant flows are carried on S-PMSIs.

   An EVPN PE also sets up a multipoint-to-multipoint (MP2MP) tree per
   EVI using the Inclusive Multicast Ethernet Tag route (route type 3)
   of the EVPN NLRI per [RFC7432]. This MP2MP tree can be instantiated
   using unicast tunnels or P2MP tunnels. In [RFC7432], this tree is
   used for the transmission of all BUM traffic, including IP multicast
   traffic. However, for multicast traffic handling in EVPN-IRB PEs,
   this tree is used for all broadcast, unknown-unicast, and non-IP
   multicast traffic - i.e., it is used for all BUM traffic except IP
   multicast user traffic. Therefore, an EVPN-IRB PE sends a customer
   IP multicast flow only on the single tunnel that is instantiated for
   the MVPN I-PMSI or S-PMSI. In other words, IP multicast traffic sent
   over the MPLS/IP network is sent in the context of the IP-VRF rather
   than the MAC-VRF.

   If a tenant host device is multi-homed to two or more EVPN PEs using
   All-Active multi-homing, then IGMP join and leave messages are
   synchronized between these EVPN PEs using the EVPN IGMP Join Synch
   route (route type 7) and the EVPN IGMP Leave Synch route (route type
   8). There is no need to use the EVPN Selective Multicast Ethernet
   Tag (SMET) route, because the IGMP messages are terminated by the
   EVPN-IRB PE and tenant (S,G) or (*,G) joins are conveyed via MVPN
   Source/Shared Tree Join routes.
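
   The synchronization logic can be sketched as follows ('advertise'
   is a hypothetical BGP hook used only for illustration):

      def on_local_igmp_join(es_id, c_source, c_group, es_peers,
                             advertise):
          """All-Active multi-homing: sync a locally received IGMP join
          to the other PEs attached to the same ES via the EVPN IGMP
          Join Synch route (route type 7). No SMET route is sent;
          core-wide (S,G)/(*,G) interest is signalled with MVPN
          Source/Shared Tree Join routes instead."""
          for peer in es_peers:
              advertise(peer, route="evpn-igmp-join-synch", es=es_id,
                        source=c_source, group=c_group)

      on_local_igmp_join("ES1", None, "239.1.1.1", ["PE2"],
                         lambda peer, **r: print(peer, r))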




5.1.2  Data Plane Operation

   When an EVPN-IRB PE receives an IGMP/MLD join message over one of its
   Attachment Circuits (ACs), it adds that AC to its Layer-2 (L2) OIF
   list. This L2 OIF list is associated with the MAC-VRF corresponding
   to the subnet of the tenant device that sent the IGMP/MLD join.
   Therefore, tenant (S,G) or (*,G) forwarding entries are
   created/updated for the corresponding MAC-VRF based on these source
   and group IP addresses. Furthermore, the IGMP/MLD join message is
   propagated over the corresponding IRB interface, and it is processed
   by the tenant multicast routing instance, which creates the
   corresponding tenant (S,G) or (*,G) Layer-3 (L3) forwarding entries
   and adds this IRB interface to their L3 OIF list. An IRB interface
   is removed as an L3 OIF when all L2 tenant (S,G) or (*,G) forwarding
   state is removed for the MAC-VRF associated with that IRB.
   Furthermore, a tenant (S,G) or (*,G) L3 forwarding state is removed
   when all of its L3 OIFs are removed - i.e., when all the IRB
   interfaces associated with that tenant (S,G) or (*,G) are removed.
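
   The life cycle of these L2/L3 OIF lists can be sketched as follows
   (a simplified model of one (S,G)/(*,G) entry; class and method
   names are illustrative):

      class TenantMcastState:
          """Per-(C-S,C-G) or (C-*,C-G) state on an EVPN-IRB PE. L2
          OIFs (ACs) are grouped by MAC-VRF; every MAC-VRF holding at
          least one L2 OIF contributes its IRB as an L3 OIF."""

          def __init__(self):
              self.l2_oifs = {}                    # MAC-VRF -> set of ACs

          def igmp_join(self, mac_vrf, ac):
              self.l2_oifs.setdefault(mac_vrf, set()).add(ac)

          def igmp_leave(self, mac_vrf, ac):
              acs = self.l2_oifs.get(mac_vrf, set())
              acs.discard(ac)
              if not acs:                          # last L2 OIF removed:
                  self.l2_oifs.pop(mac_vrf, None)  # IRB leaves L3 OIFs

          def l3_oifs(self):
              return {"irb." + vrf for vrf in self.l2_oifs}

          def alive(self):
              return bool(self.l3_oifs())  # state gone when no L3 OIFs

      s = TenantMcastState()
      s.igmp_join("MAC-VRF1", "ac1")
      s.igmp_leave("MAC-VRF1", "ac1")
      assert not s.alive()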

   When an EVPN-IRB PE receives IP multicast traffic, if it has any
   attached receivers on the source subnet, it L2-switches such
   intra-subnet traffic. It also sends the multicast traffic over the
   corresponding IRB interface, where it gets routed over the IRB
   interfaces that are in the L3 OIF list for that multicast flow (and
   the TTL gets decremented). When the multicast traffic is received
   over an IRB interface by the MAC-VRF corresponding to that
   interface, it gets L2-switched and sent over the ACs that belong to
   the L2 OIF list. Furthermore, the multicast traffic is sent over the
   I-PMSI or S-PMSI associated with that multicast flow to the other PE
   devices participating in that MVPN.
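
   The per-packet walk can be sketched as below, reusing an L2 OIF map
   like the one in the previous sketch (function and action names are
   illustrative):

      def forward_ip_multicast(l2_oifs, in_mac_vrf, ttl, remote_pes):
          """Deliver one IP multicast packet received on a local AC of
          'in_mac_vrf'; 'l2_oifs' maps MAC-VRF -> receiver ACs."""
          actions = []
          for ac in l2_oifs.get(in_mac_vrf, ()):      # intra-subnet copy,
              actions.append(("l2-switch", ac, ttl))  # TTL unchanged
          for vrf, acs in l2_oifs.items():            # inter-subnet copies
              if vrf != in_mac_vrf:                   # routed over IRB,
                  for ac in acs:                      # TTL decremented
                      actions.append(("route", ac, ttl - 1))
          if remote_pes:                              # one copy to the core
              actions.append(("pmsi-tunnel", tuple(remote_pes), ttl - 1))
          return actions

      print(forward_ip_multicast({"MAC-VRF1": {"ac1"},
                                  "MAC-VRF2": {"ac2"}},
                                 "MAC-VRF1", 64, ["PE2", "MVPN-PE1"]))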

5.1.2.1 Sender and Receiver in same MAC-VRF

   Rcvr1 in Figure 1 is connected to PE1 in MAC-VRF1 (the same MAC-VRF
   as Src1) and sends an IGMP join for (C-S, C-G); IGMP snooping records
   this state in a local bridging entry.  A routing entry is formed as
   well, which points to MAC-VRF1 as the RPF interface for Src1.  We
   assume that Src1 is known via ARP or similar procedures.  Rcvr1 gets
   a locally bridged copy of the multicast traffic from Src1.  Rcvr3 is
   also connected in MAC-VRF1 but to PE2, and hence sends an IGMP join
   which is recorded at PE2.  PE2 also forms a routing entry, with the
   tenant tunnel "Tenant1" (formed beforehand using MVPN procedures)
   assumed as the RPF interface.  This also causes the multicast control
   plane to initiate a BGP MCAST-VPN type 7 route, which includes the
   VRI for PE1 and hence is accepted on PE1.  PE1 includes the Tenant1
   tunnel as an Outgoing Interface (OIF) in the routing entry.  Now,
   since it has knowledge of remote receivers via the MVPN control
   plane, it encapsulates the original multicast traffic in the Tenant1
   tunnel towards the core.  On PE2, since C-S falls in the MAC-VRF1
   subnet, the MAC-VRF1 outgoing interface is treated as ingress
   MAC-VRF bridging; hence no rewrite is performed on the received
   customer data packet while forwarding towards Rcvr3.

5.1.2.2 Sender and Receiver in different MAC-VRF

   Rcvr2 in Figure 1 is connected to PE1 in MAC-VRF2, and hence PE1
   records its membership in MAC-VRF2.  Since MAC-VRF2 is enabled with
   IRB, its IRB interface gets added as another OIF to the routing entry
   formed for (C-S, C-G).  Rcvr4 is also in a different MAC-VRF than the
   multicast source Src1 and hence needs inter-subnet forwarding.  PE2
   forms a local bridging entry in MAC-VRF3 due to the IGMP join
   received from Rcvr4 and adds another OIF 'MAC-VRF3' to its existing
   routing entry.  There is no change in control plane state, since PE2
   has already sent the MVPN route and no further signaling is required.
   Also, since Src1 is not part of the MAC-VRF3 subnet, MAC-VRF3 is
   treated as a routing OIF, and hence the MAC header gets modified as
   per normal routing procedures.  An MVPN PE (e.g., MVPN PE1 in Figure
   1) forms a routing entry very similar to PE2's.  Note that such a PE
   does not have MAC-VRF1 configured locally but can still receive the
   multicast data traffic over the Tenant1 tunnel formed by the MVPN
   procedures.

5.2.  Operational Model for Heterogeneous EVPN IRB PEs



5.3.  All-Active Multi-Homing

   The EVPN solution [RFC7432] uses the ESI MPLS label for
   split-horizon filtering of Broadcast/Unknown-unicast/Multicast (BUM)
   traffic from an All-Active multi-homed Ethernet Segment, to ensure
   that BUM traffic does not get looped back to the Ethernet Segment it
   came from. In MVPN, there is no concept of an ESI label or
   split-horizon filtering, because there is no support for All-Active
   multi-homing; however, EVPN NVEs rely on this function to prevent
   loops on an access Ethernet Segment. Figure-2 depicts a source
   sitting behind an All-Active dual-homed Ethernet Segment. The
   following scenarios need special consideration:

                      EVPN PE1
                   +------------+
        Rcvr1 +----|(MAC-VRF1)  |                    MVPN PE1
                   |      \     |    +---------+   +--------+
                   |    (IP-VRF)|----|         |---|(IP-VRF)|--- Rcvr4
                   |      /     |    |         |   +--------+
               +---|(MAC-VRF2)  |    |         |
          Src1 |   +------------+    |         |
         (ES1) |                     |  MPLS/  |
         Rcvr6 |      EVPN PE2       |  IP     |
         (*,G) |   +------------+    |         |
               +---|(MAC-VRF2)  |    |         |     MVPN PE2
                   |       \    |    |         |   +--------+
                   |    (IP-VRF)|----|         |---|(IP-VRF)|--- Rcvr5
                   |       /    |    +---------+   +--------+
        Rcvr2 +----|(MAC-VRF3)  |
                   +------------+


                         Figure-2: Multi-homing


5.3.1.  Source and receivers in same ES but on different subnets

   If the tenant multicast source sits on a different subnet than its
   receivers, then the EVPN DF election procedure for the multi-homed
   ES is sufficient, and there is no need for split-horizon filtering
   on that Ethernet Segment: with IGMP/MLD snooping enabled on the
   VLANs of the multi-homed ES, only the VLANs for which IGMP/MLD joins
   have been received are placed in the OIF list for that (S,G) or
   (*,G) on that ES. Therefore, the multicast traffic is not looped
   back onto the source subnet (because there is no receiver on that
   subnet), and for the other subnets onto which the multicast traffic
   is looped back, the DF election ensures that only a single copy of
   the multicast traffic is sent on each subnet.


5.3.2.  Source and some receivers in same ES and on same subnet

   If the tenant multicast source sits on the same subnet and the same
   ES as some of its receivers, and those receivers have interest in
   (*,G), then besides the DF election mechanism, split-horizon
   filtering is needed to ensure that multicast traffic originated from
   that <ES, EVI, BD> is not looped back to itself. The existing
   split-horizon filtering specified in [RFC7432] cannot be used,
   because the received VPN label identifies the multicast IP-VRF and
   not the MAC-VRF. Therefore, the egress PE does not know for which
   EVI/BD it needs to perform split-horizon filtering and for which
   EVI/BDs belonging to the same ES it need not. This issue is resolved
   by extending the local-bias solution per [OVERLAY] to MPLS tunnels.
   There are two cases to consider here: a) ingress-replication tunnels
   used for the multicast traffic, and b) P2MP tunnels used for the
   multicast traffic.

   If ingress-replication tunnels are used, then each PE in the
   multi-homing group, instead of advertising an ESI label, advertises
   to each other PE in the multi-homing group a downstream-assigned
   label identifying itself, so that when a PE receives a packet with
   this label, it knows which PE originated the packet. Once the egress
   PE can identify the originating PE for a packet, it can execute the
   local-bias procedure per [OVERLAY] for each of its EVI/BDs
   corresponding to that IP-VRF.

   If P2MP tunnels are used (e.g., mLDP, RSVP-TE, or BIER), the tunnel
   label identifies the tunnel and thus the originating PE. Since the
   originating PE can be identified, the local-bias procedure per
   [OVERLAY] is applied to prevent multicast data from being sent on
   the Ethernet Segments in common with the originating PE. The
   difference between the local-bias procedure here and the one
   described in [OVERLAY] is that the multicast traffic in [OVERLAY] is
   only intended for one subnet (and thus one BD), whereas the
   multicast traffic in Figure-2 can span multiple subnets (and thus
   multiple BDs). Therefore, the local-bias procedure of [OVERLAY] is
   expanded to perform local bias across all the BDs of that tenant. In
   other words, the same local-bias procedure is applied to all BDs of
   that tenant in both the originating EVPN NVE and all other EVPN NVEs
   that share an Ethernet Segment with the originating EVPN NVE.
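
   A sketch of this tenant-wide local-bias check follows; the data
   structures are hypothetical and stand in for the PE's knowledge of
   its Ethernet Segments:

      def local_bias_oifs(originating_pe, tenant_bds, shared_es_by_pe):
          """Egress filtering with local bias across all BDs of the
          tenant. 'tenant_bds' maps BD -> list of (ES, AC) attachments;
          'shared_es_by_pe[pe]' is the set of ESes shared with 'pe'.
          The originating PE is identified by the downstream-assigned
          label (ingress replication) or the P2MP tunnel label."""
          blocked = shared_es_by_pe.get(originating_pe, set())
          return [(bd, ac)
                  for bd, attachments in tenant_bds.items()
                  for es, ac in attachments
                  if es not in blocked]  # originator already delivered

      bds = {"BD1": [("ES1", "ac1")],
             "BD2": [("ES1", "ac2"), (None, "ac3")]}
      print(local_bias_oifs("PE1", bds, {"PE1": {"ES1"}}))
      # -> [('BD2', 'ac3')]: ES1 is filtered in every BD of the tenant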


5.4.  Mobility for Tenant's sources and receivers


5.5.  Single-Active Multi-Homing


6.  DCs with only EVPN NVEs

   As mentioned earlier, the proposed solution can be used as a routed
   multicast solution for EVPN-only applications in data centers (e.g.,
   routed multicast VPN only among EVPN PEs). It should be noted that
   the scope of intra-subnet forwarding for the solution described in
   this document is limited to a single EVPN-IRB PE. In other words, IP
   multicast traffic that needs to be forwarded from one PE to another
   is always routed (L3 forwarded), regardless of whether the traffic
   is intra-subnet or inter-subnet. As a result, the TTL value of
   intra-subnet traffic that spans two or more PEs gets decremented.
   Based on past experience with MVPN over the last dozen years for the
   supported IP multicast applications, layer-3 forwarding of
   intra-subnet multicast traffic should be fine. However, if there are
   applications that require intra-subnet multicast traffic to be L2
   forwarded (e.g., without decrementing the TTL value), then
   [EVPN-IRB-MCAST] proposes a solution to accommodate such
   applications.


6.1 Setup of overlay multicast delivery

   It must be emphasized that this solution poses no restriction on the
   setup of the tenant BDs: neither the source PE nor the receiver PEs
   need to know/learn about the BD configuration on other PEs in the
   MVPN. The Reverse Path Forwarding (RPF) interface is selected per
   the tenant multicast source and the IP-VRF, in compliance with the
   procedures in [RFC6514], using the incoming IP Prefix route (route
   type 5) of the EVPN NLRI.

   The VRF Route Import (VRI) extended community that is carried with
   the IP-VPN routes in [RFC6514] MUST be carried via the EVPN unicast
   routes instead. The construction and processing of the VRI are
   consistent with [RFC6514]. The VRI MUST uniquely identify the PE
   which is advertising a multicast source and the IP-VRF in which the
   source resides.

   The VRI is constructed as follows (a sketch of the encoding follows
   the list):

      -  The 4-octet Global Administrator field MUST be set to an IP
         address of the PE.  This address SHOULD be common for all the
         IP-VRFs on the PE (e.g., this address may be the PE's loopback
         address).
      -  The 2-octet Local Administrator field associated with a given
         IP-VRF contains a number that uniquely identifies that IP-VRF
         within the PE that contains the IP-VRF.
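
   A minimal sketch of this encoding, assuming the transitive
   IPv4-address-specific type (0x01) with the VRF Route Import
   sub-type (0x0b) used by [RFC6514]:

      import socket
      import struct

      def encode_vri(pe_ip, vrf_id):
          """Build the 8-octet VRF Route Import extended community:
          Global Administrator = PE IP address, Local Administrator =
          IP-VRF identifier."""
          return struct.pack("!BB4sH", 0x01, 0x0b,
                             socket.inet_aton(pe_ip), vrf_id)

      # e.g. PE loopback 192.0.2.1 and IP-VRF number 10:
      print(encode_vri("192.0.2.1", 10).hex())   # 010bc0000201000a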

   Every PE which detects a local receiver via a local IGMP join or a
   local PIM join for a specific source (overlay SSM mode) MUST
   terminate the IGMP/PIM signaling at the IP-VRF and generate a
   (C-S,C-G) join via the BGP MCAST-VPN route type 7 per [RFC6514] if
   and only if the RPF for the source points to the fabric. If the RPF
   points to a local multicast source on the same MAC-VRF or a
   different MAC-VRF on that PE, the MCAST-VPN route MUST NOT be
   advertised, and data traffic will be locally routed/bridged to the
   receiver as detailed in section 6.2.

   The VRI received with the EVPN route type 5 NLRI from the source PE
   will be appended as an export route-target extended community. More
   details about the handling of various types of local receivers are
   in section 10. The PE which has advertised the unicast route with
   the VRI will import the incoming MCAST-VPN NLRI into the IP-VRF with
   the same import route-target extended community, and other PEs
   SHOULD ignore it. Following this procedure, the source PE learns
   about the existence of at least one remote receiver in the tenant
   overlay and programs its data plane accordingly, so that a single
   copy of the multicast data is forwarded into the core using the
   tenant VRF tunnel.

   If the multicast source is unknown (overlay ASM mode), the MCAST-VPN
   route type 6 (C-*,C-G) join SHOULD be targeted towards the
   designated overlay Rendezvous Point (RP) by appending the received
   RP VRI as an export route-target extended community. Every PE which
   detects a local source registers with its RP PE. That is how the RP
   learns about the tenant source(s) and group(s) within the MVPN. Once
   the overlay RP PE receives either the first remote (C-RP,C-G) join
   or a local IGMP or PIM join, it will trigger an MCAST-VPN route type
   7 (C-S,C-G) towards the actual source PE for which it has received a
   PIM register message, in full compliance with regular PIM
   procedures. This causes the source PE to advertise the MCAST-VPN
   Source Active A-D route (MCAST-VPN route type 5) towards all PEs.
   The Source Active A-D route announces the active multicast source to
   all PEs in the overlay so they can potentially switch from the RP
   shared tree to the shortest-path tree. The above procedure is
   optional per [RFC6514], and the user MAY instead enable a mode in
   which the temporary RP shared tree is not involved. In this mode,
   the source PE MUST advertise the MCAST-VPN Source Active A-D route
   (type 5) as soon as it detects data traffic from the local tenant
   multicast source. Hence, the PEs at different sites of the same MVPN
   will directly join the shortest-path tree once they receive the
   MCAST-VPN Source Active A-D route.


6.2 Data plane considerations

   Data-center fabrics are implemented using a variety of core
   technologies, but the predominant ones are IP/VXLAN ingress
   replication, IP/VXLAN PIM, and MPLS LSM.  IP and MPLS have been the
   predominant choices for the MVPN core as well; hence all existing
   procedures for forming tunnels with these technologies are
   applicable in EVPN as well.  Also, as described in an earlier
   section, since each PE acts as the PIM DR in its locally connected
   bridge domains, post-routed traffic MUST NOT be forwarded out of IRB
   interfaces towards the core.


7.  Handling of different encapsulations

   Just as in [RFC6514], the A-D routes are used to form the overlay
   multicast tunnels and to signal the tunnel type using the
   P-Multicast Service Interface Tunnel (PMSI Tunnel) attribute.




7.1  MPLS Encapsulation

   [RFC6514] assumes an MPLS/IP core, and there is no modification to
   its signaling procedures and encodings for PMSI tunnel formation.
   Also, there is no need for a gateway to inter-operate with non-EVPN
   PEs supporting [RFC6514]-based MVPN over IP/MPLS.

7.2  VxLAN Encapsulation

   In order to signal VXLAN, the corresponding BGP encapsulation
   extended community [TUNNEL-ENCAP] SHOULD be appended to the A-D
   routes. The MPLS label in the PMSI Tunnel Attribute MUST be the
   Virtual Network Identifier (VNI) associated with the customer MVPN.
   The supported PMSI tunnel types with VXLAN encapsulation are:
   PIM-SSM Tree, PIM-SM Tree, BIDIR-PIM Tree, and Ingress Replication
   [RFC6514].  Further details are in [OVERLAY].
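
   A sketch of the resulting attribute content follows (plain
   dictionaries, not the wire encoding):

      def vxlan_pmsi_attr(vni, tunnel_type="Ingress Replication"):
          """PMSI Tunnel Attribute for a VXLAN P-tunnel: the A-D route
          also carries the VXLAN BGP encapsulation extended community
          [TUNNEL-ENCAP], and the attribute's MPLS label field carries
          the VNI of the customer MVPN."""
          assert 0 < vni < 2 ** 24            # VNI is a 24-bit value
          return ({"tunnel_type": tunnel_type, "mpls_label": vni},
                  {"ext_community": "encap: VXLAN"})

      attr, encap = vxlan_pmsi_attr(10010)
      print(attr, encap)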

   In this case, a gateway is needed for inter-operation between the
   EVPN-IRB PEs and non-EVPN MVPN PEs. The gateway should re-originate
   the control plane signaling with the relevant tunnel encapsulation
   on either side. In the data plane, the gateway terminates the
   tunnels formed on either side and performs the relevant
   stitching/re-encapsulation on the data packets.

7.3  Other Encapsulation

   In order to signal a different tunneling encapsulation, such as
   NVGRE, VXLAN-GPE, or MPLSoGRE, the corresponding BGP encapsulation
   extended community [TUNNEL-ENCAP] SHOULD be appended to the A-D
   routes. If the Tunnel Type field in the encapsulation extended
   community is set to a type which requires a Virtual Network
   Identifier (VNI), e.g., VXLAN-GPE or NVGRE [TUNNEL-ENCAP], then the
   MPLS label in the PMSI Tunnel Attribute MUST be the VNI associated
   with the customer MVPN. As in the VXLAN case, a gateway is needed
   for inter-operation between the EVPN-IRB PEs and non-EVPN MVPN PEs.

8.  DCI with MPLS in WAN and VxLAN in DCs

   This section describes the inter-operation between an MVPN MPLS WAN
   and MVPN-EVPN in a data center which runs VXLAN. Since the tunnel
   encapsulations of these networks are different, there must be at
   least one gateway in between. Usually, two or more are required for
   redundancy and load-balancing purposes. Some aspects of the
   multi-homing between VXLAN DC networks and the MPLS WAN are in
   common with [INTERCON-EVPN]. Herein, only the differences are
   described.

8.1 Control plane inter-connect




   The gateway(s) MUST be set up with the inclusive set of all the
   IP-VRFs that span the two domains. On each gateway, there will be at
   least two BGP sessions: one towards the DC side and the other
   towards the WAN side. Usually, for redundancy purposes, more
   sessions are set up on each side. The unicast route propagation
   follows the exact same procedures as in [INTERCON-EVPN]. Hence, a
   multicast host located in either domain is advertised with the
   gateway IP address as the next hop to the other domain. As a result,
   PEs view the hosts in the other domain as directly attached to the
   gateway, and all inter-domain multicast signaling is directed
   towards the gateway(s). MVPN routes of types 1-7 received from
   either side of the gateway(s) MUST NOT be reflected back to the same
   side; they are processed locally and re-advertised (if needed) to
   the other side:

        - Intra-AS I-PMSI A-D Route: these are distributed within
          each domain to form the overlay tunnels which terminate at
          gateway(s). They are not passed to the other side of the
          gateway(s).

        - C-Multicast Route: joins are imported into the corresponding
          IP-VRF on each gateway and advertised as a new route to the
          other side with the following modifications (the rest of the
          NLRI fields and path attributes remain untouched; a sketch of
          this re-origination follows this list):
                * Route-Distinguisher is set to that of the IP-VRF
                * Route-Target is set to the exported route-target
                  list on the IP-VRF
                * The PMSI tunnel attribute and BGP encapsulation
                  extended community are modified according to
                  section 7
                * Next-hop is set to the IP address which represents
                  the gateway in either domain

        - Source Active A-D Route: same as joins

        - S-PMSI A-D Route: these are passed to the other side to form
          selective PMSI tunnels per (C-S,C-G) from the gateway to the
          PEs in the other domain, provided that domain contains
          receivers for the given (C-S,C-G). The same modifications
          made to joins are made to the newly originated S-PMSI routes.
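
   The re-origination of joins (and Source Active A-D routes) on a
   gateway can be sketched as follows, with routes modeled as plain
   dictionaries (field names are illustrative):

      def reoriginate(route, ipvrf_rd, ipvrf_export_rts, gw_next_hop,
                      new_pmsi=None, new_encap=None):
          """Re-advertise a C-Multicast or Source Active route to the
          other domain: only the listed fields change; all remaining
          NLRI fields and path attributes are carried over untouched."""
          out = dict(route)
          out["rd"] = ipvrf_rd                  # RD of the gateway IP-VRF
          out["route_targets"] = list(ipvrf_export_rts)
          out["next_hop"] = gw_next_hop         # gateway in target domain
          if new_pmsi is not None:              # per section 7
              out["pmsi"] = new_pmsi
          if new_encap is not None:
              out["encap_ext_community"] = new_encap
          return out

      join = {"type": 7, "rd": "wan-rd", "source": "10.0.1.5",
              "group": "232.1.1.1", "next_hop": "wan-pe"}
      print(reoriginate(join, "dc-rd", ["dc-rt"], "gw-dc-ip"))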


   In addition, the Originating Router's IP address is set to the GW's
   IP address. Multicast signaling from/to hosts on local ACs on the
   gateway(s) is generated and propagated in both domains (if needed)
   per the procedures in section 6 of this document and in [RFC6514],
   with no change. It must be noted that, for a locally attached
   source, the gateway will program in its forwarding plane an OIF per
   every domain from which it receives a remote join, and a different
   encapsulation will be used on the data packets.

   Another point to note is that if there are multiple gateways in an
   ESI which peer with each other, each one will receive two sets of
   the local MCAST-VPN routes from the other gateway: 1) the WAN set,
   and 2) the DC set. Following the same procedure as in
   [INTERCON-EVPN], the WAN set SHALL be given higher priority.

8.2 Data plane inter-connect

   Traffic forwarding procedures on gateways are the same as those
   described for PEs in sections 5 and 6, except that, unlike a
   non-border-leaf PE, the gateway will not only route or bridge the
   incoming traffic from one side to its local receivers, but will also
   send it to the remote receivers in the other domain after
   de-capsulation and appending the right encapsulation. The OIF and
   IIF are programmed in the FIB based on the received joins from
   either side and the RPF calculation towards the source or RP. The
   de-capsulation and encapsulation actions are programmed based on the
   received I-PMSI or S-PMSI A-D routes from either side.

   If there is more than one gateway between two domains, the
   multi-homing procedures described in the following section must be
   followed so that incoming traffic from one side is not looped back
   via the other gateway.

   The multicast traffic from local hosts on each gateway flows to the
   other gateway with the preferred encapsulation (the WAN
   encapsulation is preferred, as described in the previous section).

8.3 Multi-homing among DCI gateways

   Just as in [INTERCON-EVPN], every set of multi-homed gateways
   between the WAN and a given DC is assigned a unique ESI.


9.  Inter-AS Operation

10.  Use Cases


10.1  DCs with only IGMP/MLD hosts w/o tenant router

   In an EVPN network consisting of only IGMP/MLD hosts, PEs receive
   IGMP (*,G) or (S,G) joins from their locally attached hosts and
   originate MVPN C-Multicast route type 6 and type 7 NLRIs,
   respectively. As described in [RFC6514], these NLRIs are directed
   towards the RP-PE for type 6 or the source PE for type 7. In the
   case of a (*,G) join, a shared-path tree is built in the core from
   the RP-PE towards all receiver PEs. Once a source starts to send
   multicast data to the specified multicast group, the PE directly
   connected to the source performs PIM registration with the RP. Since
   there are existing receivers for the group, the RP originates a PIM
   (S,G) join towards the source; this join is converted to an MVPN
   type 7 NLRI by the RP-PE. Note that, since there are no other
   routers, the RP-PE is the PE configured as RP using static
   configuration or BSR or Auto-RP procedures. The detailed working of
   such protocols is beyond the scope of this document. Upon receiving
   the type 7 NLRI, the source PE includes the MVPN tunnel in its
   Outgoing Interface List. Furthermore, the source PE follows the
   procedures in [RFC6514] to originate an MVPN Source Active A-D route
   (route type 5) to avoid duplicate traffic and to allow all receiver
   PEs to shift from the shared tree to the shortest-path tree rooted
   at the source PE. Section 13 of [RFC6514] describes this.

   However, a network operator can choose to have only shortest-path
   trees built in the MVPN core, as described in [RFC6513]. To achieve
   this, every PE can act as RP for its locally connected hosts and
   thus avoid sending any shared-tree joins (MVPN type 6) into the
   core. In this scenario, no PIM registration is needed, since every
   PE is the first-hop router as well as the acting RP. Once a source
   starts to send multicast data, the PE directly connected to it
   originates a Source Active A-D route (route type 5) to all other
   PEs in the network. Upon receiving a Source Active A-D route, a PE
   must cache it in its local database and also look for any matching
   interest for (*,G), where G is the multicast group described in the
   received Source Active A-D route. If it finds any such matching
   entry, it must originate a C-Multicast route (route type 7) in order
   to start receiving traffic from the source PE. This procedure must
   be repeated on reception of any further Source Active A-D routes.
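
   The receiver-PE side of this shortest-path-tree-only mode can be
   sketched as follows ('advertise' is a hypothetical BGP hook):

      def on_source_active_ad(sa, sa_cache, star_g_interest, advertise):
          """Cache a received Source Active A-D route (type 5); if any
          local (*,G) interest matches its group, originate a Source
          Tree Join (type 7) targeted at the source PE via its VRI."""
          sa_cache.add((sa["source"], sa["group"]))
          if sa["group"] in star_g_interest:
              advertise(route_type=7, c_source=sa["source"],
                        c_group=sa["group"], route_target=sa["vri"])

      cache = set()
      on_source_active_ad({"source": "10.0.1.5", "group": "239.1.1.1",
                           "vri": "192.0.2.1:10"},
                          cache, {"239.1.1.1"},
                          lambda **r: print("originate", r))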

10.2  DCs with a mix of IGMP/MLD hosts & multicast routers running
   PIM-SSM

   This scenario includes multicast routers which can send PIM SSM
   (S,G) joins. Upon receiving such a join, if the source described in
   the join is learnt to be behind an MVPN peer PE, the local PE
   originates a C-Multicast join (route type 7) towards the source PE.
   It is expected that the PIM SSM group ranges are kept separate from
   the ASM ranges for which IGMP hosts can send (*,G) joins; hence both
   ASM and SSM groups operate without any overlap. No RP is needed for
   SSM-range groups, and the shortest-path tree rooted at the source is
   built once a receiver's interest is known.

10.3  DCs with a mix of IGMP/MLD hosts & multicast routers running
   PIM-ASM

   This scenario includes the reception of PIM (*,G) joins on a PE's
   local AC. These joins are handled similarly to the IGMP (*,G) joins
   explained in the sections above. Another interesting case arises
   when one of the tenant routers acts as RP for some of the ASM
   groups. In such a scenario, an Upstream Multicast Hop (UMH) is
   elected by the other PEs in order to send C-Multicast routes (route
   type 6). All procedures described in [RFC6513] with respect to UMH
   should be used to avoid traffic duplication due to incoherent
   selection of the RP-PE by different receiver PEs.

10.4  DCs with a mix of IGMP/MLD hosts & multicast routers running
   PIM-Bidir

   Creating bidirectional (*,G) trees is useful when a customer wants
   the least amount of control state in the network. The downside is
   that all receivers for a particular multicast group receive traffic
   from all sources sending to that group. For the purposes of this
   document, all procedures described in [RFC6513] and [RFC6514] apply
   when PIM-Bidir is used.



11.  IANA Considerations

   There are no additional IANA considerations for this document beyond
   what is already described in [RFC7432].


12.  Security Considerations

   All the security considerations in [RFC7432] apply directly to this
   document, because this document leverages the [RFC7432] control
   plane and its associated procedures.


13.  Acknowledgements

   The authors would like to thank Samir Thoria, Ashutosh Gupta,
   Niloofar Fazlollahi, and Aamod Vyavaharkar for their discussions and
   contributions.


14.  References

14.1.  Normative References

   [RFC2119]  S. Bradner, "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC7024]  Jeng, H., Uttaro, J., Jalil, L., Decraene, B., Rekhter,
              Y., and R. Aggarwal, "Virtual Hub-and-Spoke in BGP/MPLS
              VPNs", RFC 7024, October 2013.




   [RFC7432]  A. Sajassi, et al., "BGP MPLS Based Ethernet VPN", RFC
              7432 , February 2015.


14.2.  Informative References

   [RFC7080]  A. Sajassi, et al., "Virtual Private LAN Service (VPLS)
              Interoperability with Provider Backbone Bridges", RFC
              7080, December 2013.

   [RFC7209]  A. Sajassi, et al., "Requirements for Ethernet VPN
              (EVPN)", RFC 7209, May 2014.

   [RFC4389]  D. Thaler, et al., "Neighbor Discovery Proxies (ND
              Proxy)", RFC 4389, April 2006.

   [RFC4761]  K. Kompella, et al., "Virtual Private LAN Service (VPLS)
              Using BGP for Auto-Discovery and Signaling", RFC 4761,
              January 2007.

   [OVERLAY]  A. Sajassi, et al., "A Network Virtualization Overlay
              Solution using EVPN", draft-ietf-bess-evpn-overlay-01,
              work in progress, February 2015.

   [EVPN-IRB] A. Sajassi, et al., "Integrated Routing and Bridging in
              EVPN", draft-ietf-bess-evpn-inter-subnet-forwarding, work
              in progress.

   [RFC6514] R. Aggarwal, et al., "BGP Encodings and Procedures for
              Multicast in MPLS/BGP IP VPNs", RFC6514, February 2012.

   [RFC6513] E. Rosen, et al., "Multicast in MPLS/BGP IP VPNs", RFC6513,
              February 2012.

   [INTERCON-EVPN] J. Rabadan, et al., "Interconnect Solution for EVPN
              Overlay networks", draft-ietf-bess-dci-evpn-overlay-04,
              work in progress, September 2016.

   [TUNNEL-ENCAP] E. Rosen, et al., "The BGP Tunnel Encapsulation
              Attribute", draft-ietf-idr-tunnel-encaps-06, work in
              progress, June 2017.


15.  Authors' Addresses

              Ali Sajassi
              Cisco
              170 West Tasman Drive
              San Jose, CA  95134, US
              Email: sajassi@cisco.com





              Samir Thoria
              Cisco
              170 West Tasman Drive
              San Jose, CA  95134, US
              Email: sthoria@cisco.com


              Niloofar Fazlollahi
              Cisco
              170 West Tasman Drive
              San Jose, CA  95134, US
              Email: nifazlol@cisco.com


              Ashutosh Gupta
              Avi Networks
              Email: ashutosh@avinetworks.com

































