Network Working Group L. Dunbar
Internet Draft Futurewei
Intended status: Informational A. Malis
Expires: March 23, 2020 Independent
C. Jacquenet
Orange
M. Toy
Verizon
September 23, 2019
Dynamic Networks to Hybrid Cloud DCs Problem Statement
draft-ietf-rtgwg-net2cloud-problem-statement-04
Abstract
This document describes the problems that enterprises face today
when interconnecting their branch offices with dynamic workloads in
third party data centers (a.k.a. Cloud DCs).
It examines some of the approaches interconnecting cloud DCs with
enterprises' on-premises DCs & branch offices. This document also
describes some of the network problems that many enterprises face
when they have workloads & applications & data split among different
data centers, especially for those enterprises with multiple sites
that are already interconnected by VPNs (e.g., MPLS L2VPN/L3VPN).
Current operational problems are examined to determine whether there
is a need to improve existing protocols or whether a new protocol is
necessary to solve them.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html
This Internet-Draft will expire on March 23, 2020.
Copyright Notice
Copyright (c) 2019 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction...................................................3
1.1. On the evolution of Cloud DC connectivity.................3
1.2. The role of SD-WAN techniques in Cloud DC connectivity....4
2. Definition of terms............................................4
3. Interconnecting Enterprise Sites with Cloud DCs................5
3.1. Multiple connections to workloads in a Cloud DC...........5
3.2. Interconnect Private and Public Cloud DCs.................7
3.3. Desired Properties for Networks that interconnect Hybrid
Clouds.........................................................8
4. Multiple Clouds Interconnection................................9
4.1. Multi-Cloud Interconnection...............................9
4.2. Desired Properties for Multi-Cloud Interconnection.......11
5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs...11
6. Problem with using IPsec tunnels to Cloud DCs.................13
6.1. Complexity of multi-point any-to-any interconnection.....13
6.2. Poor performance over long distance......................14
6.3. Scaling Issues with IPsec Tunnels........................14
7. Problems of Using SD-WAN to connect to Cloud DCs..............15
7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs15
8. End-to-End Security Concerns for Data Flows...................18
9. Requirements for Dynamic Cloud Data Center VPNs...............18
10. Security Considerations......................................19
11. IANA Considerations..........................................19
12. References...................................................19
12.1. Normative References....................................19
12.2. Informative References..................................19
13. Acknowledgments..............................................20
1. Introduction
1.1. On the evolution of Cloud DC connectivity
The ever-increasing use of cloud applications for communication
services changes the way corporations work and share information.
Such cloud applications use resources hosted in third-party DCs that
also host services for other customers.
With the advent of widely available third-party cloud DCs in diverse
geographic locations and the advancement of tools for monitoring and
predicting application behaviors, it is technically feasible for
enterprises to instantiate applications and workloads in locations
that are geographically closest to their end-users. Such proximity
improves end-to-end latency and overall user experience. Conversely,
an enterprise can easily shut down applications and workloads
whenever end-users move (thereby modifying the network connections
of the relocated applications and workloads). In addition, an
enterprise may wish to take advantage of the growing number of
business applications offered by third-party private cloud DCs.
Most enterprise branch offices and on-premises data centers are
already interconnected via VPNs, such as MPLS-based L2VPNs and
L3VPNs. Connecting to cloud-hosted resources may then not be
straightforward if the provider of the VPN service does not have
direct connections to the corresponding cloud DCs. Under those
circumstances, the enterprise can either upgrade the CPEs deployed
in its
various premises to utilize SD-WAN techniques to reach cloud
resources (without any assistance from the VPN service provider), or
wait for its VPN service provider to negotiate new agreements with
data center providers to connect to the cloud resources. Either way
incurs additional infrastructure and operational costs.
In addition, more enterprises are moving towards hybrid cloud DCs,
i.e., DCs owned or operated by different cloud operators, to
maximize the benefits of geographical proximity, elasticity, and the
special features offered by different cloud DCs.
1.2. The role of SD-WAN techniques in Cloud DC connectivity
This document discusses the issues associated with connecting an
enterprise's workloads/applications instantiated in multiple third-
party data centers (a.k.a. Cloud DCs) with its on-premises data
centers. Very often, the actual Cloud DCs that host the
workloads/applications can be transient.
SD-WAN, initially launched to maximize bandwidth between locations
by aggregating multiple paths managed by different service
providers, has expanded to include flexible, on-demand, application-
based connections established over any network to access dynamic
workloads in Cloud DCs.
Therefore, this document discusses the use of SD-WAN techniques to
improve enterprise-to-cloud DC and cloud DC-to-cloud DC
connectivity.
2. Definition of terms
Cloud DC:   Third-party data centers that usually host applications
            and workloads owned by different organizations or
            tenants.
Controller: Used interchangeably with SD-WAN controller; manages
            SD-WAN overlay path creation/deletion and monitors path
            conditions between two or more sites.
DSVPN: Dynamic Smart Virtual Private Network. DSVPN is a secure
network that exchanges data between sites without
needing to pass traffic through an organization's
headquarters' virtual private network (VPN) server or
router.
Heterogeneous Cloud: applications and workloads split among Cloud
DCs owned or managed by different operators.
Hybrid Clouds: Hybrid Clouds refer to an enterprise using its own
            on-premises DCs in addition to cloud services provided
            by one or more cloud operators (e.g., AWS, Azure,
            Google, Salesforce, SAP).
SD-WAN:     Software Defined Wide Area Network. In this document,
            "SD-WAN" refers to solutions that pool WAN bandwidth
            from multiple underlay networks to get better WAN
            bandwidth management, visibility, and control. When the
            underlay networks are private networks, traffic can
            traverse them without additional encryption; when the
            underlay networks are public, such as the Internet, some
            traffic needs to be encrypted when traversing them
            (depending on user-provided policies).
VPC:        A Virtual Private Cloud is a virtual network dedicated
            to one client account. It is logically isolated from
            other virtual networks in a Cloud DC. Each client can
            launch the desired resources, such as compute, storage,
            or network functions, into its VPC. Most cloud
            operators' VPCs only support private addresses; some
            support IPv4 only, while others support IPv4/IPv6 dual
            stack.
3. Interconnecting Enterprise Sites with Cloud DCs
3.1. Multiple connections to workloads in a Cloud DC
Most Cloud operators offer some type of network gateway through
which an enterprise can reach its workloads hosted in the Cloud
DCs. For example, AWS (Amazon Web Services) offers the following
options to reach workloads in AWS Cloud DCs:
- AWS Internet gateway allows communication between instances in
AWS VPC and the internet.
- AWS Virtual gateway (vGW) where IPsec tunnels [RFC6071] are
established between an enterprise's own gateway and AWS vGW, so
that the communications between those gateways can be secured
from the underlay (which might be the public Internet).
- AWS Direct Connect, which allows enterprises to purchase a
  dedicated connection from a network service provider, i.e., a
  private leased line interconnecting the enterprise's gateway(s)
  and the AWS Direct Connect routers. In addition, an AWS Transit
  Gateway can be used to interconnect multiple VPCs in different
  Availability Zones. The AWS Transit Gateway acts as a hub that
  controls how traffic is forwarded among all the connected
  networks, which act like spokes.
As an example, some branch offices of an enterprise can connect
over the Internet to reach AWS's vGW via IPsec tunnels, while other
branch offices of the same enterprise can connect to AWS
DirectConnect via a private network (without any encryption). It is
important for enterprises to be able to observe the specific
behaviors of workloads reached via these different connections.
The figure below shows an example where some tenants' workloads are
reachable via a virtual router connected to the AWS Internet
Gateway, some are reachable via the AWS vGW, and others are
reachable via AWS Direct Connect. vR1 uses IPsec to establish
secure tunnels over the Internet, thereby avoiding the extra fees
charged for the IPsec features provided by the AWS vGW. Some
tenants deploy separate virtual routers to separate Internet
traffic from the traffic arriving through the secure channels from
the vGW and DirectConnect, e.g., vR1 and vR2; others may have one
virtual router handling both types of traffic. The Customer Gateway
can be a customer-owned router or ports physically connected to the
AWS Direct Connect Gateway.
+------------------------+
| ,---. ,---. |
| (TN-1 ) ( TN-2)|
| `-+-' +---+ `-+-' |
| +----|vR1|----+ |
| ++--+ |
| | +-+----+
| | /Internet\ For External
| +-------+ Gateway +----------------------
| \ / to reach via Internet
| +-+----+
| |
| ,---. ,---. |
| (TN-1 ) ( TN-2)|
| `-+-' +---+ `-+-' |
| +----|vR2|----+ |
| ++--+ |
| | +-+----+
| | / virtual\ For IPsec Tunnel
| +-------+ Gateway +----------------------
| | \ / termination
| | +-+----+
| | |
| | +-+----+ +------+
| | / \ For Direct /customer\
| +-------+ Gateway +----------+ gateway |
| \ / Connect \ /
| +-+----+ +------+
| |
+------------------------+
Figure 1: Examples of Multiple Cloud DC connections.
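To make the vGW option above concrete, the following minimal sketch
(Python, using the publicly documented boto3 AWS SDK) shows how an
enterprise might provision the AWS side of such an IPsec attachment.
The region, VPC identifier, ASN, and public IP below are placeholder
assumptions, not values taken from this document:

   import boto3

   ec2 = boto3.client("ec2", region_name="us-east-1")  # region: assumption

   # Virtual private gateway (vGW): the AWS-side IPsec termination point.
   vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
   ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"],
                          VpcId="vpc-0123456789abcdef0")  # placeholder VPC

   # Customer gateway: the enterprise's on-premises IPsec endpoint.
   cgw = ec2.create_customer_gateway(
       Type="ipsec.1",
       PublicIp="203.0.113.10",   # placeholder (documentation address)
       BgpAsn=65000)["CustomerGateway"]

   # The VPN connection carries redundant IPsec tunnels to the vGW.
   vpn = ec2.create_vpn_connection(
       Type="ipsec.1",
       VpnGatewayId=vgw["VpnGatewayId"],
       CustomerGatewayId=cgw["CustomerGatewayId"])
   print(vpn["VpnConnection"]["VpnConnectionId"])

The enterprise-side CPE then needs the tunnel parameters returned by
AWS to configure its own end of the IPsec tunnels.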
3.2. Interconnect Private and Public Cloud DCs
It is likely that hybrid designs will become the rule for cloud
services, as more enterprises see the benefits of integrating public
and private cloud infrastructures. However, enabling the growth of
hybrid cloud deployments in the enterprise requires fast and safe
interconnection between public and private cloud services.
For an enterprise to connect to applications & workloads hosted in
multiple Cloud DCs, the enterprise can use IPsec tunnels established
over the Internet or a (virtualized) leased line service to connect
its on-premises gateways to each of the Cloud DC's gateways, virtual
routers instantiated in the Cloud DCs, or any other suitable design
(including a combination thereof).
Some enterprises prefer to instantiate their own virtual
CPEs/routers inside the Cloud DC to connect the workloads within
the Cloud DC. An overlay path is then established between the
customer gateways and those virtual CPEs/routers to reach the
workloads inside the cloud DC.
3.3. Desired Properties for Networks that interconnect Hybrid Clouds
The networks that interconnect hybrid cloud DCs must address the
following requirements:
- High availability to access all workloads in the desired cloud
DCs.
Many enterprises include cloud infrastructures in their
disaster recovery strategy, e.g., by enforcing periodic backup
policies within the cloud, or by running backup applications in
the Cloud, etc. Therefore, the connection to the cloud DCs may
not be permanent, but rather needs to be on-demand.
- Global reachability from different geographical zones, thereby
facilitating the proximity of applications as a function of the
end users' location, to improve latency.
- Elasticity: prompt establishment of connections to newly
  instantiated applications in Cloud DCs when usage increases, and
  prompt release of those connections when the corresponding
  applications are removed as demand changes.
Some enterprises have front-end web portals running in cloud
DCs and database servers in their on-premises DCs. Those front-
end web portals need to be reachable from the public Internet.
The backend connections to the sensitive data in database
servers hosted in the on-premises DCs might need to be secured.
- Scalable security management. IPsec is commonly used to
  interconnect cloud gateways with CPEs deployed in the
  enterprise premises. For enterprises with a large number of
  branch offices, managing the IPsec Security Associations
  among many nodes can be very difficult.
4. Multiple Clouds Interconnection
4.1. Multi-Cloud Interconnection
Enterprises today can instantiate their workloads or applications
in Cloud DCs owned by different Cloud providers, e.g., AWS, Azure,
Google Cloud, or Oracle. Interconnecting those workloads involves
three parties: the enterprise, its network service providers, and
the Cloud providers.
All Cloud Operators offer secure ways to connect enterprises' on-
prem sites/DCs with their Cloud DCs.
Some Cloud operators allow enterprises to connect via private
networks. For example, AWS DirectConnect allows enterprises to use
a third-party-provided private Layer 2 path from the enterprise's
gateway to the AWS DirectConnect gateway. Microsoft's ExpressRoute
allows extension of a private network to any of the Microsoft cloud
services, including Azure and Office365. ExpressRoute is configured
using Layer 3 routing. Customers can opt for redundancy by
provisioning dual links from their location to two Microsoft
Enterprise Edge routers (MSEEs) located within a third-party
ExpressRoute peering location. BGP is then set up over the WAN
links to provide redundancy to the cloud. This redundancy is
maintained from the peering data center into Microsoft's cloud
network.
Google's Cloud Dedicated Interconnect offers network connectivity
options similar to those of AWS and Microsoft. One distinct
difference, however, is that Google's service gives customers
access to the entire global cloud network by default. It does this
by connecting the customer's on-premises network with Google Cloud
using BGP and Google Cloud Routers to provide optimal paths to the
different regions of the global cloud infrastructure.
All those connectivity options are between Cloud providers' DCs and
the Enterprises, but not between cloud DCs. For example, to connect
applications in AWS Cloud to applications in Azure Cloud, there must
be a third-party gateway (physical or virtual) to interconnect the
AWS's Layer 2 DirectConnect path with Azure's Layer 3 ExpressRoute.
Enterprises can also instantiate their own virtual routers in
different Cloud DCs and administer IPsec tunnels among them, which
by itself is not a trivial task. Alternatively, an enterprise can
leverage open-source VPN software such as strongSwan to create an
IPsec connection to the Azure gateway using a shared key. The
strongSwan instance within AWS can not only connect to Azure but
can also be used to facilitate traffic to other nodes within the
AWS VPC by configuring forwarding
and using appropriate routing rules for the VPC. Most cloud
operators' virtual networks, such as AWS VPCs or Azure VNETs, use
non-globally-routable CIDR blocks from the private IPv4 address
ranges specified by RFC1918. To establish an IPsec tunnel between
two Cloud DCs, it is therefore necessary to exchange publicly
routable addresses for applications in the different Cloud DCs.
[BGP-SDWAN] describes one method; other methods are worth
exploring.
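As a small illustration of the addressing constraint just described,
the sketch below (Python standard library only; the addresses are
placeholders) shows the sanity check a pair of virtual routers would
need to perform: a VPC-internal RFC1918 address cannot terminate an
inter-cloud tunnel, so publicly routable endpoint addresses must be
exchanged instead:

   import ipaddress

   # The three RFC1918 private ranges from which VPC/VNET prefixes
   # are typically drawn.
   RFC1918 = [ipaddress.ip_network(n)
              for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

   def is_rfc1918(addr: str) -> bool:
       ip = ipaddress.ip_address(addr)
       return any(ip in net for net in RFC1918)

   # A VPC-internal address cannot serve as an inter-cloud tunnel endpoint:
   assert is_rfc1918("10.0.1.5")
   # ...so a publicly routable address must be exchanged instead
   # (203.0.113.10 is a documentation address standing in for a real one).
   assert not is_rfc1918("203.0.113.10")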
In summary, here are some approaches, available now (which might
change in the future), to interconnect workloads among different
Cloud DCs:
a) Utilize Cloud DC provided inter/intra-cloud connectivity
services (e.g., AWS Transit Gateway) to connect workloads
instantiated in multiple VPCs. Such services are provided with
the cloud gateway to connect to external networks (e.g., AWS
DirectConnect Gateway).
b) Hairpin all traffic through the customer gateway, meaning all
workloads are directly connected to the customer gateway, so
that communications among workloads within one Cloud DC must
traverse through the customer gateway.
c) Establish direct tunnels among different VPCs (AWS Virtual
   Private Clouds) and VNETs (Azure Virtual Networks) via the
   client's own virtual routers instantiated within Cloud DCs.
   DMVPN (Dynamic Multipoint Virtual Private Network) or DSVPN
   (Dynamic Smart VPN) techniques can be used to establish direct
   multipoint-to-point or multipoint-to-multipoint tunnels among
   those virtual routers.
Approach a) usually does not work if Cloud DCs are owned and managed
by different Cloud providers.
Approach b) introduces additional transmission delay and incurs
extra costs when traffic exits the Cloud DCs.
For Approach c), DMVPN or DSVPN uses NHRP (Next Hop Resolution
Protocol) [RFC2735] so that spoke nodes can register their IP
addresses and WAN ports with the hub node. The IETF ION
(Internetworking over NBMA (non-broadcast multiple access)) WG
standardized NHRP for address resolution in connection-oriented
NBMA networks (such as ATM) more than two decades ago.
There are many differences between virtual routers in Public Cloud
DCs and the nodes in an NBMA network. NHRP cannot be used for
registering virtual routers in Cloud DCs unless an extension of
the protocol is developed for that purpose, e.g., taking NAT or
dynamic addresses into consideration. Therefore, DMVPN and/or DSVPN
cannot be used directly to connect workloads in hybrid Cloud DCs.
Other protocols such as BGP can be used, as described in [BGP-
SDWAN].
4.2. Desired Properties for Multi-Cloud Interconnection
Different Cloud operators have different APIs to access their Cloud
resources, and it is difficult to move applications built with one
Cloud operator's APIs to another. However, it is highly desirable
to have a single and consistent way to manage the networks and
respective security policies for interconnecting applications
hosted in different Cloud DCs.
The desired property would be a single network fabric to which
different Cloud DCs and an enterprise's multiple sites can be
attached or detached, with a common interface for setting the
desired policies. SD-WAN is positioned to become that network
fabric, enabling Cloud DCs to be dynamically attached or detached.
But the reality is that different Cloud operators have different
access methods, and Cloud DCs might be geographically far apart.
More Cloud connectivity problems are described in the subsequent
sections.
The difficulty of connecting applications in different Clouds might
stem from the fact that the Cloud operators are direct competitors.
Traffic flowing out of a Cloud DC usually incurs charges;
therefore, direct communications between applications in different
Cloud DCs can be more expensive than intra-cloud communications.
5. Problems with MPLS-based VPNs extending to Hybrid Cloud DCs
Traditional MPLS-based VPNs have been widely deployed as an
effective way to support businesses and organizations that require
network performance and reliability. MPLS shifted the burden of
managing a VPN service from enterprises to service providers. The
CPEs attached to MPLS VPNs are also simpler and less expensive,
since they do not need to manage routes to remote sites; they
simply pass all outbound traffic to the MPLS VPN PEs to which the
CPEs are attached (although multi-homing scenarios require more
processing logic on CPEs). MPLS has addressed the problems of scale,
availability, and fast recovery from network faults, and
incorporated traffic-engineering capabilities.
However, traditional MPLS-based VPN solutions are sub-optimized for
connecting end-users to dynamic workloads/applications in cloud DCs
because:
- The Provider Edge (PE) nodes of the enterprise's VPNs might not
  have direct connections to the third-party cloud DCs that are
  used for hosting workloads with the goal of providing easy
  access to the enterprise's end-users.
- It usually takes some time to deploy provider edge (PE) routers
  at new locations. When an enterprise's workloads are moved from
  one cloud DC to another (i.e., removed from one DC and re-
  instantiated at another location when demand changes), the
  enterprise's branch offices need to be connected to the new
  cloud DC, but the network service provider might not have PEs
  located at the new location.
One of the main drivers for moving workloads into the cloud is
the widely available cloud DCs at geographically diverse
locations, where apps can be instantiated so that they can be
as close to their end-users as possible. When the user base
changes, the applications may be migrated to a new cloud DC
location closest to the new user base.
- Most of the cloud DCs do not expose their internal networks. An
enterprise with a hybrid cloud deployment can use an MPLS-VPN
to connect to a Cloud provider at multiple locations. The
connection locations often correspond to gateways of different
Cloud DC locations from the Cloud provider. The different
Cloud DCs are interconnected by the Cloud provider's own
internal network. At each connection location (gateway), the
Cloud provider uses BGP to advertise all of the prefixes in the
enterprise's VPC, regardless of which Cloud DC a given prefix
is actually in. This can result in inefficient routing for the
end-to-end data path.
- Extensive usage of Overlay by Cloud DCs:
Many cloud DCs use an overlay to connect their gateways to the
workloads located inside the DC. There is currently no standard
that specifies the interworking between the Cloud overlay and
the enterprise's existing underlay networks. One of the
characteristics of overlay networks is that some of the WAN
ports of the edge nodes connect to third-party networks. There
is therefore a need to propagate WAN port information to remote
authorized peers in third-party network domains in addition to
route propagation, as sketched below. Such an exchange cannot
happen before communication between peers is properly secured.
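There is no standard encoding for such WAN port information; the
following Python dataclass is purely an illustrative guess at what
such an advertisement might carry (all field names are assumptions,
not part of any specification):

   from dataclasses import dataclass

   @dataclass
   class WanPortInfo:
       """Illustrative, non-standard WAN port advertisement contents."""
       node_id: str         # overlay edge node identifier
       wan_ip: str          # address facing the third-party underlay
       underlay: str        # e.g., "internet" or "private-line"
       ipsec_capable: bool  # whether traffic on this port can be encrypted
       nat_detected: bool   # whether the port sits behind a NAT

   # Example advertisement an edge node might send to an authorized
   # peer (values are placeholders):
   adv = WanPortInfo("vR1", "192.0.2.33", "internet", True, True)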
Another roadblock is the lack of a standard way to express and
enforce consistent security policies for workloads that not only
use virtual addresses but are also very likely hosted at different
locations within the Cloud DC [RFC8192]. The current VPN path
computation and bandwidth allocation schemes may not be flexible
enough to address the need for enterprises to rapidly connect to
dynamically instantiated (or removed) workloads and applications
regardless of their location/nature (i.e., third-party cloud DCs).
6. Problem with using IPsec tunnels to Cloud DCs
As described in the previous section, many Cloud operators expose
their gateways for external entities (which can be enterprises
themselves) to directly establish IPsec tunnels. Enterprises can
also instantiate virtual routers within Cloud DCs to connect to
their on-premises devices via IPsec tunnels. If there is only one
enterprise location that needs to reach the Cloud DC, an IPsec
tunnel is a very convenient solution.
However, many medium-to-large enterprises usually have multiple
sites and multiple data centers. For workloads and apps hosted in
cloud DCs, multiple sites need to communicate securely with those
cloud workloads and apps. This section documents some of the issues
associated with using IPsec tunnels to connect enterprise premises
with cloud gateways.
6.1. Complexity of multi-point any-to-any interconnection
Dynamic workloads instantiated in a cloud DC need to communicate
with multiple branch offices and on-premises data centers. Most
enterprises need multi-point interconnection among multiple
locations, which can be provided by means of MPLS L2/L3 VPNs.
Using IPsec overlay paths to connect all branches and on-premises
data centers to cloud DCs requires the CPEs to manage routing among
the Cloud DC gateways and the CPEs located at other branch
locations, which can dramatically increase the complexity of the
design, possibly at the cost of jeopardizing CPE performance.
The complexity of requiring CPEs to maintain routing among other
CPEs is one of the reasons why enterprises migrated from Frame Relay
based services to MPLS-based VPN services.
MPLS-based VPNs have their PEs directly connected to the CPEs.
Therefore, CPEs only need to forward all traffic to the directly
attached PEs, which are then responsible for enforcing the routing
policy within the corresponding VPNs. Even multi-homed CPEs only
need to distribute traffic among their directly connected PEs.
However, when using IPsec tunnels between CPEs and Cloud DCs, the
CPEs need to compute, select, establish, and maintain routes for
traffic to be forwarded to Cloud DCs, to remote CPEs via the VPN,
or directly.
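To make the extra CPE burden concrete, here is a minimal sketch
(Python; the prefixes and path names are invented for illustration)
of the longest-prefix-match decision a CPE must now perform itself,
choosing between an IPsec tunnel to a Cloud DC gateway, the MPLS
VPN, and direct forwarding:

   import ipaddress

   # Routes the CPE must now compute and maintain itself (placeholders).
   ROUTES = {
       ipaddress.ip_network("10.1.0.0/16"): "ipsec-tunnel-to-cloud-gw",
       ipaddress.ip_network("10.2.0.0/16"): "mpls-vpn-pe",
       ipaddress.ip_network("0.0.0.0/0"):   "direct-internet",
   }

   def next_hop(dst: str) -> str:
       """Longest-prefix match over the CPE's own route table."""
       ip = ipaddress.ip_address(dst)
       best = max((net for net in ROUTES if ip in net),
                  key=lambda net: net.prefixlen)
       return ROUTES[best]

   assert next_hop("10.1.4.2") == "ipsec-tunnel-to-cloud-gw"
   assert next_hop("198.51.100.1") == "direct-internet"

Every workload move in any Cloud DC now translates into an update of
this table on every affected CPE, which is exactly the per-CPE
complexity that MPLS-based VPNs had removed.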
6.2. Poor performance over long distance
When enterprise CPEs or gateways are far away from cloud DC
gateways or across country/continent boundaries, the performance of
IPsec tunnels over the public Internet can be problematic and
unpredictable. Even though many monitoring tools are available to
measure delay and various other performance characteristics of the
network, the measurement of paths over the Internet is passive, and
past measurements may not represent future performance.
Many cloud providers can replicate workloads in different
availability zones. An application instantiated in the cloud DC
closest to clients may have to cooperate with another application
(or its mirror image) in another region, or with database server(s)
in the on-premises DC. This kind of coordination requires
predictable networking behavior/performance among those locations.
6.3. Scaling Issues with IPsec Tunnels
IPsec can achieve secure overlay connections between two locations
over any underlay network, e.g., between CPEs and Cloud DC Gateways.
If there is only one enterprise location connected to the cloud
gateway, a small number of IPsec tunnels can be configured on-demand
between the on-premises DC and the Cloud DC, which is an easy and
flexible solution.
However, for multiple enterprise locations to reach workloads
hosted in cloud DCs, the cloud DC gateway needs to maintain IPsec
tunnels to all those locations (e.g., in a hub-and-spoke topology).
For a company with hundreds or thousands of locations, there could
be hundreds (or even thousands) of IPsec tunnels terminating at the
cloud DC gateway, which is not only very expensive (because Cloud
operators usually charge their customers based on connections) but
can also be very processing-intensive for the gateway. Many cloud
operators only allow a limited number of (IPsec) tunnels and
limited bandwidth for each customer. Alternatively, a solution such
as group encryption, where a single IPsec SA suffices at the
gateway, could be used, but the drawbacks are key distribution and
the maintenance of a key server.
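The scaling arithmetic behind this concern is easy to state; the
short sketch below (Python, with illustrative site counts) contrasts
a full mesh of tunnels among sites with a hub-and-spoke design in
which every tunnel terminates at the cloud DC gateway:

   def full_mesh_tunnels(sites: int) -> int:
       # Every pair of sites needs its own IPsec tunnel (and SAs).
       return sites * (sites - 1) // 2

   def hub_and_spoke_tunnels(sites: int) -> int:
       # One tunnel per site, all terminating at the cloud DC gateway.
       return sites

   for n in (10, 100, 1000):
       print(n, full_mesh_tunnels(n), hub_and_spoke_tunnels(n))
   # 10 sites: 45 vs. 10; 100 sites: 4950 vs. 100;
   # 1000 sites: 499500 vs. 1000

Even the hub-and-spoke numbers leave a single gateway terminating a
thousand tunnels, which is why per-connection charges and
per-customer tunnel limits become binding constraints.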
7. Problems of Using SD-WAN to connect to Cloud DCs
SD-WAN can establish parallel paths over multiple underlay networks
between two locations on-demand, for example, to support the
connections established between two CPEs interconnected by a
traditional MPLS VPN ([RFC4364] or [RFC4664]) or by IPsec [RFC6071]
tunnels.
SD-WAN lets enterprises augment their current VPN network with cost-
effective, readily available Broadband Internet connectivity,
enabling some traffic offloading to paths over the Internet
according to differentiated, possibly application-based traffic
forwarding policies, or when the MPLS VPN connection between the two
locations is congested, or otherwise undesirable or unavailable.
7.1. SD-WAN among branch offices vs. interconnect to Cloud DCs
SD-WAN interconnection of branch offices is not as simple as it
appears. For an enterprise with multiple sites, using SD-WAN
overlay paths among sites requires each CPE to manage all the
addresses that local hosts may need to reach, i.e., to map internal
VPN addresses to the appropriate SD-WAN paths. This is similar to
the complexity of Frame Relay based VPNs, where each CPE needed to
maintain mesh routing for all destinations if it were to avoid an
extra hop through a hub router. Even though SD-WAN CPEs can get
assistance from a central controller (instead of running a routing
protocol) to resolve the mapping between destinations and SD-WAN
paths, SD-WAN CPEs are still responsible for routing table
maintenance as remote destinations change their attachments, e.g.,
as dynamic workloads in other DCs are de-commissioned or added.
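A hedged sketch of that division of labor follows (Python; the
update format, prefixes, and path names are invented for
illustration): a central controller resolves destination-to-path
mappings, but the CPE still has to apply every update as dynamic
workloads come and go:

   # Destination-prefix to SD-WAN-path mapping held by a CPE, as
   # resolved by a central controller (all values are placeholders).
   cpe_path_map = {
       "10.1.0.0/16": "path-mpls-vpn",
       "10.9.0.0/16": "path-internet-ipsec",
   }

   def apply_update(update: dict) -> None:
       """Apply a controller-pushed delta when workloads move or retire."""
       if update["op"] == "add":
           cpe_path_map[update["prefix"]] = update["path"]
       elif update["op"] == "withdraw":
           cpe_path_map.pop(update["prefix"], None)

   # A workload is de-commissioned in one cloud DC and re-instantiated
   # in another; the CPE's table must track both events:
   apply_update({"op": "withdraw", "prefix": "10.9.0.0/16"})
   apply_update({"op": "add", "prefix": "10.12.0.0/16",
                 "path": "path-internet-ipsec"})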
Even though originally envisioned for interconnecting branch
offices, SD-WAN offers a very attractive way for enterprises to
connect to Cloud DCs.
The SD-WAN for interconnecting branch offices and the SD-WAN for
interconnecting to Cloud DCs have some differences:
- SD-WAN for interconnecting branch offices usually has two end-
  points (e.g., CPEs) controlled by one entity (e.g., a
  controller or management system operated by the enterprise).
- SD-WAN for Cloud DC interconnects may involve CPEs owned or
  managed by the enterprise, while the remote end-points are
  managed or controlled by the Cloud DCs (for ease of
  description, such CPEs are called asymmetrically-managed CPEs
  here).
- Cloud DCs may have different entry points (or devices) with one
entry point that terminates a private direct connection (based
upon a leased line for example) and other entry points being
devices terminating the IPsec tunnels, as shown in Figure 2.
Therefore, the SD-WAN design becomes asymmetric.
+------------------------+
| ,---. ,---. |
| (TN-1 ) ( TN-2)| TN: Tenant applications/workloads
| `-+-' +---+ `-+-' |
| +----|vR1|----+ |
| ++--+ |
| | +-+----+
| | /Internet\ One path via
| +-------+ Gateway +---------------------+
| \ / Internet \
| +-+----+ \
+------------------------+ \
\
+------------------------+ native traffic \
| ,---. ,---. | without encryption|
| (TN-3 ) ( TN-4)| |
| `-+-' +--+ `-+-' | | +------+
| +----|vR|-----+ | +----+ CPE |
| ++-+ | | +------+
| | +-+----+ |
| | / virtual\ One path via IPsec Tunnel |
| +-------+ Gateway +-------------------------- +
| \ / Encrypted traffic over|
| +-+----+ public network |
+------------------------+ |
|
+------------------------+ |
| ,---. ,---. | Native traffic |
| (TN-5 ) ( TN-6)| without encryption |
| `-+-' +--+ `-+-' | over secure network|
| +----|vR|-----+ | |
| ++-+ | |
| | +-+----+ +------+ |
| | / \ Via Direct /customer\ |
| +-------+ Gateway +----------+ gateway |-----+
| \ / Connect \ /
| +-+----+ +------+
+------------------------+Customer GW has physical connection to AWS GW
Figure 2: Different Underlays to Reach Cloud DC
8. End-to-End Security Concerns for Data Flows
When IPsec tunnels established from enterprise on-premises CPEs
are terminated at the Cloud DC gateway where the workloads or
applications are hosted, some enterprises have concerns regarding
traffic to/from their workload being exposed to others behind the
data center gateway (e.g., exposed to other organizations that
have workloads in the same data center).
To ensure that traffic to/from workloads is not exposed to
unwanted entities, IPsec tunnels may go all the way to the
workload (servers, or VMs) within the DC.
9. Requirements for Dynamic Cloud Data Center VPNs
In order to address the aforementioned issues, any solution for
enterprise VPNs that includes connectivity to dynamic workloads or
applications in cloud data centers should satisfy a set of
requirements:
- The solution should allow enterprises to take advantage of the
current state-of-the-art in VPN technology, in both traditional
MPLS-based VPNs and IPsec-based VPNs (or any combination
thereof) that run over the public Internet.
- The solution should not require an enterprise to upgrade all
their existing CPEs.
- The solution should support scalable IPsec key management among
all nodes involved in DC interconnect schemes.
- The solution needs to support easy and fast, on-the-fly, VPN
connections to dynamic workloads and applications in third
party data centers, and easily allow these workloads to migrate
both within a data center and between data centers.
- Allow VPNs to provide bandwidth and other performance
guarantees.
- Be a cost-effective solution for enterprises to incorporate
dynamic cloud-based applications and workloads into their
existing VPN environment.
10. Security Considerations
The draft discusses security requirements as a part of the problem
space, particularly in sections 4, 5, and 8.
Solution drafts resulting from this work will address security
concerns inherent to the solution(s), including both protocol
aspects and the importance (for example) of securing workloads in
cloud DCs and the use of secure interconnection mechanisms.
11. IANA Considerations
This document requires no IANA actions. RFC Editor: Please remove
this section before publication.
12. References
12.1. Normative References
12.2. Informative References
[RFC2735]   Fox, B., et al., "NHRP Support for Virtual Private
            Networks", RFC 2735, December 1999.
[RFC8192]   Hares, S., et al., "Interface to Network Security
            Functions (I2NSF) Problem Statement and Use Cases",
            RFC 8192, July 2017.
[ITU-T-X1036] ITU-T Recommendation X.1036, "Framework for creation,
            storage, distribution and enforcement of policies for
            network security", November 2007.
[RFC6071]   Frankel, S. and S. Krishnan, "IP Security (IPsec) and
            Internet Key Exchange (IKE) Document Roadmap", RFC
            6071, February 2011.
[RFC4364]   Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
            Networks (VPNs)", RFC 4364, February 2006.
[RFC4664]   Andersson, L. and E. Rosen, "Framework for Layer 2
            Virtual Private Networks (L2VPNs)", RFC 4664,
            September 2006.
[BGP-SDWAN] Dunbar, L., et al., "BGP Extension for SDWAN Overlay
            Networks", draft-dunbar-idr-bgp-sdwan-overlay-ext-03,
            work in progress, November 2018.
13. Acknowledgments
Many thanks to Alia Atlas, Chris Bowers, Ignas Bagdonas, Michael
Huang, Liu Yuan Jiao, Katherine Zhao, and Jim Guichard for the
discussion and contributions.
Authors' Addresses
Linda Dunbar
Futurewei
Email: Linda.Dunbar@futurewei.com
Andrew G. Malis
Independent
Email: agmalis@gmail.com
Christian Jacquenet
Orange
Rennes, 35000
France
Email: Christian.jacquenet@orange.com
Mehmet Toy
Verizon
One Verizon Way
Basking Ridge, NJ 07920
Email: mehmet.toy@verizon.com