- Workgroup: cats
- Internet-Draft: draft-yao-cats-gap-analysis-00
- Published:
- Intended Status: Informational
- Expires: 18 March 2024
Computing-Aware Traffic Steering (CATS) Gap Analysis
Abstract
This document provides a gap analysis for the problem statement and use cases for Computing-Aware Traffic Steering (CATS) that are outlined in [I-D.ietf-cats-usecases-requirements]. It identifies the key engineering areas where architecture improvements and protocol enhancements may be required to reach an optimal balance between compute services (via the proper choice of servers) and network paths, based on a holistic consideration of metrics that combine network status with compute capabilities and resources.¶
Status of This Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 18 March 2024.¶
Copyright Notice
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
1. Introduction
Compute service instances deployed at different geographical locations are used to better realize distributed computing services, as described in the CATS problem statement, use cases, and requirements [I-D.ietf-cats-usecases-requirements]. A fundamental requirement in this type of deployment is to optimally deliver a service request to the most appropriate service instance, which is dynamically selected by taking into consideration both the available computing resources and the quality of the various network paths. Moreover, the potential requirement of service and session continuity for a client transaction over its lifetime, possibly consisting of multiple requests, suggests that some mechanism(s) be in place to maintain the service affinity between the client and the dynamically chosen service instance.¶
Overall, traditional techniques to manage the distribution or balancing of client load include either the choose-the-closest or the round-robin mode. Solutions derived from these techniques are relatively static, which may lead to an unbalanced distribution of network utilization and computational load among the available resources. For example, Domain Name System (DNS)-based load balancing usually configures a domain in DNS such that client requests to that domain name are statically resolved to one of several pre-provisioned IP addresses, each corresponding to one node out of a group of servers. Client load is then distributed to the selected server without further considering the dynamics of the server environment.¶
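As an illustration of how static such DNS-based distribution can be, the following minimal sketch (Python; the domain name and server addresses are made up for illustration) mimics a resolver that hands out pre-provisioned IP addresses in round-robin fashion. The selection ignores both the current load of each server and the state of the network path towards it.¶

    import itertools

    # Hypothetical pool of pre-provisioned server addresses for one domain.
    SERVER_POOL = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]

    # Round-robin rotation over the pool, as a DNS-based load balancer
    # typically rotates the order of the returned A records.
    _rotation = itertools.cycle(SERVER_POOL)

    def resolve(domain: str) -> str:
        """Return the 'next' server address for the domain, regardless of
        the server's compute load or the quality of the path towards it."""
        return next(_rotation)

    if __name__ == "__main__":
        for _ in range(5):
            print(resolve("service.example.com"))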
Certainly, some dynamic solutions do exist to distribute client requests to servers. These solutions usually involve Layer 4 to Layer 7 handling of packets, for example through DNS-based mechanisms or indirection servers. Unfortunately, this category of approaches is inefficient for a large number of short connections. Another disadvantage of these approaches is the lack of effective ways to retrieve the desired metrics, such as the runtime status of network devices, in real time. Therefore, the choice of the service node is either determined almost entirely by computing status, rather than by a comprehensive consideration of both computing and network metrics, or is made on a rather long-term basis due to the (upper-layer) overhead of the decision making itself.¶
Based on the gap analysis of existing related approaches, this document explains why new mechanisms should be designed to realize efficient traffic steering that takes into account the metrics of computing capabilities and resources as well as connectivity status.¶
2. Definition of Terms
- Client:
- An endpoint that is connected to a service provider network.¶
- Computing-Aware Traffic Steering (CATS):
- A traffic engineering approach [I-D.ietf-teas-rfc3272bis] that takes into account the dynamic nature of computing resources and network state to optimize service-specific traffic forwarding towards a given service contact instance. Various relevant metrics may be used to enforce such computing-aware traffic steering policies.¶
- CATS Components:
- The network devices and functions that can realize the demands and objectives of CATS.¶
- Service:
-
An offering that is made available by a provider by orchestrating a set of resources (networking, compute, storage, etc.). Which and how these resources are solicited is part of the service logic which is internal to the provider. For example, these resources may be:¶
- * Exposed by one or multiple processes (a.k.a. Service Functions (SFs)) [RFC7665].¶
- * Provided by virtual instances, physical instances, or a combination thereof.¶
- * Hosted within the same or distinct nodes.¶
- * Hosted within the same or multiple service sites.¶
- * Chained to provide a service using a variety of means.¶
- How a service is structured is out of the scope of CATS.¶
- The same service can be provided in many locations; each of them constitutes a service instance.¶
- Computing Service:
- An offering that is made available by a provider by orchestrating a set of computing resources (without networking resources).¶
- Service instance:
- An instance of running resources according to a given service logic. Many such instances can be enabled by a provider. Instances that adhere to the same service logic provide the same service. An instance is typically running in a service site. Clients' requests are serviced by one of these instances.¶
- Service identifier:
- An identifier representing a service, which the clients use to access it.¶
- Service transaction:
- A set of one or more service requests, possibly comprising several flows, that requires instance affinity (see below) because of transaction-related state.¶
- Instance affinity:
- The property of steering all flows that belong to the same service transaction to the same service instance.¶
- Anycast:
- An addressing and packet-sending methodology that assigns an "anycast" identifier to one or more service instances, such that requests to that identifier can be routed to any of them, following the definition in [RFC4786] of anycast as "the practice of making a particular Service Address available in multiple, discrete, autonomous locations, such that datagrams sent are routed to one of several available locations".¶
- Even though this document is not a protocol specification, it makes use of upper case key words to define requirements unambiguously. The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
3. Gap Analysis of Existing Solutions
There are a number of problems that may occur when realizing the use cases with existing solutions. This section analyzes the gaps of DNS, load balancers, and other existing mechanisms, and suggests a classification of those problems to aid the identification of possible solution components for addressing them.¶
3.1. Gap Analysis of DNS and Global Server Load Balancing (GSLB)
DNS [RFC1035] uses 'early binding' to explicitly bind a service identifier to a network address. It uses geographical location to pick the closest candidate and applies health checks to prevent single points of failure and to realize load balancing.¶
Computing resource information may be collected by DNS servers for some static use cases, such as computing resource deployment, but this cannot meet use cases that need frequent updates or adjustments.¶
With early binding, clients first resolve an IP address and then steer traffic to the selected edge site accordingly. Not surprisingly, most of the time a cached copy at the client side will be used. The consequence is that stale information obtained minutes earlier may be used, which makes it almost impractical to choose the appropriate edge site. Further, it is fairly common that a resolver and a Load Balancer (LB) are separate entities; the signaling between them adds overhead to a decision-making procedure that consists of resolving first and redirecting to the LB second. What is more, IP resolution is normally performed at Layer 7 and is a less efficient, application-level decision process, e.g., a database lookup originally intended for control-plane rather than data-plane speed.¶
Health checks are designed with infrequent periodicity, with checking intervals of more than one second. This inevitably leads to slow, untimely switchover upon failure. On the other hand, limited computing resources at the edge make more frequent health checks cost-prohibitive.¶
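A minimal sketch of such an infrequent health check follows (Python; the endpoints and interval are illustrative assumptions). With a probing interval of several seconds, a failure is only detected at the next probe, so the switchover delay is bounded by the interval rather than by the actual moment of failure.¶

    import time
    import urllib.request

    # Hypothetical health-check endpoints of two edge servers.
    SERVERS = ["http://192.0.2.10/health", "http://192.0.2.11/health"]
    CHECK_INTERVAL = 5.0  # seconds; intervals above 1 s are common at the edge

    def is_healthy(url: str) -> bool:
        """Treat a server as healthy if its health endpoint answers with 200."""
        try:
            with urllib.request.urlopen(url, timeout=1.0) as resp:
                return resp.status == 200
        except OSError:
            return False

    def monitor() -> None:
        while True:
            healthy = [s for s in SERVERS if is_healthy(s)]
            # A failure occurring right after a probe stays unnoticed until
            # the next probe, i.e., for up to CHECK_INTERVAL seconds.
            print("healthy servers:", healthy)
            time.sleep(CHECK_INTERVAL)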
Moreover, a Load Balancer at the edge usually first selects the 'optimal' (possibly virtual) server node based on server load, and then adopts the lowest-latency (or lowest-cost) route to reach the selected server via its IP address. Such standalone, sequential steps lack an organic way to combine and jointly consider both compute/server load and routing latency (and/or cost) for a better end-to-end guarantee. Last but not least, how to obtain the necessary metrics from the relevant entities for the decision is also critical.¶
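To make 'jointly considering' compute and network metrics concrete, the following hedged sketch (Python; the candidate instances, metric values, and weights are illustrative assumptions, not part of any CATS specification) scores each candidate service instance by combining its reported compute load with the latency of the network path towards it, instead of optimizing the two dimensions in separate, sequential steps.¶

    # Illustrative candidates with a compute metric (load, 0..1) and a
    # network metric (path latency in milliseconds).
    CANDIDATES = {
        "instance-a": {"load": 0.20, "latency_ms": 30.0},
        "instance-b": {"load": 0.85, "latency_ms": 5.0},
        "instance-c": {"load": 0.50, "latency_ms": 12.0},
    }

    # Assumed weights; a real system would derive these from service
    # requirements rather than hard-code them.
    W_LOAD, W_LATENCY = 0.6, 0.4

    def joint_score(metrics: dict) -> float:
        """Lower is better: combine normalized load and latency."""
        norm_latency = metrics["latency_ms"] / 100.0  # assume a 100 ms ceiling
        return W_LOAD * metrics["load"] + W_LATENCY * norm_latency

    def select_instance() -> str:
        return min(CANDIDATES, key=lambda name: joint_score(CANDIDATES[name]))

    if __name__ == "__main__":
        print("selected:", select_instance())

A purely sequential approach would first pick the least-loaded instance and only afterwards look at the path towards it, possibly ending up with a choice that is worse end to end.¶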
DNS-SD [RFC6763] and Multicast DNS [RFC6762] could also be used to discover the service and might be extended to collect computing information. However, they are mostly used in LAN environments, and enhancements would be needed to apply them to wider networks. Moreover, the instance selection would be pushed back to the client and rely on decision criteria being multicast to all clients, so there is a scalability limit. The gaps of client-based solutions are discussed in Section 3.4.¶
In addition, the DNS Push mechanism defined in [RFC8765] offers a relatively efficient way to publish computing status information to clients. It uses DNS Stateful Operations [RFC8490], which run over TCP, to give long-lived, low-traffic connections better longevity. The default keep-alive session duration is 15 seconds, which is relatively acceptable for refreshing computing information. However, this kind of DNS-based solution still cannot capture link/connection information, so an integrated decision based on compute load and network status cannot be derived, which may not be adequate for CATS problems.¶
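As a rough illustration of the difference between per-request resolution and the push model of [RFC8765], the sketch below (Python; no real DNS Push client library is used, and the update source is a purely hypothetical stand-in) keeps a local view of per-instance compute status that is refreshed whenever an update arrives, rather than being re-resolved on every request.¶

    import time

    # Local view of per-instance compute status, refreshed by pushed
    # updates instead of per-request resolution.
    compute_status: dict[str, float] = {}

    def simulated_push_updates():
        """Stand-in for a DNS Push subscription: yields (instance, load)
        updates. A real deployment would receive them over a long-lived
        DSO session [RFC8490]."""
        samples = [("edge-1", 0.2), ("edge-2", 0.6), ("edge-1", 0.8)]
        for name, load in samples:
            yield name, load
            time.sleep(0.1)  # stand-in for time between pushed notifications

    def run() -> None:
        for name, load in simulated_push_updates():
            compute_status[name] = load  # update arrives without polling
            print("current view:", compute_status)

    if __name__ == "__main__":
        run()

Even with such a refreshed view, the missing ingredient remains the network-side information, which is not carried by the DNS.¶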
Generally speaking, DNS was not designed for collecting computing information and is not well suited to computing-aware traffic steering problems. The frequency of DNS resolution limits its applicability to the dynamicity required by CATS. Even though the DNS Push mechanism offers a better refresh rate, a DNS-based solution still cannot produce traffic steering decisions based on both network and computing information. Moreover, frequent resolution of the same service name would likely overload the system. These issues are also discussed in Section 5.4 of [I-D.sarathchandra-coin-appcentres]. Some work, such as CDNI [RFC7336], is also based on DNS/HTTP redirection; it has similar problems and may not be suitable for CATS.¶
3.2. Gap Analysis of Load Balancer
A load balancer can be seen as a component external to the network, designed for and deployed in a computing domain to support balanced load distribution. It may also be based on the DNS system and require application-level queries.¶
Existing load balancer solutions are commonly deployed in one of two ways. One way is to deploy a single load balancer at a central location for all service instances across different sites. This is the most common approach and the easiest to implement; however, it bears the risk of a single point of failure, and the network path from the (centrally located) LB to server instances at (remote) sites might not always be optimal. The second way is to deploy an individual load balancer in each site, with its scope limited to the service instances in that site. This remains relatively easy to deploy, but its main deficiency is the absence of inter-site load balancing, which prevents better traffic steering across sites.¶
While most load-balancing solutions revolve around egress-side load dispatching, there are other designs, especially in 5G mobile networks, that conform to the ingress-side principle by putting distributed load balancers closer to User Plane Functions (UPFs), with either a 1:1 or a 1:N mapping. Through higher-level coordination with a centralized load-balancing controller residing in the mobile system, the distributed load balancers could help steer the traffic according to the running status of the UPFs. Of course, further enhancements are needed to collect network status in order to support joint optimization, and more details need to be explored to realize such a solution and verify its feasibility.¶
Generally, to achieve joint optimization of network and computing resources, a load balancer should also learn the network path status, which raises the problem of how to learn and use such information in an efficient way.¶
3.3. Gap Analysis of ALTO
ALTO [RFC7285] addresses the problem of selecting the 'optimal' service instance as an off-path solution, which can be seen as an alternative way of tackling the CATS problem space at the application layer. In that respect, even though ALTO and CATS target a common problem, they take different approaches; furthermore, they impose different needs, with different assumptions about how applications and networks may interact.¶
The critical aspect is the signaling latency and the control plane load that a service-instance selection process may incur, in both on-path and off-path solutions. This in turn may impact the frequency with which applications query ALTO server(s), especially in mobile systems where User Equipment (UE) may move to different cell sites (gNodeBs) or even roam to different mobile networks, which would trigger a switchover to different network paths.¶
As a result, off-path systems such as ALTO, which are based on replies received by applications/services before traffic can be delivered, might not remain optimal, or even valid, after a handover. ALTO would therefore need further improvements, including possible extensions to support multi-domain deployment, quick interaction among all involved entities (applications, service instances, etc.), and the integration of additional performance metric information into the system.¶
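For reference, an ALTO cost map as defined in [RFC7285] conveys network costs between provider-defined groupings of endpoints (PIDs). The sketch below (Python; the PID names and cost values are made up) shows how an application could pick the lowest-cost PID from such a map, and also why compute status would still have to come from elsewhere.¶

    # Shape of an ALTO cost map response body (application/alto-costmap+json),
    # with illustrative PIDs and routingcost values.
    alto_cost_map = {
        "meta": {
            "cost-type": {"cost-mode": "numerical", "cost-metric": "routingcost"}
        },
        "cost-map": {
            "PID-client": {"PID-edge-1": 4, "PID-edge-2": 9, "PID-edge-3": 6}
        },
    }

    def cheapest_destination(source_pid: str) -> str:
        """Pick the destination PID with the lowest routing cost from
        'source_pid'. Nothing in the cost map reflects the compute load of
        the service instances located in those PIDs."""
        costs = alto_cost_map["cost-map"][source_pid]
        return min(costs, key=costs.get)

    if __name__ == "__main__":
        print(cheapest_destination("PID-client"))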
3.4. Gap Analysis of Message Broker
Message brokers (MBs) could be used to dispatch incoming service requests from clients to a suitable service instance, where such dispatching could be controlled by metrics such as computing load. However, MBs face the following adversities (a sketch of such load-only dispatching follows the list below):¶
- May use richer computing metrics (such as load) but may lack the necessary network metrics.¶
- May lead to 'middleman' effects on efficiency, specifically additional latency experienced by clients due to the extra but necessary communication with the broker. This introduces 'path stretch' compared to the possible direct path between client and service instance.¶
- DDoS prevention would be limited to cases where the service instances are hidden behind the broker.¶
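The following minimal sketch (Python; the instance names and load figures are assumptions for illustration only) shows a broker dispatching requests purely on reported compute load, capturing the first limitation above: the network path between the client and the chosen instance never enters the decision, and every request pays the extra hop through the broker.¶

    import queue

    # Hypothetical compute-load reports received by the broker.
    instance_load = {"edge-1": 0.30, "edge-2": 0.70, "edge-3": 0.45}

    # Incoming client requests queued at the broker (the 'middleman' hop).
    requests = queue.Queue()
    for req_id in ("req-1", "req-2", "req-3"):
        requests.put(req_id)

    def dispatch() -> None:
        """Assign each queued request to the least-loaded instance. No
        network metric (latency, path cost) is consulted, so the instance
        that is 'closest' in network terms may well be bypassed."""
        while not requests.empty():
            req = requests.get()
            target = min(instance_load, key=instance_load.get)
            print(f"{req} -> {target}")

    if __name__ == "__main__":
        dispatch()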
3.5. Gap Analysis of Client Based Solution
A solution that leaves the collection of computing and network resource information, and the subsequent dispatching of service requests, entirely to the client itself may achieve the needed dynamism. However, it bears some drawbacks: e.g., the individual destination, i.e., the network identifier of a service instance, must be known to the client a priori for direct service dispatching. While this may be viable for certain applications, it does not generally scale to a large number of clients. Furthermore, there are several reasons why it is undesirable for clients to learn the identifiers of all available service instances in a service domain:¶
- Service providers may be reluctant to expose such 'valuable' information to clients.¶
- It may equally be undesirable for clients to learn all available network paths, whether obtained directly from operator exposure or indirectly through the clients' own measurements.¶
- Scalability becomes a concern if the number of service instances and network paths is very high.¶
3.6. Summary of Gap Analysis
3.6.1. Dynamicity of Relations
CATS is expected to be aware of the computing resource status of multiple edge sites, so as to provide the further opportunity of traffic steering based on specific routing decisions. The dynamicity of the relations among the multiple edge sites or service instances is therefore a basic attribute of a potential CATS system and its functions. Moreover, the degree of dynamicity may differ per use case; traffic steering, in particular, demands more frequent information collection and routing decisions.¶
The mapping from a service identifier to a specific service instance that may execute the service request for a client usually happens by resolving the service identifier into a specific IP address at which the service instance is reachable.¶
Application-layer solutions can be foreseen, using an application server to resolve binding updates. While the viability of these solutions is generally subject to the additional latency introduced by resolving the mapping via that application server, changing the mapping relation at higher frequencies, e.g., every few service requests, is seen as difficult to make practical.¶
Moreover, we can foresee scenarios in which such a relationship changes so frequently that it occurs at the level of each service request. One possible factor is frequently changing metrics in the decision-making process, e.g., the latency and load metrics reported by all relevant service instances. Further, client mobility creates natural, physical dynamics: a 'better' service instance may become available or, vice versa, the previous assignment of the client to a service instance may become less optimal, leading to reduced performance, e.g., rooted in increased latency.¶
Existing solutions exhibit limitations in providing dynamic 'instance affinity'. These limitations are inherently embedded in the solution design used for the mapping between a service identifier and the address of a candidate service instance, and they are particularly noticeable when relying on an indirection point in the form of a resolution or load-balancing server. These limitations may result in static 'instance stickiness' that spans many service requests or even lasts for the lifetime of a client session. This is normally undesirable from the perspective of a service provider aiming to achieve well-balanced request handling across many or all possible service instances.¶
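The tension between dynamic selection and instance affinity can be illustrated with the following sketch (Python; the transaction identifiers and instance names are illustrative assumptions): once a transaction's first flow has been bound to an instance, subsequent flows of the same transaction keep using that instance, even if a 'better' instance has become available in the meantime.¶

    # Affinity table: transaction identifier -> service instance chosen for it.
    affinity_table: dict[str, str] = {}

    def best_instance_now() -> str:
        """Placeholder for a dynamic, per-request selection (e.g., the joint
        compute/network scoring sketched earlier)."""
        return "edge-2"

    def steer(transaction_id: str) -> str:
        """Reuse the pinned instance for an ongoing transaction; otherwise
        select one dynamically and pin it."""
        if transaction_id in affinity_table:
            # Instance affinity: later flows stick to the earlier choice,
            # even if best_instance_now() would now return something else.
            return affinity_table[transaction_id]
        chosen = best_instance_now()
        affinity_table[transaction_id] = chosen
        return chosen

    if __name__ == "__main__":
        print(steer("txn-42"))  # first flow: dynamic choice, then pinned
        print(steer("txn-42"))  # later flow: same instance, by affinity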
3.6.2. Efficiency
Different use cases for the collected computing resource information place different demands on efficiency. If the computing resource information is used for service deployment or joint resource management, there is no critical latency requirement for receiving and refreshing the information. If the computing resource information is used for steering service traffic to different edge sites or service instances, real-time or near-real-time information is required, and the refresh frequency also needs to be high, depending on the applications' specific demands.¶
The use of external resolvers, such as application-layer repositories in general, also affects the efficiency of the overall service request. Extra signaling is required between a client and the resolver, possibly through application-layer solutions that result not only in more message exchanges but also in increased latency due to the additional resolutions involved. Further, accommodating instance affinity for a large number of short-lived client sessions will exacerbate this additional signaling and worsen latencies, thus impacting the overall efficiency of the service transactions.¶
Existing solutions may introduce additional latencies and inefficiencies in packet transmission due to the need for additional resolution steps or indirection points, and this can lead to accuracy problems when selecting the appropriate edge.¶
3.6.3. Complexity and Accuracy
As discussed in the previous subsection on efficiency, by the time external resolvers have succeeded in collecting the necessary information and processing it to select the edge node, the network and computing resource status may have changed already. Accordingly, any additional control decision on which service instance to choose for which incoming service request requires careful planning in order to address, at a minimum, the potential inefficiencies caused by extra latencies and path stretching. Additional control plane elements, such as brokers, are usually neither well nor optimally placed in relation to the data path that a service request will ultimately traverse.¶
Existing solutions require careful planning for the placement of necessary control plane functions in relation to the resulting data plane traffic to improve the accuracy; a problem often intractable in scenarios of varying service demands.¶
3.6.4. Metric Exposure and Use
Some systems may use the geographical location, as deduced from an IP prefix, to pick the closest edge. The issue is that different edge sites may not be far apart in some field deployments, which makes it hard to deduce geo-locations from IP addresses. Furthermore, the geo-location itself may not be the key distinguishing metric to consider, particularly since geographic co-location does not necessarily imply congruency of the various network topologies. Also, "geographically closer" does not exclude closer yet more heavily loaded nodes, possibly leading to worse performance for the end user.¶
Some solutions may also perform 'health checks' on an infrequent basis (intervals greater than one second) to reflect the service node status and switch over in service-degrading or failing situations. Health checks, however, inadequately reflect the overall computing status of a service instance; for example, the number of ongoing sessions is an insufficient indicator of load and may not reflect the fundamental, meaningful basis on which a suitable service instance should be selected. Infrequent checks would certainly be too coarse-grained to support high-accuracy applications, e.g., applications with mobility-induced dynamics such as the intelligent transportation scenario of Section 4.2 in [I-D.ietf-cats-usecases-requirements].¶
Existing solutions lack the necessary information to make the right decisions on the selection of a suitable service instance, due to limited semantics or due to information not being exposed across boundaries between, e.g., service and network providers.¶
3.6.5. Security
Resolution systems open up two dimensions of attack: attacking the mapping system itself, and attacking the service instance directly after it has been resolved. The latter is particularly critical for a service provider with a significant deployed service infrastructure. A resolved (global) IP address not only enables a (malicious) client to attack the corresponding service instance directly, but also offers the client the opportunity to infer, over time, information about the available service instances in the service infrastructure, which might nurture even wider, coordinated Denial-of-Service (DoS) attacks.¶
Existing solutions may expose both the control plane and the data plane to the possibility of distributed Denial-of-Service attacks on the resolution system as well as on service instances. Localizing the attack to the data plane ingress point would be desirable from the perspective of securing service request routing, and this is not achieved by existing solutions.¶
4. Security Considerations
Section 3.6 discusses some security considerations. Other security issues are also mentioned in [I-D.ietf-cats-usecases-requirements].¶
5. IANA Considerations
No IANA action is required so far.¶
6. Contributors
The following people have substantially contributed to this document:¶
Peter Willis, pjw7904@rjt.edu¶
Philip Eardley, philip.eardley@googlemail.com¶
Markus Amend, Deutsche Telekom, Markus.Amend@telekom.de¶
7. Informative References
- [RFC4786]
- Abley, J. and K. Lindqvist, "Operation of Anycast Services", BCP 126, RFC 4786, DOI 10.17487/RFC4786, <https://www.rfc-editor.org/info/rfc4786>.
- [RFC1035]
- Mockapetris, P., "Domain names - implementation and specification", STD 13, RFC 1035, DOI 10.17487/RFC1035, <https://www.rfc-editor.org/info/rfc1035>.
- [RFC2119]
- Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, <https://www.rfc-editor.org/info/rfc2119>.
- [RFC6762]
- Cheshire, S. and M. Krochmal, "Multicast DNS", RFC 6762, DOI 10.17487/RFC6762, <https://www.rfc-editor.org/info/rfc6762>.
- [RFC6763]
- Cheshire, S. and M. Krochmal, "DNS-Based Service Discovery", RFC 6763, DOI 10.17487/RFC6763, <https://www.rfc-editor.org/info/rfc6763>.
- [RFC7285]
- Alimi, R., Ed., Penno, R., Ed., Yang, Y., Ed., Kiesel, S., Previdi, S., Roome, W., Shalunov, S., and R. Woundy, "Application-Layer Traffic Optimization (ALTO) Protocol", RFC 7285, DOI 10.17487/RFC7285, <https://www.rfc-editor.org/info/rfc7285>.
- [RFC7336]
- Peterson, L., Davie, B., and R. van Brandenburg, Ed., "Framework for Content Distribution Network Interconnection (CDNI)", RFC 7336, DOI 10.17487/RFC7336, <https://www.rfc-editor.org/info/rfc7336>.
- [RFC7665]
- Halpern, J., Ed. and C. Pignataro, Ed., "Service Function Chaining (SFC) Architecture", RFC 7665, DOI 10.17487/RFC7665, <https://www.rfc-editor.org/info/rfc7665>.
- [RFC8174]
- Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, <https://www.rfc-editor.org/info/rfc8174>.
- [RFC8490]
- Bellis, R., Cheshire, S., Dickinson, J., Dickinson, S., Lemon, T., and T. Pusateri, "DNS Stateful Operations", RFC 8490, DOI 10.17487/RFC8490, <https://www.rfc-editor.org/info/rfc8490>.
- [RFC8765]
- Pusateri, T. and S. Cheshire, "DNS Push Notifications", RFC 8765, DOI 10.17487/RFC8765, <https://www.rfc-editor.org/info/rfc8765>.
- [I-D.ietf-cats-usecases-requirements]
- Yao, K., Trossen, D., Boucadair, M., Contreras, L. M., Shi, H., Li, Y., and S. Zhang, "Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements", Work in Progress, Internet-Draft, draft-ietf-cats-usecases-requirements-00, <https://datatracker.ietf.org/doc/html/draft-ietf-cats-usecases-requirements-00>.
- [I-D.ietf-teas-rfc3272bis]
- Farrel, A., "Overview and Principles of Internet Traffic Engineering", Work in Progress, Internet-Draft, draft-ietf-teas-rfc3272bis-27, <https://datatracker.ietf.org/doc/html/draft-ietf-teas-rfc3272bis-27>.
- [I-D.sarathchandra-coin-appcentres]
- Trossen, D., Sarathchandra, C., and M. Boniface, "In-Network Computing for App-Centric Micro-Services", Work in Progress, Internet-Draft, draft-sarathchandra-coin-appcentres-04, <https://datatracker.ietf.org/doc/html/draft-sarathchandra-coin-appcentres-04>.
- [I-D.contreras-alto-service-edge]
- Contreras, L. M., Randriamasy, S., Ros-Giralt, J., Perez, D. A. L., and C. E. Rothenberg, "Use of ALTO for Determining Service Edge", Work in Progress, Internet-Draft, draft-contreras-alto-service-edge-09, <https://datatracker.ietf.org/doc/html/draft-contreras-alto-service-edge-09>.
- [TR22.874]
- 3GPP, "Study on traffic characteristics and performance requirements for AI/ML model transfer in 5GS (Release 18)".
Acknowledgements
The authors would like to thank Adrian Farrel, Peng Liu, Yizhou Li, Luigi IANNONE, Kaibin Zhang, and Geng Liang for their valuable suggestions on this document.¶