Computing in the Network
charter-irtf-coinrg-01-00

The information below is for a proposed recharter. The current approved
charter is version 01.
Document: Charter Computing in the Network Research Group RG (coinrg)
Title: Computing in the Network
Last updated: 2019-08-21
State: Approved
RG State: Active
Send notices to: (None)


Proposed Charter:

The proposed Computing in the Network Research Group (COINRG) will explore
existing research and foster investigation of “Compute In the Network” and its
resulting impacts on the data plane. The goal is to investigate how to harness
and benefit from this emerging disruption to the Internet architecture to
improve network and application performance as well as user experience. COIN
will encourage scrutiny of research solutions that re-imagine the network as a
place where routing, compute, and storage blend.

COIN will address both controlled environments such as data center networks
(DCNs) and the ongoing shift from the data center (DC) toward edge computing,
and will debate whether this shift can be viewed as a cloud continuum. COIN
will specifically focus on the evolution necessary for networking to move
beyond packet interception as the basis of network computation. While existing
DCs employ rudimentary languages for programming switches, richer
programmability is required to support emerging workloads such as edge network
analytics, machine learning, and deep learning. Such applications not only
need access to more general-purpose languages, but also need to operate in
conjunction with local and remote caches, dynamic control points, and data
stewardship.
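
To illustrate the kind of in-network computation such workloads imply, the
following minimal Python sketch (hypothetical names, no particular switch
architecture assumed) models a switch-resident aggregation step for
distributed machine learning: worker gradient vectors are summed at the
switch so that only one aggregate is forwarded upstream.

   # Illustrative sketch only: a toy model of in-network aggregation.
   # A "switch" sums the gradient vectors of its attached workers and
   # forwards a single aggregate upstream, reducing the traffic that
   # reaches the parameter server. All names are hypothetical.

   from typing import List

   def aggregate_at_switch(worker_updates: List[List[float]]) -> List[float]:
       """Element-wise sum of worker gradient vectors, computed in-network."""
       aggregate = [0.0] * len(worker_updates[0])
       for update in worker_updates:
           for i, value in enumerate(update):
               aggregate[i] += value
       return aggregate

   if __name__ == "__main__":
       # Three workers push small gradient fragments to their top-of-rack
       # switch; the switch forwards one aggregated vector.
       updates = [[0.1, -0.2, 0.3],
                  [0.0,  0.4, -0.1],
                  [0.2, -0.1,  0.0]]
       print(aggregate_at_switch(updates))   # ~[0.3, 0.1, 0.2]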

Orchestration of end-to-end resources between the DC network and the edge is
another key topic to address in COIN. In particular, the RG will examine
orchestration with increasingly heterogeneous distributed components and draw
inspiration from current approaches (e.g., Kubernetes, Swarm) that are likely
to need updating, extending, and/or simplifying in multi-domain network
environments.
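
As a concrete, deliberately simplified illustration of the placement decisions
such orchestration involves, the Python sketch below assumes a single latency
budget and a single capacity metric; the names and the selection rule are
hypothetical and stand in for the much richer models used by real
orchestrators.

   # Illustrative placement sketch under simplifying assumptions; real
   # orchestrators (e.g., Kubernetes) use far richer scheduling models.
   # All names are hypothetical.

   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass
   class Site:
       name: str            # e.g., "dc-east", "edge-cell-42"
       rtt_ms: float        # measured round-trip time to the client
       free_cpu: float      # spare CPU capacity, in cores

   def place_function(sites: List[Site],
                      latency_budget_ms: float,
                      cpu_needed: float) -> Optional[Site]:
       """Pick the feasible site with the most spare CPU, or None."""
       feasible = [s for s in sites
                   if s.rtt_ms <= latency_budget_ms and s.free_cpu >= cpu_needed]
       return max(feasible, key=lambda s: s.free_cpu, default=None)

   if __name__ == "__main__":
       sites = [Site("dc-east", rtt_ms=40.0, free_cpu=64.0),
                Site("edge-cell-42", rtt_ms=5.0, free_cpu=2.0)]
       # A 10 ms latency budget rules out the distant DC despite its capacity.
       chosen = place_function(sites, latency_budget_ms=10.0, cpu_needed=1.0)
       print(chosen.name if chosen else "no feasible site")   # edge-cell-42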

Use-case-driven requirements, gathered from next-generation applications and
services (e.g., video streaming, immersive AR/VR, autonomous/connected
vehicles, industrial IoT), may lead to new architectures that employ new ways
to distribute functionality and leverage co-design of layered approaches. COIN
will pay close attention to work in DINRG and ICNRG in
particular, and will work carefully to keep a focused scope despite the breadth
of initial discussions. COIN will interact closely with the IETF transport area
to avoid proposals that would increase the friction between the end-to-end
privacy and security of new transport protocols and the need for in-network
computations. The SPUD efforts in the IETF have already addressed some of
these challenges, and COIN intends to continue the dialog around evolving the
end-to-end semantics of IETF transport protocols. The PRG will be aware that
work targeting the space of multi-party computation may impact, or be impacted
by, cryptographic and security properties. COIN will foster discussion on what
could (or could not) be exposed across network layers, including parameters
that might enable QoS/QoE, orchestration dynamics, and seamless mobility.

In order to achieve its goals, COIN will expose and advance research on
distributed, decentralized networks and the resources required by DC, edge,
and ambient computing. COIN will investigate the implications of increased
heterogeneity and the limitations that arise if/when DC and edge computing
employ a common architecture, programmable networks and APIs, and
interchangeable functionality in the Internet. A working assumption will be
that, to improve Internet performance, network, compute, and storage resources
must work jointly in close partnership throughout the network while servicing
data-intensive distributed applications.

SCOPE (to be refined by the PRG meetings)

(1) Research on solutions that use programmable network devices, languages,
and abstractions to implement network functions for improved Internet
performance.

(2) Research on use-case-driven requirements analysis: the cloud continuum
from the data center to edge networks and beyond, including in-network
computing using programmable switches. Identify potential benefits from
in-network functionality, including but not limited to compute, caching,
management, and control.

(3) Research on novel architectures, data-plane abstractions, and new
network/transport protocol designs to efficiently federate decentralized
computing resources across the infrastructure, regardless of where in the
network the compute is placed (the DC, the core, the edge, or even end-user
devices).

(4) Research on potential new privacy and security mechanisms required or
enabled by in-network compute.