Considerations of network/system for AI services
draft-hong-nmrg-ai-deploy-07

Document Type Expired Internet-Draft (individual)
Expired & archived
Authors Yong-Geun Hong, Joo-Sang Youn, Seung-Woo Hong, Ho-Sun Yoon, Pedro Martinez-Julia
Last updated 2025-04-24 (Latest revision 2024-10-21)
RFC stream (None)
Intended RFC status (None)
Formats
Stream Stream state (No stream defined)
Consensus boilerplate Unknown
RFC Editor Note (None)
IESG IESG state Expired
Telechat date (None)
Responsible AD (None)
Send notices to (None)

This Internet-Draft is no longer active. A copy of the expired Internet-Draft is available in these formats:

Abstract

As AI technology has matured and begun to be applied in various fields, AI has moved from running only on very high-performance servers to also running on small hardware, including microcontrollers, low-performance CPUs, and AI chipsets. In this document, we consider how to configure the network and the system, in terms of AI inference service, to provide AI services in a distributed manner. We also describe the points to be considered in an environment where a client connects to a cloud server and an edge device and requests an AI service. Some use cases of deploying network-based AI services, such as self-driving vehicles and network digital twins, are described.
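The client-side decision the abstract alludes to (requesting inference from a cloud server or an edge device) can be sketched as a simple node-selection step. This is a minimal illustrative sketch, not from the draft: the node names, the latency-budget parameter, and the `select_node` helper are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class InferenceNode:
    # Hypothetical view of one place the AI model may be served from.
    name: str
    rtt_ms: float    # measured round-trip time to the node
    has_model: bool  # whether the requested model is deployed there

def select_node(nodes, latency_budget_ms):
    """Prefer the lowest-latency node that hosts the model within the budget."""
    candidates = [n for n in nodes if n.has_model and n.rtt_ms <= latency_budget_ms]
    if not candidates:
        return None  # no node can serve the request within the budget
    return min(candidates, key=lambda n: n.rtt_ms)

edge = InferenceNode("edge-device", rtt_ms=5.0, has_model=True)
cloud = InferenceNode("cloud-server", rtt_ms=40.0, has_model=True)

# A latency-sensitive service (e.g. self-driving) would pick the edge node;
# if the edge lacks the model, the request falls back to the cloud.
print(select_node([edge, cloud], latency_budget_ms=10.0).name)   # edge-device
edge.has_model = False
print(select_node([edge, cloud], latency_budget_ms=100.0).name)  # cloud-server
```

The sketch only captures the selection criterion; a real deployment would also weigh the considerations the draft discusses, such as model size and the compute capacity of each node.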

Authors

Yong-Geun Hong
Joo-Sang Youn
Seung-Woo Hong
Ho-Sun Yoon
Pedro Martinez-Julia

(Note: The e-mail addresses provided for the authors of this Internet-Draft may no longer be valid.)