Intelligent Network Management using Reinforcement Learning
draft-kim-nmrg-rl-03

Document type: Expired Internet-Draft (individual)
Last updated: 2019-01-03 (latest revision 2018-07-02)
Stream: (None)
Intended RFC status: (None)
Stream state: (No stream defined)
Consensus boilerplate: Unknown
RFC Editor note: (None)
IESG state: Expired
Telechat date: (None)
Responsible AD: (None)
Send notices to: (None)

This Internet-Draft is no longer active. A copy of the expired Internet-Draft can be found at
https://www.ietf.org/archive/id/draft-kim-nmrg-rl-03.txt

Abstract

This document describes an intelligent network management system that autonomously manages and monitors a communication network using machine learning techniques. Reinforcement learning (RL) is a machine learning technique that can provide autonomous management through multi-agent path planning over a communication network. In an intelligent distributed multi-agent system, a central node, called the global environment, not only manages the workflow of all agents in a hybrid peer-to-peer networking architecture but also transfers and shares information among the distributed nodes. Each agent in a distributed node receives a cumulative reward for every action it takes, and over the learning process it optimizes its knowledge according to a policy that is learned. The optimized, trained knowledge incorporates a large amount of state information gathered through control actions over the network. The reward returned by the global environment is reflected in the next control action, so that network management in the distributed nodes proceeds autonomously. The reinforcement learning process has been developed and extended to Deep Reinforcement Learning (DRL), with model-driven or data-driven technical approaches to the learning process. DRL has been widely applied to networking because it can operate in practical networking areas despite dynamic and heterogeneous environment disturbances, so that an effective strategy can be learned intelligently.
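As a minimal illustration of the learning loop the abstract describes (an agent receiving a cumulative reward for each action and refining a learned policy), the following sketch applies tabular Q-learning to path planning over a small network topology. The topology, reward values, and hyperparameters are illustrative assumptions and are not taken from the draft.

```python
import random

# Hypothetical 5-node topology as an adjacency list (assumption, not from the draft).
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
GOAL = "E"

# Q-table: Q[(node, next_hop)] estimates the long-run value of taking that hop.
Q = {(n, m): 0.0 for n in links for m in links[n]}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # illustrative learning hyperparameters

random.seed(0)
for episode in range(500):
    node = "A"
    while node != GOAL:
        # Epsilon-greedy action selection over the node's outgoing links.
        if random.random() < epsilon:
            nxt = random.choice(links[node])
        else:
            nxt = max(links[node], key=lambda m: Q[(node, m)])
        # Reward signal: -1 per hop taken, +10 for reaching the goal node.
        reward = 10.0 if nxt == GOAL else -1.0
        future = 0.0 if nxt == GOAL else max(Q[(nxt, m)] for m in links[nxt])
        # Standard Q-learning update toward reward + discounted future value.
        Q[(node, nxt)] += alpha * (reward + gamma * future - Q[(node, nxt)])
        node = nxt

# Extract the greedy path the agent has learned from A to the goal.
path, node = ["A"], "A"
while node != GOAL:
    node = max(links[node], key=lambda m: Q[(node, m)])
    path.append(node)
print(path)
```

After training, the greedy policy follows one of the two shortest three-hop routes from A to E; the per-hop penalty is what drives the agent toward shorter paths, mirroring how a cumulative reward shapes the next control action in the system described above.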

Authors

Min-Suk Kim (mskim16@etri.re.kr)
Yong-Geun Hong (YGHONG@ETRI.RE.KR)
Youn-Hee Han (yhhan@koreatech.ac.kr)
Tae-Jin Ahn (taejin.ahn@kt.com)
Kwi-Hoon Kim (kwihooi@etri.re.kr)

(Note: The e-mail addresses provided for the authors of this Internet-Draft may no longer be valid.)