Title: Adaptive data-sharing methods for multi-agent systems using deep reinforcement learning

Authors: Tomohiro Hayashida; Ichiro Nishizaki; Shinya Sekizaki; Qi Liu

Addresses: Hiroshima University, 1-4-1, Kagamiyama, Higashihiroshima, Hiroshima, 739-8527, Japan (all authors)

Abstract: In a single-agent system (SAS), the interaction between an agent and the environment can generally be described by a Markov decision process. In a multi-agent system (MAS), however, the interactions are difficult to define as a Markov decision process, which makes it difficult for the agents to learn appropriate actions. To avoid this difficulty, Lowe et al. constructed data-sharing methods among agents based on the actor-critic algorithm. This paper improves on such data-sharing by limiting what is shared, rather than sharing all the empirical data possessed by the other agents. It proposes three types of training data-sharing methods and conducts simulation experiments using multiple maze environments of differing complexity to demonstrate their effectiveness. The experimental results show that the proposed methods outperform the existing methods. In addition, the paper indicates which method is appropriate according to the characteristics of each target problem.
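The core idea in the abstract, sharing only a limited subset of each agent's experience rather than its whole buffer, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the `Agent` class, the priority values, and the `share_top_k`/`exchange` names are all hypothetical, and a real actor-critic implementation would attach this selection step to the replay buffers of the learners.

```python
import random


class Agent:
    """Toy agent holding a local buffer of (priority, transition) pairs.

    Hypothetical illustration of limited data-sharing: instead of handing
    the whole buffer to the other agents, each agent shares only its
    k highest-priority transitions.
    """

    def __init__(self, name):
        self.name = name
        self.buffer = []  # list of (priority, transition)

    def store(self, transition, priority):
        """Record one transition with a scalar priority (e.g., TD error)."""
        self.buffer.append((priority, transition))

    def share_top_k(self, k):
        """Return only the k highest-priority transitions (the limited share)."""
        ranked = sorted(self.buffer, key=lambda pair: pair[0], reverse=True)
        return [transition for _, transition in ranked[:k]]


def exchange(agents, k):
    """Each agent receives the limited shared data of every other agent."""
    received = {agent.name: [] for agent in agents}
    for sender in agents:
        shared = sender.share_top_k(k)
        for receiver in agents:
            if receiver is not sender:
                received[receiver.name].extend(shared)
    return received


# Example: with k=1, each agent passes along only its single best transition.
a, b = Agent("a"), Agent("b")
a.store(("s0", "up", 1.0), priority=0.9)
a.store(("s1", "down", 0.0), priority=0.1)
b.store(("s2", "left", 0.5), priority=0.5)
incoming = exchange([a, b], k=1)
```

The priority used for ranking is left abstract here; any relevance measure (TD error, recency, reward) could play that role, which is where the paper's three sharing variants would differ.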

Keywords: deep reinforcement learning; DRL; multi-agent system; MAS; data-sharing.

DOI: 10.1504/IJCISTUDIES.2022.129015

International Journal of Computational Intelligence Studies, 2022 Vol.11 No.3/4, pp.176 - 199

Received: 29 Mar 2022
Accepted: 10 Jun 2022

Published online: 14 Feb 2023
