Title: A data value-driven collaborative data collection method in complex multi-constraint environments
Authors: LinLiang Zhang; LianShan Yan; ZhiSheng Liu; Shuo Li; RuiFang Du; ZhiGuo Hu
Addresses:
LinLiang Zhang: Shanxi Intelligent Transportation Institute Co., Ltd., Taiyuan, 030036, China; School of Information Science and Technology, Southwest Jiaotong University, Chengdu, 611756, China
LianShan Yan: School of Information Science and Technology, Southwest Jiaotong University, Chengdu, 611756, China
ZhiSheng Liu: Shanxi Transportation Holdings Group Co., Ltd., Taiyuan, 030006, China
Shuo Li: School of Information Science and Technology, Southwest Jiaotong University, Chengdu, 611756, China
RuiFang Du: Institute of Big Data Science and Industry, Shanxi University, Taiyuan, 030006, China; Shanxi Professional College of Finance, Taiyuan, 030000, China
ZhiGuo Hu: Institute of Big Data Science and Industry, Shanxi University, Taiyuan, 030006, China
Abstract: Data collection is a foundational task in mobile crowd sensing. However, existing data collection methods prioritise quantity while neglecting heterogeneity, cooperation, energy efficiency, and collision avoidance, which leads to low multi-agent efficiency in complex scenarios. To address this issue, this paper integrates multi-agent reinforcement learning and deep learning to propose the CS_MCE method. Applied to unmanned aerial vehicle (UAV) collaborative data collection scenarios, CS_MCE utilises deep neural networks to represent vast state-action spaces and to provide intelligent decision-making capabilities. In experimental environments with different data values, comparisons of CS_MCE against the MADDPG and IL-DDPG algorithms in terms of reward values, data quality, energy efficiency, and number of collisions showed that CS_MCE collected 5-6 times higher data quality and improved energy efficiency by more than 60%, demonstrating the efficiency and stability of the CS_MCE method.
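The abstract evaluates CS_MCE against MADDPG and IL-DDPG on four metrics: reward values, data quality, energy efficiency, and number of collisions. As a minimal sketch of how such a multi-objective per-step reward might be composed for each UAV agent (the function name, weights, and penalty value below are hypothetical illustrations, not the paper's actual reward design):

```python
def uav_reward(data_value, energy_used, collided,
               w_data=1.0, w_energy=0.1, collision_penalty=10.0):
    """Hypothetical per-step reward for one UAV agent.

    Rewards the value of collected data, penalises energy
    consumption, and applies a large fixed penalty on collision,
    mirroring the metrics reported in the abstract.
    """
    reward = w_data * data_value - w_energy * energy_used
    if collided:
        reward -= collision_penalty
    return reward

# A UAV collects data of value 3.2 using 5.0 units of energy:
r = uav_reward(3.2, 5.0, collided=False)  # 1.0*3.2 - 0.1*5.0 = 2.7
```

Balancing the data-value term against the energy term is what lets a learner trade collection quality against battery use, while the collision penalty shapes avoidance behaviour.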
Keywords: MCS; mobile crowd-sensing; data collection; heterogeneous data; unmanned vehicles; deep reinforcement learning.
International Journal of Data Science, 2025 Vol.10 No.1, pp.27 - 52
Received: 27 Feb 2024
Accepted: 04 Aug 2024
Published online: 04 Mar 2025