DHA-RL: three-tier hybrid offloading network optimisation for the Internet of Things
Online publication date: Mon, 20-Jan-2025
by Sili He; Zhenjiang Zhang; Qing-An Zeng
International Journal of Mobile Network Design and Innovation (IJMNDI), Vol. 11, No. 2, 2024
Abstract: With the rise of 5G and the growing demand for computation-intensive applications, efficient offloading solutions are needed, especially in remote areas where cellular coverage is limited. This paper proposes a three-tier hybrid offloading framework involving local nodes (devices), edge nodes (satellites), and cloud nodes (ground stations). It introduces a novel RL-based offloading strategy, decodable hybrid actions reinforcement learning (DHARL), to optimise latency and energy consumption under limited local and satellite resources. Using a conditional variational autoencoder (CVAE), the strategy learns the dependencies within the hybrid action space. Constraints on the action space and supervision of representation shifts address issues such as inadequate sampling and representation variation. Extensive simulations show that DHARL outperforms existing methods in task latency, energy consumption, and system cost, demonstrating its potential for efficient computation offloading in Internet of Things (IoT) environments.
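To make the hybrid-action idea concrete, the sketch below shows one plausible way a conditional-VAE-style decoder could map a latent variable plus the observed state to a hybrid offloading action: a discrete offloading target (device, satellite, or ground station) and a continuous resource-allocation fraction conditioned on that target. All layer sizes, names, and the three-way target set are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: a conditional decoder producing a hybrid action.
# The architecture and dimensions are assumptions for demonstration purposes.
import torch
import torch.nn as nn


class HybridActionDecoder(nn.Module):
    def __init__(self, state_dim: int, latent_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Shared backbone conditioned on the state and a latent sample z.
        self.backbone = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden_dim),
            nn.ReLU(),
        )
        # Discrete head: offload to local device, satellite edge, or ground-station cloud.
        self.target_head = nn.Linear(hidden_dim, 3)
        # Continuous head: resource-allocation fraction, conditioned on the discrete
        # choice so the two halves of the hybrid action remain dependent.
        self.alloc_head = nn.Linear(hidden_dim + 3, 1)

    def forward(self, state: torch.Tensor, z: torch.Tensor):
        h = self.backbone(torch.cat([state, z], dim=-1))
        target_logits = self.target_head(h)
        target_probs = torch.softmax(target_logits, dim=-1)
        # Constrain the continuous action to [0, 1] so sampled allocations stay feasible.
        alloc = torch.sigmoid(self.alloc_head(torch.cat([h, target_probs], dim=-1)))
        return target_logits, alloc


# Usage example: decode one hybrid action from a random latent sample.
decoder = HybridActionDecoder(state_dim=8, latent_dim=4)
state = torch.randn(1, 8)
z = torch.randn(1, 4)
logits, alloc = decoder(state, z)
print(logits.shape, alloc.shape)  # torch.Size([1, 3]) torch.Size([1, 1])
```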