Forthcoming and Online First Articles

International Journal of Sensor Networks

International Journal of Sensor Networks (IJSNet)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

International Journal of Sensor Networks (22 papers in press)

Regular Issues

  • Optimizing Power Management in Wireless Sensor Networks Using Machine Learning: An Experimental Study on Energy Efficiency   Order a copy of this article
    by Mohammed Amine Zafrane, Ahmed Ramzi Houalef, Miloud Benchehima 
    Abstract: Wireless sensor networks (WSNs) have emerged as essential components across various fields. Comprising small, self-sustaining devices known as "Nodes," they play a critical role in data collection and analysis. However, ensuring optimal longevity without compromising data collection timeliness is a fundamental challenge. Regular data aggregation tasks, while essential, consume substantial energy resources. Furthermore, constraints in computation power, storage capacity, and energy supply pose significant design challenges within the Wireless Sensor Network domain. In pursuit of optimizing energy efficiency and extending the operational lifetime of nodes through artificial intelligence, we have developed a prototype for data collection to create a comprehensive dataset. Our approach leverages both current and precedent measurements, triggering data transmission only in the presence of significant changes. This intelligent strategy minimizes unnecessary communication and conserves energy resources. Based on the
    Keywords: Artificial intelligence; WSN; Power optimization; data acquisition.
    DOI: 10.1504/IJSNET.2024.10068162
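The change-triggered transmission idea described in this abstract can be illustrated with a minimal Python sketch; the relative threshold and the callback name are illustrative assumptions, not the authors' prototype parameters.

```python
# Minimal sketch of change-triggered transmission: a node sends a reading
# only when it differs from the last transmitted value by more than a
# threshold, otherwise it stays silent and saves radio energy.
# The 5% relative threshold is an illustrative assumption, not the authors' setting.

def make_transmitter(radio_send, rel_threshold=0.05):
    last_sent = {"value": None}

    def maybe_transmit(reading):
        prev = last_sent["value"]
        if prev is None or abs(reading - prev) > rel_threshold * max(abs(prev), 1e-9):
            radio_send(reading)          # significant change: spend energy on the radio
            last_sent["value"] = reading
            return True
        return False                     # negligible change: suppress transmission

    return maybe_transmit

# Usage: only readings that change by more than 5% are actually sent.
sent = []
tx = make_transmitter(sent.append)
for v in [20.0, 20.1, 22.5, 22.6, 30.0]:
    tx(v)
print(sent)  # [20.0, 22.5, 30.0]
```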
     
  • Advanced Air Quality Prediction Modelling using Intelligent Optimisation Algorithm in Urban Regions   Order a copy of this article
    by Wendi Tan, Zhisheng Li 
    Abstract: Ambient air contamination is a significant environmental challenge threatening human well-being and quality of life, especially in urban areas. Technical breakthroughs in artificial intelligence models offer more accurate air quality predictions by analysing significant data sources, including meteorological factors like humidity, wind speed, and pollution data. However, existing air quality prediction methods often fall short because they depend on statistical models that may not adequately capture the complexities of environmental data such as pollutants and meteorological factors. Thus, the research introduces a grey wolf optimised variational autoencoder to enhance air quality prediction by effectively capturing complex relationships in environmental data. The model learns probabilistic variational latent representations of historical air quality input data and prevents overfitting. The relevant features are selected using the grey wolf technique, identifying the appropriate variables to enhance the data quality. Additionally, it optimises critical hyperparameters like learning rates and greedy layer sizes, leading to better convergence during model training and improved performance in air quality index prediction. Experimental results demonstrate improved prediction accuracy, reduced error rate, and faster convergence.
    Keywords: Air Quality Index; Prediction Model; Optimization; Artificial Intelligence; Urban Region; Environmental factors.
    DOI: 10.1504/IJSNET.2025.10070252
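As general background on how a grey wolf optimiser explores a hyperparameter space such as learning rate and layer size, a minimal sketch follows; the toy objective and search bounds are placeholders rather than the paper's variational autoencoder.

```python
import numpy as np

# Minimal grey wolf optimiser (GWO) sketch for tuning two hyperparameters,
# e.g. learning rate and hidden-layer size. The objective below is a toy
# placeholder standing in for the validation loss of the actual model.

def toy_objective(x):
    lr, layers = x
    return (np.log10(lr) + 3.0) ** 2 + (layers - 64.0) ** 2 / 1000.0

def gwo(objective, bounds, n_wolves=10, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]   # three best wolves
        a = 2.0 - 2.0 * t / n_iters                            # linearly decreasing coefficient
        for i in range(n_wolves):
            new_pos = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0              # average of leader-guided moves
            wolves[i] = np.clip(new_pos, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)]

best = gwo(toy_objective, bounds=[(1e-5, 1e-1), (8, 256)])
print("best learning rate, layer size:", best)
```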
     
  • PULP-Lite: a More Light-weighted Multi-core Framework for IoT Applications   Order a copy of this article
    by Yong Yang, Yuyu Lian, Yanxiang Zhu, Shun Li, Wenhua Gu, Ming Ling 
    Abstract: The increasing volume of data generated by real-time applications and sensors places significant performance demands on processors. Single-core processors are constrained by the inherent limitations of their architecture in terms of parallel processing capability, making it challenging to handle real-time applications. To address this, we propose parallel ultra low power lite (PULP-Lite), a tightly coupled multi-core on-chip system to efficiently handle near-sensor data analysis in Internet of Things endpoint devices. PULP-Lite uses low-latency interconnections and an innovative address-mapping mechanism to connect central processing units (CPUs), ensuring high-performance processing while maintaining flexibility through lightweight multi-core programming. We evaluate a field-programmable gate array (FPGA) implementation of PULP-Lite with 8 cores, showing a speedup of 6
    Keywords: multi-core optimization; parallel processing; computer systems organization.
    DOI: 10.1504/IJSNET.2025.10070743
     
  • ToI-Model: Trustworthy Objects Identification Model for Social-Internet-of-Things (S-IoT)   Order a copy of this article
    by Rahul Gaikwad, Venkatesh R 
    Abstract: The social-internet-of-things (SIoT) paradigm integrates social concepts into IoT systems. Identifying trustworthy SIoT objects, as well as managing trust, is essential for promoting cooperation among them. Current state-of-the-art methods inadequately quantify and evaluate the trustworthiness of SIoT objects. This paper comprehensively considers specific features of SIoT objects and integrates them with the theory of social trust. The proposed Trustworthy-objects identification model (ToI-Model) captures comprehensive trust, proficiency, readiness, recommendation, reputation, honesty and excellence metrics for identifying trustworthy objects in SIoT. The service requester (SR) uses the trust scores of service providers (SPs) before initiating service delegation. A series of experiments are conducted to evaluate the proposed trust model's effectiveness in the successful completion of services, convergence, accuracy, and resilience against deceitful activities. Experimental results show that the trust model identifies trustworthy service providers with 19.89% higher trust scores and 27.61% lower latency than state-of-the-art models.
    Keywords: Trustworthy; Proficiency; Readiness; Recommendation; Reputation; Honesty; Excellence.
    DOI: 10.1504/IJSNET.2025.10070868
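One way to turn the metrics named in this abstract into a single trust score is a weighted sum, as in the sketch below; the equal weights and example scores are illustrative assumptions, not the ToI-Model's calibration.

```python
# Illustrative aggregation of the trust metrics listed in the abstract into a
# single trust score in [0, 1]. The equal weights are an assumption for the
# sketch, not the weighting used by the ToI-Model.

METRICS = ("proficiency", "readiness", "recommendation",
           "reputation", "honesty", "excellence")

def trust_score(obj, weights=None):
    weights = weights or {m: 1.0 / len(METRICS) for m in METRICS}
    return sum(weights[m] * obj[m] for m in METRICS)

candidates = {
    "SP-1": {"proficiency": 0.9, "readiness": 0.8, "recommendation": 0.7,
             "reputation": 0.9, "honesty": 0.95, "excellence": 0.85},
    "SP-2": {"proficiency": 0.6, "readiness": 0.9, "recommendation": 0.5,
             "reputation": 0.4, "honesty": 0.7, "excellence": 0.6},
}

# The service requester delegates to the provider with the highest score.
best = max(candidates, key=lambda sp: trust_score(candidates[sp]))
print(best, round(trust_score(candidates[best]), 3))   # SP-1 0.85
```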
     
  • A Decoupling Algorithm for Three-Dimensional Electric Field Sensors Based on Extreme Learning Machines Optimised by Bat Algorithm   Order a copy of this article
    by Wei Zhao, Zhizhong Li 
    Abstract: When measuring the spatial electric field intensity with a three-dimensional electric field sensor, the coupling effect between electric field components caused by field distortion introduces a coupling error into the measured field components. Aiming at the insufficient decoupling accuracy of the traditional extreme learning machine method, an optimised extreme learning machine method combining the maximum inter-class variance method with the bat algorithm is proposed to decouple the three-dimensional electric field sensor. The bat algorithm optimises the initial weights and thresholds of the extreme learning machine, and the maximum inter-class variance method is used to analyse the inherent coupling characteristics of the sensor. The coupling effect is classified according to the varying coupling contribution degree, and the traditional extreme learning machine decoupling network is extended. Calibration experiments and decoupling calculations show that the extreme learning machine optimised by the bat algorithm and maximum inter-class variance can effectively reduce the error between the electric field components obtained by the model and the actual electric field components, suppress the interference generated by the inter-dimensional coupling effect of the sensor, and further improve the measurement accuracy of the electric field intensity.
    Keywords: bat algorithm; decoupling; extreme learning machine; maximum inter-class variance; three-dimensional electric field sensor.
    DOI: 10.1504/IJSNET.2025.10072003
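For readers unfamiliar with extreme learning machines, a minimal numpy sketch of the basic ELM regression step (random hidden weights, least-squares output weights) is shown below; the bat-algorithm tuning of those weights and the inter-class variance analysis from the paper are not reproduced.

```python
import numpy as np

# Basic extreme learning machine (ELM) regressor: hidden-layer weights are
# random and fixed, only the output weights are solved by least squares.
# In the paper those random weights are further tuned by the bat algorithm;
# that step is omitted in this sketch.

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # input weights
        self.b = self.rng.normal(size=self.n_hidden)                # hidden biases
        H = np.tanh(X @ self.W + self.b)                            # hidden activations
        self.beta = np.linalg.pinv(H) @ Y                           # output weights (Moore-Penrose)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy decoupling example: map raw (coupled) 3-axis readings to true components.
raw = np.random.default_rng(1).normal(size=(200, 3))
true = raw @ np.array([[1.0, 0.1, 0.0], [0.05, 1.0, 0.1], [0.0, 0.05, 1.0]])
model = ELM().fit(raw, true)
print(np.abs(model.predict(raw) - true).mean())
```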
     
  • A New Hyperchaotic Image Encryption Scheme Based on DNA Computing and SHA-512   Order a copy of this article
    by Shuliang Sun, Xiping Wang, Zihua Zhao 
    Abstract: Smartphones and digital cameras are becoming increasingly widespread, and massive numbers of images are generated every day. These images are easily transmitted over the insecure Internet, so encryption techniques are usually adopted to protect sensitive images during communication. A new cryptosystem is constructed from a six-dimensional (6D) hyperchaotic system and deoxyribonucleic acid (DNA) techniques. Firstly, the hash value of the original image is calculated, which keeps the encrypted result closely connected with the original image; the initial conditions of the cryptosystem are produced from the generated hash value and the secret key. Secondly, each pixel is divided into four parts, forming a large matrix, and scrambling is performed on the new image. Subsequently, DNA coding, modern DNA complementary rules, DNA computing, and DNA decoding are utilised. A diffusion operation is also executed to improve security, and the ciphered image is finally obtained. The experimental results reveal that the designed algorithm has some advantages: it can resist common attacks and is more secure than some existing methods.
    Keywords: hyperchaotic system; DNA computing; SHA-512.
    DOI: 10.1504/IJSNET.2025.10072159
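The step of tying the cryptosystem's initial conditions to the plain image via SHA-512 can be sketched as follows; how the digest is partitioned and combined with the secret key here is an illustrative assumption, not necessarily the authors' exact construction.

```python
import hashlib

# Sketch: derive initial conditions for a 6D chaotic system from the SHA-512
# digest of the plain image combined with a secret key. Splitting the 64-byte
# digest into six chunks and mapping each to [0, 1) is an illustrative choice.

def initial_conditions(image_bytes: bytes, secret_key: bytes, dims: int = 6):
    digest = hashlib.sha512(secret_key + image_bytes).digest()  # 64 bytes
    chunk = len(digest) // dims
    conds = []
    for i in range(dims):
        block = digest[i * chunk:(i + 1) * chunk]
        conds.append(int.from_bytes(block, "big") / 2 ** (8 * chunk))  # value in [0, 1)
    return conds

# Any single-bit change in the image or key yields completely different conditions.
print(initial_conditions(b"\x00" * 1024, b"my-secret-key"))
```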
     
  • A Novel Offloading Algorithm for Cost-Sensitive Tasks in VEC Networks using Deep Reinforcement Learning   Order a copy of this article
    by Benhong Zhang, Hao Xu, Xiang Bi, Qiwei Hu 
    Abstract: Vehicular edge computing (VEC) provides a fundamental condition for the fast and complete realisation of complex intelligent functions in autonomous driving vehicles. However, different tasks have different time-cost requirements; for example, tasks related to safe driving have strict real-time requirements, which brings challenges to task offloading and resource allocation in VEC. This paper first defines different utility evaluation functions that measure the delay requirements of different tasks. Then, an optimisation problem is formulated by considering the task types, the dynamic generation of tasks and the price cost that measures the willingness of the service providers. Finally, the task offloading and resource allocation process is modelled as a Markov decision process (MDP) and a D3QN-based algorithm is designed to solve the problem. Simulation results show that the proposed algorithm achieves better utility and task success rate than other algorithms.
    Keywords: Vehicle Edge Computing; Task Offloading; Resource Allocation; Markov Decision Process; D3QN.
    DOI: 10.1504/IJSNET.2025.10072414
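A minimal PyTorch sketch of the dueling double DQN (D3QN) machinery referenced in this abstract is given below; the state and action dimensions are placeholders rather than the paper's VEC formulation.

```python
import torch
import torch.nn as nn

# Dueling Q-network head as used in D3QN: a shared trunk feeds separate value
# and advantage streams, recombined as Q = V + (A - mean(A)). State/action
# sizes are placeholders for the offloading problem.

class DuelingQNet(nn.Module):
    def __init__(self, state_dim=16, n_actions=8, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)          # state value V(s)
        self.adv = nn.Linear(hidden, n_actions)    # advantages A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=-1, keepdim=True)

# Double-DQN target: the online net picks the action, the target net evaluates it.
def td_target(reward, next_state, done, online, target, gamma=0.99):
    with torch.no_grad():
        best_a = online(next_state).argmax(dim=-1, keepdim=True)
        q_next = target(next_state).gather(-1, best_a).squeeze(-1)
        return reward + gamma * (1.0 - done) * q_next

online, tgt = DuelingQNet(), DuelingQNet()
s = torch.randn(4, 16)
print(online(s).shape, td_target(torch.zeros(4), s, torch.zeros(4), online, tgt).shape)
```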
     
  • A Path Optimization Method Based on Dynamic Clustering Strategy and Nondominated Sorting Genetic Algorithm II in Wireless Sensor Networks   Order a copy of this article
    by Liying Zhao, Jin Zhu, Chao Liu, Yu Wang, Sinan Shi, Chao Lu, Qi Luan 
    Abstract: Traditional wireless sensor network data transmission path selection often considers only single-objective optimization, such as energy consumption or transmission delay, which leads to problems such as unbalanced node load and insufficient path reliability. Therefore, this study proposed a wireless sensor network routing optimization method that integrates a dynamic clustering strategy with the nondominated sorting genetic algorithm II. First, the network nodes of the wireless sensor network were divided into clusters of varying scales using the MiniBatchKMeans method. Then, the residual energy of nodes, their distance to the cluster center, and historical load were comprehensively evaluated to elect a cluster head for each cluster. Subsequently, the nondominated sorting genetic algorithm II was employed to generate Pareto-optimal paths, with the objective functions encompassing minimization of energy consumption, reduction of transmission delay, and maximization of signal strength (Received Signal Strength Indication).
    Keywords: Clustering strategy; nondominated sorting genetic algorithm II; path optimization; wireless sensor network.

  • A Path Optimisation Method Based on Dynamic Clustering Strategy and Nondominated Sorting Genetic Algorithm II in Wireless Sensor Networks   Order a copy of this article
    by Qi Luan 
    Abstract: Traditional wireless sensor network data transmission path selection often focuses on single-objective optimisation, resulting in unbalanced node load and insufficient path reliability. This study proposes a routing optimisation method that combines a dynamic clustering strategy with non-dominated sorting genetic algorithm II. MiniBatchKMeans is used to divide network nodes into clusters of different scales, and cluster heads are elected by comprehensively evaluating node residual energy, distance to cluster centre and historical load. The algorithm generates Pareto optimal paths with objectives of minimising energy consumption and transmission delay and maximising signal strength. Simulation results show that the proposed method extends network lifetime by 12.2%, increases total data throughput by 21.1%, and improves load balancing performance.
    Keywords: Clustering strategy; nondominated sorting genetic algorithm II; path optimization; wireless sensor network.
    DOI: 10.1504/IJSNET.2025.10072468
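The clustering and cluster-head election stage described in the two entries above can be sketched with scikit-learn as follows; the scoring weights for residual energy, distance to the cluster centre, and historical load are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Sketch of the first two stages: cluster sensor nodes with MiniBatchKMeans,
# then elect per-cluster heads by scoring residual energy, distance to the
# cluster centre, and historical load. The 0.5/0.3/0.2 weights are
# illustrative, not the paper's calibration.

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(200, 2))     # node coordinates (m)
energy = rng.uniform(0.2, 1.0, size=200)           # residual energy (normalised)
load = rng.uniform(0.0, 1.0, size=200)             # historical load (normalised)

km = MiniBatchKMeans(n_clusters=8, n_init=10, random_state=0).fit(positions)
labels, centres = km.labels_, km.cluster_centers_

heads = {}
for c in range(8):
    idx = np.where(labels == c)[0]
    dist = np.linalg.norm(positions[idx] - centres[c], axis=1)
    dist = dist / (dist.max() + 1e-9)               # normalise distances within the cluster
    score = 0.5 * energy[idx] - 0.3 * dist - 0.2 * load[idx]
    heads[c] = int(idx[np.argmax(score)])           # highest score becomes cluster head
print(heads)
```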
     
  • Security Analysis and Improvement of Key Exchange Protocol in LoRaWAN Network   Order a copy of this article
    by Arman Amjadian, Hamid Meghdadi, Ali Shahzadi 
    Abstract: While using very low-power and inexpensive transmitters, LoRaWAN networks exhibit very high sensitivity and excellent reliability over very long ranges. Although these networks offer better security than other low-power wide-area communication protocols, some aspects of their security can be greatly improved. In particular, the key exchange protocol has been considered one of the weakest links in the security of LoRaWAN networks. This issue was addressed in the second edition of the LoRaWAN protocol, but the improvement was achieved at the cost of much more complicated algorithms, and some of the protocol's security issues, such as vulnerability to node capture attacks and the lack of forward secrecy, remained. In this paper, we demonstrate the limitations of the new LoRaWAN key exchange protocols using the Scyther and ProVerif security analysis tools. We then propose a novel scheme that, while requiring much less complex computations, offers more robust security for LoRaWAN networks. We use the aforementioned tools to verify that the proposed method considerably improves the resilience of LoRaWAN against known attacks.
    Keywords: IoT; LoRaWAN; Network security; OTAA; Scyther; ProVerif.
    DOI: 10.1504/IJSNET.2025.10072911
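To illustrate the forward-secrecy property that improved key exchange schemes aim for, the sketch below derives a session key from an ephemeral X25519 exchange using the `cryptography` package; this is a generic textbook construction, not the scheme proposed in the paper.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Generic forward-secret session-key agreement: each side contributes an
# ephemeral X25519 key pair, so compromising long-term secrets later does not
# reveal past session keys. This only illustrates the property discussed in
# the abstract and is not the paper's LoRaWAN-specific protocol.

device_eph = X25519PrivateKey.generate()     # end device's ephemeral key
server_eph = X25519PrivateKey.generate()     # join server's ephemeral key

shared_dev = device_eph.exchange(server_eph.public_key())
shared_srv = server_eph.exchange(device_eph.public_key())
assert shared_dev == shared_srv

session_key = HKDF(algorithm=hashes.SHA256(), length=16,
                   salt=None, info=b"lorawan-session").derive(shared_dev)
print(session_key.hex())
```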
     
  • Nonlinear Least Squares-Based Localisation Method for WSN-based Smart Agriculture Systems using Range Measurements   Order a copy of this article
    by Emad Hassan 
    Abstract: Accurate source localisation in Wireless Sensor Networks (WSN) is critical for applications requiring precise target tracking, environmental monitoring, and security surveillance. Traditional localisation techniques suffer from multipath interference, leading to degraded accuracy. This paper proposes an enhanced localisation algorithm leveraging multipath exploitation to improve position estimation. The proposed approach utilises time difference of arrival (TDOA) and direction-of-arrival (DOA) measurements, incorporating a hybrid scheme that can mitigate noise and enhance accuracy. A space division multiple access (SDMA) spread spectrum receiver is employed to extract DOA estimates, while TDOA information is utilised to differentiate between line-of-sight (LOS) and non-line-of-sight (NLOS) components. By associating multipath signals with corresponding reflectors, the scheme significantly improves localisation performance, even in environments where LOS paths are obstructed. Simulation results demonstrate that the proposed scheme significantly improves localisation accuracy compared to conventional schemes. The root mean square error (RMSE) is reduced by 30%, and the overall localisation success rate is increased by 25%, showcasing the robustness of the proposed scheme. These findings suggest that integrating multi-path components constructively rather than treating them as interference can enhance WSN localisation performance, making it suitable for real-world deployment.
    Keywords: WSNs; source localization; DOA; NLOS; TDOA; smart irrigation systems.
    DOI: 10.1504/IJSNET.2025.10072995
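A minimal nonlinear least-squares solver for range-difference (TDOA) measurements, of the kind referred to in the title, can be written with SciPy as below; the anchor layout and noise level are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Nonlinear least-squares source localisation from TDOA measurements:
# each measurement is the range difference between an anchor and a reference
# anchor. Anchor positions and noise level are illustrative.

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
source = np.array([37.0, 62.0])

ranges = np.linalg.norm(anchors - source, axis=1)
tdoa = (ranges[1:] - ranges[0]) + np.random.default_rng(0).normal(0, 0.1, 3)  # range differences w.r.t. anchor 0

def residuals(p):
    d = np.linalg.norm(anchors - p, axis=1)
    return (d[1:] - d[0]) - tdoa

est = least_squares(residuals, x0=np.array([50.0, 50.0])).x
print("estimated position:", est, "error:", np.linalg.norm(est - source))
```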
     
  • A Blockchain-Based Privacy Protection Model for a Spatial Crowdsourcing Platform   Order a copy of this article
    by Amal Albilali, Maysoon Abulkhair, Manal Bayousef 
    Abstract: Spatial crowdsourcing (SC) involves collecting geographic information from a crowd of people using mobile devices, raising critical privacy issues regarding participants' location data. In this article, we propose an efficient privacy protection task assignment model (ePPTA) as a novel method that combines centralised and decentralised platforms to achieve privacy protection for worker location, worker identity, and task location during the task assignment (TA) process. Through a centralised SC platform, we achieve privacy protection using elliptic curve cryptography (ECC), ensuring low user computational and communication overheads. The task assignment process and its data integrity are managed via blockchain technology. We evaluate our model on a real dataset, comparing it with state-of-the-art methods. The ePPTA model demonstrates low user computational and communication overheads and theoretically prevents task-tracking and eavesdropping attacks from external entities. Performance evaluation results confirm that the proposed model's efficiency is reasonable, providing robust privacy protection for SC.
    Keywords: Crowdsourcing; Privacy; Location Privacy; Spatial Crowdsourcing (SC); Blockchain.

  • Adaptive offloading in multi-access edge networks via hierarchical federated learning and real-time system adaptation   Order a copy of this article
    by Jie Wang, Qiao Liang, Amin Mohajer 
    Abstract: Achieving ultra-reliable real-time digital twin (DT) adaptation in mobile edge environments requires intelligent orchestration of computation and communication under user heterogeneity and dynamic mobility. This paper introduces GADENet, a graph attention-enhanced digital twin evolution network that fuses graph neural modelling, multi-agent actor-critic learning, and hierarchical federated personalisation to enable seamless digital representations of user equipment (UE) in distributed edge networks. At its core, GADENet employs a GAT-assisted multi-agent deep deterministic policy gradient (MADDPG) framework to jointly learn optimal DT migration and personalisation strategies across edge servers, guided by real-time traffic topologies and resource interdependencies. Each DT model is modularised into generalisable and adaptive subspaces, trained collaboratively through a three-tier edge-cloud federated loop and refined using localised attention-based updates. For efficient mobility handling, we propose a parameter-sliced DT relay protocol that selectively migrates the minimal personalisation subset across servers, leveraging learned action-value functions to minimise response latency. Extensive simulations on CIFAR-based datasets and synthetic edge workloads demonstrate that GADENet achieves up to 30% reduction in interaction latency and significantly boosts modelling fidelity versus strong federated and DRL-based baselines. This work offers a principled blueprint for intelligent DT deployment under the constraints of 6G and next-gen IoT fabrics.
    Keywords: digital twin modelling; graph attention networks; GATs; multi-agent deep reinforcement learning; federated learning; intelligent network orchestration.
    DOI: 10.1504/IJSNET.2025.10071733
     
  • A self-distillation approach for enhancing intelligence tutoring system math solving based on large language models   Order a copy of this article
    by Guanlin Chen, Yuchen Jin, Wenyong Weng, Tian Li, Jianshao Wu 
    Abstract: Intelligent tutoring systems have demonstrated strong capabilities in supporting students' learning, particularly in solving predefined problems. However, they have a key limitation: intelligent tutoring systems are designed to solve only the problems specifically programmed into the system. This paper introduces a novel approach that integrates large pre-trained models into local intelligent tutoring systems to address this challenge. Specifically, we propose a method where a local large pre-trained model generates high-accuracy logical reasoning through the chain of thought and enhanced computational capabilities via the program of thought. By combining these two outputs, we generate high-quality synthetic data to train the local model, improving its ability to solve a broader range of mathematical problems, including those it has not previously encountered. Experimental results demonstrate that our approach significantly enhances both reasoning precision and computational efficiency, ultimately improving the overall performance of local intelligent tutoring systems in supporting students with mathematical problem-solving.
    Keywords: intelligent tutoring systems; ITS; large pre-trained models; chain of thought; program of thought; mathematical problem-solving; fine-tuning; sensor.
    DOI: 10.1504/IJSNET.2025.10072011
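A rough sketch of the data-generation loop implied by this abstract follows: a chain-of-thought (CoT) answer is cross-checked against the result of executing a program-of-thought (PoT) solution, and only agreeing pairs are kept as synthetic training data. The generate_* functions are toy stand-ins for calls to the local large pre-trained model.

```python
# Sketch of building synthetic training data by cross-checking chain-of-thought
# (CoT) answers against executed program-of-thought (PoT) code. The generate_*
# functions below are toy stand-ins for calls to the local large model.

def generate_cot(problem):
    a, b = problem          # toy "problem": a pair of numbers to add
    return f"Adding {a} and {b} gives {a + b}.", float(a + b)

def generate_pot(problem):
    a, b = problem
    return f"answer = {a} + {b}"   # model-generated program as a string

def build_synthetic_dataset(problems):
    dataset = []
    for problem in problems:
        reasoning, cot_answer = generate_cot(problem)
        scope = {}
        exec(generate_pot(problem), scope)              # execute the PoT program
        if abs(scope["answer"] - cot_answer) < 1e-6:    # keep only agreeing samples
            dataset.append({"problem": problem, "reasoning": reasoning,
                            "answer": scope["answer"]})
    return dataset

print(len(build_synthetic_dataset([(2, 3), (10, 7)])))  # 2
```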
     
  • Cybersecurity, throughput and delay analysis using multiple reconfigurable intelligent surfaces   Order a copy of this article
    by Faisal Alanazi 
    Abstract: The integration of multiple reconfigurable intelligent surfaces (MRIS) into modern wireless communication systems has emerged as a promising solution to enhance throughput and minimise delay. MRIS are artificial surfaces designed to dynamically manipulate the propagation of electromagnetic waves to optimise signal transmission. This paper presents an in-depth analysis of the throughput and delay performance in systems utilising MRIS. We explore the underlying mechanisms by which MRIS can improve channel conditions and reduce interference, thereby improving overall system efficiency. Through analytical models and simulations, we investigate the trade-offs between throughput enhancement and delay reduction in various MRIS-assisted communication scenarios. The results demonstrate that MRIS can effectively optimise the network's performance by providing adaptive control over the signal environment, leading to significant improvements in throughput and latency. We also study the physical layer security using multiple RIS.
    Keywords: cybersecurity; multiple RIS; delay analysis; throughput analysis; outage probability; 6G.
    DOI: 10.1504/IJSNET.2025.10072131
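For context, the standard single-RIS, single-user model behind throughput analyses of this kind is summarised below; this is textbook material rather than the paper's multi-RIS formulation. With N reflecting elements, base-station-to-RIS channels h_n, RIS-to-user channels g_n, transmit power P, noise power \sigma^2 and adjustable phases \phi_n,

\[
y = \Big(\sum_{n=1}^{N} g_n e^{j\phi_n} h_n\Big)x + w, \qquad
\phi_n^{\star} = -\arg(g_n h_n), \qquad
\mathrm{SNR}^{\star} = \frac{P\big(\sum_{n=1}^{N} |g_n||h_n|\big)^{2}}{\sigma^{2}},
\]

so the received SNR grows roughly as N^2 under optimal phase alignment, which is the mechanism behind the throughput gains and delay reductions discussed in the abstract.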
     
  • Advanced GNSS positioning for low-cost Android smartphones using RTS assisted extended Kalman filter   Order a copy of this article
    by Yuhan Zhou, Xuanwen Wang, Shuaiyong Zheng, Xiaoqin Jin, Shuailong Chen, Jixi Liu, Jixi Yang 
    Abstract: Global navigation satellite system (GNSS) is pervasively employed in smartphone location services. However, the positioning performance of low-cost Android smartphones is often limited by the sub-optimal quality of their GNSS observation data, impeding their ability to fulfil user demands. To address this need, this paper proposes an improved positioning algorithm using a Rauch-Tung-Striebel-assisted extended Kalman filter (RTS-EKF). This algorithm initially enhances the raw data quality through rigorous gross error detection and elimination, coupled with Doppler smoothing. Subsequently, the RTS-EKF algorithm is employed to estimate and smooth localisation results, ultimately enhancing positioning accuracy. To validate the effectiveness of this algorithm, a dynamic experiment was conducted, yielding results that demonstrate horizontal positioning accuracy better than 1.9 metres and vertical positioning accuracy better than 7 metres. Compared with traditional positioning algorithms, the RTS-EKF exhibits at least a 15% improvement in localisation accuracy and superior performance, thereby satisfying the high-precision requirements of low-cost smartphone users.
    Keywords: Android smartphone; extended Kalman filter; EKF; Rauch-Tung-Striebel; RTS; global navigation satellite system; GNSS; navigation; positioning.
    DOI: 10.1504/IJSNET.2025.10072253
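For reference, the fixed-interval RTS smoothing pass that runs backwards over the forward EKF output can be written in standard notation as below (the paper's GNSS-specific state model is not reproduced):

\[
C_k = P_{k|k} F_k^{\top} P_{k+1|k}^{-1}, \qquad
\hat{x}_{k|N} = \hat{x}_{k|k} + C_k\big(\hat{x}_{k+1|N} - \hat{x}_{k+1|k}\big), \qquad
P_{k|N} = P_{k|k} + C_k\big(P_{k+1|N} - P_{k+1|k}\big)C_k^{\top},
\]

evaluated for k = N-1 down to 0, where \hat{x}_{k|k}, P_{k|k} are the filtered estimates, \hat{x}_{k+1|k}, P_{k+1|k} the one-step predictions, and F_k the state-transition (Jacobian) matrix.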
     
  • LSB-DSN: sensor-assisted deep learning for robust English speech recognition   Order a copy of this article
    by Aili Tang, Dezhi Zeng 
    Abstract: In modern communication, English speech recognition systems are essential for improving personalised user experiences and global communication. Recognition systems use sensor devices and deep learning techniques to ensure robustness across diverse environments. The efficiency of traditional systems is reduced by accents, varying pronunciations, and limited contextual consideration. The paper introduces the lion-swarm boosted deep sesame networks, a new speech recognition framework that fuses sensor technologies and deep learning to improve accuracy and robustness. This model combines acoustic signals with sensor-based inputs from accelerometers, gyroscopes, and electromyography devices to capture the delicate speech-related modulations for better recognition across diversified environments. The hierarchical attention mechanism and lion swarm optimisation enable optimal feature selection, reducing the recognition error and computational overhead. The experiments show that it achieves a 9.5% word error rate in clean conditions, a low cross-entropy loss of 0.65, and 100 ms of processing latency - far superior to baseline models for noisy environments. The proposed framework can adapt to different accents and pronunciations, making it a strong solution for real-world applications in speech recognition.
    Keywords: English speech recognition; signal modulation; sensor devices; subnetworks; lion swarm optimisation; LSO; similarity measures; deep sesame networks.
    DOI: 10.1504/IJSNET.2025.10070349
     
  • Reinforcement learning approach for quality of coverage-driven mobile charger optimal scheduling in wireless rechargeable sensor networks   Order a copy of this article
    by Haoran Wang, Jinglin Li, Tianhang Chen, Wendong Xiao 
    Abstract: In wireless rechargeable sensor networks (WRSNs), mobile charger (MC) scheduling is one critical issue for improving network utility and resource utilisation efficiency. Traditional charging scheduling approaches usually focus on maximising charging utility while neglecting the importance of network service performance, especially quality of coverage (QoC). In practice, QoC directly affects the integrity of network information acquisition and its effectiveness and reliability. Therefore, the QoC-driven MC optimal scheduling (QCOS) problem is studied, and then a novel reinforcement learning-based mobile charger scheduling algorithm (RL-MCS) is proposed to maximise the network QoC and achieve stable network service performance. Meanwhile, a broadcast charging mechanism is also introduced to improve the overall charging efficiency and reduce the node charging time. In RL-MCS, the real-time energy demand of nodes and the network monitoring performance are considered, which aims to achieve the equilibrium between node survivability and network QoC. In addition, an experience extraction mechanism is designed, which enables MC to make smarter and more prospective charging decisions based on the current network state. Extensive simulations show that RL-MCS significantly outperforms other approaches in improving network QoC and ensuring node survival rate.
    Keywords: wireless rechargeable sensor networks; mobile charger scheduling; reinforcement learning; quality of coverage; broadcast charging.
    DOI: 10.1504/IJSNET.2025.10072595
     
  • Adaptive energy-efficient task offloading and resource management in UAV-assisted mobile edge networks using dynamic DRL   Order a copy of this article
    by Yunfeng Zhou, Hong Cao, Jiateng Duan, Haohua Qing, Amin Mohajer 
    Abstract: Next-generation aerial edge networks must support delay-critical and computation-intensive services in highly dynamic wireless environments. This paper introduces a distributed control architecture that integrates flight-aware workload distribution, predictive mobility mapping, and adaptive edge resource slicing. The model combines spatiotemporal learning with an enhanced policy gradient mechanism for joint optimisation of service placement, bandwidth provisioning, and UAV trajectory scheduling. By integrating predictive modelling of user mobility via recurrent neural structures and embedding temporal attention, the system anticipates regional demand fluctuations and proactively reconfigures aerial coverage. A dual-critic actor-learner structure ensures stable policy evolution under hybrid discrete-continuous action spaces. Extensive evaluations across diverse network densities and traffic dynamics reveal that the proposed scheme improves task finalisation rates by over 30%, sustains autonomous operation via harvested energy, and consistently outperforms existing baselines in spectral efficiency and decision latency. These findings position the framework as a robust foundation for real-time orchestration in scalable, mission-adaptive aerial edge infrastructures.
    Keywords: UAV-enabled edge intelligence; predictive mobility modelling; spatiotemporal resource orchestration; policy-driven task offloading; dynamic edge slicing.
    DOI: 10.1504/IJSNET.2025.10071611
     
  • Sleep behaviour monitoring based on the probability density model   Order a copy of this article
    by Ying Liu, Zhiyang Cao, Mengyuan Hu 
    Abstract: Behavioural changes in the human body during sleep are an important reflection of sleep quality. Most existing methods for recognising sleep behaviour are based on the magnitude or phase of channel state information (CSI), but such features often suffer from low differentiation and poor stability. To address these problems, a sleep behaviour monitoring system based on the CSI amplitude probability density model is proposed on the theoretical basis of wireless channel fading characteristics. First, CSI amplitudes from different antennas and carriers are collected as the base sensing signals using the spatial diversity and frequency diversity techniques of commercial Wi-Fi devices. Next, the raw data are preprocessed and probability densities of the different subcarrier amplitudes are obtained to segment the action using Gaussian fitting. Finally, a support vector machine classifier is constructed to classify and recognise different sleep postures, and a back-propagation neural network is also used to recognise related actions. The results show that the recognition accuracy of this method is higher than that of the traditional method in the case of small samples, which proves the robustness of the feature model.
    Keywords: channel state information; probability density model; sleep posture recognition; action recognition.
    DOI: 10.1504/IJSNET.2025.10072311
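A compact sketch of the feature pipeline outlined in this abstract (Gaussian fit of subcarrier amplitude distributions followed by an SVM classifier) is given below; the synthetic amplitudes stand in for real CSI captures from commercial Wi-Fi hardware.

```python
import numpy as np
from scipy.stats import norm
from sklearn.svm import SVC

# Sketch: fit a Gaussian to each CSI amplitude window and use (mean, std) as
# features for an SVM posture classifier. Synthetic amplitudes stand in for
# real CSI captures.

rng = np.random.default_rng(0)

def features(window):
    mu, sigma = norm.fit(window)       # Gaussian fit of the amplitude distribution
    return [mu, sigma]

# Two toy "postures" with different amplitude statistics.
X = [features(rng.normal(10, 1.0, 500)) for _ in range(40)] + \
    [features(rng.normal(12, 2.5, 500)) for _ in range(40)]
y = [0] * 40 + [1] * 40

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```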
     
  • ESFM-Net: low-dose CT artefact suppression method based on edge-guided spatial-frequency mutual network   Order a copy of this article
    by Xueying Cui, Weisen Song, Xiaoling Han, Lizhong Jin, Hong Shangguan, Xiong Zhang 
    Abstract: Deep learning has shown superior performance in low-dose CT artefact suppression. However, existing deep networks have a limited perception of edge and texture information, and transformer-based approaches have high computational complexity in calculating self-attention. To alleviate these issues, an artefact suppression method based on an edge-guided spatial-frequency mutual network (ESFM-Net) is proposed, which can enhance edge and texture information with low computational complexity. Specifically, an edge decoder is designed to supplement multi-scale edge details for reconstructing high-frequency information within a single-encoder dual-decoder architecture. To obtain rich edge and texture information, a spatial-frequency feature extraction module is developed to extract local spatial and global frequency features with low computational complexity. Considering the complementarity of information, a spatial-frequency mutual module is further constructed to enhance the feature representation capability by adaptive fusion. High- and low-frequency features at different scales are also gradually fused through a multi-scale fusion module to obtain the final denoised images. The comparative experiments and ablation results show the superior performance of our method in edge and texture preservation and artefact suppression.
    Keywords: low dose CT; artefact suppression; edge guidance; spatial-frequency mutual fusion.
    DOI: 10.1504/IJSNET.2025.10070503
     
  • Named entity recognition for function point descriptions in software cost estimation processes   Order a copy of this article
    by Boyan Zhao, Xiaofei Zou, Shijie Xin, Di Liu 
    Abstract: With the advancement of software technology, the industry's informatisation level has improved, but the growing size and complexity of software have raised costs. Consequently, assessing software project costs early is crucial. Function point analysis, the primary method for cost evaluation, quantifies functional elements like external data inputs and outputs to measure software size from the user's perspective. However, it heavily relies on manual effort, especially in extracting function point descriptions, leading to errors and inefficiency. This paper proposes an entity recognition model to address these challenges, integrating a BiLSTM-CRF framework with CNN layers and hierarchical learning. A domain-specific dictionary is developed to enhance the model's performance. Experimental results show that the proposed method outperforms BERT by improving accuracy by 0.42% and recall by 1.04%. The method achieves 95.37% accuracy in entity recognition for a sensor data system, demonstrating its effectiveness and reliability in software cost evaluation.
    Keywords: software cost evaluation; named entity recognition; convolutional neural network; CNN; bidirectional long short-term memory network; hierarchical learning.
    DOI: 10.1504/IJSNET.2025.10070067
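As a point of reference for the BiLSTM-based tagger described in this abstract, a minimal PyTorch sketch (without the CRF layer and CNN character features used in the paper) is shown below; vocabulary and tag sizes are placeholders.

```python
import torch
import torch.nn as nn

# Minimal BiLSTM sequence tagger for function-point entity recognition.
# The CRF decoding layer and CNN character features used in the paper are
# omitted; vocabulary/tag sizes are placeholders.

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, n_tags=9, emb=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)     # per-token tag scores

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                           # (batch, seq_len, n_tags)

model = BiLSTMTagger()
tokens = torch.randint(1, 5000, (2, 20))             # two sentences of 20 tokens
print(model(tokens).argmax(-1).shape)                 # predicted tag ids: (2, 20)
```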