Forthcoming Articles

International Journal of Computational Science and Engineering (IJCSE)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published, and may not appear here in their final order of publication until they are assigned to issues. The content therefore conforms to our standards, but the presentation (e.g. typesetting and proofreading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Online First articles are also listed here. They are fully citable, complete with a DOI, and can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked as Open Access are Online First articles. They are freely available and openly accessible to all without any restriction except those stated in their respective CC licences.

International Journal of Computational Science and Engineering (11 papers in press)

Regular Issues

  • Self-supervised learning with split batch repetition strategy for long-tail recognition
    by Zhangze Liao, Liyan Ma, Xiangfeng Luo, Shaorong Xie 
    Abstract: Deep neural networks perform poorly on balanced test data when the training data follow a long-tailed distribution. Existing works improve long-tail recognition performance by changing the model training strategy, expanding the data, or optimising the model structure. However, they tend to use supervised approaches when training the model representations, which makes it difficult for the model to learn the features of the tail classes. In this paper, we use self-supervised representation learning (SSRL) to enhance the model's representations and design a three-branch network to merge SSRL with decoupled learning. Each branch adopts a different learning goal to enable the model to learn balanced image features from the long-tailed data. In addition, we propose a split batch repetition strategy for long-tailed datasets to further improve the model. Our method outperforms existing comparable methods on the Imbalance CIFAR-10, Imbalance CIFAR-100, and ImageNet-LT datasets, and the ablation experiments show that it performs better on more imbalanced datasets. All experiments demonstrate the effectiveness of incorporating self-supervised representation learning and the split batch repetition strategy.
    Keywords: long-tail recognition; self-supervised learning; decoupled learning; image classification; deep learning; neural network; computer vision.
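
    The split batch repetition strategy itself is defined in the paper; as a rough illustration of the general idea it builds on (re-sampling tail classes more often within each batch), a hypothetical Python sketch follows. The function name and sampling scheme are assumptions made for illustration, not the authors' implementation.

    # Hypothetical class-balanced batch construction for long-tailed data;
    # not the authors' split batch repetition strategy.
    import numpy as np

    def balanced_batches(labels, batch_size, n_batches, rng=None):
        """Yield index batches in which every class is sampled equally often."""
        rng = rng or np.random.default_rng(0)
        classes = np.unique(labels)
        per_class = {c: np.flatnonzero(labels == c) for c in classes}
        k = max(1, batch_size // len(classes))          # samples drawn per class
        for _ in range(n_batches):
            batch = []
            for c in classes:
                idx = per_class[c]
                # tail classes hold fewer than k samples, so they are repeated
                batch.extend(rng.choice(idx, size=k, replace=len(idx) < k))
            yield np.array(batch[:batch_size])

    # Example: three head classes with 100 samples each, one tail class with 5
    labels = np.array([0] * 100 + [1] * 100 + [2] * 100 + [3] * 5)
    for b in balanced_batches(labels, batch_size=32, n_batches=2):
        print(np.bincount(labels[b], minlength=4))      # roughly equal class counts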

  • Prediction model for recruitment of railway bureaus and enrolment of railway schools based on deep learning
    by Haijun Wang, Wei He, Junlun Sun 
    Abstract: With urbanisation accelerating, the demand for railway transportation is increasing, making it essential to plan recruitment for railway bureaus and adjust enrolment at railway schools. This study aims to accurately predict recruitment needs using historical data. We applied deep learning models, including the back propagation (BP) neural network, long short-term memory (LSTM), and LSTM-Attention, to forecast recruitment numbers for eight positions across 18 railway bureaus in 2025, yielding MAE values of 100000, 0.16, and 0.13, respectively. We also used linear regression, ridge regression, LASSO regression, and random forests to predict the number of remaining graduates in eight major railway programmes for 2025, with most models showing MSE values between 0 and 4. Finally, we established upper and lower limits for vocational student enrolment quotas in 2025 by applying factors of 80% and 75% to the predicted recruitment numbers. These findings provide valuable insights for recruitment and enrolment planning, enhancing the application of deep learning in railway recruitment forecasting.
    Keywords: long short-term memory; LSTM; attention mechanism; ridge regression; LASSO regression; random forest; prediction model.
    DOI: 10.1504/IJCSE.2025.10070809
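
    A minimal sketch of an LSTM-with-attention regressor of the general kind compared in the study is given below; the layer sizes, attention form, and synthetic inputs are illustrative assumptions rather than the authors' model or data.

    # Minimal LSTM + attention regressor sketch (PyTorch); sizes are assumptions.
    import torch
    import torch.nn as nn

    class LSTMAttention(nn.Module):
        def __init__(self, n_features, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.score = nn.Linear(hidden, 1)   # attention score per time step
            self.head = nn.Linear(hidden, 1)    # predicted recruitment count

        def forward(self, x):                   # x: (batch, time, n_features)
            out, _ = self.lstm(x)               # (batch, time, hidden)
            weights = torch.softmax(self.score(out), dim=1)
            context = (weights * out).sum(dim=1)            # weighted sum over time
            return self.head(context).squeeze(-1)

    # Example: predict next-year recruitment from 10 years of 4 yearly features
    model = LSTMAttention(n_features=4)
    history = torch.randn(18, 10, 4)            # 18 bureaus, 10 years, 4 features
    print(model(history).shape)                 # torch.Size([18])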
     
  • FPrune: a parameter pruning algorithm based on federated deep classification model
    by Xinjing Li, Zheng Huo, Teng Wang 
    Abstract: Federated learning is a distributed machine learning framework that enables multiple participants to train models collaboratively without sharing raw data. However, significant data transmission is required for parameter communication. As deep neural network models grow in size, deploying federated learning in complex network environments results in substantially increased communication costs. To address this challenge, we propose a pruning algorithm for deep federated text classification models, called FPrune. This algorithm evaluates the importance of locally trained models during the federated learning training stage by calculating the importance of each filter. Filters with lower importance are pruned. Additionally, we introduce a bidirectional pruning strategy that prunes filters on both the client and server sides. Experimental results demonstrate that the FPrune/25% and FPrune/50% algorithms reduce the communication cost by 70.22% and 42.03%, respectively, compared to FedAvg. Furthermore, the model’s performance loss is limited to approximately 1.34%, demonstrating that the FPrune algorithm can effectively reduce communication costs while maintaining minimal performance degradation.
    Keywords: federated learning; deep classification; model pruning; TextCNN.
    DOI: 10.1504/IJCSE.2025.10070810
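
    FPrune's importance measure and its bidirectional client/server pruning are specified in the paper; the sketch below only illustrates the generic step of ranking convolution filters by an importance score (here a simple L1 norm, an assumption) and keeping the most important fraction.

    # Generic magnitude-based filter pruning sketch; the L1-norm score is an
    # assumption, not FPrune's importance criterion.
    import torch
    import torch.nn as nn

    def prune_filters(conv: nn.Conv1d, keep_ratio: float) -> nn.Conv1d:
        """Return a smaller Conv1d keeping the filters with the largest L1 norm."""
        importance = conv.weight.detach().abs().sum(dim=(1, 2))  # one score per filter
        n_keep = max(1, int(conv.out_channels * keep_ratio))
        keep = torch.topk(importance, n_keep).indices
        pruned = nn.Conv1d(conv.in_channels, n_keep, conv.kernel_size[0])
        with torch.no_grad():
            pruned.weight.copy_(conv.weight[keep])
            pruned.bias.copy_(conv.bias[keep])
        return pruned

    conv = nn.Conv1d(in_channels=128, out_channels=100, kernel_size=3)
    smaller = prune_filters(conv, keep_ratio=0.5)   # e.g. an FPrune/50%-style budget
    print(smaller.weight.shape)                     # torch.Size([50, 128, 3])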
     
  • Enhancing fairness in deep learning: key tasks, measurement methods, and experimental validation
    by Xiaoqian Liu, Weiyu Shi 
    Abstract: Deep learning is an important field in machine learning research. It has powerful feature extraction capabilities and superior performance in numerous applications, including computer vision, natural language processing, and speech recognition. However, unfairness in deep learning models has increasingly harmed people's interests. Therefore, designing methods that effectively enhance fairness has become a major trend in the development of deep learning. This work reviews key tasks and fairness measurement methods in deep learning. In addition, we conduct experiments on typical fair deep learning datasets to implement individual fairness. The experimental results show that a balance is achieved between the accuracy and fairness of classification tasks.
    Keywords: deep learning; algorithmic bias; individual fairness.
    DOI: 10.1504/IJCSE.2025.10071366
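
    One widely used way to quantify individual fairness is a consistency score: similar individuals (nearest neighbours in feature space) should receive similar predictions. The sketch below computes that standard metric; it is not necessarily the measurement method adopted in the paper.

    # Consistency metric for individual fairness; data and k are illustrative.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def consistency(X, y_pred, k=5):
        """1 - mean |prediction - mean prediction of the k nearest neighbours|."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)                    # idx[:, 0] is the point itself
        neighbour_pred = y_pred[idx[:, 1:]].mean(axis=1)
        return 1.0 - np.abs(y_pred - neighbour_pred).mean()

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                    # individuals' features
    y_pred = rng.integers(0, 2, size=200).astype(float)   # binary classifier output
    print(round(consistency(X, y_pred), 3))          # near 0.5 for random predictions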
     
  • Temporal similarity-constraint graph networks for stock prediction with stock relations
    by Jincheng Hu, Yu Zhang 
    Abstract: Stock prediction aims to enhance investment decisions by forecasting future stock trends, traditionally using time-series data. While deep learning has advanced time-series modeling, most existing methods treat stocks as independent entities, overlooking the rich relationships between them. Additionally, conventional approaches frame stock prediction as a regression problem focused on price prediction, which does not align directly with investment goals. To address these issues, we propose Temporal Similarity-constraint Graph Networks (TSCGN), a novel framework that incorporates stock relations and selects stocks with the highest return ratio. TSCGN embeds sequential stock data into features and constructs a stock knowledge graph to capture interactions between stocks. By integrating temporal similarity constraints, TSCGN enhances prediction accuracy and robustness. Experiments on real-world datasets (NASDAQ and NYSE) demonstrate that TSCGN outperforms state-of-the-art models in prediction accuracy and investment returns, making it a valuable tool for financial decision-making.
    Keywords: graph networks; similarity constraint; stock prediction; stock relation.
    DOI: 10.1504/IJCSE.2025.10072064
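
    TSCGN's architecture and temporal similarity constraint are defined in the paper; the sketch below only illustrates, under assumed inputs, the generic idea of mixing each stock's temporal embedding with those of its related stocks through one graph-propagation step.

    # One generic graph-propagation step over a stock relation graph; all inputs
    # are synthetic assumptions, not the TSCGN model.
    import numpy as np

    def propagate(H, A, W):
        """One graph layer: row-normalised adjacency times embeddings times weights."""
        A_hat = A + np.eye(A.shape[0])                   # add self-loops
        D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)   # row normalisation
        return np.tanh((D_inv * A_hat) @ H @ W)

    rng = np.random.default_rng(0)
    n_stocks, d = 5, 8
    H = rng.normal(size=(n_stocks, d))      # temporal embeddings (e.g. from an LSTM)
    A = (rng.random((n_stocks, n_stocks)) > 0.6).astype(float)   # relation graph
    A = np.maximum(A, A.T)                  # make relations symmetric
    W = rng.normal(size=(d, d)) / np.sqrt(d)
    print(propagate(H, A, W).shape)         # (5, 8): updated stock representations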
     
  • QGA-optimised BL xLSTM MLP model for portfolio
    by Meng Li, Zhihui Song, Jiaxu Feng 
    Abstract: The Black-Litterman (BL) model integrates market conditions and investor judgments. However, existing research on generating expert views focuses either on regressing returns against external variables or on modelling return series as time series, without integrating both approaches. Furthermore, traditional BL portfolio optimisation neglects transaction costs and fails to optimise hyperparameters, limiting its adaptability to varying market conditions. To address these issues, we propose a QGA-optimised BL_xLSTM_MLP model that combines external-variable regression (via MLP) and time-series modelling (via xLSTM), integrating temporal dependencies and macroeconomic features into the expert views while optimising hyperparameters and transaction costs using a quantum genetic algorithm (QGA). The QGA adopts the sum of the Sharpe ratio and the information ratio (accounting for transaction costs) as the fitness function, effectively addressing the traditional BL model's neglect of transaction costs. Finally, experiments on the US 30-industry portfolios demonstrate that our method achieves state-of-the-art performance.
    Keywords: Black-Litterman model; quantum genetic algorithm; xLSTM model; asset allocation; portfolio.
    DOI: 10.1504/IJCSE.2025.10072092
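
    The abstract states that the QGA fitness is the sum of the Sharpe ratio and the information ratio, accounting for transaction costs. A minimal sketch of such a fitness function follows; the proportional cost model and the synthetic inputs are illustrative assumptions, not the authors' exact formulation.

    # Sharpe + information ratio on returns net of turnover-proportional costs.
    import numpy as np

    def fitness(weights, asset_returns, benchmark_returns, cost_rate=0.001):
        """weights, asset_returns: (T, N); benchmark_returns: (T,)."""
        gross = (weights * asset_returns).sum(axis=1)
        turnover = np.abs(np.diff(weights, axis=0)).sum(axis=1)   # per rebalance
        costs = np.concatenate([[0.0], cost_rate * turnover])
        net = gross - costs
        sharpe = net.mean() / (net.std() + 1e-12)
        active = net - benchmark_returns
        info_ratio = active.mean() / (active.std() + 1e-12)
        return sharpe + info_ratio

    rng = np.random.default_rng(0)
    T, N = 120, 30                        # e.g. monthly data, 30 industry portfolios
    w = np.full((T, N), 1.0 / N)          # equal-weight candidate from the optimiser
    r = rng.normal(0.01, 0.05, (T, N))
    bench = r.mean(axis=1)
    print(round(fitness(w, r, bench), 4))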
     
  • Achieve Sim2Real based on semantic constrained cycle generative adversarial network
    by Xiangfeng Luo, Hongbin Huo, Xinzhi Wang 
    Abstract: In the field of vision-based control systems, the discrepancy between simulator and real-world environments renders models trained in simulators ineffective in real-world scenarios. Previous approaches have attempted to mitigate this issue by mapping the simulator and the real world into a shared latent space, but this can result in the loss of semantic information relevant to decision-making in the images. In this paper, we propose a method called Semantically Constrained CycleGAN (SCCGan) to address these limitations. SCCGan extracts semantic information from generated images and compares it with the original images to ensure consistency. Experimental results demonstrate that our method preserves the semantic information of the original images during the generation process, enabling the transfer of decision models from simulators to the real world. By leveraging semantic constraints, SCCGan facilitates the effective migration of decision models, bridging the gap between simulated and real-world environments in vision-based control systems.
    Keywords: simulator to reality; CycleGAN; reinforcement learning; autonomous decision making.
    DOI: 10.1504/IJCSE.2025.10072111
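
    A sketch of a semantic consistency term of the kind the abstract describes is given below: a frozen semantic extractor is applied to the source image and to the translated image, and the two outputs are forced to agree. The extractor, loss form, and toy stand-in network are illustrative assumptions, not the SCCGan objective itself.

    # Semantic consistency loss sketch; the 1x1-conv "segmenter" is a toy stand-in.
    import torch
    import torch.nn.functional as F

    def semantic_consistency_loss(segmenter, real, generated):
        """Penalise disagreement between semantic maps of real and generated images."""
        with torch.no_grad():
            target = segmenter(real).argmax(dim=1)   # hard labels from the source image
        logits = segmenter(generated)                # (B, classes, H, W)
        return F.cross_entropy(logits, target)

    segmenter = torch.nn.Conv2d(3, 5, kernel_size=1)   # toy 5-class "semantic" head
    real = torch.rand(2, 3, 64, 64)
    fake = torch.rand(2, 3, 64, 64)                    # e.g. generator output
    print(semantic_consistency_loss(segmenter, real, fake).item())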
     
  • Uplink transmission interference suppression technique for ultra-dense networks based on locally weighted regression
    by Sujuan Li 
    Abstract: Abnormal transmission signals are the main cause of serious uplink transmission interference in ultra-dense networks, and existing approaches suppress this interference insufficiently. Therefore, a locally weighted regression based uplink transmission interference suppression technique for ultra-dense networks is proposed. Firstly, robust locally weighted regression is used to analyse the abnormal transmission signals of the uplink in the ultra-dense network, improving the suppression of uplink transmission interference. Secondly, adaptive time-frequency analysis is used to extract the characteristics of the abnormal signals and determine whether uplink transmission interference is present. Finally, a residual neural network is used to identify the interference signal, and interference reconstruction and suppression are combined to suppress the uplink transmission interference in the ultra-dense network. Experimental results show that the reference-signal received power gain, signal-to-noise ratio, and average information rate gain of the uplink station are improved by the proposed interference suppression, with a maximum received power gain of up to 53.79 dB.
    Keywords: locally weighted regression; ultra-dense network; uplink transmission; interference suppression; residual neural network; signal-to-noise ratio.
    DOI: 10.1504/IJCSE.2025.10073102
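
    As an illustration of the first stage described in the abstract, the sketch below fits a robust locally weighted regression (LOWESS) to a synthetic signal trace and flags samples that deviate strongly from the smooth trend as abnormal. The three-robust-standard-deviation threshold and the synthetic trace are assumptions made for illustration.

    # Robust LOWESS fit plus residual-based flagging of abnormal samples.
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(0)
    t = np.arange(500, dtype=float)                        # sample index
    signal = np.sin(t / 40.0) + rng.normal(0, 0.1, t.size)
    signal[[100, 250, 400]] += 2.5                         # injected abnormal samples

    fitted = lowess(signal, t, frac=0.1, it=3, return_sorted=False)  # robust fit
    residual = signal - fitted
    mad = np.median(np.abs(residual - np.median(residual)))
    threshold = 3 * 1.4826 * mad                           # ~3 robust standard deviations
    print(np.flatnonzero(np.abs(residual) > threshold))    # indices flagged as abnormal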
     
  • Evaluating the effectiveness of large language models in detecting mental health disorders from social media
    by Weili Zhao, Yuan Xu 
    Abstract: Mental health disorders affect over 25% of the global population, making scalable detection essential for early intervention. Current social media datasets often use community-assigned or platform-inferred labels, which may lack semantic clarity and category consistency. This study explores whether large language models (LLMs), such as GPT-4, can generate more reliable annotations by leveraging prompt-based reasoning aligned with standardized symptom criteria. Using a Reddit dataset of 17,159 posts, we re-annotate entries using a Chain-of-Thought (CoT) framework guided by symptom profiles from screening instruments. We then evaluate these LLM-generated annotations against subreddit-derived labels via two downstream tasks: (1) classification performance under supervised learning, and (2) clustering under unsupervised methods. Results show that LLM-generated annotations yield higher consistency and improve downstream performance, particularly for depression and anxiety, demonstrating their potential to enhance mental health detection from online text.
    Keywords: large language models; LLMs; social media text analysis; mental health detection; chain-of-thought reasoning.
    DOI: 10.1504/IJCSE.2025.10073938
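
    A sketch of a chain-of-thought annotation prompt in the spirit of the abstract is shown below: the model is asked to reason over standard symptom criteria before assigning a label. The symptom list, label set, and the call_llm placeholder are illustrative assumptions, not the study's exact prompt or client.

    # Chain-of-thought style annotation prompt; call_llm is a user-supplied stand-in.
    DEPRESSION_SYMPTOMS = [
        "depressed mood most of the day",
        "markedly diminished interest or pleasure",
        "sleep disturbance",
        "fatigue or loss of energy",
        "feelings of worthlessness or excessive guilt",
    ]

    def build_prompt(post: str) -> str:
        criteria = "\n".join(f"- {s}" for s in DEPRESSION_SYMPTOMS)
        return (
            "You are annotating social media posts for research.\n"
            f"Screening criteria:\n{criteria}\n\n"
            f'Post: "{post}"\n\n'
            "Step 1: list which criteria the post shows evidence for, quoting the text.\n"
            "Step 2: based only on that evidence, output one final label: "
            "'depression', 'anxiety', or 'none'."
        )

    def annotate(post: str, call_llm) -> str:
        return call_llm(build_prompt(post))     # call_llm: str -> str, any LLM client

    # Example with a trivial stand-in "model" that always answers 'none'
    print(annotate("I haven't slept properly in weeks and nothing feels worth doing.",
                   call_llm=lambda prompt: "none"))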
     
  • An enhanced traffic-aware multipath routing scheme in software-defined networks
    by U. Prabu, R. Venkata Sai, Sanjaya Kumar Panda, V. Geetha 
    Abstract: The growing complexity of modern networks due to diverse Internet services challenges efficient routing and load balancing. Traditional single-path routing fails to handle such dynamic requirements effectively. This paper proposes an enhanced traffic-aware multipath routing (TAMR) scheme in software-defined networks (SDNs), which leverages the centralized programmability of SDNs to dynamically measure real-time network bandwidth and select multiple optimal paths using Yen's algorithm. The proposed TAMR scheme achieves superior bandwidth utilization, reduced transmission delay, and improved load balancing under varying traffic conditions by integrating a packet monitoring module to manage out-of-order packets. The innovation lies in its adaptive multipath routing framework, which combines real-time network awareness and packet monitoring to ensure reliable data delivery. The results demonstrate the industrial applicability of TAMR in datacenter networks and high-demand environments requiring low latency and high throughput, highlighting its potential for enhancing network performance and resilience.
    Keywords: software defined networks; multipath routing; equal-cost multipath routing; load balancing; network monitoring; bandwidth.
    DOI: 10.1504/IJCSE.2025.10073986
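
    The abstract names Yen's algorithm for selecting multiple paths; the sketch below uses networkx.shortest_simple_paths (which applies Yen's algorithm when edge weights are given) over a toy topology whose edge costs are the inverse of measured bandwidth. The topology, weighting, and choice of k are illustrative assumptions, not the TAMR controller logic.

    # k lowest-cost simple paths via Yen's algorithm (networkx); toy topology.
    from itertools import islice
    import networkx as nx

    def k_best_paths(G, src, dst, k=3):
        """Return the k lowest-cost simple paths between src and dst."""
        return list(islice(nx.shortest_simple_paths(G, src, dst, weight="cost"), k))

    G = nx.Graph()
    # edges annotated with available bandwidth in Mbit/s; cost = 1 / bandwidth
    links = [("s1", "s2", 100), ("s2", "s4", 80), ("s1", "s3", 40),
             ("s3", "s4", 90), ("s2", "s3", 60)]
    for u, v, bw in links:
        G.add_edge(u, v, cost=1.0 / bw)

    for path in k_best_paths(G, "s1", "s4"):
        print(path)                  # candidate paths over which flows can be split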
     
  • Feature selection using fossa optimisation algorithm for detection of epilepsy from EEG signal
    by Pratiti Mishra, Hrishikesh Kumar, Himansu Das 
    Abstract: Epilepsy is a widespread neurological condition impacting people of all ages. Medical professionals use electroencephalography (EEG) as a monitoring tool to analyse neural activity and detect signs of epilepsy. However, diagnostic accuracy often suffers from the inclusion of superfluous EEG features, such as noise and irrelevant data, that fail to support an accurate diagnosis. Therefore, feature selection (FS) methods are necessary to filter out irrelevant features and retain the most diagnostically significant ones. This study proposes an innovative FS method using the fossa optimisation algorithm (FSFOA) to determine the most effective feature subset for improving classification accuracy. The method is compared with four widely recognised FS techniques: ACO, GA, PSO, and DE. The evaluation is conducted using five popular classifiers: QDA, NB, DT, SVM, and KNN. Experimental results reveal that the proposed FSFOA outperforms the aforementioned methods in selecting optimal features and enhancing classification performance.
    Keywords: epilepsy; electroencephalography; EEG; FSGA; FSPSO; FSDE; FSACO; FSFOA; ML classifiers.
    DOI: 10.1504/IJCSE.2025.10074012
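
    The fossa optimisation update rules are defined in the paper; the sketch below only shows the wrapper-style evaluation step shared by such metaheuristic feature selection methods: a candidate binary mask selects a feature subset, which is scored by a classifier's cross-validated accuracy. KNN and the synthetic data are illustrative stand-ins.

    # Wrapper fitness for a candidate feature mask; classifier and data are stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def subset_fitness(mask, X, y):
        """Cross-validated accuracy of KNN trained on the selected features only."""
        if not mask.any():
            return 0.0
        return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()

    X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                               random_state=0)
    rng = np.random.default_rng(0)
    candidate = rng.random(30) < 0.4         # a candidate mask from the optimiser
    print(int(candidate.sum()), round(subset_fitness(candidate, X, y), 3))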