Forthcoming and Online First Articles

International Journal of Intelligent Engineering Informatics (IJIEI)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Open Access Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licences.

Register for our alerting service, which notifies you by email when new issues are published online.

International Journal of Intelligent Engineering Informatics (8 papers in press)

Regular Issues

  • "Cloud Gaming: The Future of Gaming Infrastructure"   Order a copy of this article
    by Shrikant Harle, Pradeep Bhaduria, Amol Bhagat, Shrikant Bhuskade, Rajan Wankhade, Milind Mohod 
    Abstract: This paper delves into the transformative impact of cloud gaming on the gaming industry. The problem statement outlined in the paper revolves around the changing landscape of gaming infrastructure, specifically focusing on the shift towards cloud-based platforms. The study emphasises the need to understand how cloud gaming affects various aspects of the gaming ecosystem, including accessibility, gameplay experience, and game development processes. The findings of the study highlight several key aspects of cloud gaming. Firstly, the paper identifies the significant benefit of reduced entry barriers for gamers, as they no longer need to invest in expensive gaming hardware. This increased accessibility has led to a broader gaming community and improved access to high-quality gaming experiences. Additionally, the study emphasises the advantages of cross-platform compatibility, allowing gamers to seamlessly switch between devices without losing progress.
    Keywords: cloud gaming; gaming industry; bandwidth; virtual reality; VR.
    DOI: 10.1504/IJIEI.2024.10064799
     
  • Efficient Authentication Framework with Blake2s and a Hash-based Signature Scheme for Industry 4.0 Applications   Order a copy of this article
    by Purvi Tandel, Jitendra Nasriwala 
    Abstract: Industry 4.0, the new production standard, integrates small to medium-level devices for efficient automation. Secure communication among these devices requires protection from external attacks in IoT applications. The imminent threat of quantum attacks to prevailing public-key approaches necessitates a secure authentication architecture tailored for IoT devices. Furthermore, IoT devices' limited computing, storage, and energy resources demand a lightweight authentication mechanism. In response, hash-based signatures have been proposed as a post-quantum solution for Industry 4.0. The presented approach involves an in-depth analysis of collision-resistant hash functions for faster, memory-optimised authentication. Implementing a hash-based signature scheme through experiments, a 27.18% improvement in key generation speed has been achieved with Blake2s over the widely used SHA-256. These results affirm the efficiency of the proposed hash-based signature scheme, offering superior performance in time and memory utilisation for Industry 4.0 applications.
    Keywords: authentication mechanism; hash-based signature scheme; IoT applications; hash functions; SHA-256; Blake2s; SHA-3; collision resistant hash-function.
    DOI: 10.1504/IJIEI.2024.10064828
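Both hash functions discussed in this abstract are available in Python's standard hashlib, so the digest comparison can be sketched as follows (an editor's illustration, not the authors' implementation; the message and timing loop are made up):

```python
import hashlib
import timeit

msg = b"industry 4.0 sensor reading" * 64

# One-shot digests with both hash functions
sha = hashlib.sha256(msg).hexdigest()
b2s = hashlib.blake2s(msg).hexdigest()

# Blake2s produces a 256-bit digest, like SHA-256
assert len(sha) == len(b2s) == 64

# Rough per-call timing; on many CPUs without hardware SHA
# extensions, Blake2s is the faster of the two in software
t_sha = timeit.timeit(lambda: hashlib.sha256(msg).digest(), number=10_000)
t_b2s = timeit.timeit(lambda: hashlib.blake2s(msg).digest(), number=10_000)
print(f"sha256: {t_sha:.4f}s  blake2s: {t_b2s:.4f}s")
```

Blake2s also accepts `digest_size` and `key` parameters, which is one reason it suits constrained IoT authentication schemes.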
     
  • Ensemble of Deep Features and Classifiers Approach for MRI Brain Tumor Classification   Order a copy of this article
    by Sathees Kumar  
    Abstract: Medical professionals identify and classify brain tumours to save lives. This study applies prominent machine learning classifiers to varied deep brain imaging features extracted by pre-trained convolutional neural networks. Well-known pre-trained networks extract MRI brain imaging features, and multiple machine learning classifiers validate the extracted features. The best deep features from the several ML classifiers are assembled into feature sets and fed into multiple classifiers to predict the classification. Pre-trained deep feature mining, machine learning classifiers, and ensembled features for brain tumour categorisation are tested on the BraTS-19, Figshare, and Kaggle datasets. Classifying brain tumour images as malignant or benign is difficult; ensemble deep features and pre-trained models are used to speed up categorisation. Deep features extracted from MRI images using transfer learning (EfficientNet-B4, Inception-V3, and VGG-19) are applied to popular classifiers (SVM, AdaBoost, Naïve Bayes).
    Keywords: deep learning; ensemble learning; transfer learning; machine learning; brain tumour classification; pre-trained deep convolutional neural network; recognition and categorisation.
    DOI: 10.1504/IJIEI.2024.10065363
     
  • A variable block range fractal method for image compression   Order a copy of this article
    by Ghousia Anjum Shaik, T. Bhaskara Reddy, B. Mohammed Ismail, Mansoor Alam 
    Abstract: This paper presents and implements a variable block range fractal (VBRF) method for image compression on RGB images of different categories. The proposed technique improves the compression ratio (CR), peak signal-to-noise ratio (PSNR), similarity index (SI) and compression time (CT) when applied to digital images. Mean square error (MSE), entropy and coding redundancy issues are addressed. Standard test sample images are divided into varying blocks of three categories of maximum and minimum range (Ra) blocks of 16 × 4, 16 × 8 and 8 × 4 for implementation. Relative fractal affine transforms are used to form iterative ranges with varying blocks, reconstructing the corresponding eight inverse transforms. The proposed VBRF method is applied to a set of test images such as Lena, satellite urban and rural, MRI, rose, bird, Zelda, pepper, God hills, etc., and improves the compression parameters at a rate of 8% to 10%. The results obtained on CR, PSNR, SI and CT show the effectiveness of the method in improving compression rate and noise mitigation in the compressed images. The implementation parameters of the proposed method are compared and validated against other popular fractal compression methods, showing a considerable improvement in performance.
    Keywords: image compression; block ranges; compression ratio; peak signal noise ratio; PSNR; entropy; fractal compression and affine transforms; variable block range fractal; VBRF; mean square error; MSE.
    DOI: 10.1504/IJISC.2020.10036957
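The core step of fractal coding that the abstract builds on, fitting an affine map from a domain block to a range block, can be sketched with numpy (a simplified single-channel illustration; the block shapes, `match_block` helper and test data are the editor's, and the paper's VBRF method additionally varies block sizes and searches eight inverse transforms):

```python
import numpy as np

def match_block(range_blk, domain_blk):
    """Fit the affine map d -> s*d + o minimising MSE against a range block.

    s is the contrast scaling and o the brightness offset, solved by
    least squares; fractal coders keep the (s, o) pair with lowest MSE.
    """
    d = domain_blk.ravel().astype(float)
    r = range_blk.ravel().astype(float)
    A = np.vstack([d, np.ones_like(d)]).T
    (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
    mse = float(np.mean((s * d + o - r) ** 2))
    return s, o, mse

rng = np.random.default_rng(0)
dom = rng.integers(0, 256, (8, 4))     # one 8 x 4 domain block
ran = 0.5 * dom + 20                   # a range block that is an exact affine image
s, o, mse = match_block(ran, dom)
print(round(s, 3), round(o, 3), round(mse, 6))  # ~0.5, ~20.0, ~0.0
```

Repeating this search over all block sizes (16 × 4, 16 × 8, 8 × 4) and transforms yields the fractal code for the image.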
     
  • Deep learning-based concrete compressive strength prediction with modified resilient backpropagation training   Order a copy of this article
    by M. Adams Joe, J. Sahaya Ruben, M. Prem Anand, M. Anand 
    Abstract: This article proposes a novel approach for predicting concrete compressive strength using deep learning techniques. It overcomes limitations of traditional methods, such as memory footprint, training time, and computational requirements. Various filter pruning techniques are currently used to compress models by removing irrelevant information, but they cannot decrease memory consumption because of their large parameter counts, so entropy-based filter pruning is suggested to reduce the complexity of the model by decreasing the number of parameters. For training the CNN model, the modified resilient backpropagation technique (MRPROP) is suggested, because earlier backpropagation techniques take more time to train and lose accuracy. MRPROP improves the efficiency and convergence of CNN training and updates the model's weights. The proposed approach demonstrated superior performance in mean squared error, root mean squared error, loss function, and regression analysis, as per the experimental results.
    Keywords: machine learning; deep learning; convolutional neural network; CNN; pruning technique; backpropagation.
    DOI: 10.1504/IJIEI.2024.10063960
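The classic resilient backpropagation rule that MRPROP modifies adapts each weight's step size from the sign of successive gradients. A minimal scalar sketch (the editor's illustration of standard Rprop/iRprop-, not the paper's modified variant; the function and constants are made up):

```python
import math

def rprop_minimise(grad, w, steps=100, step=0.1,
                   eta_plus=1.2, eta_minus=0.5,
                   step_max=50.0, step_min=1e-6):
    """Resilient backpropagation on a single scalar weight.

    Only the sign of the gradient is used: same sign as last step
    grows the step size, a sign flip shrinks it and skips one update.
    """
    g_prev = 0.0
    for _ in range(steps):
        g = grad(w)
        if g_prev * g > 0:            # same sign: accelerate
            step = min(step * eta_plus, step_max)
        elif g_prev * g < 0:          # sign flip: overshot, slow down
            step = max(step * eta_minus, step_min)
            g = 0.0                   # skip this update (iRprop- style)
        if g != 0.0:
            w -= math.copysign(step, g)
        g_prev = g
    return w

# Minimise (w - 3)^2 starting from w = -5; gradient is 2*(w - 3)
w_opt = rprop_minimise(lambda w: 2 * (w - 3), -5.0)
print(w_opt)  # converges close to 3
```

Because updates depend only on gradient signs, the method is robust to the vanishing-magnitude problems that slow plain backpropagation.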
     
  • Coati optimisation algorithm based hyperparameter tuned attention B-BiLTF model for spectrum prediction   Order a copy of this article
    by Avani Vithalani 
    Abstract: The burgeoning demand for spectrum in the 5G era and the internet of things underscores the critical need for accurate spectrum prediction models. Existing methods grapple with challenges, particularly the inability to capture frequency band features at specific times. This research introduces the coati optimisation algorithm-based attention B-BiLTF model, addressing the pervasive issue of gradient disappearance in spectrum prediction. Combining bidirectional long short-term memory (BiLSTM) and backpropagation (BP) neural networks, the B-BiLTF algorithm achieves enhanced convergence speed and overall prediction accuracy. The attention B-BiLTF mechanism mitigates the impact of sequence length changes on performance. Leveraging the coati optimisation algorithm ensures systematic hyperparameter optimisation, outperforming existing approaches across diverse signal-to-noise ratio conditions and sequence lengths. Experimental results on the RML2016.10a dataset demonstrate superior accuracy, RMSE, MAE, and Haversine distance, affirming the model's reliability and robustness across modulation modes and signal-to-noise ratio (SNR) levels. This research contributes an efficient approach to spectrum prediction, advancing cognitive radio systems and optimising spectrum utilisation.
    Keywords: spectrum prediction; coati optimisation algorithm; attention-B-BiLTF; deep learning.
    DOI: 10.1504/IJIEI.2024.10064201
     
  • OHON4D: optimised histogram of 4D normals for human behaviour recognition in depth sequences   Order a copy of this article
    by Mourad Bouzegza, Ammar Belatreche, Ahmed Bouridane, Mohamed Elarbi-Boudihir 
    Abstract: Understanding human behaviour in video streams is one of the most active areas in computer vision research. Its purpose is to automatically detect, track and describe human activities in a sequence of image frames. The challenges that researchers face are numerous and complex, so building a faithful feature vector that describes and identifies human behaviour remains a crucial aspect. This paper presents a geometry-based descriptor whose features are extracted from data acquired by depth sensors. It uses a heuristic approach to optimise the histogram of oriented 4D normals (HON4D) descriptor proposed by O. Oreifej and Z. Liu. The latter used a histogram to describe the depth sequence by extracting the normal orientation of the surface distribution in the 4D space of time, depth, and spatial coordinates. The proposed approach in this paper, called optimised histogram of 4D normals (OHON4D), enhances the HON4D method by considering only four projectors to represent a 4D normal instead of 120. We obtained a similar accuracy while saving approximately half of the computational time.
    Keywords: computer vision; optimised histogram; 4D normals; human behaviour recognition; HAR; video streams; geometry based descriptor; Kinect depth sensors.
    DOI: 10.1504/IJIEI.2024.10064833
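The 4D normals that both HON4D and OHON4D histogram come from treating the depth sequence as a surface z = f(x, y, t). A numpy sketch of that shared first step (an editor's illustration with made-up data; the projector quantisation itself, 120 directions for HON4D versus four for OHON4D, is omitted):

```python
import numpy as np

def normals_4d(depth_seq):
    """Unit 4D normals of the surface z = f(x, y, t) of a depth sequence.

    Each normal is (-dz/dx, -dz/dy, -dz/dt, 1), normalised to unit
    length; the descriptor then histograms these directions over a
    set of projectors.
    """
    dz_dt, dz_dy, dz_dx = np.gradient(depth_seq.astype(float))
    n = np.stack([-dz_dx, -dz_dy, -dz_dt, np.ones_like(dz_dx)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A flat, static depth sequence: every normal points along the 4th axis
flat = np.full((4, 8, 8), 100.0)   # (t, y, x) depth frames
n = normals_4d(flat)
print(n[0, 0, 0])                  # -> [0. 0. 0. 1.]
```

Reducing the projector count from 120 to four shrinks the per-normal quantisation cost, which is where the reported halving of computation time comes from.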
     
  • Harnessing the power of hugging face's multilingual transformers: unravelling the code-mixed named entity recognition enigma   Order a copy of this article
    by Rejuwan Shamim, Asadullah Shaikh 
    Abstract: Named entity recognition (NER) in code-mixed documents, which contain multiple languages, is a hard problem for natural language processing. In this paper, we use Hugging Face's multilingual transformers to perform code-mixed NER. Our work addresses the problems that arise when recognising named entities across more than one language within the same text. We conducted thorough experiments by fine-tuning a multilingual transformer model on a code-mixed dataset and achieved strong results, with an F1-score of 0.85. This outperforms previous methods and shows that our model can accurately identify named entities. We also examine how well the model works with other language pairs and code-mixed patterns, demonstrating that it can handle varied linguistic situations. Our study improves the understanding of multilingual data handling, advances code-mixed NER techniques, and shows how multilingual transformers can help break down language barriers. The research has implications for areas that need to understand more than one language, such as analysing social media, building language-specific customer service systems, and retrieving information across languages, where easier communication across languages encourages inclusion.
    Keywords: named entity recognition; NER; code-mixed texts; hugging face's multilingual transformers; fine-tuning; evaluation metrics; cross-lingual knowledge transfer.
    DOI: 10.1504/IJIEI.2024.10065522