Forthcoming and Online First Articles

International Journal of Intelligent Engineering Informatics (IJIEI)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except those stated in their respective CC licences.

Register for our alerting service, which notifies you by email when new issues are published online.

We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Intelligent Engineering Informatics (2 papers in press)

Regular Issues

  • Dynamic video summarization using stacked encoder-decoder architecture with residual learning network   Order a copy of this article
    by Dhanushree M., Priya R., Aruna P., Bhavani Rajaram 
    Abstract: In the past decade, video summarisation has emerged as one of the most challenging research fields in video understanding. Video summarisation abstracts an original video by extracting its most informative parts or key events. Generic video summarisation is particularly challenging because the key events do not contain specific activities; in such circumstances, extensive spatial features are needed to identify video events. Thus, a stacked encoder-decoder architecture with a residual learning network (SERNet) model is proposed for generating dynamic summaries of generic videos. In the proposed model, GoogleNet features are extracted for each frame. A bi-directional gated recurrent unit encodes the video features, and a gated recurrent unit decodes them. Both the encoder and the decoder leverage residual learning to extract hierarchical dense spatial features and increase video summarisation F-scores. Experiments are conducted on the SumMe and TVSum datasets. Experimental results demonstrate that the proposed SERNet model achieves F-scores of 55.6 and 64.23 on SumMe and TVSum, respectively. Comparison of the proposed SERNet model against state-of-the-art approaches indicates its robustness.
    Keywords: video abstraction; dynamic video summarisation; deep learning; residual learning; skip connections; GoogleNet; long-term memory; gated recurrent unit; stacked encoder; key shot selection.
    DOI: 10.1504/IJIEI.2024.10062166
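
    The pipeline the abstract describes — per-frame deep features fed through a stacked bi-directional GRU encoder and a GRU decoder, each with residual (skip) connections, followed by per-frame importance scoring and key-shot selection — can be sketched in miniature. Everything below is an illustrative stand-in, not the authors' implementation: the toy feature size replaces 1024-d GoogleNet features, weights are random, and top-k frame picking stands in for the paper's key-shot selection.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D = 8    # toy stand-in for the 1024-d GoogleNet per-frame feature
    T = 12   # number of video frames

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def make_gru(d):
        """Random GRU weights (square matrices, so residual adds need no projection)."""
        return {k: rng.normal(scale=0.3, size=(d, d))
                for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

    def gru_run(xs, p):
        """Run a GRU over a (T, d) sequence; biases omitted for brevity."""
        h, out = np.zeros(xs.shape[1]), []
        for x in xs:
            z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)           # update gate
            r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)           # reset gate
            h = (1 - z) * h + z * np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
            out.append(h)
        return np.stack(out)

    def residual_bigru(xs, pf, pb):
        """Bi-directional GRU layer with a residual connection: y = F(x) + x."""
        fwd = gru_run(xs, pf)                # forward pass over time
        bwd = gru_run(xs[::-1], pb)[::-1]    # backward pass, re-reversed
        return fwd + bwd + xs                # residual learning (skip connection)

    # Stacked encoder (two residual bi-GRU layers), then a residual GRU decoder.
    feats = rng.normal(size=(T, D))                         # per-frame features
    enc = residual_bigru(feats, make_gru(D), make_gru(D))
    enc = residual_bigru(enc, make_gru(D), make_gru(D))     # stacking
    dec = gru_run(enc, make_gru(D)) + enc                   # residual decoder

    # Per-frame importance scores; top-scoring frames stand in for key shots.
    scores = sigmoid(dec @ rng.normal(scale=0.3, size=D))
    summary = np.sort(np.argsort(scores)[-3:])
    print("frame importance scores:", scores.shape)
    print("selected key frames:", summary)
    ```

    The residual additions require the hidden size to match the input size; a real implementation would instead insert learned projection layers between stages.
    
    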
     
  • New Approaches to Epileptic Seizure Prediction Based on EEG Signals Using Hybrid CNNs   Order a copy of this article
    by Majid Nour, Bahadır Arabacı, Hakan Öcal, Kemal Polat 
    Abstract: This study employs the University of Bonn dataset to address the importance of frequency information in EEG data and introduces a methodology utilising the Short-Time Fourier Transform. The proposed method transforms conventional 1D EEG signals into informative 2D spectrograms, offering an approach for advancing the detection of neurological diseases. Integrating advanced CNN architectures with the conversion of EEG signals into 2D spectrograms forms the foundation of our proposed methodology. The 1D CNN model utilised in this study demonstrates exceptional performance metrics, achieving a specificity of 0.996, an overall test accuracy of 0.991, a sensitivity of 0.987, and an F1 score of 0.989. Shifting to the 2D approach reveals a slight reduction in accuracy to 0.987, with a sensitivity of 0.976, a specificity of 0.988, and an F1 score of 0.97. This analysis provides nuanced insights into the performance of 1D and 2D CNNs, clarifying their respective strengths in the context of neurological disease detection.
    Keywords: seizure prediction; epilepsy; EEG signals; 1D convolutional neural network; deep learning; classification.
    DOI: 10.1504/IJIEI.2024.10062353
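
    The core transformation the abstract relies on — turning a 1D EEG signal into a 2D time-frequency spectrogram via the Short-Time Fourier Transform, so it can be fed to a 2D CNN — can be sketched as follows. The synthetic two-tone signal, window length, and hop size below are illustrative choices, not the paper's settings, and real EEG would replace the toy signal.

    ```python
    import numpy as np

    def stft_spectrogram(sig, win=64, hop=32):
        """Magnitude STFT: slide a Hann window over the 1-D signal and take
        the FFT of each frame, yielding a 2-D time-frequency image."""
        window = np.hanning(win)
        n_frames = 1 + (len(sig) - win) // hop
        frames = np.stack([sig[i * hop : i * hop + win] * window
                           for i in range(n_frames)])
        return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)

    # Toy stand-in for one EEG channel: 8 Hz and 40 Hz tones at 256 Hz sampling.
    fs = 256
    t = np.arange(fs * 2) / fs
    eeg = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

    spec = stft_spectrogram(eeg)           # 2-D input suitable for a 2D CNN
    print("spectrogram shape (freq x time):", spec.shape)
    peak_bin = spec.mean(axis=1).argmax()
    print("dominant frequency:", peak_bin * fs / 64, "Hz")   # 8.0 Hz
    ```

    With a 64-sample window at 256 Hz, each frequency bin spans 4 Hz, so the stronger 8 Hz component lands exactly in bin 2; window and hop length trade off time against frequency resolution in the resulting spectrogram.
    
    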