Forthcoming and Online First Articles

International Journal of Signal and Imaging Systems Engineering (IJSISE)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with the shopping trolley icon are available for purchase; click on the icon to send an email request to purchase.

Online First articles are published online here before they appear in a journal issue. They are fully citeable, complete with a DOI, and can be read and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with the Open Access icon are Online First articles. They are freely available and openly accessible to all, without any restriction except those stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Signal and Imaging Systems Engineering (4 papers in press)

Regular Issues

  • GPU-based Video-Processing Traffic Signals for High-Density Vehicle Areas   Order a copy of this article
    by Suvarna Kadam, Sheetal Bhandari, Prakash Sontakke, Sonali Sawant 
    Abstract: This work focuses on developing a traffic management system that utilises video camera inputs and real-time analysis to improve signal control and traffic conditions. The system captures live video feeds from cameras installed at traffic junctions and employs graphics processing unit (GPU)-based image processing techniques to calculate the real-time vehicle density on each side of the road. An algorithm then dynamically adjusts the timing of the traffic lights based on the congestion levels of the different roads to optimise traffic flow, reduce congestion, and enhance road safety (a sketch of this density-based timing idea appears after this listing). Real-time decision-making for traffic control improves transportation efficiency, reduces fuel consumption, and minimises waiting time, and the system provides valuable data for future road planning and research. By synchronising multiple traffic lights through video and image processing, the system further improves traffic flow, minimises traffic jams, and promotes safer and more efficient transportation.
    Keywords: GPU; Intelligent Traffic System; Video Processing.
    DOI: 10.1504/IJSISE.2024.10060146
     
  • Performance Analysis of Object Detection and Tracking Methodology for Video Synopsis   Order a copy of this article
    by Swati Jagtap, Nilkanth Chopade 
    Abstract: The enormous amount of data produced by 24/7 surveillance cameras makes video retrieval and browsing challenging. These challenges can be overcome by reducing video size through video condensation methods without affecting the information content. Video synopsis is a condensation technique in which a long video is represented in a shorter form by reducing spatial and temporal redundancy based on the occurrence of activity, which eases video browsing and retrieval. Object detection and tracking in surveillance video are essential steps in video synopsis. The proposed research compares different detection and tracking algorithms used as the first stage of video synopsis, since the condensation ratio suffers when an unsuitable detection and tracking algorithm is selected (a sketch of such a comparison appears after this listing). Based on the evaluation of both quantitative and qualitative parameters, the You Only Look Once version 4 (YOLOV4) network outperforms the Gaussian mixture model (GMM) and SSDMobileNet in detecting multiple objects within video surveillance datasets. This research will help researchers identify the correct pre-processing steps in the domain of video synopsis. In future research, incorporating an auto-learning anchor model could significantly enhance accuracy.
    Keywords: Video Synopsis; Object Detection; Object Tracking; YOLOV4; Gaussian Mixture Model; Video condensation.
    DOI: 10.1504/IJSISE.2024.10060320
     
  • Fire detection in Nano-Satellite Imagery using Mask R-CNN   Order a copy of this article
    by Aditi Jahagirdar, Neha Sathe, Sneh Thorat, Saloni Saxena 
    Abstract: The increasing availability of satellite imagery has made it possible to detect forest fires from such imagery. This research investigates early forest fire detection approaches using deep learning and satellite image segmentation. The algorithms implemented in this work are the Mask Region-Based Convolutional Neural Network (Mask R-CNN), UNet and Deep Residual U-Net (ResUNet). The experimentation is carried out on publicly available satellite image data that includes challenges such as clouds, snow, rivers and sand, which can be confused with the smoke from a fire. The methods implemented here successfully distinguish between these natural entities and the smoke emitted by a fire. Mask R-CNN achieves an IoU of 0.925, whereas UNet and ResUNet achieve IoUs of 0.30 and 0.35, respectively (a sketch of the IoU computation appears after this listing). The results clearly indicate that Mask R-CNN is both more time-efficient and more precise and can be used in forest fire detection systems.
    Keywords: Satellite images; Image Segmentation; Mask R-CNN; ResUNet; UNet; Deep Learning.
    DOI: 10.1504/IJSISE.2024.10060648
     
  • Intelligent fault diagnosis of multi-sensor rolling bearings based on variational mode extraction and a lightweight deep neural network   Order a copy of this article
    by Shouqi Wang, Zhigang Feng 
    Abstract: After a rolling bearing failure in a complex industrial environment, the vibration signals collected by the sensors are easily corrupted by a wide range of noise, which affects the effectiveness of feature extraction. Although deep learning models can extract fault features well, most of the models currently in use have complex structures with many parameters and cannot be deployed in embedded environments. In this paper, we propose an intelligent fault diagnosis method that combines variational mode extraction (VME) with a lightweight deep neural network, offering both noise robustness and a lightweight model. Firstly, VME is used to process the vibration signals from the different sensors to obtain the required modal component signals, which are then converted into grayscale images (a sketch of such a conversion appears after this listing). Subsequently, the improved lightweight deep neural network Bypass-SqueezeNet is used for fault diagnosis. Several experiments are conducted on the experimental dataset, and the results show that the proposed method delivers more satisfactory diagnostic performance.
    Keywords: Rolling bearing; Intelligent fault diagnosis; Variational mode extraction; Lightweight deep neural network.
    DOI: 10.1504/IJSISE.2024.10061695
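
For readers who want to see the density-based timing idea from "GPU-based Video-Processing Traffic Signals for High-Density Vehicle Areas" in concrete form, the following is a minimal Python sketch, not the authors' implementation: the foreground-ratio density proxy, the proportional green-time split, and the names estimate_density and allocate_green_times are illustrative assumptions only.

    # Illustrative sketch only: a crude density proxy per approach and a proportional
    # green-time split. The published GPU pipeline and timing rules are not reproduced.
    import cv2
    import numpy as np

    ROADS = ("north", "south", "east", "west")
    # One background subtractor per approach (hypothetical camera setup).
    subtractors = {road: cv2.createBackgroundSubtractorMOG2() for road in ROADS}

    def estimate_density(road, frame):
        """Fraction of foreground (moving-vehicle) pixels, used as a crude density proxy."""
        mask = subtractors[road].apply(frame)
        return float(np.count_nonzero(mask > 127)) / mask.size

    def allocate_green_times(densities, cycle_s=120.0, min_green_s=10.0):
        """Split one fixed signal cycle among approaches in proportion to their densities."""
        total = sum(densities.values()) or 1e-6   # avoid division by zero on empty roads
        spare = cycle_s - min_green_s * len(densities)
        return {road: min_green_s + spare * d / total for road, d in densities.items()}

    if __name__ == "__main__":
        # Synthetic frames stand in for live junction camera feeds.
        frames = {road: np.random.randint(0, 255, (480, 640), dtype=np.uint8) for road in ROADS}
        densities = {road: estimate_density(road, frame) for road, frame in frames.items()}
        print(allocate_green_times(densities))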
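
Similarly, for "Performance Analysis of Object Detection and Tracking Methodology for Video Synopsis", the sketch below illustrates, under assumed definitions, how the choice of detector changes a frame-level condensation ratio. The frame-count metric, the toy tracks and the helpers condensation_ratio and compare_detectors are hypothetical and are not taken from the paper.

    # Illustrative sketch only: missed detections shrink the set of "activity" frames,
    # which changes the condensation ratio but also loses content.
    from typing import Dict, List, Set

    def condensation_ratio(activity_frames: Set[int], total_frames: int) -> float:
        """Ratio of synopsis length (frames containing detected activity) to original length."""
        return len(activity_frames) / total_frames if total_frames else 0.0

    def compare_detectors(detections: Dict[str, List[Set[int]]], total_frames: int) -> Dict[str, float]:
        """detections maps a detector name to per-object sets of frame indices where the object was tracked."""
        results = {}
        for name, tracks in detections.items():
            frames_with_activity = set().union(*tracks) if tracks else set()
            results[name] = condensation_ratio(frames_with_activity, total_frames)
        return results

    if __name__ == "__main__":
        # Toy tracks for three detectors over a 300-frame clip.
        toy = {
            "YOLOV4":       [set(range(10, 60)), set(range(100, 140))],
            "GMM":          [set(range(15, 40))],
            "SSDMobileNet": [set(range(10, 55)), set(range(105, 130))],
        }
        print(compare_detectors(toy, total_frames=300))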
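
The IoU figures quoted for "Fire detection in Nano-Satellite Imagery using Mask R-CNN" use the standard intersection-over-union metric for segmentation masks; the sketch below shows that computation on synthetic masks. The masks and the mask_iou helper are illustrative, not the paper's evaluation code.

    # Illustrative sketch only: IoU between a predicted and a reference segmentation mask.
    import numpy as np

    def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
        """Intersection over union of two boolean masks: |A & B| / |A | B|."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        union = np.logical_or(pred, truth).sum()
        if union == 0:                     # both masks empty: define IoU as 1.0
            return 1.0
        return float(np.logical_and(pred, truth).sum() / union)

    if __name__ == "__main__":
        truth = np.zeros((64, 64), dtype=bool)
        truth[10:40, 10:40] = True         # reference "smoke" region
        pred = np.zeros_like(truth)
        pred[12:42, 12:42] = True          # slightly shifted predicted region
        print(f"IoU = {mask_iou(pred, truth):.3f}")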
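
Finally, for "Intelligent fault diagnosis of multi-sensor rolling bearings based on variational mode extraction and a lightweight deep neural network", the sketch below shows one plausible way to turn a 1-D vibration segment into a grayscale image for a CNN. VME and Bypass-SqueezeNet are not reproduced here; the min-max normalisation, the image size and the signal_to_grayscale helper are assumptions.

    # Illustrative sketch only: reshape a normalised 1-D vibration segment into a
    # square uint8 image of the kind a lightweight CNN could consume.
    import numpy as np

    def signal_to_grayscale(segment: np.ndarray, side: int = 64) -> np.ndarray:
        """Min-max normalise a 1-D signal of length side*side and reshape it into a side x side uint8 image."""
        if segment.size != side * side:
            raise ValueError(f"expected {side * side} samples, got {segment.size}")
        lo, hi = float(segment.min()), float(segment.max())
        norm = (segment - lo) / (hi - lo + 1e-12)       # scale to [0, 1]
        return (norm * 255.0).astype(np.uint8).reshape(side, side)

    if __name__ == "__main__":
        # A sinusoid plus noise stands in for one VME mode component of a bearing signal.
        t = np.arange(64 * 64)
        segment = np.sin(2 * np.pi * t / 50.0) + 0.3 * np.random.randn(t.size)
        image = signal_to_grayscale(segment)
        print(image.shape, image.dtype, int(image.min()), int(image.max()))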