Forthcoming articles


International Journal of Signal and Imaging Systems Engineering


These articles have been peer-reviewed and accepted for publication in IJSISE, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.


Forthcoming articles may be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.


Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.


Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.


Register for our alerting service, which notifies you by email when new issues of IJSISE are published online.


We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.


International Journal of Signal and Imaging Systems Engineering (16 papers in press)


Regular Issues


  • A new filter for dimensionality reduction and classification of Hyperspectral images using GLCM features and mutual information   Order a copy of this article
    Abstract: Dimensionality reduction is an important and unavoidable preprocessing step in hyperspectral image (HSI) classification. Some methods use feature selection or extraction algorithms based on spectral and spatial information. In this paper, we introduce a new methodology for dimensionality reduction and classification of HSI that takes into account both spectral and spatial information based on mutual information. We characterize the spatial information by texture features extracted from the Gray Level Co-occurrence Matrix (GLCM): Homogeneity, Contrast, Correlation and Energy. For classification, we use the Support Vector Machine (SVM). The experiments are performed on three well-known hyperspectral benchmark datasets captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the Reflective Optics System Imaging Spectrometer (ROSIS-03) sensors. The proposed algorithm is compared with state-of-the-art methods. The obtained results of this fusion show that our method outperforms the other approaches, increasing classification accuracy at a reasonable runtime. The method may be further improved for better performance.
    Keywords: Hyperspectral images; Classification; Spectral and Spatial features; GLCM; Mutual Information; SVM.
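The four GLCM texture features named in the abstract above (homogeneity, contrast, correlation, energy) can be sketched in a few lines of NumPy; the gray-level count and the single pixel offset below are illustrative choices, not the authors' settings:

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Build a normalized GLCM for one offset and derive four
    Haralick-style texture features."""
    # Quantize the image to a small number of gray levels.
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= glcm.sum()                      # joint probability P(i, j)

    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + (i - j) ** 2))
    energy = np.sum(glcm ** 2)
    mu_i, mu_j = np.sum(i * glcm), np.sum(j * glcm)
    sd_i = np.sqrt(np.sum(glcm * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(glcm * (j - mu_j) ** 2))
    correlation = np.sum(glcm * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j + 1e-12)
    return {"contrast": contrast, "homogeneity": homogeneity,
            "energy": energy, "correlation": correlation}
```

In the paper's pipeline such per-band texture vectors, together with the spectral bands, would be ranked by mutual information before SVM classification.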

    by Nasr Gharaibeh, Obaida Al-Hazaimeh, Bassam Al-Naami 
    Abstract: Diabetic retinopathy (DR) is an eye disorder caused by diabetes. Its detection in retinal fundus images is an important task, since early detection and treatment can substantially reduce the risk of blindness. Retinal fundus images play an important role in diabetic retinopathy management through disease diagnosis, disease recognition (i.e. by ophthalmologists) and treatment. Current state-of-the-art techniques do not achieve satisfactory sensitivity and specificity, and other issues remain to be resolved, such as overall performance, accuracy and the ability to identify DR effectively. This paper therefore proposes an effective image processing method for detecting diabetic retinopathy in retinal fundus images that satisfies these performance metrics (i.e. sensitivity, specificity, accuracy). The proposed automatic screening system operates in several steps: pre-processing; optic disc detection and removal; blood vessel segmentation and removal; elimination of the fovea; feature extraction (i.e. micro-aneurysms, retinal hemorrhages and exudates); feature selection; and classification. Finally, a software-based simulation in MATLAB was performed on the DIARETDB1 dataset, and the obtained results were validated by comparison with expert ophthalmologists. The conducted experiments showed efficiency and effectiveness in terms of sensitivity, specificity and accuracy.
    Keywords: Diabetic retinopathy; Micro-aneurysms; Retinal hemorrhage; Exudates; DIARETDB1.

  • A Thresholding Scheme of Eliminating False Detections on Vehicles in Wide-Area Aerial Imagery   Order a copy of this article
    by Xin Gao 
    Abstract: Post-processing is usually necessary to reduce false detections of vehicles in wide-area aerial imagery. To improve vehicle detection performance, we propose a two-stage scheme consisting of a thresholding method, which applies a pixel-weight based thresholding policy to classify pixels in the grayscale feature map of an automatic detection algorithm, followed by morphological filtering. We use two aerial videos for performance evaluation and compare the automatic detection results with the ground-truth objects. We compute the average F-score and the percentage of wrong classifications for six detection algorithms before and after applying the proposed scheme. We measure the variation of overlap ratios from detections to objects, and perform a sensitivity analysis by combining the scheme with each of two representative algorithms. Simulation results verify both the validity and the efficiency of the proposed thresholding scheme, and also reveal differences in detection performance between datasets and among algorithms.
    Keywords: Vehicle detection; thresholding; false positive; wide-area aerial imagery.
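The two stages described above, pixel-level thresholding of a detection score map followed by morphological filtering, can be sketched in plain NumPy; the fixed threshold and 3x3 opening below are generic stand-ins for the paper's pixel-weight policy and filter:

```python
import numpy as np

def binarize_and_open(score_map, thresh):
    """Threshold a grayscale detection score map, then apply a 3x3
    morphological opening (erosion followed by dilation) to discard
    isolated false-positive pixels."""
    binary = score_map >= thresh

    def erode(m):
        out = np.zeros_like(m)
        # A pixel survives erosion only if its full 3x3 neighborhood is set.
        out[1:-1, 1:-1] = m[1:-1, 1:-1]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                out[1:-1, 1:-1] &= m[1 + dr:m.shape[0] - 1 + dr,
                                     1 + dc:m.shape[1] - 1 + dc]
        return out

    def dilate(m):
        out = np.zeros_like(m)
        # A pixel is set after dilation if any 3x3 neighbor is set.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                out[1:-1, 1:-1] |= m[1 + dr:m.shape[0] - 1 + dr,
                                     1 + dc:m.shape[1] - 1 + dc]
        return out

    return dilate(erode(binary))
```

Opening removes detections smaller than the structuring element while restoring the extent of larger, genuine ones.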

  • Robust Breast Cancer Detection by utilizing the multi-resolution features   Order a copy of this article
    by Thankappan Gopalakrishnan, Jayapathy Rajeesh, Suyambumuthu Palanikumar 
    Abstract: Breast cancer is a malignant growth of cells in the breast that can spread to other parts of the body if left untreated. Computer Assisted Diagnosis (CAD) provides the pathologist with more accurate diagnostic information and helps to reduce the limitations of human observation. We propose an accurate technique for the automated diagnosis of cancerous breast cells in histopathology images. The dataset used is BreaKHis v1. The method consists of pre-processing, K-means segmentation, post-processing, feature vector extraction and classification. The texture and intensity feature vectors of the histopathology image are extracted, combined, and tested together with multi-resolution features such as wavelet, contourlet transform and wave atom features. Several classifiers are then tested for the classification step. The results show that the wave atom features produce the best performance, and the best classifier is the ensemble classifier, providing an overall accuracy of 94.5%.
    Keywords: Computer Assisted Diagnosis; histopathology images; feature vector; multi-resolution features; classifier.

  • Multi Resolution Feature Combined with ODBTC Technique for Robust CBIR System   Order a copy of this article
    by Velayuthan Pillai Gopinathan Ranjith, Muthayyan Kamalam Jeyakumar, Suyambumuthu Palanikumar 
    Abstract: Content Based Image Retrieval (CBIR) is a system that retrieves the set of images most resembling a query image, and the technology is used in many applications. A currently used image content retrieval method is Ordered-Dither Block Truncation Coding (ODBTC), which produces image content descriptors; on its own, it gives only an average accuracy of 70.5%. Our aim is to create a more robust and accurate CBIR system. For this purpose, in addition to CCF and BPF, contourlet and wavelet features are extracted from the query image for the retrieval process. In our experiments the system is first tested with ODBTC and wavelet features, and then with ODBTC and contourlet features. The results obtained with ODBTC and contourlet features are more accurate, with an accuracy of 91.5%. The dataset used for our experiments is CorelDB.
    Keywords: Content Based Image Retrieval (CBIR); Ordered-Dither Block Truncation Coding (ODBTC); Color Co-occurrence Features (CCF); and Bit Pattern Features(BPF).
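Block truncation coding, the family ODBTC belongs to, is easy to sketch. Below is a minimal AMBTC-style encoder/decoder in NumPy; ODBTC itself replaces the per-block mean threshold with an ordered-dither array (not reproduced here), and its bit patterns and quantizer pairs are what feed the BPF and CCF descriptors:

```python
import numpy as np

def ambtc_encode(img, bs=4):
    """Per bs x bs block, keep a 1-bit pattern plus two quantizer
    levels (mean of pixels below / at-or-above the block mean)."""
    h, w = img.shape
    bits = np.zeros((h, w), dtype=bool)
    lows = np.zeros((h // bs, w // bs))
    highs = np.zeros((h // bs, w // bs))
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            block = img[i:i + bs, j:j + bs].astype(float)
            mask = block >= block.mean()
            bits[i:i + bs, j:j + bs] = mask
            highs[i // bs, j // bs] = block[mask].mean()
            lows[i // bs, j // bs] = (block[~mask].mean()
                                      if (~mask).any() else block.mean())
    return bits, lows, highs

def ambtc_decode(bits, lows, highs, bs=4):
    """Reconstruct: each pixel gets its block's high or low level."""
    return np.where(bits,
                    np.kron(highs, np.ones((bs, bs))),
                    np.kron(lows, np.ones((bs, bs))))
```

A two-level block is reconstructed exactly; richer blocks incur quantization loss, which is the compression trade-off.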

  • Deterministic Initialization Principle for Normalized subband Adaptive Filtering   Order a copy of this article
    by Samuyelu B., Rajesh Kumar P. 
    Abstract: System identification is a technique for constructing mathematical models of dynamic systems using measurements of the system's input and output signals, in either the frequency or the time domain. The conventional paradigm of system identification uses prior information on system structures and environments, together with input/output observation data, to derive models of systems. Extensive research on its methods, algorithms, theoretical foundations, applications and verification over the past half century has produced a mature field with a rich literature and substantial benchmark results. However, rapid advances in technology, engineering, science and social media have ushered in a new period of systems science and control in which both limitations and opportunities abound for system identification. In this sense, system identification remains an exciting, young, viable and critical field that demands new paradigms to meet such challenges. In this paper, the proposed D-MVS-SNSAF improves system identification by initializing the weight factor, which is obtained from the number of transitions in the input-output characteristics of the system, through a polynomial model. The proposed D-MVS-SNSAF method is compared with conventional techniques such as NSAF, VS-NSAF, VS-SNSAF, SS-NSAF and MVS-SNSAF. The obtained results show the improvement achieved by the adopted D-MVS-SNSAF method.
    Keywords: System Identification; NSAF approach; Weight initialization; D-MVS-SNSAF; Stability.
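For readers unfamiliar with the adaptive-filtering setting, a minimal fullband NLMS identification loop is sketched below; NSAF applies the same normalized update per subband. The unknown system, step size and zero weight initialization are illustrative assumptions (D-MVS-SNSAF's contribution is precisely to replace the zero initialization with a deterministic one):

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])   # unknown system (assumed)
N, mu, eps = 5000, 0.5, 1e-6
w = np.zeros_like(h_true)                  # adaptive weights, zero-initialized

x = rng.standard_normal(N)                 # white excitation signal
for n in range(len(h_true), N):
    u = x[n - len(h_true):n][::-1]         # tap-input vector
    d = h_true @ u                          # desired (noise-free) system output
    e = d - w @ u                           # a-priori estimation error
    w += mu * e * u / (u @ u + eps)         # normalized LMS weight update
```

After enough iterations `w` converges to `h_true`; a better-initialized `w`, as in the paper, shortens this transient.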

  • A high-throughput system for automated bottle mouth defects inspection   Order a copy of this article
    by Bowen Zhou, Yanbin Li, Ming Lu, Lianghong Wu 
    Abstract: Bottle mouth defect inspection is very important in beverage and medicine production lines. In this paper, an intelligent inspection system for bottle mouth defects is presented. The linear-conveyor mechanical structure and the electrical control system, based on an Industrial Personal Computer (IPC), a motion card and a data I/O card, are first described in detail. A high-speed camera is then used to acquire the bottle mouth image. To find the center of the bottle mouth, a novel multi-search-orientation algorithm is proposed, and a differential detection method based on ring tangent scanning is then used to identify cracks in the bottle mouth. The experimental results show that the detection algorithm is effective and the system is reliable.
    Keywords: Visual inspection; Bottle mouth defects inspection; Multi-search-orientation; Circle tangent scan.

  • Robust and Effective Clothes Recognition System Based on Fusion of Haralick and HOG Features   Order a copy of this article
    by Kriti Bansal, Anand Singh Jalal 
    Abstract: In today's era, when the computer has become a necessity for individuals, shopping has shifted from physical shops to online shopping. Many online sites let us search for and purchase various types of clothes according to our demands. Clothes classification of this kind can be used to identify a garment seen in a movie, a serial or elsewhere, whether to learn its name, to buy it from the market or to discuss it with friends. In this paper, we present an efficient method to recognize clothes in natural scenes as well as against cluttered backgrounds. The proposed approach includes three phases: extraction of the Region of Interest (ROI); construction of the feature vector; and classification. In the first phase, we detect the face in order to extract and segment the clothes, marking them as the ROI. In the second phase, we compute the feature vector by combining Haralick and HOG features for further processing. In the third phase, classification is performed using a Support Vector Machine (SVM) classifier, which classifies the categories of clothes. We validated the proposed approach using our own dataset, which contains cluttered-background images, as well as on the standard DeepFashion dataset. The proposed method successfully resolves misclassification of clothes against cluttered backgrounds under different illumination conditions. Experimental results show that the proposed technique achieves an 88.36% clothes recognition rate.
    Keywords: Clothes recognition; Histogram of Oriented Gradients (HOG); Haralick.

  • Nakagami-m channel secrecy: a receiver assisted scheme   Order a copy of this article
    by Lukman A. Olawoyin, Oloyede A. Abdulkarim, Faruk Nasir, Munzali A. Abana 
    Abstract: This paper considers the effect of an end-user assisted jamming scheme for achieving secure transmission of information over a Nakagami-m fading channel in a wireless broadcast system. A full duplex (FD) antenna at the legitimate receiver (LR) is introduced so that the LR contributes additional noise to the system and thereby degrades the eavesdropper's channel. The approach achieves secure transmission with improved secrecy compared with the conventional method, in which the artificial noise (AN) scheme is transmitted from the transmitter only. Numerical analysis and simulation results show an improvement in the secrecy performance metrics.
    Keywords: artificial noise; full-duplex; eavesdropper; physical layer; secrecy capacity; secrecy outage probability.
    DOI: 10.1504/IJSISE.2018.10014294
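The secrecy gain can be illustrated with the standard Gaussian wiretap-channel formula, Cs = [log2(1 + SNR_L) - log2(1 + SNR_E)]+; the SNR values below are illustrative, not taken from the paper:

```python
import math

def secrecy_capacity(snr_legit, snr_eve):
    """Secrecy capacity (bits/s/Hz) of a Gaussian wiretap channel:
    the positive part of the capacity gap between the legitimate
    link and the eavesdropper's link."""
    return max(0.0, math.log2(1 + snr_legit) - math.log2(1 + snr_eve))

# Receiver-assisted jamming degrades only the eavesdropper's SNR,
# widening the capacity gap (SNR values are illustrative).
baseline = secrecy_capacity(10.0, 5.0)     # no jamming
with_jam = secrecy_capacity(10.0, 1.0)     # jamming cuts eavesdropper SNR
```

Whenever the eavesdropper's channel is at least as good as the legitimate one, the secrecy capacity is zero, which is what the jamming scheme is designed to prevent.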
  • Comparison of rotation invariant local frequency, LBP and SFTA methods for breast abnormality classification   Order a copy of this article
    by Spandana Paramkusham, Kunda M.M. Rao, B.V.V.S.N. Prabhakar Rao 
    Abstract: Breast cancer is the second most prominent cancer diagnosed among women. Digital mammography is one of the effective imaging modalities used to detect breast cancer in early stages. Computer-aided detection systems help radiologists to detect and diagnose abnormalities earlier and faster in a mammogram. In this paper, a comprehensive study is carried out on different feature extraction methods for classification of abnormal areas in a mammogram. The prominent techniques used for feature extraction in this study are local binary pattern (LBP), rotation invariant local frequency (RILF) and segmented fractal texture analysis (SFTA). Features extracted from these techniques are then fed to a support vector machine (SVM) classifier for further classification via 10-fold cross-validation method. The evaluation is performed using image retrieval in medical applications (IRMA) database for feature extraction. Our statistical analysis shows that the RILF technique outperforms the LBP and SFTA techniques.
    Keywords: breast cancer; mammograms; masses; microcalcification; feature extraction; SVM; support vector machine.
    DOI: 10.1504/IJSISE.2018.10014295
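Of the three descriptors compared, LBP is the simplest to sketch; a basic 8-neighbor NumPy implementation is shown below (RILF and the rotation-invariant LBP variants build on the same thresholded-neighborhood idea):

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 8-neighbor local binary pattern: each interior pixel is
    encoded by thresholding its 8 neighbors against it, and the
    normalized code histogram serves as the texture descriptor."""
    c = img[1:-1, 1:-1].astype(float)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dr, dc) in enumerate(offsets):
        nb = img[1 + dr:img.shape[0] - 1 + dr,
                 1 + dc:img.shape[1] - 1 + dc].astype(float)
        codes |= (nb >= c).astype(int) << bit   # set one bit per neighbor
    hist = np.bincount(codes.ravel(), minlength=bins)
    return hist / hist.sum()
```

Such histograms, computed per abnormal region, are the feature vectors fed to the SVM in the comparison above.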
  • IF-RD optimisation for bandwidth compression in video HEVC and congestion control in wireless networks using dolphin echolocation optimisation with FEC   Order a copy of this article
    by B.S. Sunil Kumar 
    Abstract: At present, the latest standard for video coding is the high-efficiency video coding (HEVC) technique. Owing to the limitations of wireless channels in time variability and bandwidth, video transmission to multiple mobile users is difficult. Inter frame-rate distortion (IF-RD) optimisation is the technique used in this paper; it yields clearer videos with reduced motion artefacts by compressing bandwidth and enhancing optimisation in HEVC. First, the multi-resolution encoder encodes the video by employing a fusion strategy called the IF-RD optimisation strategy. To improve performance and achieve noiseless transmission, the forward error correction (FEC) and dolphin echolocation optimisation techniques are integrated. Token-based congestion control (TBCC) reduces the number of messages dropped due to congestion in the network and also saves computation time. By adopting HEVC, the system performs well, achieves better bandwidth compression and delivers a better peak signal-to-noise ratio (PSNR) at the same data rate with no extra encoding time.
    Keywords: video coding; HEVC; high-efficiency video coding; IF-RD; inter frame-rate distortion; optimisation; multi-resolution encoding; structural similarity index; congestion control.
    DOI: 10.1504/IJSISE.2018.10014296
  • Evaluating compressive sensing algorithms in through-the-wall radar via F1-score   Order a copy of this article
    by Ali A. AlBeladi, Ali H. Muqaibel 
    Abstract: To achieve high resolution through-the-wall radar imaging (TWRI), long wideband antenna arrays need to be considered, thus resulting in massive amounts of data. Compressive sensing (CS) techniques resolve this issue by allowing image reconstruction using much fewer measurements. The performance of different CS algorithms, when applied to TWRI, has not been investigated in a comprehensive and comparative manner. In this paper, popular CS algorithms are evaluated, to see which are most suitable for TWRI applications. As for the evaluation criteria, the notion of F1-score is adopted and used in the context of TWRI; thus emphasising the algorithms ability to reconstruct an image with correctly detected targets. Algorithms responses to different levels of signal-to-noise ratio (SNR) and compression rate are evaluated. Numerical results show that for systems with low SNR, alternating direction based algorithms work better than others. When the SNR is high, algorithms depending on spectral gradient-projection methods give good results even with high compression rates.
    Keywords: TWRI; through-the-wall radar imaging; compressive sensing; F1-score.
    DOI: 10.1504/IJSISE.2018.10014297
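The F1-score criterion reduces to precision and recall between the reconstructed target mask and the ground truth; a minimal NumPy version is below (the pixel-level granularity of the masks is an illustrative assumption):

```python
import numpy as np

def f1_score(recon_mask, truth_mask):
    """F1-score of a reconstructed target mask against ground truth:
    the harmonic mean of precision and recall over target pixels."""
    tp = np.sum(recon_mask & truth_mask)    # correctly detected targets
    fp = np.sum(recon_mask & ~truth_mask)   # ghost targets
    fn = np.sum(~recon_mask & truth_mask)   # missed targets
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike plain reconstruction error, this score directly penalizes both ghost targets and missed targets, which is why the paper adopts it for TWRI.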
  • Analysis and estimation of traffic density: an efficient real time approach using image processing   Order a copy of this article
    by T. Shreekanth, M. Madhukumar 
    Abstract: Nowadays, traffic density is very high in most urban areas because of the increase in the number of vehicles. Traffic congestion is a very common problem that leads to longer waiting times in traffic. To address this issue, an algorithm is proposed in this work for real-time traffic flow monitoring and analysis based on image processing techniques. This paper describes a method of real-time, area- and frame-based traffic density estimation using edge detection for an intelligent traffic control system. The area occupied by the edges of vehicles is used to estimate traffic density. The system automatically estimates the traffic density of each road by calculating the area occupied by traffic, which in turn helps to determine the duration of each traffic light. The main contribution of this study lies in the development of a new technique that measures traffic density according to the area occupied by the edges of vehicles in order to control traffic congestion. The proposed algorithm was evaluated on a 30 s video dataset sampled into 8 frames and yielded an average accuracy of 98.07%, which is comparable with the existing algorithms in the literature.
    Keywords: image processing; image cropping; canny edge detection; traffic velocity; traffic density; intelligent traffic control.
    DOI: 10.1504/IJSISE.2018.10014298
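The core density measure, edge pixels over region area, can be sketched with a simple gradient-magnitude edge detector; this is a stand-in for the Canny detector used in the paper, and the relative threshold is an illustrative choice:

```python
import numpy as np

def edge_density(frame, thresh=0.2):
    """Estimate traffic density as the fraction of the region of
    interest covered by edge pixels."""
    f = frame.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]      # horizontal central differences
    gy[1:-1, :] = f[2:, :] - f[:-2, :]      # vertical central differences
    mag = np.hypot(gx, gy)
    edges = mag > thresh * (mag.max() + 1e-12)
    return edges.mean()                     # edge area / total area
```

In the described system, this per-road density would then be mapped to a green-light duration for the intersection controller.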
  • A restoration and binarization method for multi-spectral damaged document image   Order a copy of this article
    by Naouel Ouafek, Mohamed-Khireddine Kholladi 
    Abstract: Recently, researchers have started to focus on the use of multi-spectral systems for document image analysis. Such a system provides different views of the same document, which brings hidden text to the surface. This paper suggests a new methodology addressing document binarization and restoration problems. For document binarization, a pair of algorithms is performed to create two binary images: the first algorithm is based on a statistical feature, while the second aims to identify the degraded areas of the document. These two binarization outputs are combined to generate a single binary mask. This mask is used in the second part of the methodology, an inpainting process that restores the original appearance of the document image before degradation. The proposed methodology has been tested on two multi-spectral datasets, and the obtained results show the effectiveness of the suggested method.
    Keywords: multi-spectral image; degraded document; interpolation inpainting; exemplar based inpainting; historical document binarization; historical document restoration.
    DOI: 10.1504/IJSISE.2018.10014299

Special Issue on: Future Directions in Signal and Image Systems

  • Improvement of Image Compression Approach using Dynamic Quantization based on HVS   Order a copy of this article
    by Mourad Rahali, Habiba Loukil, Med Salim Bouhlel 
    Abstract: Digital image compression reduces the overall volume of an image while keeping degradation of the reconstructed image quality to a minimum; in other words, we are dealing with lossy compression. This work improves an image compression method that uses the discrete wavelet transform (DWT) and neural networks. To improve this technique, we add a new phase, based on the Human Visual System (HVS) and the Weber-Fechner law, that dynamically quantizes the image signal. This new phase improves compression quality by dynamically quantizing each pixel value of the original image relative to the values of the neighbouring pixels, according to a luminance detection threshold known as the Weber constant.
    Keywords: Image compression; Human Visual System; Dynamic Quantization; Weber-Fechner law; Weber constant.
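The Weber-Fechner quantization phase can be sketched as a luminance-adaptive step size: per Weber's law, the just-noticeable difference grows in proportion to the background luminance, so brighter neighborhoods can be quantized more coarsely without visible loss. The Weber constant value and the step rule below are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

WEBER_K = 0.02   # illustrative Weber constant (JND ratio dL/L)

def dynamic_quantize(values, luminance):
    """Quantize each value with a step proportional to the local
    luminance of its neighborhood: higher luminance tolerates a
    larger quantization error, hence a coarser step."""
    step = np.maximum(WEBER_K * luminance, 1.0)   # luminance-adaptive step
    return np.round(values / step) * step
```

In a dark region the step clamps to the minimum, preserving detail; in a bright region the coarser step discards bits the eye would not miss.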

    by Selvakumar Subramaniam, Hannah Inbarani, SENTHIL KUMAR 
    Abstract: In the past few years, numerous frameworks for deriving decision support from samples have been established. As different frameworks permit distinct kinds of results when classifying new cases, it is hard to properly assess a framework's classification power in comparison with other classification systems or with human experts. Classification accuracy is normally used as a measure of classification performance. A novel hybrid Improved Monkey-based Search (IMS) and support vector machine (SVM) technique for the detection of arrhythmia in long-duration ECGs is proposed. It incorporates noise handling, feature extraction, rule-based beat classification, sliding-window arrangement and heart arrhythmia identification, all integrated in a classification framework. It can be executed in real time and can provide explanations for the diagnostic decisions obtained. The strategy was tested on the UCI ECG database, and high scores were obtained for both sensitivity and specificity (98.1% and 98.5% respectively using gross accuracy statistics, and 98.8% using aggregate average statistics).
    Keywords: ECG; Cardiac arrhythmia; IMS; SVM; Classification.