Forthcoming articles

International Journal of Computational Systems Engineering (IJCSysE)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Computational Systems Engineering (8 papers in press)

Regular Issues

  • Representative Model to Discover Knowledge and Analytics from CVE Database   Order a copy of this article
    by Gatha Tanwar, Ritu Chauhan 
    Abstract: Common Vulnerabilities and Exposures (CVE) is a formal dictionary of vulnerabilities and associated weaknesses reported by the community. With the evolution of programming practices and the availability of new platforms, hardware and better networking capabilities, the trends in reported vulnerabilities have also changed. In this paper, we focus on vulnerabilities that resulted in information disclosure and on how their characteristics changed over two decades, from 1999 to 2020. The study period was divided into two decades, 1999 to 2010 and 2010 to 2020. To identify the vulnerabilities that were reported in the first decade and remained popular in the second, the crawled CVEs were filtered by their publication and update dates. The analysis revealed that execution of arbitrary code has remained a favourite with hackers over the two-decade period, while, as attackers' skills have improved, restriction bypasses and memory violations have grown in number. We implemented Python scripts for knowledge discovery and for comprehensible representations of the quantifying factors behind the severity of a reported CVE. Furthermore, this study reinforced the reciprocal relationship between software development strategies and the minimization of the exploit potential of a computing system.
    Keywords: Common Vulnerabilities and Exposures; Exploratory Data Analysis; OWASP; Information Disclosure.
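
    For readers who want to experiment with the kind of decade-wise filtering described in this abstract, the following is a minimal Python/pandas sketch. The CSV file and the column names (published, last_modified, summary) are illustrative assumptions, not the authors' actual pipeline or schema.

```python
# Minimal sketch of decade-wise CVE filtering; column names are assumptions.
import pandas as pd

cves = pd.read_csv("cve_entries.csv", parse_dates=["published", "last_modified"])

# Split the 1999-2020 window into the two decades studied.
decade1 = cves[(cves.published >= "1999-01-01") & (cves.published < "2010-01-01")]
decade2 = cves[(cves.published >= "2010-01-01") & (cves.published < "2021-01-01")]

# CVEs reported in the first decade but still being updated in the second
# decade approximate the "remained popular" subset discussed in the abstract.
persistent = decade1[decade1.last_modified >= "2010-01-01"]

# Simple keyword-based view of one attack trend: arbitrary code execution.
arbitrary_code = persistent[persistent.summary.str.contains("arbitrary code", case=False, na=False)]
print(len(decade1), len(decade2), len(persistent), len(arbitrary_code))
```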

  • A Framework for Detecting the Diurnal Activities that Happen in a Room   Order a copy of this article
    by Vikas Tripathi, Manish Mahajan, Rijwan Khan 
    Abstract: This paper addresses the detection of human activities in a particular room. Human activity recognition is the basis of this analysis; it is used to detect and identify day-to-day activities happening inside a room, such as brushing hair, chewing, clapping, eating, hugging, kissing, talking, laughing and smoking. We used the Human Motion Detection database for this analysis and obtained an accuracy of 71.81%.
    Keywords: CNN; Tweets; Sentiment analysis; Machine Learning.
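
    As a rough illustration of frame-level activity classification of the kind summarised in this abstract, here is a minimal PyTorch sketch. The architecture, input size and class list are assumptions; the abstract does not specify the model actually used.

```python
# Illustrative frame-level CNN classifier for room activities (assumed setup).
import torch
import torch.nn as nn

ACTIVITIES = ["brushing hair", "chewing", "clapping", "eating", "hugging",
              "kissing", "talking", "laughing", "smoking"]

class ActivityCNN(nn.Module):
    def __init__(self, num_classes=len(ACTIVITIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, frames):            # frames: (batch, 3, 224, 224)
        return self.classifier(self.features(frames).flatten(1))

logits = ActivityCNN()(torch.randn(2, 3, 224, 224))
print(logits.shape)                       # torch.Size([2, 9])
```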

  • An Investigation into the Provision of a Decision Support System to Evaluate Software Performance under Uncertainty.   Order a copy of this article
    by Md. Mahashin Mia, Mohammad Shahadat Hossain, Rashed Mustafa, Atiqur Rahman 
    Abstract: Poor software performance is disruptive and can halt the everyday activities of an entire area, so an early prediction of software performance could play an important role in saving human time and daily activities. Indicators such as efficiency, coverage and reliability of the system can be used to predict software performance, but these factors cannot be determined accurately because of the presence of different categories of uncertainty. Therefore, this article presents a belief rule-based expert system (BRBES) that can predict software performance under uncertainty. Historical data on the performance of various software systems, with specific reference to efficiency, coverage and reliability, have been used to validate the BRBES. The dependability of the proposed BRBES's output is measured against a fuzzy logic-based expert system (FLBES) and an artificial neural network (ANN)-based system, and the BRBES's results are found to be more reliable than those of the FLBES and the ANN. Therefore, this BRBES can be used to predict software performance in an area by taking account of data related to efficiency, coverage and reliability.
    Keywords: Software; Uncertainty; Prediction; Expert system; Belief rule base.
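
    The following is a heavily simplified Python sketch of belief-rule inference: rule activation weights followed by a weighted-sum aggregation of consequent beliefs. The rule base, matching degrees and the aggregation step are illustrative only; the paper's BRBES uses the full evidential reasoning approach, which is not reproduced here.

```python
# Simplified belief-rule inference: activation weights + weighted-sum aggregation.
import numpy as np

# Illustrative rule base: each rule has a weight, per-antecedent matching degrees
# for (efficiency, coverage, reliability), and consequent belief degrees over
# (low, medium, high) software performance.
rule_weights = np.array([1.0, 0.8, 0.9])
matching = np.array([
    [0.7, 0.6, 0.8],   # rule 1
    [0.2, 0.3, 0.1],   # rule 2
    [0.1, 0.1, 0.1],   # rule 3
])
beliefs = np.array([
    [0.0, 0.2, 0.8],   # rule 1 -> mostly "high"
    [0.1, 0.7, 0.2],
    [0.8, 0.2, 0.0],
])

# Activation weight of each rule: rule weight times the product of its
# matching degrees, normalised over all rules.
activation = rule_weights * matching.prod(axis=1)
activation = activation / activation.sum()

# Aggregate consequent beliefs (weighted sum instead of the recursive ER algorithm).
combined = activation @ beliefs
print(dict(zip(["low", "medium", "high"], combined.round(3))))
```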

  • Validation of non-functional scalability requirement in the development of Versat Sarasola software   Order a copy of this article
    by Yuliet Fernández Lavalle, Zoila Esther Morales Tabares 
    Abstract: Software that works with databases meets one of its major objectives when it satisfies the non-functional requirement of scalability. Financial accounting technology benefits greatly from working with scalable software, but scalability alone does not guarantee quality. The Cuban accounting software Versat Sarasola, despite being certified, is scalable software whose quality is low for clients and specialists. This article describes a testing strategy for the non-functional requirement of scalability in the development of the Versat Sarasola software, based on a set of international standards and models, with the aim of achieving correct and adequate quality in Versat Sarasola by managing scalability during its development.
    Keywords: software; technology; non-functional requirement; scalable; quality.
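
    As a generic illustration of one scalability check (measuring response time as data volume grows), here is a small self-contained Python sketch. It uses an in-memory SQLite table and has no connection to the Versat Sarasola schema or to the standards-based strategy described in the abstract.

```python
# Toy scalability check: query latency of an accounting-style table vs. row count.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")

total = 0
for volume in (10_000, 100_000, 1_000_000):
    conn.executemany(
        "INSERT INTO entries (account, amount) VALUES (?, ?)",
        (("ACC%05d" % (i % 500), float(i)) for i in range(volume)),
    )
    total += volume
    start = time.perf_counter()
    conn.execute("SELECT account, SUM(amount) FROM entries GROUP BY account").fetchall()
    elapsed = time.perf_counter() - start
    print(f"{total:>9} rows: {elapsed:.4f} s")
```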

Special Issue on: ISPR 2020 Recent Advances in Intelligent Systems and Pattern Recognition

  • Structural Refinement of Manually created Bayesian Network for Prostate Cancer Diagnosis   Order a copy of this article
    by Naveen Kumar Bhimagavni, Adilakshmi Thondepu 
    Abstract: In general, the structure of a Bayesian network can be learnt from the available data. In some domains, such as medicine, a Bayesian network can be manually created by domain experts, and statistical methods can then be applied to refine the structure based on the data. As the data continuously evolve in many real-world applications, refinement of the expert network structure is unavoidable. Existing techniques refine a manually constructed Bayesian network either by verifying the relation of a node with all remaining nodes in the network (Expert Bayes) or by examining a node only with its parents (MDL principle). In this work, we propose an algorithm that verifies the relation of a node only with its non-descendant nodes, which are identified using the Markov assumption. The proposed algorithm makes small changes to the original network and shows that a smaller number of operations is required to find the best network structure. Maximum likelihood estimation (MLE) is used as the scoring function to calculate a score for each candidate structure, and the network with the highest score is selected. A manually created Bayesian network has been collected for the widespread disease prostate cancer, and the proposed algorithm refines its structure.
    Keywords: Bayesian network; Prostate cancer; Markov Assumption; Maximum Likelihood Estimation (MLE); Probabilistic Graphical Model (PGM); Refinement algorithm.
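
    The following Python sketch illustrates the MLE scoring step mentioned in the abstract: the log-likelihood of a node given a candidate parent set, estimated from observed counts. The variable and column names are hypothetical and do not come from the clinical network used in the paper.

```python
# Log-likelihood score of a node under a candidate parent set (MLE from counts).
import numpy as np
import pandas as pd

def log_likelihood(data: pd.DataFrame, node: str, parents: list) -> float:
    """Sum over observed configurations of N(node, parents) * log P_hat(node | parents)."""
    if parents:
        counts = data.groupby(parents + [node]).size()
        parent_totals = data.groupby(parents).size()
        probs = counts / parent_totals.reindex(counts.index.droplevel(node)).values
    else:
        counts = data[node].value_counts()
        probs = counts / len(data)
    return float((counts * np.log(probs)).sum())

# Hypothetical usage: compare two candidate local structures for one node.
# data = pd.read_csv("prostate_records.csv")
# print(log_likelihood(data, "biopsy_result", ["psa_level"]))
# print(log_likelihood(data, "biopsy_result", ["psa_level", "age_group"]))
```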

  • Selection of statistical Wavelet features using wrapper approach for Electrical Appliances identification based on KNN classifier combined with voting rules method   Order a copy of this article
    by Ghazali Fateh, Abdenour Hacine Gharbi, Philippe Ravier 
    Abstract: This work is an extended version of a paper presented at the International Conference on Intelligent Systems and Pattern Recognition, in which the authors proposed a compact feature representation, based on the estimation of statistical features using the discrete wavelet transform, for electrical appliance identification with a K-nearest neighbour classifier combined with a voting-rule strategy. The results showed that the wavelet cepstral coefficients (WCC) descriptor gives the highest performance, with a 98.13% classification rate (CR). In this work, we propose several extensions: (i) the logarithmic energy (LOG_E) is used as an additional descriptor; (ii) the relevance of the wavelet-based features combined with the LOG_E descriptor is investigated using feature selection based on the wrapper approach; (iii) a deeper performance evaluation is carried out using five additional metrics. The results show that the selection of four WCC features combined with LOG_E improves the CR to 98.51%.
    Keywords: Electrical Appliances Identification; Statistical Feature Extraction; Discrete Wavelet Analysis; K Nearest Neighbor classifier; voting rule method; wrapper feature selection approach.
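
    A minimal Python sketch of the general pipeline described in this abstract: simple statistical features from the discrete wavelet transform, wrapper-style forward selection around a KNN classifier, and a majority vote over the windows of one appliance event. The WCC descriptor itself is not reproduced; the feature set, wavelet and parameters below are stand-ins.

```python
# DWT statistical features + wrapper selection around KNN + majority voting (sketch).
import numpy as np
import pywt
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

def dwt_stat_features(signal, wavelet="db4", level=4):
    """Mean, standard deviation and log-energy of each DWT sub-band."""
    feats = []
    for band in pywt.wavedec(signal, wavelet, level=level):
        feats += [band.mean(), band.std(), np.log(np.sum(band ** 2) + 1e-12)]
    return np.array(feats)

# X: one feature vector per current window, y: appliance label per window.
# X = np.vstack([dwt_stat_features(w) for w in windows]); y = labels
knn = KNeighborsClassifier(n_neighbors=5)
selector = SequentialFeatureSelector(knn, n_features_to_select=4, direction="forward", cv=5)
# selector.fit(X, y); knn.fit(selector.transform(X), y)

def vote(window_predictions):
    """Majority vote over the per-window labels of one appliance event."""
    values, counts = np.unique(window_predictions, return_counts=True)
    return values[np.argmax(counts)]
```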

  • Finger Vein Biometric Scanner Design Using Raspberry Pi   Order a copy of this article
    by Sara Daas, Amira Yahi, Mohamed Boughazi, El-Bay Bourennane 
    Abstract: Finger vein biometric systems have gained a lot of attention in recent years due to the increasing demand for high-security systems. The biometric device captures an image of the human finger veins and uses it for security tasks such as authentication, verification and identification. Most existing finger vein capturing devices are not suitable for research and development because of their proprietary verification software. For that reason, this paper focuses on designing and developing a finger vein biometric system based on an Arduino and a Raspberry Pi board. The proposed finger vein device is based on near-infrared (NIR) light. The Arduino microcontroller is used to automatically control the brightness and determine the impact of NIR lighting on the captured images, while the Raspberry Pi board commands all external peripherals of the system. The effectiveness of the proposed design has been evaluated using objective Image Quality Assessment (IQA) metrics, i.e. MSE, PSNR, IQI, AD, NK, SC, MD, LMSE and NAE. Experimental results show high performance, with an MSE increase of 61.39% and a PSNR reaching 33.73% compared with existing state-of-the-art designs.
    Keywords: Finger vein; Arduino; Raspberry Pi; near-infrared light; two-dimensional entropy; PWM; Image Quality Assessment (IQA).
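
    Two of the IQA metrics listed in the abstract, MSE and PSNR, can be computed as in the short NumPy sketch below; the synthetic images merely stand in for captured vein frames.

```python
# MSE and PSNR between a reference image and a test image.
import numpy as np

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    return float(np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    error = mse(reference, test)
    return float("inf") if error == 0 else 10.0 * np.log10(max_value ** 2 / error)

# Synthetic 8-bit images standing in for captured finger vein frames.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
noisy = np.clip(ref.astype(np.int16) + rng.integers(-10, 11, size=ref.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(ref, noisy):.2f}, PSNR = {psnr(ref, noisy):.2f} dB")
```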

  • Combination of a DAE-CNN and OC-SVDD for Intrusion Detection   Order a copy of this article
    by Hamza Frihia, Halima Bahi, Djamel Eddine Mahrougui 
    Abstract: The extensive use of the Internet has favoured the emergence of intrusion detection systems (IDSs) that scan the network traffic to detect potential attacks. The detection of malicious events requires learning the patterns that represent the attacks; meanwhile, new threats appear regularly. Thus, it is of paramount importance to develop intrusion detection systems that do not depend on malicious patterns. In this paper, we leverage advances in deep learning and in the one-class classification approach to build an IDS. The proposed IDS is based on a deep auto-encoder (DAE), whose layers are convolution layers, to extract robust features from an event description, and on the One-Class Support Vector Data Description (OC-SVDD) method, a modified version of the well-known OC-SVM (One-Class Support Vector Machine), to detect the intrusion. The DAE is trained exclusively on normal patterns and is expected to extract robust features representing the normal traffic. The OC-SVDD is trained on these features; thus, during the test stage, malicious events are classified as outliers. We report experiments on the well-known NSL-KDD dataset. The experimental results show an accuracy of about 97.73% and demonstrate the potential of the proposed approach to distinguish between normal and malicious traffic.
    Keywords: Computer security; Intrusion Detection System; One Class Support Vector Data Description (OC-SVDD); Deep AutoEncoder; Convolutional Neural Network; NSL-KDD dataset.
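
    A minimal sketch of the kind of pipeline described in this abstract: a small convolutional auto-encoder trained on normal records only, with its bottleneck features fed to a one-class model. scikit-learn's OneClassSVM is used here as a stand-in for the paper's OC-SVDD, and the layer sizes, latent dimension and preprocessing are assumptions.

```python
# Convolutional auto-encoder on normal traffic + one-class model on latent features (sketch).
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

N_FEATURES = 41  # numeric/encoded NSL-KDD attributes (assumed preprocessing)

class ConvDAE(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * N_FEATURES, latent_dim),
        )
        self.decoder = nn.Linear(latent_dim, N_FEATURES)

    def forward(self, x):                 # x: (batch, 1, N_FEATURES)
        z = self.encoder(x)
        return self.decoder(z), z

model = ConvDAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training would use a DataLoader over normal-only records, e.g.:
# for x in normal_batches:
#     recon, _ = model(x)
#     loss = loss_fn(recon, x.squeeze(1))
#     optimizer.zero_grad(); loss.backward(); optimizer.step()

# After training, fit the one-class model on latent features of normal data and
# flag test records scored as outliers as intrusions:
# with torch.no_grad():
#     _, z_train = model(x_train_normal)
# oc = OneClassSVM(kernel="rbf", nu=0.05).fit(z_train.numpy())
# is_attack = oc.predict(z_test_latent) == -1
```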