Forthcoming and Online First Articles

International Journal of Computer Applications in Technology

International Journal of Computer Applications in Technology (IJCAT)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Computer Applications in Technology (130 papers in press)

Regular Issues

  • Simulation and visualisation approach for accidents in chemical plants   Order a copy of this article
    by Feng Ting-Fan, Tan Jing, Liu Jin, Deng Wensheng 
    Abstract: A new general approach is presented to lay the foundation for a more effective, real-time evacuation system for accidents in chemical plants. In this work, we build the mathematical models and realise automatic grid generation based on physical models stored in advance, using several algorithms in the jMonkeyEngine environment. The results of the simulation data, obtained through the finite difference method (FDM), are visualised in coupling with the physical models. Taking fire as an example, including fires with single and multiple ignition sources, demonstrates the feasibility of the presented approach. Furthermore, a coarse fire alarm and evacuation system has been developed with a multiple-SceneNode and roam system, which also includes the making and importing of the physical models. Future work will focus on improving the accuracy of the mathematical models, the adaptability and refinement of the grids, and the universality of the evacuation system.
    Keywords: simulation; chemical accidents; alarm and evacuation system; jMonkeyEngine.

  • Detecting occluded faces in unconstrained crowd digital pictures   Order a copy of this article
    by Chandana Withana, S. Janahiram, Abeer Alsadoon, A.M.S. Rahma 
    Abstract: Face detection and recognition mechanisms are widely used in various multimedia and security devices. There is a significant body of research into face recognition, particularly in image processing and computer vision. However, significant challenges remain in existing systems owing to the limitations of their underlying algorithms. Viola-Jones and the Cascade Classifier are considered the best algorithms among existing systems; they can detect faces in an unconstrained crowd scene with half- and full-face detection methods. However, the limitations of these systems affect accuracy and processing time. This project proposes a solution called Viola Jones and Cascade (VJaC), based on a study of current systems, their features and limitations. The system considers three main factors: processing time, accuracy and training. These factors are tested on different sample images and compared with current systems.
    Keywords: face detection; unconstrained crowd digital pictures; face recognition.

  • Comprehensive survey of user behaviour analysis on social networking sites   Order a copy of this article
    by Pramod Bide, Sudhir Dhage 
    Abstract: Social networking sites play an important role in every person's life. Users express their emotions online whenever a humanitarian or crisis-like event occurs; many sub-events are stirred up and the internet is flooded with people tweeting/posting their opinions. Identifying user behaviours, their content and their interactions with others can help in event prediction, cross-event detection, user preferences, etc. For these reasons, our research was divided into studying user behaviour with respect to content-centric approaches, probabilistic approaches, and a hybrid incorporating the two. We further investigate the existence of multiple OSNs and how they affect user behaviour. The purpose of this paper is to survey existing research methodologies and techniques, with discussion and comparative studies. User behaviour analysis is carried out based on content-centric, probabilistic and hybrid approaches. Content-centric analysis deals with the content posted, which gives rise to various applications such as gender prediction, detection of malicious users, real-time user preferences, and the influence of emotional content on users. We observe that, in the probabilistic approach, most of the papers reviewed employ clustering mechanisms followed by probability distributions for the analysis of user behaviour.
    Keywords: social media; user behaviour; content centric features; probabilistic features; hybrid features.

  • A neural adaptive level set method for wildland forest fire tracking   Order a copy of this article
    by Aymen Mouelhi, Moez Bouchouicha, Mounir Sayadi, Eric Moreau 
    Abstract: Tracking smoke and fire in videos can provide helpful regional measures to evaluate precisely the damage caused by fires. In security applications, real-time video segmentation of both fire and smoke regions is a crucial operation to avoid disaster. In this paper, we propose a robust tracking method for fire regions in forest wildfire videos using a neural pixel classification approach combined with a nonlinear adaptive level set method based on the Bayesian rule. Firstly, an estimation function is built from chromatic and statistical features using linear discriminant analysis and a trained multilayer neural network in order to obtain a preliminary fire localisation in each frame. This function is used to compute an initial curve and the level set evolution parameters, providing fast, refined fire segmentation in each processed frame. The experimental results prove the accuracy and robustness of the proposed method when tested on a variety of wildfire-smoke scenarios.
    Keywords: fire detection; linear discriminant analysis; neural networks; active contour; level set; Bayesian criterion.

  • A less computational complexity clustering algorithm based on dynamic K-means for increasing lifetime of wireless sensor networks   Order a copy of this article
    by Anupam Choudhary, Sapna Jain, Abhishek Badholia, Anurag Sharma, Brijesh Patel 
    Abstract: Clustering in wireless sensor networks is a critical issue for network lifetime, energy efficiency, connectivity and scalability. Sensor nodes are capable of collecting data from any geographical region using a routing protocol. This research endeavours to design a clustering algorithm of low computational complexity for hierarchical homogeneous wireless sensor networks to extend network lifetime. It forms an optimal number of clusters and reduces the data communication span of sensor nodes using a dynamic K-means algorithm. Selection of a suitable cluster head is based on the ratio of the remaining energy of the sensor node to its distance from the centre of the cluster. The simulation results prove that the presented algorithm achieves better energy efficiency than other hierarchical homogeneous cluster-based algorithms. It increases the network lifetime, the number of alive nodes per round, the data delivered to the base station, and the time for the first, middle and last nodes to die, for scalable situations in terms of node density and size of the sensing region.
    Keywords: wireless sensor network; sensor node; hierarchical homogeneous cluster-based protocols; cluster head; base station; network lifetime.
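The cluster-head criterion this abstract describes (ratio of a node's remaining energy to its distance from the cluster centre) can be sketched as follows; the node tuples, coordinates and energy values are illustrative, not taken from the paper:

```python
import math

def cluster_head(nodes, centre):
    """Pick the node maximising remaining energy / distance to the
    cluster centre, per the selection rule described in the abstract.
    Each node is an illustrative (x, y, remaining_energy) tuple."""
    def score(node):
        x, y, energy = node
        d = math.hypot(x - centre[0], y - centre[1])
        return energy / d if d > 0 else float("inf")
    return max(nodes, key=score)

nodes = [(0.0, 0.0, 2.0), (3.0, 4.0, 5.0), (6.0, 8.0, 9.0)]
print(cluster_head(nodes, (1.0, 1.0)))  # a nearby low-energy node can still win
```

The ratio balances the two goals in the abstract: it favours nodes with energy to spare, but penalises candidates whose distance from the cluster centre would lengthen intra-cluster communication.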

  • Reduced-order modelling of parameterised incompressible and compressible unsteady flow problems using deep neural networks   Order a copy of this article
    by Oliviu Sugar-Gabor 
    Abstract: A non-intrusive reduced-order model for nonlinear parametric flow problems is developed. It is based on extracting a reduced-order basis from full-order snapshots via proper orthogonal decomposition and using both deep and shallow neural network architectures to learn the reduced-order coefficients variation in time and over the parameter space. Even though the focus of the paper lies in approximating flow problems of engineering interest, the methodology is generic and can be used for the order reduction of arbitrary time-dependent parametric systems. Since it is non-intrusive, it is independent of the full-order computational method and can be used together with black-box commercial solvers. An adaptive sampling strategy is proposed to increase the quality of the neural network predictions while minimising the required number of parameter samples. Numerical studies are presented for unsteady incompressible laminar flow around a circular cylinder, transonic inviscid flow around a pitching NACA0012 aerofoil and a gust response for a modified NACA0012 in subsonic compressible flow. Results show that the proposed methodology can be used as a predictive tool for unsteady parameter-dependent flow problems.
    Keywords: non-intrusive parameterised reduced-order model; artificial neural networks; proper orthogonal decomposition; incompressible and compressible flow model order reduction.
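The POD step this abstract describes (extracting a reduced-order basis from full-order snapshots) is, in its usual formulation, a truncated SVD of the snapshot matrix; the sketch below uses synthetic snapshots in place of solver output, and the dimensions are illustrative:

```python
import numpy as np

# Snapshot matrix: each column is one full-order flow solution
# (synthetic data here; a real flow solver would supply these).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 30))

# POD basis from the thin SVD; keep the r leading modes
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5
basis = U[:, :r]                      # reduced-order basis (200 x r)

# Reduced coefficients a = basis^T q; a neural network would then learn
# these coefficients as a function of time and the model parameters.
coeffs = basis.T @ snapshots          # (r x 30)

# Relative reconstruction error drops as r grows
recon = basis @ coeffs
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(round(float(err), 3))
```

Because the reduced coefficients are learned non-intrusively from such snapshots, the full-order solver is only ever queried for training data, which is what makes the approach compatible with black-box commercial codes.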

  • Fusion-based Gaussian mixture model for background subtraction from videos   Order a copy of this article
    by T. Subetha, S. Chitrakala, M. Uday Theja 
    Abstract: Human Activity Recognition (HAR) aims at recognising and interpreting the activities of humans from videos, and comprises background subtraction, feature extraction and classification stages. Among these, the background subtraction stage is essential to achieving a good recognition rate when analysing videos. The proposed Fusion-based Gaussian Mixture Model (FGMM) background subtraction algorithm extracts the foreground from videos and is invariant to illumination, shadows and dynamic backgrounds. The algorithm consists of three stages: background detection, colour similarity, and colour distortion calculation. The Jeffries-Matusita distance measure is used to check whether the current pixel matches the Gaussian distribution, and this value is used to update the background model. A weighted Euclidean colour similarity measure eliminates shadows, and a colour distortion measure handles illumination variations. The extracted foreground is binarised to simplify the extraction of interest points, and the foreground, represented as white pixels, is stored in the frame. The algorithm is tested on test sets gathered from publicly available benchmark datasets, including the KTH, Weizmann, PETS and change detection datasets. Results prove that the proposed FGMM achieves better foreground detection accuracy than prevailing approaches.
    Keywords: human activity recognition; Gaussian mixture model; fusion-based Gaussian mixture model; background subtraction.
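The Jeffries-Matusita measure this abstract uses for pixel-to-Gaussian matching can be written down, for univariate Gaussians, via the Bhattacharyya distance; this is the textbook form, not necessarily the exact variant the paper employs:

```python
import math

def jeffries_matusita(mu1, var1, mu2, var2):
    """JM distance between two univariate Gaussians, computed from the
    Bhattacharyya distance B; bounded above by sqrt(2)."""
    b = (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
         + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))
    return math.sqrt(2.0 * (1.0 - math.exp(-b)))

# Identical distributions -> 0; well-separated ones -> close to sqrt(2)
print(jeffries_matusita(0.0, 1.0, 0.0, 1.0))
print(jeffries_matusita(0.0, 1.0, 10.0, 1.0))
```

The saturating upper bound of sqrt(2) is what makes JM attractive for match/no-match decisions: unlike a raw Mahalanobis distance, it does not grow without limit for outlier pixels.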

  • Design and analysis of search group algorithm-based PD-PID controller plus redox flow battery for automatic generation control problem   Order a copy of this article
    by Ramana Pilla, Tulasichandra Sekhar Gorripotu, Ahmad Taher Azar 
    Abstract: This paper analyses the ability of a redox flow battery (RFB) to minimise the tie-line power and frequency deviations of a five-area thermal power system. Initially, a power system network with five areas and a generation rate constraint nonlinearity is designed in the MATLAB/SIMULINK environment. After that, a proportional derivative-proportional integral derivative (PD-PID) controller is evaluated for the proposed system. Finally, the RFB is installed in area-1, area-2, area-3, area-4 and area-5 for dynamic response enhancement. Simulation results show that better transient response characteristics can be obtained by using the PD-PID controller along with the RFB in area-1. A robustness analysis is also performed to show the capability of the proposed method.
    Keywords: dynamic response; generation rate constraint; PD-PID controller; redox flow battery; search group algorithm; transient response.

  • Accurate detection of network anomalies within SNMP-MIB dataset using deep learning   Order a copy of this article
    by Ghazi Al-Naymat, Hanan Hussain, Mouhammd Al-Kasassbeh, Nidal Al-Dmour 
    Abstract: An efficient algorithm for supporting intrusion detection systems (IDS) is required to identify unauthorised access that attempts to compromise the confidentiality, integrity and availability of computer networks. Machine learning approaches such as (a) multilayer perceptrons, (b) support vector machines, (c) nearest neighbour classifiers and (d) ensemble classifiers such as Random Forest (RF) show higher accuracy only when additional feature selection techniques such as Infogain, ReliefF or Genetic Search are used. When the data gathered for training and testing is huge, with a large number of features, the extra computation of feature selection results in higher consumption of hardware resources (CPU, memory and bandwidth). Deep Learning (DL) algorithms, by contrast, perform feature selection automatically, overcoming this limitation. In this paper, a deep learning method based on the Stacked Autoencoder (SAE) is proposed for detecting seven different types of network anomaly using the SNMP-MIB dataset. The autoencoder is a variant of the neural network that transforms a set of n inputs into a reduced set of m outputs (encoding); these outputs are then processed by the decoding part to produce an output of n dimensions that approximates the initial input. Autoencoders are stacked one by one to form a deep SAE. The parameters of the model are selected by trial and error to find the best training functions, activation functions, learning rate, etc. The proposed deep learning method attains a high accuracy of 100% and saves the extra computation and resources spent on feature selection. The proposed model is also compared with 22 prominent machine learning techniques from the following categories: (i) decision trees, (ii) discriminant analysis, (iii) support vector machines, (iv) nearest neighbour classifiers and (v) ensemble classifiers. Our model is found to outperform all the other machine learning algorithms in terms of accuracy, precision and recall.
    Keywords: deep learning; DoS; network anomalies; SNMP-MIB; detection.
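The encode/decode dimensionality this abstract walks through (n inputs compressed to an m-dimensional code, decoded back to n, then stacked) can be sketched structurally; the layer sizes, sigmoid activation and untrained random weights are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w, b):
    # one dense layer with a sigmoid activation
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# One autoencoder: encode n -> m, decode m -> n (weights are untrained
# here; in practice they are fitted to reproduce the input).
n, m = 34, 16                        # e.g. n SNMP-MIB features
w_enc, b_enc = rng.standard_normal((n, m)) * 0.1, np.zeros(m)
w_dec, b_dec = rng.standard_normal((m, n)) * 0.1, np.zeros(n)

x = rng.standard_normal((8, n))      # a batch of 8 records
code = layer(x, w_enc, b_enc)        # reduced representation (8 x m)
recon = layer(code, w_dec, b_dec)    # reconstruction (8 x n)

# "Stacking": the code of one autoencoder feeds the next encoder
w_enc2, b_enc2 = rng.standard_normal((m, 8)) * 0.1, np.zeros(8)
code2 = layer(code, w_enc2, b_enc2)
print(code.shape, recon.shape, code2.shape)
```

The progressively narrower codes are what replaces explicit feature selection: each stacked encoder learns a compressed representation of the layer below it.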

  • An interpolation algorithm of B-spline curve based on S-curve acceleration/deceleration with interference pre-treatment   Order a copy of this article
    by Guirong Wang, Qi Wang 
    Abstract: A B-spline curve interpolation algorithm based on S-curve acceleration/deceleration (ACC/DEC) with interference pre-treatment is proposed to achieve a smooth transition of feed-rate and to reduce the impact of acceleration mutation in computer numerical control (CNC) machining. According to the chord error requirement, the algorithm adaptively adjusts the feed-rate of each interpolation point and divides a B-spline curve at velocity cusps. The interference points of the whole curve are found by applying the S-curve ACC/DEC calculation to the velocity cusps of the whole curve in the forward and reverse directions. Then, the feed-rate at the interference points is re-determined to avoid jerk overrun on the curve segments between mutually interfering points, so as to improve the processing stability of the machine tool. Simulation and experimental results demonstrate that the algorithm achieves a smooth transition of feed-rate and acceleration in CNC machining and ensures that the jerk meets the ACC/DEC constraints of the system. CNC systems can use this method for high-precision, high-speed machining of complex products.
    Keywords: S-curve ACC/DEC; interference pre-treatment; piecewise curve; interference points; feed-rate scheduling; CNC machine tools; interpolation algorithm.

  • A new multistable jerk system with Hopf bifurcations, its electronic circuit simulation and an application to image encryption   Order a copy of this article
    by Sundarapandian Vaidyanathan, Irene M. Moroz, Ahmed A. Abd El-Latif, Bassem Abd-El-Atty, Aceng Sambas 
    Abstract: In this work, we announce a new 3-D jerk system and show that it is chaotic and dissipative with the calculation of the Lyapunov exponents of the system. By performing a detailed bifurcation analysis, we observe that the new jerk system exhibits Hopf bifurcations. It is also shown that the new jerk system exhibits multistability behaviour with two coexisting chaotic attractors. An electronic circuit simulation of the jerk system is built using Multisim. Finally, based on the benefits of our proposed chaotic jerk system, we design a new approach to image encryption as a cryptographic application of our chaotic jerk system. The simulation outcomes prove the efficiency of the proposed encryption scheme with high security.
    Keywords: bifurcations; chaos; chaotic systems; circuit design; jerk systems; image encryption.

  • Compensation of variability using median and i-Vector+PLDA for speaker identification of whispering sound   Order a copy of this article
    by Vijay Sardar 
    Abstract: Speaker identification from a whispered voice is difficult compared with neutral speech, as voiced phonation is missing in a whisper. The success of a speaker identification system depends mainly on the selection of appropriate audio features. The various available audio features are explored here, and it is shown that timbre features are able to identify the whispering speaker. Only the well-performing, and thus a limited set of, timbre features are selected by a hybrid selection algorithm. The timbre features brightness, roughness, roll-off, MFCC and irregularity, using the CHAIN database, improve the identification outcomes by 5.8% over the baseline system. The framework ought to be robust enough to compensate for intra-speaker and inter-speaker variability, including channel effects. Analysis using timbre features based on median values indicates that intra-speaker variability is compensated. The use of median timbre features yields a further enhancement of 1.12% compared with plain timbre features, and a further decline in False Negative Rate (FNR). The use of i-Vector + probabilistic linear discriminant analysis (PLDA) and a Support Vector Machine (SVM, cosine kernel) contributes a relative improvement in accuracy of 8.13%. The reductions in False Positive Rate (FPR) and False Negative Rate (FNR) confirm better variability compensation.
    Keywords: whispered speaker; median timbre feature; i-Vector; cosine kernel; support vector machine.

  • A model predictive control strategy for field-circuit coupled model of PMSM   Order a copy of this article
    by Zhiyan Zhang, Pengyao Guo, Yan Liu, Hang Shi, Yingjie Zhu, Hua Liu 
    Abstract: Based on an analysis of the mathematical equations and drive circuit of the permanent magnet synchronous motor (PMSM), a model predictive control strategy for the PMSM controller is proposed. The stator current discretisation model and the cost function of model predictive control are established, and the voltage vector selection is derived. Then, the coupling mechanism among motor, driver and controller is analysed, and the field-circuit coupled model of a 1 kW PMSM using model predictive control is set up. Next, the starting performance, load characteristics and electromagnetic field of the motor are obtained. Good speed and electromagnetic characteristics verify the effectiveness of the PMSM control strategy and the correctness of the PMSM field-circuit coupled model. Finally, the back-EMF waveforms and their harmonics in the field-circuit coupled model and in the finite-element model without drive circuit and controller are compared and analysed. The simulation results show that the amplitude of the back EMF in both models is basically the same, while the field-circuit coupled model has a higher THD value and can therefore simulate practical conditions.
    Keywords: PMSM; model predictive control; voltage vector selection; field-circuit coupled model.
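The voltage-vector selection step this abstract derives can be sketched in the finite-control-set style: predict the stator current one step ahead for each candidate inverter vector and keep the one minimising the cost function. The motor constants, the simplified decoupled forward-Euler prediction, and the d-q values below are illustrative assumptions, not the paper's model:

```python
import math

# Illustrative motor constants: stator resistance, inductance, sample time
Rs, Ld, Ts = 0.5, 0.008, 1e-4

def predict(i_dq, v_dq):
    # simplified one-step forward-Euler stator-current prediction
    # (cross-coupling and back-EMF terms omitted for brevity)
    return tuple(i + Ts / Ld * (v - Rs * i) for i, v in zip(i_dq, v_dq))

def cost(i_pred, i_ref):
    # quadratic current-tracking cost
    return sum((a - b) ** 2 for a, b in zip(i_pred, i_ref))

# The 7 distinct voltage vectors of a two-level inverter (zero + 6 active),
# expressed here directly in the d-q frame for simplicity.
vdc = 300.0
vectors = [(0.0, 0.0)] + [
    (2 / 3 * vdc * math.cos(k * math.pi / 3),
     2 / 3 * vdc * math.sin(k * math.pi / 3)) for k in range(6)
]

i_now, i_ref = (0.0, 0.0), (5.0, 0.0)
best = min(vectors, key=lambda v: cost(predict(i_now, v), i_ref))
print(best)  # the vector pushing current toward the d-axis reference
```

In the field-circuit coupled model, this selection runs every sampling period, so the controller block feeds the chosen switching state directly to the drive circuit.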

  • Mechanics of the tubing string for supercritical CO2 fracturing   Order a copy of this article
    by Wenguang Duan, Baojiang Sun, Deng Pan, Hui Cai 
    Abstract: Supercritical CO2 fracturing is one of the most efficient ways of increasing petroleum productivity. The tubing string is necessary for fracturing and plays an important role in the fracturing process. A mechanical model of the tubing string in the well is set up, the forces on the tubing string are analysed, and the mechanical formulas are derived. The stresses on the tubing string are calculated and the strength of the tubing string is checked. The running accessibility of the tubing string through the well is studied, and equations are given for calculating the critical forces on the tubing string that cause sinusoidal buckling and helical buckling. Based on the finite element method, a model is set up, and the stress and deformation of the tubing string in the horizontal and deviated well sections are calculated. Results show that, under the given conditions, the tubing string is safe and efficient.
    Keywords: supercritical CO2; fracturing; tubing string; running accessibility; mechanics.

  • Ontology-based broker system for interoperability of federated cloud computing platforms   Order a copy of this article
    by Surachai Huapai, Unnadathorn Moonpen, Thepparit Banditwattanawong 
    Abstract: This paper presents an ontology-based broker system for the interoperability of federated clouds. The system can provision cloud infrastructure resources from different platforms to meet users' requirements for Infrastructure as a Service (IaaS). The system engages an ontology to enable the interoperability of heterogeneous IaaS management platforms: OpenStack, Apache CloudStack and VMware ESXi. It provisions appropriate cloud-infrastructure resources from the available platforms based on a vector-space algorithm. Evaluation results on two datasets of non-scheduled and scheduled IaaS user requests show that our system is practical: average latencies to generate REST commands for virtual machine provisioning are under a second per request and linearly proportional to the number of provisioned servers.
    Keywords: federated-cloud computing; cloud broker; cloud ontology; infrastructure as a service; interoperability.
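The vector-space matching the abstract mentions can be illustrated with a simple cosine-similarity ranking of resource vectors; only the platform names come from the abstract, while the (vCPU, RAM GB, disk GB) encoding and the numbers are invented for illustration:

```python
import math

def cosine(a, b):
    # cosine similarity between two resource vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# A user request and per-platform offers as (vCPU, RAM GB, disk GB);
# the broker picks the platform whose offer best matches the request.
request = (4, 8, 100)
offers = {
    "OpenStack": (4, 8, 120),
    "CloudStack": (2, 4, 40),
    "ESXi": (8, 32, 500),
}
best = max(offers, key=lambda name: cosine(request, offers[name]))
print(best)
```

A real broker would normalise the dimensions first (disk GB otherwise dominates), but the ranking idea is the same: requests and offers live in one vector space and similarity drives placement.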

  • On the estimation of makespan in runtime systems of enterprise application integration platforms: a mathematical modelling approach   Order a copy of this article
    by Fernando Parahyba, Rafael Z. Frantz, Fabricia Roos-Frantz 
    Abstract: Integration platforms are tools developed to support the modelling, implementation and execution of integration processes, so that data and functionality from applications in software ecosystems can be reused. The runtime system is a key piece of software in an integration platform and is directly related to its performance; makespan is a metric used to measure performance in these systems. In this paper, we propose a mathematical model to estimate the makespan of integration processes that run on application integration platforms built on the theoretical task-based model. Our model has been shown to be accurate and viable in assisting software engineers with the configuration and deployment of integration processes on an actual integration platform. The model was validated by means of a set of experiments, which we report in the paper.
    Keywords: enterprise application integration; makespan; runtime system; mathematical modelling; integration platforms.

  • Gradient iterative based kernel method for exponential autoregressive models   Order a copy of this article
    by Jianwei Lu 
    Abstract: Two kernel-method-based gradient iterative algorithms are proposed for exponential autoregressive (ExpAR) models in this study. A polynomial kernel function is utilised to transform the ExpAR model into a linear-parameter model. Since the order of the linear-parameter model is large, a momentum stochastic gradient algorithm and an adaptive step-length gradient iterative algorithm are developed. Both algorithms can estimate the parameters with less computational effort. Finally, a simulation example shows that the proposed algorithms are effective.
    Keywords: ExpAR model; kernel method; linear-parameter model; momentum stochastic gradient algorithm; adaptive step-length; gradient iterative algorithm.
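The momentum stochastic gradient idea the abstract mentions can be sketched on a generic linear-parameter model standing in for the polynomial-kernel expansion of an ExpAR model; the data, dimensions and hyperparameters below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic linear-parameter model y = phi . theta (phi plays the role of
# the kernel-expanded regressor matrix; values are illustrative).
phi = rng.standard_normal((500, 20))
theta_true = rng.standard_normal(20)
y = phi @ theta_true + 0.01 * rng.standard_normal(500)

# Momentum stochastic gradient: v <- gamma*v + lr*grad, theta <- theta - v
theta, v = np.zeros(20), np.zeros(20)
lr, gamma = 0.002, 0.9
for _ in range(50):                        # 50 passes over the data
    for i in rng.permutation(500):
        grad = (phi[i] @ theta - y[i]) * phi[i]   # per-sample LS gradient
        v = gamma * v + lr * grad
        theta -= v

print(float(np.linalg.norm(theta - theta_true)))
```

The momentum term smooths the noisy per-sample gradients, which is what lets the method cope with the large parameter count of the expanded model at low per-step cost.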

  • Ontology-based data integration for the internet of things in a scientific software ecosystem   Order a copy of this article
    by Jade Ferreira, José Maria N. David, Regina Braga, Fernanda Campos, Victor Stroele, Leonardo De Aguiar 
    Abstract: The Internet of Things (IoT) enables smart observation of the environment, producing a large amount of heterogeneous data. On the one hand, it allows the remote collection of data, either providing a ready field dataset compilation or serving as a secondary source of information to better analyse the research context. On the other hand, all the raw data generated by disparate sensors need to be integrated to leverage the power of IoT in scientific experiments. This paper proposes an ontology-based data integration architecture that allows data from different sources, formats and semantics to be integrated and organised by a mediating ontology that provides knowledge inference. The architecture is evaluated through use-case testing in a scientific software ecosystem that supports all stages of the experiment life cycle.
    Keywords: ontology; internet of things; data integration; scientific software ecosystem.

  • New media art design in commercial public space   Order a copy of this article
    by Zhigang Wang, Y.E. Wang, Y.U. Sun 
    Abstract: New media art in commercial public space is very beneficial for art communication and commercial transformation. Mass communication awareness can help to maximise the value of new media art and even strengthen people's public awareness. The article examines two aspects, experience design and the impact on people's lifestyle, to understand the impact of new media art in commercial public space. The fundamental aim of experience design is to let people participate in interactive experience activities. The content includes the combination of art and technology, the combination of the public space environment and the form of new media art, and people's experiential and emotional cognition. The new media art of commercial space will build a multi-dimensional cultural consumption place, from the material to the symbolic to the spiritual level, in order to stimulate inner demand and resonance between people and goods and give deeper cultural significance to consumer activities.
    Keywords: commercial public space; new media art; mass communication.

  • A vision of 6G: technology trends, potential applications, challenges and future roadmap   Order a copy of this article
    by Syed Agha Hassnain Mohsan 
    Abstract: Ongoing research on the fifth generation (5G) has exposed many inherent drawbacks in this technology. These limitations of 5G have spurred global research activities focused on future sixth-generation (6G) technology. The fundamental architecture and performance requirements of 6G are yet to be explored, and academic research and industrial synergy are accelerating to conceptualise 6G. The widespread applications of blockchain, the internet of things (IoT), artificial intelligence (AI), augmented reality (AR), virtual reality (VR) and extended reality (XR) have driven the need for an emerging 6G technology. 6G will have a profound impact on ubiquitous, deep and intelligent connectivity. We envisage 6G as an ultra-dense, heterogeneous, highly dynamic and innately intelligent network. The current upsurge of diversified mobile networks has spurred heated discussion on the evolution of 6G. In this study, we outline a holistic vision of the tenets of 6G, which we expect to bring technological trends through exciting services and applications. 6G is envisaged to revolutionise several allied technologies and applications. Furthermore, it will enable the Internet of Everything (IoE), which will profoundly affect Quality of Experience (QoE) and Quality of Service (QoS). Integration of IoE and 6G will improve the performance of flying sites, smart cities, robotic communication, vehicular networks and remote surgical operations. In this review, we envision potential applications of, and challenges in, future 6G technologies. The intent of this study is to lay a foundation for out-of-the-box research around 6G applications; the roadmap review highlights 6G applications as well as potential challenges. We believe this review will help aggregate research efforts and eliminate technical uncertainties on the way to breakthrough 6G novelties.
    Keywords: 5G; 6G; Blockchain; IoT; mobile networks; internet of everything; ubiquitous connectivity.

  • Fast position tracking control of PMSM under the high frequency and variable load   Order a copy of this article
    by Jiafeng Zhang, Jinghua Wang, Yang Liu 
    Abstract: Motivated by the force and deflection characteristics of the rudder surface of a supercavitating vehicle, the problem of fast PMSM position tracking under high-frequency variable-load conditions is posed. To address the poor tracking of the traditional three-closed-loop position tracker under high-frequency variable load, and building on feedforward control theory and the traditional three-closed-loop tracker, a position tracking control strategy of 'three closed loops + speed loop fuzzy feedforward compensation + current loop feedforward compensation' is proposed. First, feedforward compensation of the speed loop reference input improves the response speed of the system and the fast position tracking accuracy. Second, feedforward compensation of the current loop reference input effectively overcomes the influence of high-frequency variable loads on position tracking and further improves tracking accuracy. Theoretical analysis then shows that the two feedforward links do not change the stability of the traditional three-loop position tracker, and a design method for the two feedforward coefficients is given. Finally, three comparative simulation experiments illustrate the effectiveness of speed loop feedforward compensation, current loop feedforward compensation and fuzzy control, respectively, in improving the accuracy of fast PMSM position tracking under high-frequency variable-load conditions. The simulation results also verify that the proposed position tracking control strategy has better response speed, position tracking accuracy and anti-interference performance than the 'traditional three closed loop' and 'three closed loop + speed loop feedforward compensation' position trackers.
    Keywords: permanent magnet synchronous motor; high frequency variable load; fast position tracking; three closed-loop control; fuzzy feedforward compensation; load observer.
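    The strategy above layers feedforward terms on top of conventional feedback loops. As a minimal, hedged sketch (the plant, gains and ramp reference below are hypothetical illustrations, not the paper's PMSM model), the following shows why a reference-derivative feedforward term removes the steady-state lag of a purely proportional tracking loop:

    ```python
    # Illustrative sketch only: a proportional loop tracking a ramp reference,
    # with and without a reference-derivative feedforward term.
    # Plant, gains and reference are hypothetical, not from the paper.

    def track(kp, kff, dt=0.001, steps=2000):
        """Simulate plant x' = u for ramp reference r(t) = t; return final |error|."""
        x = 0.0
        for k in range(steps):
            t = k * dt
            r = t                              # ramp reference
            r_dot = 1.0                        # known reference derivative
            u = kp * (r - x) + kff * r_dot     # feedback + feedforward
            x += u * dt                        # integrate plant x' = u
        return abs(steps * dt - x)

    err_fb = track(kp=10.0, kff=0.0)   # feedback only: lags by roughly 1/kp
    err_ff = track(kp=10.0, kff=1.0)   # feedforward cancels the steady-state lag
    ```

    The feedback-only loop settles at a tracking lag of about r_dot/kp, while the feedforward term drives the error toward zero without raising the feedback gain.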

  • A chaos-enhanced accelerated PSO algorithm in reliable tracking of mobile objects   Order a copy of this article
    by Sahar Teymori, Peyman Babaei 
    Abstract: Object tracking in monitoring applications is one of the key topics in the Internet of Things (IoT). The most important challenges in object tracking concern energy consumption, quality of service and reliability. Given the good performance of the PSO algorithm in global optimisation and its weakness in local optimisation, the proposed approach combines PSO with a chaos operator to overcome this flaw. The purpose of the present research is to reduce energy consumption, improve service quality and increase system reliability in mobile object tracking in WSNs, as well as to improve the performance of the algorithm itself. According to the simulation results, energy consumption in the proposed method is improved owing to the optimal selection of cluster heads. The proposed method is also stable while improving reliability and increasing the quality of sensor network services.
    Keywords: object tracking; wireless sensor network; chaos theory; particle swarm optimisation.
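    The chaos-enhanced PSO idea above can be sketched on a one-dimensional test function. This is a hedged illustration assuming a logistic chaotic map and a chaotic local search around the global best; the paper's exact chaos operator and parameters may differ:

    ```python
    import random

    # Illustrative sketch only: PSO on a 1-D quadratic, with a logistic chaotic
    # map perturbing the global best to help escape local stagnation.

    def logistic(z, mu=4.0):
        return mu * z * (1.0 - z)              # classic chaotic map on (0, 1)

    def chaos_pso(f, n=20, iters=100, lo=-5.0, hi=5.0, seed=1):
        random.seed(seed)
        xs = [random.uniform(lo, hi) for _ in range(n)]
        vs = [0.0] * n
        pbest = xs[:]
        gbest = min(pbest, key=f)
        z = 0.7                                 # chaos state
        for _ in range(iters):
            for i in range(n):
                r1, r2 = random.random(), random.random()
                vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                                     + 1.5 * r2 * (gbest - xs[i]))
                xs[i] = min(hi, max(lo, xs[i] + vs[i]))
                if f(xs[i]) < f(pbest[i]):
                    pbest[i] = xs[i]
            gbest = min(pbest + [gbest], key=f)
            z = logistic(z)                     # chaotic local search step
            cand = gbest + (z - 0.5) * (hi - lo) * 0.01
            if f(cand) < f(gbest):
                gbest = cand
        return gbest

    best = chaos_pso(lambda x: (x - 1.0) ** 2)  # minimum at x = 1
    ```

    The chaotic perturbation gives the swarm a cheap, non-repeating local search around the incumbent best, which is the role the abstract assigns to the chaos operator.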

  • Formal specification at model-Level of model-driven engineering using modelling techniques   Order a copy of this article
    by Jnanamurthy HK, Frans Henskens, David Paul, Mark Wallis 
    Abstract: Nowadays, model-driven engineering (MDE) is gaining popularity owing to high-level development leading to faster generation of executable code, which reduces manual intervention. Verification is crucial at different levels of model-based development. Model-based development, along with a formal verification process, assures that the developed model satisfies the software requirements described in formal specifications. Owing to inadequate knowledge of formal methods (complex mathematical theory), software developers rarely adopt formal methods during software development. Several approaches in the literature transform MDE models directly into formal models for formal verification, but they require formal specifications as an additional input to the verification tools, and they have not addressed the problem of expressing formal specifications at the model level. In this paper, we design a modelling framework that allows formal properties to be specified at the model level and automatically extracts formal specifications and formal models from the developed application models for use in formal verification. The proposed method allows full automation and reduces the time needed for the formal verification process during the development life-cycle. Furthermore, it reduces the complexity of learning formal specification notations, since specifications specified at the model level are automatically converted into the formal specifications required as input by verification tools.
    Keywords: model-driven development; formal specification; formal verification; temporal logic; model-driven architecture.

  • Design, optimisation and implementation of a DCT/IDCT based image processing system on FPGA   Order a copy of this article
    by Shensheng Tang, Monali Sinare, Yi Zheng 
    Abstract: In this paper, a discrete cosine transform (DCT) and its inverse transform (IDCT) are designed and optimised for FPGA using the Xilinx VIVADO High-Level Synthesis (HLS) tool. The DCT and IDCT algorithms, along with filter logic written in C/C++, are simulated for functional verification, optimised through HLS and packaged as custom IPs. The IPs are incorporated into a VIVADO project to form an image processing system for hardware validation. The VIVADO design, along with a Xilinx SDK application written in C, is implemented on a Zynq FPGA development board, the Zedboard. A C# GUI is developed to transfer image data to/from the FPGA and display the original and processed images. Experimental results are presented and discussed. The FPGA development method, including the DCT/IDCT IP design, optimisation and implementation via HLS as well as the VIVADO project integration, can be extended to a wider range of FPGA applications.
    Keywords: DCT; IDCT; FPGA; VIVADO HLS; IP; Zedboard; GUI; C/C++; Verilog; C#; optimisation; C/RTL co-simulation; hardware validation.
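    The transform pair at the heart of the system above is the standard DCT-II and its inverse (DCT-III). As a hedged reference sketch in floating point (not the HLS-optimised hardware version described in the paper), an orthonormal 1-D pair and its round trip look like this:

    ```python
    import math

    # Illustrative sketch only: orthonormal 1-D DCT-II and inverse (DCT-III).
    # Pure-Python floating-point reference, not the paper's fixed-point IP core.

    def dct(x):
        N = len(x)
        out = []
        for k in range(N):
            s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            out.append(c * s)
        return out

    def idct(X):
        N = len(X)
        out = []
        for n in range(N):
            s = X[0] / math.sqrt(N)
            s += sum(math.sqrt(2.0 / N) * X[k] *
                     math.cos(math.pi * (n + 0.5) * k / N) for k in range(1, N))
            out.append(s)
        return out

    row = [52, 55, 61, 66, 70, 61, 64, 73]   # one 8-pixel image row
    recon = idct(dct(row))                   # round trip recovers the input
    ```

    A software model like this is the usual golden reference against which an HLS C/RTL co-simulation of the IP cores is checked.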

  • Interactive smart home technologies for users with visual disabilities: a systematic mapping of the literature   Order a copy of this article
    by Otávio Oliveira, André Freire, Raphael De Bettio 
    Abstract: This paper presents a systematic mapping of the literature concerning interactive technologies for smart homes targeting users with visual disabilities. The analysis stemmed from a search resulting in 265 papers, of which 25 were selected. The results show the main types of interaction mode reported, including voice, gesture, touch, keyboard and ambient sensors. Technological approaches included desktop computers, mobile devices, embedded systems and stand-alone smart devices. The studies highlighted important features for aggregating different interaction modalities and providing accessible interfaces on mobile and desktop devices to interact with the home. This paper provides valuable insight into the implications for the design of smart home technologies for users with visual disabilities and identifies significant research gaps to be investigated in the future, including overcoming barriers with legacy inaccessible utilities and methodologies to enhance user research in the area.
    Keywords: smart homes; ambient-assisted living; visually impaired user; blind user.

  • Confirmed quality aware recommendations using collaborative filtering and review analysis   Order a copy of this article
    by Seema Nehete, Satish Devane 
    Abstract: Recommendation systems (RS) save users time in their hectic schedules when purchasing products. RS face the challenges of data sparsity, cold start and prediction efficiency; hence, the proposed system uses Multi-Kernel Fuzzy C-Means (MKFCM) clustering to group users of similar age, occupation and gender into clusters. The clusters of similar users are optimised using the Fruit Fly (FF) optimisation algorithm, which gives high cluster accuracy and dynamically creates subclusters of similar users and their favourite products, overcoming sparsity issues and simplifying the analysis. Collaborative Filtering (CF), one of the filtering methods of RS, is used to predict products for target users. The RS gains users' trust by additionally analysing textual reviews using an optimised Artificial Neural Network (ANN) to recommend the highest quality products; thus dual-tested, quality-confirmed products are recommended to the user. Experimentation on the standard MovieLens dataset, used by many researchers, proves the efficiency of this RS, with reviews of all users extracted from online search engines for product quality analysis before recommendation. Experimentation shows higher recall and accuracy than existing recommendation systems.
    Keywords: clustering; recommendation systems; collaborative filtering; artificial neural network.

  • A combined solution for flexible control of poultry houses   Order a copy of this article
    by Lucas Schmidt, Dalcimar Casanova, Richardson Ribeiro, Marcelo Teixeira 
    Abstract: In poultry houses, thermal comfort is decisive for maximising the feed conversion rate, a measure of successful production. As there are different automatic control options that result in variable performance indexes, this paper reproduces, tests and compares two of them: a reactive one, which applies event-driven methods, and a bio-inspired one, based on artificial intelligence techniques. As each approach adds specific advantages to process control, we combine them into a single framework that gathers their best features. Simulations using real data show that temperature and humidity were reproduced with 97% and 80% precision, respectively. In comparison, the reactive and bio-inspired approaches show accuracies of 90% and 82% for temperature, and 60% and 66% for humidity, respectively. We therefore conclude that our approach can improve on both reactive and bio-inspired control, standing as a feasible and flexible alternative for the control of poultry houses.
    Keywords: intelligent systems; poultry houses; automatic control; formal modelling.

  • Asynchronous dynamic arbiter for network on chip   Order a copy of this article
    by Abdelkrim Zitouni, Bouraoui Chemli 
    Abstract: In a modern Network on Chip (NoC), communicating blocks are synchronised with different clock rates. However, system performance may present a bottleneck that can be remedied only by considering asynchronous communication. The implementation of a high-performance asynchronous NoC router requires the design of dynamic arbitration structures to lower packet latency and thus increase throughput, while keeping the dissipated power as small as possible. This paper presents a design approach for asynchronous dynamic arbiters to be implemented in NoC routers. The design starts from a State Transition Graph (STG) as the specification model and generates a Quasi-Delay-Insensitive (QDI) arbiter implemented with C-element gates. The designed arbiter communicates with the shared resources using a four-phase (Req/Ack) handshaking protocol. Arbiter performance has been evaluated through the implementation of an asynchronous 2D-mesh NoC router in FPGA (Virtex 5) and ASIC (28 nm) technologies. Experimental results show that the proposed router outperforms its counterparts. In the ASIC design, the router achieves low power (3.8 mW), low area (0.009 mm2), low latency (1.53 ns) and high packet throughput (1562 Mflit/s).
    Keywords: asynchronous dynamic arbiter; STG; C-element; NoC router; FPGA/ASIC designs.

  • SCATAA-CT: smart course attendance tracking android application in classroom teaching   Order a copy of this article
    by Saadeh Z. Sweidan, Sondos M. Alshareef, Khalid Darabkh 
    Abstract: Tracking students' attendance manually is an exhausting and time-consuming process for both instructors and students in universities around the world. Nevertheless, instructors are obligated to report students who exceed the allowed absence limit and take formal measures against them. At the same time, the number of smartphone users has increased rapidly in the last decade owing to their attractive features and affordable prices. With the fast spread of smartphones, their applications (apps) have become the most reliable way to provide many services. Today, apps are used in all fields of life, such as social activities, formal government work and even entertainment, which motivated us to introduce the Smart Course Attendance Tracking Android Application in Classroom Teaching (SCATAA-CT). This app aims to reduce the time and effort of tracking attendance in large universities where the number of students in a class is high and attendance is mandatory (based on a university's rules and regulations). Using SCATAA-CT is simple: a course instructor generates a Quick Response (QR) code during a lecture and displays it briefly to the students, who in turn scan the code and send attendance requests back to the instructor's device. To add credibility to the scan process, fingerprint authentication is required, in addition to other restrictions that prevent any possible manipulation. Besides generating QR codes, SCATAA-CT allows instructors to show their course details, read attendance reports, cancel lectures, and block/unblock students based on their absences. Students, in turn, can use the app to view the details of their attendance reports. Moreover, SCATAA-CT exploits notifications efficiently to send users important pre-lecture alerts and post-lecture updates. Our app was tested in practice in a number of courses during the academic year 2019/2020, where it showed efficiency and credibility in tracking students' attendance. Furthermore, the involved students were asked to answer an evaluation survey; the results were very positive, with a number of useful suggestions to consider.
    Keywords: engineering education; attendance tracking; Android application; classroom teaching; fingerprints; biometric authentication.

  • A novel multichannel UART design with FPGA-based implementation   Order a copy of this article
    by Ngoc Pham Thai, Bao Ho Ngoc, Tan Do Duy, Phuc Truong Quang, Ca Phan Van 
    Abstract: The Universal Asynchronous Receiver and Transmitter (UART) is a popular asynchronous serial communication standard. Although its transmission speed is not high, UART has the advantages of simplicity, ease of implementation and low power consumption. It is therefore still used in various digital modules that do not require high communication speed, such as SIM, Bluetooth and GPS modules. However, communication with many low-speed peripherals can reduce the efficiency of data bus usage and the processor's performance. In this paper, we propose a multichannel UART design that efficiently uses the Advanced Peripheral Bus (APB) standard data bus in order to simultaneously support multiple transmission data frames with different rates. We then evaluate the performance of our multichannel UART design by means of simulations and practical implementation on field-programmable gate array boards. The evaluation results show that the proposed multichannel UART module ensures stable operation while guaranteeing proper transmission to/from multiple devices following the UART standard with different configurations.
    Keywords: UART; multichannel; AMBA 3 APB; testbench; field-programmable gate array.

  • Sensor device scheduling based cuckoo algorithm for enhancing lifetime of cluster-based wireless sensor networks   Order a copy of this article
    by Mazin Kadhum Hameed, Ali Kadhum Idrees 
    Abstract: Among the more complicated aspects of Wireless Sensor Networks (WSNs) is developing an efficient topology control technique for saving the energy of the network and increasing its lifespan. This study proposes a Sensor Device Scheduling-based Cuckoo Algorithm (SeDeSCA) for enhancing the lifetime of cluster-based WSNs. The SeDeSCA technique consists of two phases: clustering and scheduling. In the first phase, the WSN is partitioned into clusters using the DBSCAN algorithm. The scheduling phase is periodic and composed of three steps: cluster head polling, optimisation-based scheduling decision, and covering. The sensor nodes in each cluster choose their cluster head. The elected cluster head executes the Cuckoo Algorithm (CA) to select a suitable schedule of sensor nodes to perform the sensing task during the current period. The major aim of the CA-based scheduling is to minimise energy consumption and ensure sufficient coverage of the monitored area while maximising the network lifespan. In the third step, the area of interest is covered by the sensor nodes scheduled to be active during the period. The simulation results show that the SeDeSCA technique improves the global coverage ratio and the lifespan of WSNs.
    Keywords: wireless sensor networks; cuckoo algorithm; DBSCAN; lifetime enhancement; scheduling algorithms.

  • An analysis of real-time traffic congestion optimisation through VTL in VANETs   Order a copy of this article
    by Parul Choudhary, Umang Singh, Rakesh Dweidi 
    Abstract: Traffic congestion is a daunting phenomenon that affects thousands of people worldwide in their everyday lives. Owing to the rapid proliferation of technologies, demand for VANET technology is increasing expeditiously to create an environment for the virtual traffic light (VTL) to minimise traffic congestion. Replacing conventional physical traffic light systems with VTL can be achieved cost-efficiently across vehicular networks. In this paper, we summarise recent state-of-the-art methods in VANETs by discussing the importance of VTL in VANET, its architecture and real-life applications. Further, the work focuses on challenges, characteristics and related domains of allied VANET applications, filling the gaps of existing surveys and incorporating the latest trends around the concept of VTL. We present the effectiveness of VTL by including recent work in real scenarios according to research findings, and offer a systematic review of current VTL methodologies that promise impactful results in the future. Finally, the paper comprehensively covers the entire VANET system and highlights research gaps of VTL that remain to be explored. This work will support researchers in this domain by analysing the literature on VTL in VANET during the period 2007-2019.
    Keywords: traffic congestion; VANETs; virtual traffic light; real life applications.

  • Capacity configuration optimisation of hybrid renewable energy system using improved grey wolf optimiser   Order a copy of this article
    by Huili Wei Wei, Tianhong Pan, Jun Tao, Mingxing Zhu 
    Abstract: An appropriate capacity configuration of a hybrid renewable energy system (HRES) helps reduce the equipment cost of the system configuration and improve its operational reliability. Aiming at minimising the annualised cost and the loss of power supply probability, a capacity configuration optimisation model of a PV-wind HRES is set up in this study. An improved Grey Wolf Optimiser (iGWO) is proposed to optimise the system's configuration. First, the Tent chaotic strategy is used to initialise the population. Then, the convergence factor is modified to balance the local and global search ability of GWO. Finally, meteorological data of wind speed and solar radiation for a typical year in Zhenjiang, China, are taken as a case study to verify the economy and feasibility of the optimal configuration. The results show that the proposed method not only ensures operational reliability but also improves the economic performance of the HRES.
    Keywords: hybrid renewable energy system; optimisation; capacity configuration; improved grey wolf optimiser.
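    The two iGWO modifications named above, Tent-map population initialisation and a modified convergence factor, can be sketched as follows. This is a hedged illustration: the map's break-point and the nonlinear decay shape are assumptions, not necessarily the paper's exact choices:

    ```python
    # Illustrative sketch only: Tent-map initialisation and a nonlinear
    # convergence factor for a grey wolf optimiser (parameters are assumptions).

    def tent_sequence(n, z0=0.37):
        """Generate n values in (0, 1) with the Tent chaotic map."""
        seq, z = [], z0
        for _ in range(n):
            z = z / 0.7 if z < 0.7 else (1.0 - z) / 0.3
            seq.append(z)
        return seq

    def init_population(n, lo, hi):
        # Chaotic values spread the initial wolves more evenly across the
        # search interval than independent uniform draws tend to.
        return [lo + z * (hi - lo) for z in tent_sequence(n)]

    def convergence_factor(t, t_max):
        """Nonlinear decay of GWO's factor a from 2 to 0 (vs. the linear default),
        keeping a large (exploratory) longer before shrinking (exploitative)."""
        return 2.0 * (1.0 - (t / t_max) ** 2)

    wolves = init_population(30, lo=0.0, hi=100.0)
    ```

    In standard GWO the factor a decreases linearly from 2 to 0; reshaping that decay is one common way to rebalance exploration and exploitation, which is the role the abstract assigns to the modified convergence factor.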

  • Metamodel extension approach applied to the model-driven development of mobile applications   Order a copy of this article
    by Ayoub Sabraoui, Anas Abouzahra, Karim Afdel 
    Abstract: Mobile application development is one of the most promising domains in the software industry. The rapid growth of hardware and emerging technologies has resulted in a large number of mobile platforms, which constitutes a challenge that developers must face when they build applications for different platforms. The co-evolution of model-based artifacts is another challenge when metamodels evolve. This paper introduces a model-driven approach for managing metamodel evolution in the context of cross-platform mobile application development. Firstly, we propose an MDD approach, based on a generic DSL, and a set of code generators to generate platform-specific source code. Secondly, the approach provides a graphical framework: i) to extend the original metamodel through a set of rules, ii) to define the mapping between the newly added meta-elements and their corresponding elements in the target platforms, and iii) to automatically update existing code generators. This paper demonstrates the potential and limits of our approach through a concrete case study.
    Keywords: cross-platform mobile development; model-driven development; metamodel evolution; model co-evolution; code generation; domain-specific language.

  • Integration of ubiquitous specifications in the conception of objects system   Order a copy of this article
    by Sonia Aimene, Rassoul Idir 
    Abstract: This paper proposes an approach called Ubi-SO (Ubiquitous System Object) based on a standard life cycle. The approach aims to identify, analyse and design ubiquitous requirements so that they can be incorporated into a traditional system engineering process. It is modelled with BPMN (Business Process Model and Notation) using the Bizagi Modeler tool. Ubi-SO separates functional, technical and contextual ubiquitous needs in the conceptualisation phase. It relies on an extended sequence diagram in the analysis phase and on an extended class diagram in the design phase, using a UML profile for the best adaptation to the ubiquitous domain. Compared with many existing works, this solution offers guidelines for studying and propagating ubiquitous needs. To demonstrate the feasibility of our work, the approach is verified by translating the BPMN into formal LOTOS (Language of Temporal Ordering Specification) and then validated with the CADP tool.
    Keywords: modelling; context-awareness; ubiquitous computing; pervasive applications; mobility; LOTOS; BPMN; UML profile; life cycle.

  • Numerical simulation of full-scale steel-frame undergoing soil pressure   Order a copy of this article
    by Bayang Zhang, Yaohong Zhu, Fachao Li, Jiawei Wang, Jue Zhu 
    Abstract: During the excavation of a connecting channel between main tunnels using the mechanical shield or pipe jacking method, the loading situation on the T-joint is very complicated. A vertically loaded full-scale steel frame that can imitate the surrounding soil loading is devised to ascertain the action mechanism of each part. It can load seven-ring segments and accounts for the soil-spring effect. Prior to the field test, the entire installation of the facility is modelled, simulated and analysed to evaluate the safety of the construction. The analysis shows that the steel frame rotates significantly when it is not welded to the base. After welding the steel frame to the base, the rotation is effectively eliminated, and the stress and displacement of each part are in line with the standard for the design of steel structures.
    Keywords: steel frame; vertically loaded; soil spring; earth pressure; seven-ring.

  • SiNoptiC: swarm intelligence optimisation of convolutional neural network architectures for text classification   Order a copy of this article
    by Imen Ferjani, Minyar Sassi Hidri, Ali Frihida 
    Abstract: Although several researchers have suggested rules for designing deep neural architectures, trial-and-error techniques are often exploited in practice to find the optimal model for a given problem. Automation of deep neural architecture search is therefore highly desirable. In this work, we address this problem by proposing a hybrid coupling of Convolutional Neural Network (CNN) architectures with swarm intelligence, specifically the Fish School Search (FSS) algorithm. This coupling is capable of discovering a promising CNN architecture for text classification tasks. The proposed method allows users to provide training data as input and receive a CNN model as output. It is completely automatic and capable of fast convergence. Computational results show the effectiveness of the proposed method in achieving the best classification loss among manually designed CNNs. This is the first work using FSS to automatically design CNN architectures.
    Keywords: deep learning; convolutional neural network; swarm intelligence; FSS; text classification; NLP.

  • Improved image matching algorithm based on LK optical flow and grid motion statistics   Order a copy of this article
    by Qunpo Liu, Xiulei Xi, Weicun Zhang, Lingxiao Yang, Naohiko Hanajima 
    Abstract: To solve the problems of low accuracy and long processing time of the AKAZE algorithm in the image matching of glass-encapsulated electrical connectors, an improved image matching algorithm based on LK optical flow and grid motion statistics is proposed in this paper. To address the multi-solution problem arising from the construction of the nonlinear scale space in the AKAZE algorithm, an improved AKAZE algorithm fused with the LK optical flow method is proposed. Matching points are obtained by calculating the matching areas, which are made up of the feature points, under conditional constraints. In local feature matching algorithms, the large amount of computation is a pressing problem because the sparse neighbourhood-consistency feature cannot define adjacent areas well. False matching points are removed by an improved grid motion statistics algorithm integrated with the FLANN algorithm, which reduces the matching computation time. The performance of the algorithm is verified by experiments on the Mikolajczyk dataset and actual scene data. The results show that the proposed algorithm can handle the actual scene data, with a CMR above 93% and a processing time within 0.4 s.
    Keywords: positioning; glass-encapsulated electrical connector; AKAZE algorithm; LK optical flow; feature matching; grid motion statistics.

  • Vulnerability assessment of services deployed in internet of things based smart cities   Order a copy of this article
    by Aymen Belghith 
    Abstract: The Internet of Things (IoT) is an innovative and revolutionary technology. It promises to serve humanity and improve our lives by integrating a huge number of heterogeneous end devices and providing diverse, valuable services. However, security remains the key to the success of any technology. In this paper, we aim to provide structural guidelines for securing IoT-based smart cities. IoT-based functional requirements should be considered in parallel with the security requirements based on the security triad: confidentiality, integrity and availability. Moreover, the security challenges of smart cities should be considered along four dimensions: big data, resource limitation, interoperability and scalability. We then present solutions to the main threats. Finally, we assess the vulnerabilities of some services offered in an IoT-based smart city context. We show that currently available services still suffer from several vulnerabilities and need associated security countermeasures for IoT-based smart cities to succeed.
    Keywords: IoT; smart cities; security; countermeasures; vulnerabilities; assessment.

  • Research on semantic similarity calculation methods in Chinese financial intelligent customer service   Order a copy of this article
    by Mingyu Ji, Xinhai Zhang 
    Abstract: Semantic similarity computation is an important task in NLP, which aims to calculate the semantic similarity between texts. To address the low accuracy of existing methods in Chinese financial intelligent customer service, we propose a text similarity calculation method combining a Capsule-BiLSTM network with Chinese part-of-speech (POS) correction. In the text preprocessing stage, POS correction is performed on financial professional words and ambiguous words in the dataset to reduce the influence of Chinese word segmentation errors on similarity judgements. A capsule network and a BiLSTM are used to obtain local and global information, respectively, and the two resulting similarity matrices are fused to determine the similarity. Experimental results show that, compared with other conventional methods, the proposed method achieves high precision and F1 score on the ATEC dataset.
    Keywords: semantic similarity; capsule network; BiLSTM; part of speech correction.

  • Hierarchical structure modelling in uncertain emergency location-routing problem using combined genetic algorithm and simulated annealing   Order a copy of this article
    by Bijan Nahavandi, Mahdi Homayounfar, Amir Daneshvar, Mohammad Shokouhifar 
    Abstract: The emergency location routing problem (ELRP) is a strategic issue in different areas, especially in healthcare systems. Owing to the importance of the problem, this study develops a multiobjective evolutionary model that locates facilities so that facilities at higher levels can serve lower-level facilities efficiently. A comprehensive literature review establishes the research contribution, which includes multiple failure, multiple coverage radii, a hybrid transportation system and demand uncertainty in an ELRP. A mathematical model with two objective functions is presented for the ELRP. The first objective is to maximise the services provided to demand nodes, assuming proper operation of the emergency facilities; the second is to maximise the reliability of the facility system in responding to patients' demands even if some facilities fail to operate. To this end, a backup system with a hybrid ambulance-helicopter transportation system is developed for situations where the primary system cannot properly serve the patients. The failure probability of the emergency facilities is analysed via robust optimisation. To solve the NP-hard ELRP, a balanced exploration-exploitation metaheuristic based on the genetic algorithm (GA) and simulated annealing (SA), named GASA, is proposed. Comparison of the GASA simulation results with a commercial solver demonstrates the higher efficiency of the proposed method.
    Keywords: emergency location routing problem; backup facilities; robust optimisation; genetic algorithm; simulated annealing.

  • Optical satellite image MTF compensation for remote-sensing data production   Order a copy of this article
    by Chao Wang, Ruifei Zhu 
    Abstract: Modulation transfer function (MTF) compensation has been widely used in remote-sensing (RS) imagery processing for texture feature enhancement and the restoration of high-frequency details. This paper proposes an optical satellite MTF compensation (MTFC) method for engineering application and RS data production. The framework mainly comprises point spread function (PSF) model estimation and non-blind image deconvolution. In PSF estimation, we employ the iterative total variation algorithm and 2D Gaussian model fitting to reduce the dependence on slant-edge quality and improve the efficiency of PSF measurement. The PSF model estimation error is less than 3%, using the results from standard commercial software as an evaluation benchmark. In image deconvolution, we adopt a regularisation-based non-blind deconvolution method using a hyper-Laplacian prior. Compared with widely used methods, the MTF result improves by more than a factor of two in the high-frequency range. Its processing efficiency and image quality meet the requirements of RS image products.
    Keywords: optical remote-sensing imagery; MTF compensation; PSF estimation; remote-sensing data production.

  • Geometric distortion correction for projections on non-planar and deformable surfaces based on displacement of peripheral image points   Order a copy of this article
    by Onoise Kio, Lik-kwan Shark 
    Abstract: In the delivery of media content involving projections of images on non-planar and deformable surfaces, there is an inevitable problem of geometric distortion that causes visually incorrect images to be produced in the display region. Presented in this paper is a simple and computationally efficient method for geometric distortion correction. While the simplicity comes from the use of a single uncalibrated camera to capture the projected image, the computational efficiency comes from the use of a small number of image points along the periphery of the display region to estimate the geometric distortion. In particular, cylindrical surface deformation is used to model the geometric displacements of the peripheral image points, and bilinear approximation is used to model the displacements of image points lying between them. Results of tests on static and deformable projection surfaces show that this distortion-correction technique improved the normalised geometric similarity measure of distorted displayed images by as much as 31%.
    Keywords: geometric distortion correction; non-planar projection; projector-camera systems; deformable projection surface; RBF warping.

  • Cross event detection and topic evolution analysis in cross events for man-made disasters in social media streams   Order a copy of this article
    by Pramod Bide, Sudhir Dhage 
    Abstract: Terrorist attacks, chemical attacks, rapes and other socially sensitive incidents are shared on social media to gain attention from the world. Microblogging sites like Twitter become flooded with these root events and their sub-events as they evolve over time; such events are referred to as cross events. Cross event detection is critical in determining the nature of events. The event detection is based on tweet segmentation using the Wikipedia title database. Segment clustering is done based on a similarity measure by encoding the tweets in the form of vectors using the BERT model. We propose a cross event evolution detection framework, which detects cross events that are similar in their temporal nature and result from main events. The experimental results on a real Twitter data collection show the effectiveness of our proposed framework for both cross event detection and topic evolution analysis during the evolution of topics and cross events.
    Keywords: cross event detection; Twitter; topic evolution; BERT.

  • Convolutional neural network model for an intelligent solution to crack detection in pavement images   Order a copy of this article
    by Aaron Rababaah, James Wolfer 
    Abstract: This paper presents a deep learning solution using convolutional neural networks for pavement crack detection. The advancements in machine learning and machine vision open new opportunities for researchers to explore the power of deep learning instead of classical machine learning to solve old and new problems. We propose a convolutional neural network model to detect cracks in pavement. Our solution is based on a multi-layer model that encompasses a raw image input layer, convolutional layers, activation layers, max-pooling layers, a flattening layer, and a multi-perceptron neural network as classification layers. Matlab was our development platform to create and test the solution. Five hundred sample images were collected from publicly-available sources. Sixteen different experiments were conducted to determine the best configuration for the proposed model in terms of the number of features. The results of the experiments suggest that the proposed model is effective with a detection accuracy of 96.6% when correctly configured.
    Keywords: deep learning; convolutional neural networks; pavement images; crack classification; machine vision.

  • DDoS attack detection and defence mechanism based on second-order exponential smoothing using the Holt model   Order a copy of this article
    by Rachana Patil 
    Abstract: Technological progress and digitisation are greatly assisted by the growth of the internet. The web has now become a national asset, and all national security relies on it as well. But these emerging developments have also brought with them unparalleled network threats. Among them, a strong and more powerful attack on the internet is the distributed denial of service attack. This work proposes a novel framework for DDoS detection. To detect anomalous variations in mean distance values, the technique of second-order exponential smoothing (Holt's method) is used. In the proposed context, the DDoS defence module is based on the principle of rate limitation of incoming traffic on the basis of bandwidth and demand rates from a connected device. The experimental verification of the proposed method is done using the NS2 simulator, and the results are evaluated in terms of detection rate, throughput, and false positive rate.
    Keywords: DDoS attack; rate limiting; network security; Holt model.
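
    The detection idea described above can be sketched in a few lines. The following is a minimal, hypothetical Python illustration of Holt's second-order exponential smoothing used as an anomaly detector; the smoothing constants, alert threshold and traffic figures are invented for illustration and are not taken from the paper:

```python
def holt_smooth(series, alpha=0.5, beta=0.3):
    """One-step-ahead forecasts via Holt's second-order (double)
    exponential smoothing: a level term plus a trend term."""
    level, trend = series[0], series[1] - series[0]
    forecasts = [level + trend]              # forecast for t = 1
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (prev_level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        forecasts.append(level + trend)      # forecast for the next step
    return forecasts

# a large gap between forecast and observation flags an anomalous
# variation, e.g. a sudden traffic surge suggestive of a DDoS flood
traffic = [100, 102, 101, 103, 104, 400]
forecasts = holt_smooth(traffic)
alerts = [abs(x - f) > 50 for x, f in zip(traffic[1:], forecasts)]
```

    Only the final sample, the simulated surge, trips the alert; in the paper the smoothed quantity is the mean distance value rather than raw traffic volume.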

  • Cyber attacks visualisation and prediction in complex multi-stage networks   Order a copy of this article
    by Shailendra Mishra, Waleed Bander Alotaibi, Mohammed AlShehri, Sharad Saxena 
    Abstract: In network security, various protocols exist, but they cannot all be said to be secure. Many hackers and illegal agents try to take advantage of vulnerabilities through various incremental penetrations that can compromise critical systems. The conventional tools available for this purpose are not sufficient to handle such threats. Risks are always present in dynamically evolving networks and are very likely to lead to serious incidents. This research work proposes a model to visualise and predict cyber attacks in complex, multilayered networks. All the available network security conditions and the possible places where an attacker can exploit the system are summarised. A vulnerability-based multi-graph technique for the attacker is presented. An attack graph algorithm is also proposed that identifies all the vulnerable paths; it can be used to harden the network by placing sensors at the desired locations, and for vulnerability assessment of multi-stage cyber attacks.
    Keywords: network vulnerability; attack graph; adjacency matrix; clustering technique; cyber defence.

  • Overheat protection for motor crane hoist using internet of things   Order a copy of this article
    by Paduloh Paduloh, Rifki Muhendra 
    Abstract: A crane hoist is a material-moving tool used in the production process. The hoist often stops suddenly owing to overheating; this condition impacts the production process and safety. This study aims to design a safety device to anticipate the occurrence of overheating in the hoist. The research began with brainstorming, prioritising requirements using AHP, designing the product using QFD, and designing the system using UML. The product is designed to use a microcontroller (Arduino), a fan, and GSM to control the motor temperature and to transmit temperature information to the user. The motor cooler operates when a rise in motor temperature is reported. The novelty of this research lies in the decision-making system, product design, and information system design, so that this research can produce a safety device that suits company needs and is easy to operate. The tool is also able to prevent overheating effectively.
    Keywords: hoist crane; QFD; UML; Arduino; IoT.

  • Important extrema points extraction-based data aggregation approach for elongating the WSN lifetime   Order a copy of this article
    by Ali Kadhum M. Al-Qurabat, Hussein M. Salman, Abd Alnasir Riyadh Finjan 
    Abstract: Energy conservation is one of the most basic problems of wireless sensor networks. The energy of sensor nodes is limited, so effective energy usage is important. Data aggregation helps to minimize the volume of data communicated across the network while preserving information quality and decreasing energy waste, thereby enhancing the lifetime of the network. In this paper, we propose a data aggregation approach based on important extrema points extraction for elongating the WSN lifetime (IEEDA). Rather than transmitting the full set of collected measures at the end of every time period, we propose transmitting only the extracted important extrema measures of the sensor nodes. Using real-world datasets with radically different properties, we tested our method against two protocols, ATP and PFF. The proposed method reduced the data retained by up to 95%, the data sent by up to 80%, and the energy consumed by up to 77%.
    Keywords: data aggregation; important extrema points; lifetime; WSN.
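
    As a rough illustration of the idea of transmitting only the important extrema of a sensed series, the following hypothetical Python sketch keeps the significant local minima and maxima (plus the endpoints) and drops everything else. The `delta` significance threshold and the temperature readings are invented for illustration; this is not the IEEDA algorithm itself:

```python
def important_extrema(samples, delta=0.5):
    """Return (index, value) pairs for the endpoints and for the local
    extrema that differ from the last kept point by at least delta."""
    if len(samples) < 3:
        return list(enumerate(samples))
    kept = [(0, samples[0])]
    for i in range(1, len(samples) - 1):
        prev, cur, nxt = samples[i - 1], samples[i], samples[i + 1]
        is_max = prev <= cur >= nxt
        is_min = prev >= cur <= nxt
        if (is_max or is_min) and abs(cur - kept[-1][1]) >= delta:
            kept.append((i, cur))
    kept.append((len(samples) - 1, samples[-1]))
    return kept

# eight temperature readings reduce to five transmitted points
readings = [20.0, 20.1, 22.5, 22.4, 19.0, 19.1, 25.0, 24.9]
reduced = important_extrema(readings)
```

    A receiver can then reconstruct an approximation of the original series by interpolating between the retained points, which is what makes the reduction lossy but information-preserving.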

  • Digital forensics evidence management based on proxy re-encryption   Order a copy of this article
    by Rachana Patil 
    Abstract: The growing world of digitisation has given rise to cybercrimes. Digital forensics is the process of collecting lawful evidence. Such evidence plays a crucial role in the court of law in explicitly establishing the facts of the suspect's crime. To ensure the admissibility of evidence in the court of law during trials, it is important to maintain evidence using a proper evidence management system. This paper proposes the use of a unidirectional multi-hop proxy re-encryption scheme for authority delegation. This system will help in securely delegating access to digital evidence. The re-encryption scheme provides a clear notion of security and validates the usefulness of proxy re-encryption as a method of adding access control to a secure evidence management system. The correctness of the proposed scheme is validated by using BAN logic. The security analysis using the AVISPA tool shows that the proposed scheme is safe against various security attacks.
    Keywords: digital forensics; evidence management; proxy re-encryption; cybercrime; BAN logic; AVISPA.

  • A Serious Game for the Responsible Use of Fossil Fuel-powered Vehicles
    by Francisco Javier Moreno Arboleda, Javier Esteban Parra Romero, Agnieszka Szczesna 
    Abstract: Serious gaming has gained increasing prominence in climate change communication, and provides an opportunity to engage people in topics related to environmental protection. This paper presents the design and evaluation of a serious game that concerns pollution generated by fossil fuel-powered vehicles. Serious games might be an effective and motivational tool in this field. The game's intention is to motivate participants to change towards sustainable lifestyles. To achieve this goal, an enhanced game design methodology was proposed and a serious mini-game prototype was developed.
    Keywords: Serious gaming; game design; sustainable transport; vehicular pollution.

  • Biased compensation adaptive gradient algorithm for rational model with time-delay using self-organising maps   Order a copy of this article
    by Yanxin Zhang, Jing Chen, Yan Pu 
    Abstract: This paper develops a biased compensation adaptive gradient descent algorithm for rational models with unknown time-delay. Owing to the unknown time-delay, traditional identification methods cannot be directly applied to such models. To overcome this difficulty, self-organising maps are employed, which can obtain estimates of the time-delay based on the residual errors. Then, an adaptive gradient descent algorithm is introduced to obtain the parameter estimates. Compared with the traditional gradient descent and redundant rule-based methods, the proposed method has two advantages: (1) each element in the parameter vector has its own step-size, thus it is more effective than the traditional gradient descent method; (2) the number of unknown parameters is unchanged, therefore it needs less computational effort than the redundant rule-based method. Finally, a simulation experiment is given to show the excellent accuracy of the proposed algorithm.
    Keywords: self-organised maps; biased compensation; adaptive gradient descent; parameter estimation; time-delay.

  • FPGA Based DFT System Design, Optimization and Implementation Using High-Level Synthesis
    by Shensheng Tang, Monali Sinare, Yi Xie 
    Abstract: In this paper, a discrete Fourier transform (DFT) algorithm is designed and optimized for FPGA implementation using the Xilinx VIVADO High-Level Synthesis (HLS) tool. The DFT algorithm is written in C++ and simulated for functional verification in HLS and MATLAB. For hardware validation, the DFT module is packaged as an IP core and tested in a VIVADO project. A Xilinx SDK application written in C is developed and used for testing the DFT module on a Zynq FPGA development board, the ZedBoard. For visualization of the DFT magnitude spectrum generated in the FPGA, a GUI is developed in C#, and related commands/data are communicated between the GUI and the ZedBoard over the serial port. Experimental results are presented with discussion. The DFT module design, optimization and implementation, as well as the VIVADO project development methods, can be extended to other FPGA applications.
    Keywords: FPGA; DFT; IP core; VIVADO HLS; C/C++; Verilog; C#; Optimization; Hardware Validation
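
    For readers unfamiliar with the algorithm being synthesised, the behaviour of a direct DFT core can be modelled in a few lines of software. The following is a generic reference model of the textbook O(N^2) DFT (written here in Python rather than the C++ of the paper), of the kind one might check an HLS implementation against; the eight-sample cosine test signal is invented for illustration:

```python
import cmath
import math

def dft(x):
    """Direct discrete Fourier transform:
    X[k] = sum_n x[n] * exp(-2*pi*j*k*n/N), for k = 0..N-1."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts))
            for k in range(n_pts)]

# magnitude spectrum of a pure cosine: one cycle over eight samples
signal = [math.cos(2 * math.pi * n / 8) for n in range(8)]
spectrum = [abs(c) for c in dft(signal)]
```

    The energy of the tone lands in bins 1 and 7 (the positive and negative frequency), each with magnitude N/2 = 4; this magnitude spectrum is the kind of output the paper's GUI visualises.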

  • QSFN: QoS Aware Fog Node Provisioning in Fog Computing
    by Ashish Chandak, Niranjan Kumar Ray, Deepak Puthal 
    Abstract: The number of IoT devices is steadily increasing; these devices are delay-sensitive and require a quick response. They are connected to a cloud for the computation of requests, yet that computation may be delayed. To overcome this, fog nodes are placed at the edge near the IoT devices to perform fast computation for delay-sensitive applications. IoT devices generate a tremendous number of tasks, and as the task count grows, the number of fog nodes must be increased for immediate processing. Here a QoS-aware fog node (QSFN) provisioning algorithm is proposed in which the number of fog nodes is increased automatically based on CPU utilization, queue length, and the number of available resources. We evaluate the performance of QoS-aware fog node provisioning against the Bee Swarm and Concurrent algorithms in terms of makespan, average execution time, flowtime, successful execution rate, and average response time. Simulation results demonstrate that the proposed QSFN algorithm performs better in comparison with the other algorithms.
    Keywords: Fog Computing; Scalability; IoT; Fog Node.

  • Design of a FPGA accelerator for the FIVE fuzzy interpolation method   Order a copy of this article
    by Roland Bartók, József Vásárhelyi 
    Abstract: Complex control systems are difficult to describe with mathematical models owing to their complexity. The control of such systems can be achieved with a solution that does not require accurate knowledge of the system model. The most common solution in such cases is the use of neural networks. With a description of rule-based behaviour, complex control systems can also be implemented. Using fuzzy logic is an effective way to describe the behaviour of a system. The problems encountered when using classical fuzzy logic are avoided by using a fuzzy interpolation method such as the Fuzzy Interpolation in Vague Environment (FIVE) method. Compared with classic fuzzy logic, FIVE has the advantage of computational acceleration through algorithm parallelisation, which is important in the case of real-time calculations and in tuning procedures. The paper presents a flexible, parameterisable hardware-accelerator implementation of the FIVE method using a field programmable gate array.
    Keywords: fuzzy; FPGA; hardware accelerator; etorobotics; robotics; behaviour-based control.

  • Two-Degree-of-Freedom Tilt Integral Derivative Controller based Firefly Optimization for Automatic Generation Control of Restructured Power System
    by G. Tulasichandra Sekhar, Ramana Pilla, Ahmad Taher Azar, Mudadla Dhananjaya 
    Abstract: The present work proposes a two-degree-of-freedom tilt integral derivative (2-DOFTID) controller tuned with the firefly algorithm (FA) for a two-area automatic generation control (AGC) power system. Initially, a standard two-area power system is tested to show the superior output of the proposed controller relative to other control strategies. The 2-DOFTID controller is then applied as a secondary controller to the next test system, i.e., a restructured power system. Further, the operation of the Unified Power Flow Controller (UPFC) is analyzed by incorporating it in the tie-line. In addition, a redox flow battery (RFB) is used in area-1 to track the frequency variation and thus improve the system's transient responses. Finally, a robustness study was carried out to analyze the capacity of the proposed controller in a poolco-based transaction with the UPFC and RFB integrated, under different system parameters and unpredicted load disturbances.
    Keywords: Automatic Generation Control (AGC); Firefly Algorithm (FA); Random Load Disturbance; Redox Flow Battery (RFB); Two Degree of Freedom Tilt Integral Derivative (2-DOFTID) controller; Unified Power Flow Controller (UPFC).

  • Multi-label legal text classification with BiLSTM and Attention layer   Order a copy of this article
    by Liriam Enamoto, Andre Santos, Ricardo Maia, Li Weigang, Geraldo Pereira Rocha Filho 
    Abstract: Like many other knowledge fields, the legal area has experienced an information-overloaded scenario. However, extracting data from legal documents is a challenge owing to the complexity of legal concepts and terms. This work applies Bidirectional Long Short-Term Memory (BiLSTM) to Portuguese legal text classification to address such challenges. The proposed model is a shallow network with one BiLSTM layer and one Attention layer, trained over two small datasets extracted from two Brazilian courts: the Superior Labour Court (TST) and the 1st Region Labour Court. The experimental results show that combining the BiLSTM layer and the Attention layer for long judicial texts helps to capture the past and future contexts and extract multiple tags. As the main contribution of this research, the proposed model can quickly process multi-label and multi-class datasets and adapt to new contexts in different languages.
    Keywords: legal text; multi-label; text classification; BiLSTM; Attention layer.

  • Network Intrusion Detection using Fusion Features and Convolutional Bidirectional Recurrent Neural Network
    by Jagruthi H, Kavitha C, Manjunath Mulimani 
    Abstract: In this paper, novel fusion features for training a Convolutional Bidirectional Recurrent Neural Network (CBRNN) are proposed for network intrusion detection. The UNSW-NB15 dataset's attack behaviours (input features) are fused with their first- and second-order derivatives at different stages to obtain fusion features. In this work, we take advantage of both architectures and combine a Convolutional Neural Network (CNN) with a bidirectional Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) to obtain the CBRNN. The input features and their first- and second-order derivatives are fused and given as input to the CNN; this fusion is known as early fusion. Outputs of the CNN layers are fused and used as input to the bidirectional LSTM; this fusion is known as late fusion. The performance of the early and late fusion features is evaluated on the publicly available UNSW-NB15 dataset. Results show that late fusion features are more suitable for intrusion detection and outperform the state-of-the-art approaches with average recognition accuracies of 98.00% and 91.50% for binary and multiclass classification configurations, respectively.
    Keywords: Intrusion detection; fusion features; Convolutional Neural Network (CNN); bidirectional Long Short-Term Memory (LSTM); Convolutional Bidirectional Recurrent Neural Network (CBRNN); UNSW-NB15 dataset

  • A Brief Survey for Person Re-identification based on Deep Learning
    by Li Liu, Xi Li, Xuemei lei 
    Abstract: Person re-identification (Re-ID) has attracted increasing attention owing to its wide application in intelligent surveillance systems. The task is to find the same person across non-overlapping cameras when a specific image of the pedestrian is given, a challenging problem owing to viewpoint variation, clothes changing, low resolution, etc. In this paper, we review deep learning-based methods for person Re-ID. We present a detailed survey of the state of the art in terms of the description and analysis of supervised and unsupervised networks and their performance evaluation on the commonly used datasets. Finally, we analyse the challenging problems and discuss future work in this area.
    Keywords: Person Re-Identification; Deep Learning; Literature Survey; Evaluation Metric

  • Parameter identification of fractional order CARMA model based on least squares principle
    by Jiali Rui, Junhong Li 
    Abstract: A fractional order model is more accurate than an integer order model when describing an actual system. This paper studies fractional order CARMA model identification, and derives the identification expression of the fractional order CARMA model through the definition of Grünwald-Letnikov fractional order differentiation. In order to identify the unknown parameters of the model with colored noise, a least-squares-based iterative identification algorithm and a recursive extended least squares algorithm are derived from the least squares principle. Two simulation examples are then given. The simulation results show that the parameter estimation errors obtained by the two algorithms are small, which demonstrates the effectiveness of the proposed algorithms.
    Keywords: fractional order model; parameter estimation; system identification; least squares; colored noise.
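
    The Grünwald-Letnikov definition mentioned above discretises a fractional derivative as a weighted sum over past samples. The following minimal Python sketch is a generic illustration of that definition, not the paper's identification algorithm:

```python
def gl_fractional_diff(samples, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha at the
    last point of an equally spaced series with step h:

        D^alpha f(t) ~ h**(-alpha) * sum_j (-1)**j * C(alpha, j) * f(t - j*h)

    The generalised binomial weights are built recursively:
        w_0 = 1,  w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    weights = [1.0]
    for j in range(1, len(samples)):
        weights.append(weights[-1] * (1 - (alpha + 1) / j))
    return sum(w * samples[-1 - j] for j, w in enumerate(weights)) / h ** alpha
```

    Two sanity checks: alpha = 1 reduces to the first-order backward difference, and alpha = 0 returns the last sample unchanged; non-integer alpha interpolates between these behaviours.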

  • On the Use of Emerging Decentralised Technologies for Supporting Software Factories Coopetition
    by Fábio Paulo Basso, Diego Kreutz, Carlos Molina-Jiménez, Rafael Z. Frantz 
    Abstract: Despite the recent adoption of outsourcing and open-source business models, the software factory is still a centralised process. In spite of the advantages of centralisation, it is widely accepted that decentralised systems are better alternatives; for instance, they are more scalable and reliable, and more suitable for market sharing. The implementation of decentralised software factories demands solutions to several technical challenges that conventional technology cannot solve. Emerging decentralised technologies (e.g., blockchain and smart contracts) can help to solve these challenges. Because decentralised technologies are only emerging, their potential use to support software factories is a new research avenue. To cast some light on this emerging topic, in this article we provide an analysis of some centralised and decentralised architectures and raise research questions that need attention from the perspective of architecture selection. To frame the discussion, we focus our attention on software architectures for Model Driven Engineering, Asset Specifications and Integration Tools, resulting in a characterisation and analysis of possibilities for the implementation of heterogeneous blockchain-oriented repositories.
    Keywords: MDE; Software Ecosystems; Smart Contracts; Systems of Systems; Blockchain; MDE as a Service; Pivot Language

  • A novel approach for Decision Support System in Cricket using Machine Learning
    by Sudan Jha, Sidheswar Routray, Hikmat A. M. Abdeljaber, Sultan Ahmad 
    Abstract: In the shorter format of cricket, the choice of a bowler rests on three main parameters, namely economy rate, strike rate and dot balls delivered. In most cases, the most influential parameters are the economy rate and the number of wickets taken, which in turn are interrelated with the dot balls delivered. This paper presents an operational linear approach that comparatively analyses the three parameters cited above and suggests a solution-based approach to choose the best bowler for the "Playing Eleven", with the highest preference given to dot balls delivered. The bowler with the most dot balls delivered is considered the highest-preference bowler. The inter-relationship among these parameters is established based on collected data. The proposed indicator proves useful when making decisions. A software-based architecture is also proposed for a decision support system for selecting a bowler in the playing eleven using past data.
    Keywords: Twenty twenty match; cricket; bowler selection; indicator; parameter; decision tree

  • Apple Image Fusion Algorithm based on Binocular Acquisition System
    by Liqun Liu, Yubo Zhou , Renyuan Gu 
    Abstract: To solve the problem that single natural-scene image acquisition in an orchard cannot meet the requirements of accurate fruit recognition and target positioning, a new apple image fusion algorithm based on a binocular acquisition system, named the New Non-subsampled Contourlet Transform algorithm, is proposed to obtain a high-quality fusion image. The binocular acquisition system is constructed with a Time-of-Flight industrial camera and a color camera. In order to achieve a better fusion effect, a parameter optimization algorithm based on an Artificial Bee Colony algorithm with a discard strategy for the brightness saliency function is proposed to optimize the low-frequency component parameters of the new fusion coefficient rule. Experiments were conducted on a series of apple images under three sunlight conditions in the orchard. The experimental results show that the six evaluation indicators obtained by the new algorithm achieve the expected fusion image effect under the three different sunlight conditions.
    Keywords: Binocular Acquisition System; ToF; Camera Calibration; Image Fusion

  • A Flexible Mobile Application for Image Classification Using Deep Learning: A Case Study on COVID-19 and X-Ray Images
    by Omar Andres Carmona Cortes 
    Abstract: This paper proposes a flexible mobile application for embedding any CNN image-based classification model, providing a computer application to assist health professionals. Two approaches are suggested: an embedded offline model and a model running online via a web API. To present the applicability of the mobile software, we used CNN-based COVID-19 classification of x-ray images as a case study; any other image-based classification application could have been used. We used a popular Kaggle database consisting of 7,178 X-ray images divided into three classes: Normal, COVID-19, and Viral Pneumonia. We tested 14 state-of-the-art CNNs to decide which one to embed. The VGG16 achieved the best performance metrics; therefore, the VGG16 was embedded. The software production methodology was applied based on the built model, class diagram, use cases, and execution flow, besides designing a web API to execute the back-end classification model.
    Keywords: Mobile Application; Medicine 4.0; CNN; COVID-19; x-ray.

  • Stacking-based modeling for improved over-indebtedness predictions
    by Suleiman Ali Alsaif, Adel Hidri, Minyar Sassi Hidri 
    Abstract: In a world now starkly divided into pre- and post-COVID times, it is imperative to examine the impact of this public health crisis on banking functions, particularly over-indebtedness risks. In this work, a flexible analytics-based model is proposed to improve the banking process of detecting customers who are likely to have difficulty in managing their debt. The proposed model assists banks in improving their predictions. The proposed meta-model extracts information from existing data to determine patterns and predict future outcomes and trends. We test and evaluate a large variety of machine learning algorithms (MLAs), using techniques such as feature selection. Moreover, models of previous months are combined using the stacked generalization technique in order to build a meta-model representing several months. The new model identifies 91% of the customers potentially unable to repay their debt six months ahead and enables the bank to implement targeted collection strategies.
    Keywords: Over-indebtedness; Predictive analytics; Machine Learning; Features selection; Stacked generalization.

  • Fire ant optimization feature selection method for breast cancer prediction
    by Vijayalakshmi S, MohanaPriya D, Poonguzhali Narasingam 
    Abstract: Breast cancer is a common disease in today's world, and many techniques are used to extract the cancer cells from breast images. Most systems use features extracted from the images, and these features are selected using feature selection techniques. Feature selection techniques help greatly in removing irrelevant data from a huge amount of data and in fine-tuning the identification process and the accuracy of relevant data. However, the prediction accuracy and the number of selected features still do not yield 100% prediction results. In this work, Nearest Density Fire Ant (NDFA) is proposed for breast cancer prediction and used as a feature selection technique. The technique is used for diagnosis, and its results are compared with previous methods such as Random Forest, Ant Colony and Genetic Algorithm. The proposed techniques are evaluated on the Wisconsin Breast Cancer Database (WBCD) and the Breast Cancer Wisconsin (WDBC) datasets. The experimental results show that the proposed NDFA using the fire ant optimization technique produces better results in the prediction and diagnosis of breast cancer.
    Keywords: Breast cancer; feature selection; classification; nearest neighborhood search; fire ant; optimization.

  • Multi-scale Super-pixels Based Passive Forensics for Copy-move Forgery Using Local Characteristics
    by Wensheng Yan, Hong Bing Lv 
    Abstract: Copy-move forgery is the most common kind of tampering technique for digital images. This paper presents a novel hybrid approach, which uses speeded-up robust features (SURF) points and matching of the characteristics of local feature regions (LFRs). First, the multi-scale super-pixels algorithm adaptively divides the suspected image into irregular blocks according to the texture level of the host image. Then, an improved SURF detector is adopted to extract feature points from each super-pixel, with the feature point threshold related to the entropy of each super-pixel block. Next, LFRs are defined, and a robust feature descriptor is extracted from each LFR as a vector field. Last, the matching LFRs are found by using Euclidean locality sensitive hashing; falsely matched pairs are removed by using the random sample consensus (RANSAC) algorithm. Compared with the leading-edge block-matching methods and point-based methods, our method produces far better detection results.
    Keywords: Digital image forensics; multi-scale super-pixels; local feature region; polar harmonic transform (PHT); duplicated region detection

  • Prediction of Right Crop for the Right Soil and Recommendation of Fertilizer usage by Machine Learning Algorithm
    by Rubini PE, Kavitha P 
    Abstract: Crop production is a crucial aspect of farming, and it depends on many factors such as soil nutrients, fertilizer usage and water resources. The critical factor for effective agriculture is soil. The composition of soil varies from one land to another, which makes it difficult for farmers to choose the appropriate crop for their farmland. The proposed study focuses on recommending the right crop for the right soil and also indicates the required composition of fertilizer. The work requires analysis of a huge volume of data, which is accomplished by applying five machine learning techniques. To enhance the accuracy and precision of crop prediction, the solutions of all these algorithms are integrated into a proposed model through ensemble learning, which provides the aggregated output, i.e., the recommended crop and fertilizer dosage. The intention of the proposed model is to improve farmers' growth by increasing their productivity and profit.
    Keywords: Agriculture; machine learning; fertilizer usage; ensemble learning; prediction; crop productivity.

Special Issue on: Intelligent Healthcare Systems for Sustainable Development

  • Prediction of diabetic patients using various machine learning techniques   Order a copy of this article
    by Shalli RANI, Manpreet Kaur, Deepali Gupta, Amit Kumar Manocha 
    Abstract: The growth of technology and the digitisation of several areas have made the world more successful in reaching solutions to remote problems. Large amounts of health records are also available in digital storage. Machine learning plays an important role in uncovering health issues from digital records and in the diagnosis of various diseases. In this paper, we present an introduction to recommender systems (RS) with respect to diabetic patients after a rigorous review of the existing literature. An experimental analysis is performed in Python with the help of machine learning classifiers, such as logistic regression, averaged perceptron, Bayes point, boosted decision tree, neural network, decision forest, two-class support vector machine and locally deep support vector machine, on the Pima Indian Diabetes Database. We conducted an experiment on a 23K diabetic patients dataset. The results from all the classifiers reveal that logistic regression performs best, with an accuracy of 78%, and predicts accurate results with a specificity of 92%.
    Keywords: collaborative filtering; diabetic patients; diabetes mellitus; machine learning.
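A minimal sketch, assuming synthetic stand-in data rather than the actual Pima Indians records, of how a logistic regression classifier is scored for the accuracy and specificity figures the abstract quotes:

```python
# A minimal sketch (synthetic stand-in data, not the Pima Indians records)
# of scoring a logistic regression classifier for accuracy and specificity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))                    # 8 clinical attributes
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic diabetic label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
specificity = tn / (tn + fp)                     # true-negative rate
```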

  • Multisensor fusion approach: a case study on human physiological factor-based emotion recognition and classification   Order a copy of this article
    by A. Reyana, P. Vijayalakshmi, Sandeep Kautish 
    Abstract: In people's daily life, human emotion plays an essential role, and the mental state is accompanied by physiological changes. Experts have long regarded the early monitoring of emotional changes as a matter of concern. Within the next few years, emotion recognition and classification is destined to become an important component of human-machine interaction. Today's medical field makes much use of physiological signals for detecting heart sounds and identifying heart diseases; thus the parameters of temperature and heartbeat can identify major health risks. This paper takes a new look at the development of an emotion recognition system using physiological signals. In this context, the signals are obtained from body sensors such as a muscle pressure sensor, heartbeat sensor, accelerometer, and capacitive sensor. The emotions observed are happy (excited), sad, angry, and neutral (relaxed). The results of the proposed system show the following accuracy percentages for the emotional states: happy 80%, sad 70%, angry 90%, and neutral 100%.
    Keywords: emotion; recognition; multisensor fusion; body sensors; mental state.

  • LabVIEW based cardiac risk assessment of fetal ECG signal extracted from maternal abdominal signal   Order a copy of this article
    by Prabhjot Kaur, Lillie Dewan 
    Abstract: In recent years, automated analysis of the fetal ECG signal has become a trend. Mathematical computational processing of the abdominal fetal ECG has proved beneficial in the crucial diagnosis of complex cardiac diseases. To arrive at a diagnosis, a cardiologist needs to observe critically the variations in the duration and amplitude of the different waves and segments of the ECG. In the case of a fetus, a preliminary diagnosis of these deviations allows a valid and appropriate intervention, without which there may be permanent damage to the brain and nervous system. For this reason, the fetal cardiac signal has been efficiently extracted from a composite abdominal signal in this paper. The signal extraction has been accomplished in the LabVIEW environment using the Independent Component Analysis (ICA) approach, implemented after the application of hybrid filters employed for removing noise and artifacts from the signal taken from the PhysioNet Database. By proper selection of the cut-off frequencies of the filters, the denoised signal is approximately 99% accurate. Statistical features, such as the signal-to-noise ratio, standard deviation, error, and accuracy, have been computed, as well as morphological features including heart rate, time and amplitude of the QRS complex, and the durations of the PR, RR, and QT intervals. The results obtained demonstrate that the implementation of ICA for fetal ECG signal extraction helps to determine the fetal heart rate accurately with low computational complexity. The performance of the proposed algorithm has also been explored in the case of twin pregnancy. The estimated heart rate is comparable to the actual heart rate, which validates the algorithm's accuracy. The results also indicate the feasibility of real-time data acquisition and analysis.
    Keywords: electrocardiogram; independent component analysis; sinus rhythm; tachycardia; bradycardia; denoising filters; signal-to-noise ratio; standard deviation; accuracy; LabVIEW.
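The ICA separation step named above can be illustrated as follows; the sinusoidal "maternal" and "fetal" sources and the mixing matrix are synthetic stand-ins, and the hybrid denoising filters from the paper are omitted:

```python
# Sketch of the ICA separation step: two synthetic periodic "cardiac"
# sources are linearly mixed (as in abdominal leads) and then recovered
# with FastICA. The sources, mixing matrix and rates are stand-ins; the
# paper's hybrid denoising filters are omitted.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 2, 1000)
maternal = np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm stand-in
fetal = 0.3 * np.sin(2 * np.pi * 2.3 * t)        # ~138 bpm, weaker
S = np.c_[maternal, fetal]                       # true sources

A = np.array([[1.0, 0.6],
              [0.8, 1.0]])                       # unknown mixing matrix
X = S @ A.T                                      # observed composite signals

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)                 # estimated sources
```

Because ICA recovers sources only up to permutation, sign and scale, the fetal component would subsequently be identified by its higher rate, as the abstract's heart-rate analysis implies.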

  • Impact of feature extraction techniques on cardiac arrhythmia classification: experimental approach   Order a copy of this article
    by Manisha Jangra, Sanjeev Kumar Dhull, Krishna Kant Singh 
    Abstract: This paper provides a comparative analysis of state-of-the-art feature extraction techniques in the context of ECG arrhythmia classification. In addition, the authors examine a linear heuristic function, the LW-index, as an indirect measure of the separability of feature sets. Seven feature sets are extracted using state-of-the-art feature extraction techniques: temporal features, morphological features, EMD-based features, wavelet transform based features, DCT features, Hjorth parameters, and convolutional features. The feature sets' performance is evaluated using an SVM classifier. The experimental setup is designed to classify ECG signals into four types of arrhythmic beat: normal (N), ventricular ectopic beat (VEB), supraventricular ectopic beat (SVEB) and fusion beat (F). A PSO-based feature selection method is used for dimensionality reduction, with the LW-index as the cost function. The results validate the hypothesis that convolutional features have better discrimination capability than other state-of-the-art features. This paper can resolve, for new researchers, uncertainties about the performance efficacy of individual feature extraction techniques, and offers an inexpensive methodology and measure for indirectly evaluating and comparing the performance of feature sets.
    Keywords: ECG; feature extraction; validity index; feature selection; CNN; PSO; DWT; DCT; Hjorth parameters; EMD; temporal features; MIT-BIH database; SVM.

  • IoT-based automatic intravenous fluid monitoring system for smart medical environment   Order a copy of this article
    by Harsha Chauhan, Vishal Verma, Deepali Gupta, Sheifali Gupta 
    Abstract: Over the last few years, hospitals and other healthcare centres have been adopting many sophisticated technologies in order to assure the fast recovery of patients. In almost all hospitals, a caretaker or nurse is responsible for monitoring intravenous fluid levels. Most caretakers occasionally forget to change the bottle at the correct time owing to their busy schedules, as a result of which the patient may suffer reverse flow of blood towards the bottle. To overcome this critical issue, this paper proposes an IoT-based automatic intravenous fluid monitoring system. The proposed device consists of an Arduino UNO (i.e. ATMega328 microcontroller), a liquid crystal display, a solenoid actuator, a Force Sensitive Resistor (FSR) 0.5, an ESP8266, a buzzer and LED lights. The authors use the FSR sensor to monitor the weight of the bottle. With the installation of the proposed device, the staff's need for constant monitoring will be reduced, especially during night hours, thus decreasing the chance of harm to the patient and increasing the accuracy of healthcare in hospitals. This system will also avoid the fatal risk of an air embolism entering the patient's bloodstream, which leads to immediate death. To analyse the performance of the proposed system, the authors conducted a sample test, taking time as a parameter to analyse how long the intravenous fluid bottle takes to empty. The results show promise for the proposed device in enhancing healthcare services.
    Keywords: drip monitoring system; IoT; healthcare; intravenous fluid; wearable electronics; ESP8266; FSR sensor.

  • Artificial intelligence based algorithm to track the probable COVID-19 cases using contact history of virus-infected persons   Order a copy of this article
    by Javed Shaikh, R.S. Singh, Demissie Jobir Gelmecha, Tadesse Hailu Ayane 
    Abstract: Currently, the world is facing major challenges in tackling COVID-19. It has affected many countries of the world in terms of human lives, the economy and many other aspects. Many organisations and scientists are working to find ways in which the spread of COVID-19 can be minimised. One technology that can be effective in tackling this virus is Artificial Intelligence (AI), which can help in many ways. The foremost requirement of this situation is to find cases of infection as early as possible so that the virus does not spread rapidly. In this paper, an AI-based algorithm is proposed for the tracking of probable COVID-19 cases. The algorithm uses the mobile numbers of coronavirus-infected persons as data for forecasting. This technique will find probable infected cases and help in controlling the rapid spread of the virus, providing information on the persons with whom an infected person has had contact by using a forecasting method. As this is an automated tracking system, it will help in finding probable virus-infected cases in a very short time.
    Keywords: COVID-19; artificial intelligence; machine learning; forecasting methods.

  • Prevention of autopsy by establishing a cause-effect relationship between pulmonary embolism and heart failure using machine learning   Order a copy of this article
    by Naira Firdous, Sushil Bhardwaj, Amjad Hussain Bhat 
    Abstract: This paper presents a cause-effect relationship between heart failure and pulmonary embolism, using machine learning. The proposed method is divided into two parts. The first part includes the establishment of connectivity between the two medical fields, which is done by finding out the relationship between the pulse pressure and the stroke volume. The second phase includes the implementation of machine learning on the above-formed connectivity. A univariate technique of feature selection is performed initially in order to get the most relevant attributes. The overfitting problem has been addressed by formulating an ensemble model using hard and soft voting classifiers. Also, the efficiency has been checked by increasing the number of hidden layers of a neural network.
    Keywords: pulmonary embolism; stroke volume; pulse pressure; systolic; diastolic; overfitting; ensemble classifiers; neural network.

  • Tool-based persona for designing user interfaces in healthcare   Order a copy of this article
    by Hanaa Alzahrani, Reem Alnanih 
    Abstract: Technology devices such as smartphones, tablets, and computers have become an intrinsic part of modern life, as this form of technology has entered all businesses and fields, including healthcare. Health sites (HSs) impact healthcare delivery by using technology to improve healthcare outcomes, reduce costs and errors, and increase patient and information safety. Among the available website builders, none has been developed for healthcare sites or designed on the basis of healthcare personas. This is a challenge when designing a specific HS for a particular target group of users, such as doctors. System complexity, and the difficulty doctors have in dealing with such systems, make it necessary to consider personas, which help to understand the mental language of the target users and make the whole system experience quite human. The purpose of this paper is to create a new Health Site Design (HSD) tool for designing a User Interface (UI) based on User Experience (UX). The tool is designed based on doctors' behaviour, personas and real-life scenarios. The applicability of this tool is explored, as well as its usability, especially for those with no background in web design. The tool was tested by participants, randomly divided into two groups: a control group, who were asked to follow all the instructions, watching and attending the tutorial session before performing the tasks; and a study group, who were asked to perform the tasks directly. The study results show no significant difference between the two groups for effectiveness and efficiency; however, for cognitive load, the study group performed better than the control group. All of the participants were able to complete all the tasks successfully with a minimal amount of time, clicks, and errors. In addition, user satisfaction yielded a score of 84.6 on the System Usability Scale (SUS), placing the tool in the A grade.
    Keywords: health systems design tool; website builders; user experience; persona; experimental design; usability evaluation; system usability scale.

  • RC-DBSCAN: redundancy controlled DBSCAN algorithms for densely deployed wireless sensor network to prolong the network lifespan   Order a copy of this article
    by Tripti Sharma, Amar Mohapatra, Geetam Singh Tomar 
    Abstract: In a wireless sensor network, the nodes are spatially distributed and spread over application-specific experimental fields. The primary role of these nodes is to gather information for various intended fields, such as sound, temperature and vibration. In the proposed algorithm, efforts have been made to prolong the network lifespan by decreasing the nodes' energy consumption, considering the critical issues of dense deployment. Every node limits its chance of participation in any cluster based on the local sensor density. The network area is divided into high- and low-density regions using the DBSCAN algorithm. The nodes in low-density areas are considered critical because there is very little probability of these nodes sensing and broadcasting redundant data. The division into high- and low-density regions by applying DBSCAN helps in sleep management, which in turn enables energy optimisation in dense areas and thus prolongs the network lifetime with an improved stable region. It has been observed through computer simulation that RC-DBSCAN is more energy-efficient than IC-ACO and LEACH in densely deployed network areas in terms of total data packets received by the base station, prolonged network lifespan and improved stability period.
    Keywords: DBSCAN; WSN; fuzzy; sleep management.

  • Coronary artery disease diagnosis using extra tree support vector machine: ET-SVMRBF   Order a copy of this article
    by Pooja Rani, Rajneesh Kumar, Anurag Jain 
    Abstract: Coronary artery disease (CAD) is a type of cardiovascular disease that can lead to cardiac arrest if not diagnosed in time. Angiography, the standard method adopted to diagnose CAD, is invasive and has certain side-effects, so there is a need for non-invasive methods to diagnose CAD using clinical data. In this paper, the authors propose a methodology, ET-SVMRBF (Extra Tree SVM-RBF), to diagnose CAD using clinical data. The Z-Alizadeh Sani CAD dataset available from UCI (University of California, Irvine) has been used to validate this methodology. The class imbalance problem in this dataset has been resolved using SMOTE (Synthetic Minority Oversampling Technique). Relevant features are selected using the extra tree feature selection method. The performances of different classifiers, XGBoost (Extreme Gradient Boosting), KNN (K-Nearest Neighbour), SVM-Linear (Support Vector Machine-Linear), and SVM-RBF (Support Vector Machine-Radial Basis Function), on the dataset have been evaluated, with the GridSearch optimisation method used for hyperparameter optimisation. An accuracy of 95.16% was achieved by ET-SVMRBF, which is higher than recent existing work in the literature.
    Keywords: coronary artery disease; cardiovascular disease; extra tree; support vector machine; XGBoost; K-nearest neighbour.
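The pipeline outline above (extra-tree feature selection followed by a grid-searched RBF-kernel SVM) might be sketched as below; the data is synthetic, and the SMOTE balancing step from the paper is omitted to keep the sketch dependency-free:

```python
# A hedged sketch of the ET-SVMRBF outline: extra-tree importances drive
# feature selection, then a grid search tunes an RBF-kernel SVM. Data is
# synthetic and the SMOTE balancing step is intentionally omitted here.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))                   # 20 clinical attributes
y = (X[:, 0] - X[:, 3] > 0).astype(int)          # synthetic CAD label

pipe = Pipeline([
    ("select", SelectFromModel(
        ExtraTreesClassifier(n_estimators=100, random_state=0))),
    ("svm", SVC(kernel="rbf")),
])
grid = GridSearchCV(pipe,
                    {"svm__C": [1, 10], "svm__gamma": ["scale", 0.1]},
                    cv=3)
grid.fit(X, y)
best_score = grid.best_score_                    # mean CV accuracy
```

Running the selection inside the pipeline, rather than before the split, keeps the cross-validated score free of selection leakage.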

  • Prediction of cardiac disease using online extreme learning machine   Order a copy of this article
    by Sulekha Saxena, Vijay Kumar Gupta, P.N. Hrisheekesha, R.S. Singh 
    Abstract: This paper presents an automated machine learning (ML) algorithm to detect coronary diseases such as congestive heart failure (CHF) and coronary artery disease (CAD). The proposed automated ML employs a combination of nonlinear feature extraction methods, an online sequential extreme learning machine (OS-ELM), and linear discriminant analysis (LDA) as well as generalised discriminant analysis (GDA) as feature reduction algorithms. Dimension reduction of the nonlinear features was done by LDA and GDA with a Gaussian or radial basis function (RBF) kernel, and an OS-ELM binary classifier with an activation function such as sigmoid, hardlim, or RBF has been used to detect CHF and CAD subjects. For training and validation of the ML, twelve nonlinear features were extracted from heart rate variability (HRV) signals. The standard HRV databases were obtained from normal young and elderly, CHF and CAD subjects. Numerical experiments were carried out on the sets CAD-CHF, young-elderly-CAD and young-elderly-CHF. The numerical simulation results clearly show that when GDA with a Gaussian or RBF kernel function is combined with OS-ELM having a sigmoid, hardlim or RBF activation function, the proposed scheme achieves better detection performance than OS-ELM alone. To test the robustness of the proposed method, classification performances including accuracy, positive prediction value, sensitivity and specificity were calculated over 100 trials; the method achieved an average accuracy of 99.77% for young-elderly-CAD and 100% overall performance for CAD-CHF and young-elderly-CHF subjects.
    Keywords: Lempel-Ziv; Poincare plot; OSELM; sample entropy; dimension reduction method; detrended fluctuation analysis.

  • Digitisation of paper-ECG using column-median approach   Order a copy of this article
    by Priyanka Gautam, Ramesh Kumar Sunkaria, Lakhan Dev Sharma 
    Abstract: Usually, ECG (electrocardiogram) signals are recorded on standard grid paper to determine the potential of cardiac disorders in hospitals. In the current technological era, existing paper-ECG records need to be converted into digital form, as this is the most effective way to analyse, process, store and communicate attributes of the ECG (features, quality, etc.) for clinical use. The present work introduces a novel technique for the digitisation of paper-ECG: the column-median approach. This paper uses correlation and heart rate as parameters to validate the proposed methodology, and the accuracy of the heart rate is also calculated to observe the precision of the proposed algorithm. The overall correlation and percentage error obtained for 50 different signals are 0.86 and 0.79%, respectively. The overall heart-rate accuracy obtained for the 50 ECG signals is 99.21%, which shows that the methodology works effectively.
    Keywords: paper-ECG; column-median approach; biomedical image processing.
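The abstract does not detail the column-median approach, but the name suggests the following idea, shown here on a synthetic binarised trace image (an assumption for illustration, not the authors' implementation): for each pixel column, the median row index of the dark trace pixels gives the sample amplitude.

```python
# Illustrative NumPy sketch of the column-median idea (an assumption about
# the method, not the authors' implementation): for each pixel column of a
# binarised trace image, the median row index of the trace pixels gives
# that sample's amplitude. A drawn sine trace stands in for a scanned ECG.
import numpy as np

height, width = 100, 200
img = np.zeros((height, width), dtype=bool)
rows = (50 + 30 * np.sin(np.linspace(0, 4 * np.pi, width))).astype(int)
for col, r in enumerate(rows):
    img[r - 1:r + 2, col] = True         # 3-pixel-thick trace

signal = np.full(width, np.nan)
for col in range(width):
    trace = np.flatnonzero(img[:, col])  # row indices of trace pixels
    if trace.size:
        signal[col] = np.median(trace)   # column median -> amplitude

signal = height - signal                 # flip: image rows grow downward
```

The median is robust to stray dark pixels (grid-line residue), which is presumably why a median rather than a mean would be used per column.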

  • Dermoscopic image segmentation method based on convolutional neural networks   Order a copy of this article
    by Dang N. H. Thanh, Le Thi Thanh, Ugur Erkan, Aditya Khamparia, V. B. Surya Prasath 
    Abstract: In this paper, we present an efficient dermoscopic image segmentation method based on the linearisation of gamma correction and convolutional neural networks. Linearisation of gamma correction is helpful for enhancing low-intensity regions of skin lesion areas, so that postprocessing tasks can work more effectively. The proposed convolutional neural network architecture for the segmentation method is based on the VGG-19 network, and the acquired training results are convenient for applying the semantic segmentation method. Experiments are conducted on the public ISIC-2017 dataset. To assess the quality of the obtained segmentations, we use standard error metrics, such as the Jaccard and Dice indices, which are based on the overlap with the ground truth, along with other measures such as accuracy, sensitivity, and specificity. Moreover, we provide a comparison of our segmentation results with those of similar methods. From the experimental results, we infer that our method obtains excellent results on all the metrics and achieves competitive performance against other current and state-of-the-art models for dermoscopic image segmentation.
    Keywords: dermoscopic images; deep CNNs; machine learning; skin lesions; image segmentation; skin cancer.

Special Issue on: The Significance of Machine Learning for COVID-19

  • Analysis of some topological nodes using the adaptive control based on 9-D, hypothesis theoretical to COVID-19   Order a copy of this article
    by Abdulsattar Abdullah Hamad, M.Lellis Thivagar, K. Martin Sagayam 
    Abstract: This work is an extension of a model previously proposed by the same authors, in which the Hamiltonian, synchronisation, Lyapunov expansion, equilibrium, and stability of the proposed model were analysed. In this paper, we present a broader analysis to develop receiving network nodes faster. The analysis and study demonstrate how to determine the basic structure and content of the system in theory, in an attempt to identify objects that play a fundamental engineering role in the model. After confirming the performance and results, we can suggest the model for determining the spread of coronavirus.
    Keywords: lu; Hamiltonian; synchronisation; Lyapunov expansion; equilibrium; topological nodes.

  • An ensemble approach to forecast COVID-19 incidences using linear and nonlinear statistical models   Order a copy of this article
    by Asmita Mahajan, Nonita Sharma, Firas Husham AlMukhtar, Monika Mangla, Krishna Pal Sharma, Rajneesh Rani 
    Abstract: Coronavirus disease 2019, also known as COVID-19, is currently a global epidemic. This pandemic has infected more than 100 countries across the globe and is continuously spreading and endangering the human species. Researchers are perpetually trying to discover a permanent antidote for the virus, but at present no particular medication is available. As a result, health sectors worldwide are experiencing an unexpected rise in cases each day. Hence, it becomes necessary to predict the spread of the disease so as to enable public health sectors to improve their control capabilities and mitigate the spread of infection. This manuscript proposes a stacked ensemble model for accurately forecasting future occurrences of COVID-19. The proposed ensemble model uses Exponential Smoothing (ETS), Autoregressive Integrated Moving Average (ARIMA), and Neural Network Autoregression (NNAR) as the base models. Each base model is trained individually on the disease dataset, and their fitted values are then used to train the Multilayer Perceptron (MLP) meta-model. The stacked model gives better predictions than the individual forecasting models; this validation is established through error metrics such as Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The results show that the ensemble model is highly robust and reliable in forecasting future COVID-19 incidences in comparison with other statistical time series models.
    Keywords: COVID-19; pandemic; forecasting; ensemble approach; stacking; autoregressive integrated moving average; exponential smoothing; neural network; multilayer perceptron.
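The stacking scheme described above can be sketched as follows; for a dependency-light illustration, a lagged linear model, a moving average and a drift rule stand in for the ETS, ARIMA and NNAR base models, whose fitted values train the MLP meta-learner. The case series is synthetic:

```python
# Sketch of the stacking idea with stand-in base forecasters: the paper
# uses ETS, ARIMA and NNAR, but here a lagged linear model, a moving
# average and a drift rule play their roles, and their fitted values
# train an MLP meta-learner. The case series is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
y = np.cumsum(rng.poisson(5, size=120)).astype(float)  # cumulative cases

lag = 3
X_lag = np.column_stack([y[i:len(y) - lag + i] for i in range(lag)])
target = y[lag:]

# Base-model fitted values (stand-ins for ETS / ARIMA / NNAR)
pred_ar = LinearRegression().fit(X_lag, target).predict(X_lag)
pred_ma = X_lag.mean(axis=1)
pred_drift = X_lag[:, -1] + (X_lag[:, -1] - X_lag[:, 0]) / (lag - 1)

# Meta-learner stacks the three base predictions
meta_X = np.column_stack([pred_ar, pred_ma, pred_drift])
meta = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                    random_state=0).fit(meta_X, target)
stacked_pred = meta.predict(meta_X)
rmse = float(np.sqrt(np.mean((stacked_pred - target) ** 2)))
```

In the paper's setup, the MLP learns how to weight the base forecasters' outputs, which is how the stack can beat each base model on RMSE and MAE.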

  • Simple program for computing objective optical properties of magnetic lenses   Order a copy of this article
    by R.Y. J. Al-Salih, Abdullah E. M. AlAbdulla, Ezaldeen Mahmood Abdalla Alkattan 
    Abstract: This paper describes the basic features of a new program denoted MELOP (Magnetic Electron Lens Optical Properties), primarily intended to provide a free, simple way to calculate the objective focal properties of rotationally symmetric electron magnetic lenses in the presence of an axial magnetic field distribution. The calculation is done by solving the paraxial ray equation using the fourth-order Runge-Kutta formula. For a specific beam voltage, the program computes the excitation parameter, the object or image plane, the objective principal plane, the objective focal length, the objective magnification, the spherical aberration coefficient, the chromatic aberration coefficient, and the magnetic flux density at the object or image plane. These parameters are solved for the zero, low, high or infinite magnification conditions. The program can instantaneously plot the relative variations of the calculated parameters, and the data can be exported in xlsx or txt file format.
    Keywords: electron lenses design; electron objective focal properties; fourth-order Runge-Kutta formula.
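The numerical core the abstract names — solving the paraxial ray equation with a fourth-order Runge-Kutta step — might look like the sketch below; the Glaser bell-shaped field model and the lumped constant k are illustrative assumptions, not MELOP's actual field data or code:

```python
# Minimal sketch of the numerical core named in the abstract: integrating
# the paraxial ray equation r'' = -k * B(z)**2 * r with a classical
# fourth-order Runge-Kutta step. The Glaser bell-shaped field B(z) and the
# lumped constant k (standing for e/(8*m*Vr)) are illustrative choices.
def B(z, B0=0.01, a=0.005):
    """Glaser bell-shaped axial flux density model (tesla)."""
    return B0 / (1.0 + (z / a) ** 2)

def rk4_ray(k, z0, z1, r0, dr0, n=2000):
    """Trace one paraxial ray from z0 to z1; return final (r, r')."""
    h = (z1 - z0) / n
    z, r, p = z0, r0, dr0                # p = dr/dz
    f = lambda z, r: -k * B(z) ** 2 * r  # paraxial ray equation RHS
    for _ in range(n):
        k1r, k1p = p, f(z, r)
        k2r, k2p = p + h * k1p / 2, f(z + h / 2, r + h * k1r / 2)
        k3r, k3p = p + h * k2p / 2, f(z + h / 2, r + h * k2r / 2)
        k4r, k4p = p + h * k3p, f(z + h, r + h * k3r)
        r += h * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
        p += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
        z += h
    return r, p

# A parallel ray entering a weak lens is bent towards the axis; the focal
# length would follow from the exit slope as f = -r0 / slope.
r_final, slope = rk4_ray(k=2.0e5, z0=-0.05, z1=0.05, r0=1.0, dr0=0.0)
```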

  • The impact of oil exports on consumer imports in the Iraqi economy during the COVID-19 period: a theoretical study   Order a copy of this article
    by Mustafa Kamil Rasheed, Ali Mahdi Abbas Al-Bairmani, Abir Mohammed Jasim Al-Hussaini 
    Abstract: Exports and imports in foreign trade are widely considered the most important contributors to the economic development of society. In particular, when the potential and competitiveness of exports are realised, the result is an import capacity that supports growth and balance in all economic sectors. Foreign exchange revenues rise with increasing exports, which tends to finance investment projects and to encourage the importation of advanced means of production that contribute to increased productivity and economic efficiency; however, this is rarely achieved in developing countries, including Iraq. In spite of the high volume of oil exports, a large proportion of the revenues from these exports goes towards importing consumer goods, and hence does not create a stimulating environment for production and investment. On the contrary, these imports stimulate the investment multiplier in the exporting partner countries, boosting their investment activity. The hypothesis of this study is that the direct relationship between oil exports and consumer imports disrupts the economy and output and weakens its performance. The most important finding of the study is that oil exports in Iraq are directly linked with consumer imports, which deprives the Iraqi economy of both a financial resource and a stimulus to economic activity. The study recommends the adoption of economic diversification to overcome the unilateral nature of the Iraqi economy, as well as the optimal use of financial resources to support the national economy.
    Keywords: oil exports; consumer imports; total exports; production activities; COVID-19.

  • Evaluation of the impact parameters of nano Al2O3 dielectric in wire cut-electrical discharge machining in the COVID-19 environment   Order a copy of this article
    by Farook Nehad Abed, Azwan Bin Sapit, Saad Kariem Shather 
    Abstract: This paper focuses on the wire electrical discharge machine in the COVID-19 environment and can be considered an attempt to develop models of the response variables. Different dielectric liquids are used, one of which contains Al2O3 nanoparticles in a ratio of 2 mg, and the basis of comparison in both cases is the material removal rate in the wire electrical discharge machining process, modelled using response surface methodology (RSM). The experimental plan is based on the Box-Behnken design, and the study considers six main parameters. To evaluate the value of the developed model, ANOVA was applied; the test results support the validity and suitability of the developed RSM model. The optimum parameter settings improve work safety in the COVID-19 environment.
    Keywords: wire electric discharge machine; titanium; MRR; RSM; COVID-19.

  • Analysis of convolutional recurrent neural network classifier for COVID-19 symptoms over computerised tomography images   Order a copy of this article
    by Srihari Kannan, N. Yuvaraj, Barzan Abdulazeez Idrees, P. Arulprakash, Vijayakumar Ranganathan, Udayakumar E., P. Dhinakar 
    Abstract: In this paper, a Convolutional Recurrent Neural Network (CRNN) model is designed to classify patients with COVID-19 infections from Computerised Tomography (CT) images. The CRNN processing is modelled as input image processing and feature extraction using the CNN, with prediction by the RNN model, which quickens the entire process. The simulation is carried out with a set of 226 CT images, varying the training-testing split under 10-fold cross-validation. The accuracy in estimating the image samples increases with increased training data. The simulation results show that the proposed method achieves higher accuracy and reduced MSE with higher training data than other methods.
    Keywords: image classification; COVID-19; medical imaging; convolutional recurrent neural network; 10-fold cross-validation.

  • An empirical validation of learning from home: a case study of COVID-19 catalysed online distance learning in India and Morocco   Order a copy of this article
    by Gabriel A. Ogunmola, Wegayehu Enbeyele, Wissale Mahdaoui 
    Abstract: The world as we know it has changed over a short period of time with the rise and spread of the deadly novel coronavirus disease known as COVID-19, and will never be the same again. This study explores the devastating effects of the novel virus pandemic and the resulting lockdown, and thus the need to transform the offline classroom into an online classroom. It explores and describes the numerous online teaching platforms, study materials, techniques, and technologies being used to ensure that the education of students does not stop. Furthermore, it identifies the platforms and technologies that can be used to conduct online examinations in a safe environment devoid of cheating. Additionally, it explores the challenges facing the deployment of online teaching methods. On the basis of a literature review, a framework is proposed to deliver a superior online classroom experience for students, so that the online classroom is as effective as, or even better than, the offline classroom. The identified variables were empirically tested with the aid of a structured questionnaire administered to 340 purposively sampled respondents. The results indicate that students prefer online teaching when such sessions are enhanced with multimedia presentations. The study recommends that instructors be trained in the use of technology-enhanced learning if learning from home is to be effective.
    Keywords: COVID–19; online classroom; Zoom; lockdown; MOOC; iCloud; proportional odds model.

  • An empirical study on social contact tracing of COVID-19 from a classification perspective   Order a copy of this article
    by Mohammed Gouse Galety, Elham Tahsin Yasin, Abdellah Behri Awol, Lubab Talib 
    Abstract: The staggering emergency of COVID-19 is a pandemic that cannot be resisted without a vaccine or cure. This rising issue needs preventive controls through the creation of awareness and the implementation of a contact tracing process. The purpose of contact tracing is to identify infected individuals, or the people who have had contact with infected people, so that they can be indexed and dealt with carefully. This process is used to lessen infections, with detailed information on the infectious disease, and to reduce the spread of the infection through precautionary measures and awareness creation. Awareness creation demands various tools, among which social media networking provides a knowledge set covering much of the world's population, allowing current data to be seen, analysed and interpreted with the support of classifiers and statistical learning methods of Artificial Intelligence (AI). This paper analyses the obtainable data and performs widespread contact tracing through the use of a social media dataset; statistical learning methods of AI are applied to identify COVID-19 infections, and adequate actions and preventive measures for reducing the growth of COVID-19 infections are inferred via the controller segment of the said method.
    Keywords: COVID-19; infection; preventive controls; awareness creation; social media; contact tracing; artificial intelligence.

  • Analysis of the COVID-19 pandemic and forecasting using machine learning models   Order a copy of this article
    by Ekansh Chauhan, Manpreet Sirswal, Deepak Gupta, Ashish Khanna, Aditya Khamparia 
    Abstract: The coronavirus pandemic is rapid and universal, menacing thousands of lives and all economies. A full analysis of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is imperative as a deciding factor for remedial actions. Machine learning is being used in every sphere to fight the coronavirus, be it understanding the biology of the virus in time, diagnosing patients, or drug and vaccine development. It is also critical to predict the pandemic's lifetime so as to decide on opportune remedial activities. Being able to accurately forecast the fate of an epidemic is a critical but difficult task. In this paper, based on public data available for the world and for India, the estimation of pandemic parameters and a ten-days-ahead forecast of coronavirus cases are proposed using Prophet, polynomial regression, auto-ARIMA and Support Vector Machine (SVM). The performances of all the models were encouraging; the MAE and RMSE of polynomial regression and SVM were convincingly low. Polynomial regression predicted the highest number of cases for India and the lowest number of cases for the world, which indicates that, according to polynomial regression, daily cases are going to spike in India and decline a little in the world. Prophet forecast the lowest number of cases for India and, after SVM, the highest number of cases for the world. The results of ARIMA are closest to the average of the combined results of the four models. The only limitation is the lack of sufficient data, which creates high uncertainty in the forecasts. Four factors, i.e. the growth factor, growth ratio, growth rate and second derivative of the growth of coronavirus cases, in the USA and India are also calculated and compared, and several theories about the origin of the coronavirus are discussed. Under optimistic predictions, the results show that the pandemic in some countries is going to terminate soon, while in others it is going to increase at an alarming rate, and the overall growth rate of coronavirus cases is decreasing in both the USA and India.
    Keywords: COVID-19; machine learning; novel coronavirus; classification; technology.
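The growth measures named in the abstract can be computed directly from a cumulative case series. The following is a minimal illustrative sketch (not the authors' code), using hypothetical counts; the definitions of growth factor, growth ratio and growth rate follow common epidemiological usage.

```python
# Illustrative sketch: computing the growth measures named in the
# abstract from a hypothetical daily cumulative-case series.
def growth_measures(cumulative):
    """Return per-day growth factor, growth ratio and growth rate.

    growth factor: new cases today / new cases yesterday
    growth ratio:  cumulative today / cumulative yesterday
    growth rate:   new cases today / cumulative yesterday
    """
    new = [b - a for a, b in zip(cumulative, cumulative[1:])]
    factor = [n2 / n1 for n1, n2 in zip(new, new[1:]) if n1]
    ratio = [c2 / c1 for c1, c2 in zip(cumulative, cumulative[1:]) if c1]
    rate = [n / c for c, n in zip(cumulative, new) if c]
    return factor, ratio, rate

# Hypothetical cumulative counts over five days:
factor, ratio, rate = growth_measures([100, 120, 150, 210, 300])
```

A growth factor persistently above 1 signals accelerating daily cases; the second derivative mentioned in the abstract would be the day-to-day difference of the `new` series.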

  • A statistical analysis for COVID-19 as a contact tracing approach and social networking communication management   Order a copy of this article
    by Abdulsattar A. Hamad, Anasuya Swain, Suneeta Satpathy, Saibal Dutta 
    Abstract: The COVID-19 outbreak caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been declared a global pandemic. The first case of coronavirus was detected in the city of Wuhan, China, and the outbreak was later declared a pandemic by the World Health Organization. As of the first week of August 2020, more than 20 million cases of COVID-19 had been reported globally, resulting in more than 700,000 deaths, and around 12 million people had recovered. The medium of spread of the viral infection is the droplets produced from the nose and mouth by coughing, sneezing and talking, or small droplets which hang in the air. COVID-19 as yet has no vaccine or medication. Preventive measures for this infectious disease may include creating awareness and implementing a contact tracing process. The process of contact tracing is to determine the people who have had contact with infected people, so that they can be listed and treated carefully. The basic aim is to reduce infections with a detailed description of COVID-19 and to minimise the spread of infection by creating awareness. Awareness creation demands the adoption of different tools, among which social media networking is considered a fruitful medium. Further, different classifiers and statistical learning methods are also used to analyse and interpret social media networking data. This research study has derived the available information with the employment of statistical learning methods of artificial intelligence and successful contact tracing through the use of social media datasets to determine infected COVID-19 data. In addition, this research also infers the adequate course of action and preventive measures for reducing the growth of COVID-19 infections with the help of the controllers segment of the said method. The present work has also adopted the Natural Language Processing (NLP) method as an aid to process the social network data and find the solution to the inquiries. In addition, the work has also validated the relationship between social networking and the employment of artificial intelligence techniques as a contact tracing and awareness programme with the help of statistical tools such as regression, correlation coefficient and ANOVA. The main objective of the study is to reduce pandemic infections by spreading awareness and generating detailed descriptive reports about COVID-19 with the usage of social media networking as well as artificial intelligence statistical learning methods.
    Keywords: COVID-19; infection; preventive controls; awareness creation; social media; contact tracing; artificial intelligence.

  • The degree of applying electronic learning in the Gifted School of Nineveh in Iraq, and what management provided to the students and its relationship to qualitative education during the COVID-19 pandemic   Order a copy of this article
    by Ahmed S. Al-Obeidi, Nawar A. Sultan, Anas R. Obaid, Abdulsattar A. Hamad 
    Abstract: This paper discusses the most important pillars of e-learning and the distance learning process in the Gifted School of Nineveh. Through this study, we were able to identify the methods of conducting distance education under the information technology system and their effect on the work and learning environment used in e-learning, as well as how distance e-learning can increase the efficiency of the educational institution and the basics of building an e-learning system in various educational institutions. The types of program used in the application of e-learning, and the best-known among them, in educational institutions in general and in the Gifted School in particular are also discussed. A comparison is made between them in terms of the method and accuracy of using these programs.
    Keywords: electronic learning; Gifted School of Nineveh; COVID-19; distance education; hypothetical education.

  • Design and analysis on the molecular level of a biomedical event trigger extraction using recurrent neural network based particle swarm optimisation for COVID-19 research   Order a copy of this article
    by R.N. Devendra Kumar, Arvind Chakrapani, Srihari Kannan 
    Abstract: In this paper, rich extracted feature sets are fed to a deep learning classifier that estimates the optimal extraction of lung-molecule-triggered events for COVID-19 infections. Feature extraction is carried out using a Recurrent Neural Network (RNN) that effectively extracts the features from the rich datasets. A particle swarm optimisation (PSO) algorithm is then used to classify the extracted features of COVID-19 infections. The rule set for the feature extractor is supplied by fuzzy logic. The simulation shows that RNN-PSO, the combination of the two algorithms, offers improved performance over other machine learning classifiers.
    Keywords: event triggers; COVID-19; lung molecules; feature extraction; classification; particle swarm optimisation; recurrent neural network.

  • Multivariate economic analysis of the government policies and COVID-19 on the financial sector   Order a copy of this article
    by Monika Mangla, Nonita Sharma, Sourabh Yadav, Vaishali Mehta, Deepti Kakkar, Prabakar Kandukuri 
    Abstract: The whole world is experiencing a sudden pandemic outbreak of COVID-19. In the absence of any specific treatment or vaccine, social distancing has proved to be an effective strategy in containing the outbreak. However, this has led to disruption in trade, travel, and commerce by halting manufacturing industries, closing corporate offices, and suspending all other sundry activities. The alarming pace of the virus's spread and the increased uncertainty are quite concerning to the leading financial stakeholders, and have led customers, investors, and foreign trading partners to flee from new investments. Global markets plummeted, leading to the erosion of more than US$6 trillion within just one week, from 24 to 28 February 2020. During the same week, the S&P 500 index alone experienced a loss of more than $5 trillion in the USA, while the other top 10 companies in the S&P 500 suffered a combined loss of more than $1.4 trillion. This manuscript performs a multivariate analysis of the financial markets during the COVID-19 period and thus correlates its impact on the worldwide economy. An empirical evaluation of the effect of containment policies on financial activity, stock market indices, the purchasing managers' index, and commodity prices is also carried out. The obtained results reveal that the number of lockdown days, fiscal stimulus, and overseas travel bans significantly influence the level of economic activity.
    Keywords: coronavirus; COVID-19; financial sector; forecasting; multivariate analysis; NIFTY indices; pandemic; regression model; stringency index.

  • COVID-19 suspected person detection and identification using thermal imaging based closed circuit television camera and tracking using drone in internet of things   Order a copy of this article
    by Pawan Singh Mehra, Yogita Bisht Mehra, Arvind Dagur, Anshu Kumar Dwivedi, M.N. Doja, Aatif Jamshed 
    Abstract: COVID-19 has emerged as a worldwide health concern with human-to-human transmission and an incubation period of 2-10 days. It spreads through droplets and contaminated surfaces such as hands. The sole way to detect a person suspected of being infected with COVID-19 without a COVID-19 testing kit is through a thermal scanner. Since the disease is spreading at a vast rate, not only is it very hard to check or scan every individual manually, but there are also chances of transmission of COVID-19 to the unsuspecting person. In this paper, we propose a system where a person suspected of COVID-19 can be easily detected and identified by using a thermal imaging based closed circuit television (CCTV) camera, which will automatically scan the people in the vicinity and capture a video/image of the suspected person. The system will raise an alarm in the vicinity so that people in that area can distance themselves from each other. The recorded video/image will be forwarded to the base station and information about the suspected person will be fetched from the server. Meanwhile, drones will be used for tracking the suspected person until the nodal medical team diagnoses the suspected person for confirmation. The proposed system can contribute significantly to curbing the rate of COVID-19 infection and prevent further spread of this pandemic disease.
    Keywords: coronavirus; COVID-19; face recognition; drone; internet of things; automation; deep learning.

  • Machine learning based classification: an analysis based on COVID-19 transmission electron microscopy images   Order a copy of this article
    by Kalyan Kumar Jena, Sourav Kumar Bhoi, Soumya Ranjan Nayak, Chinmaya Ranjan Pattanaik 
    Abstract: A virus is a type of microorganism which has an adverse effect on human society. Viruses replicate themselves within the human cells rapidly. Currently, the effects of very dangerous infectious viruses are a major issue throughout the globe. Coronavirus (CV) is considered as one of the dangerous infectious viruses for the entire world. So, it is very important to detect and classify this type of virus at the initial stage so that preventive measures can be taken as early as possible. In this work, a machine learning (ML) based approach is used for the type classification of CV such as alpha CV (ACV), beta CV (BCV) and gamma CV (GCV). The ML-based approach mainly focuses on several classification techniques, such as support vector machine (SVM), Random Forest (RF), AdaBoost (AB) and Decision Tree (DT) techniques by processing several CV images (CVIs). The performance of these techniques is analysed using a classification accuracy performance metric. The simulation of this work is carried out using Orange3-3.24.1.
    Keywords: COVID-19; machine learning; TEM CVIs; support vector machine; random forest; AdaBoost; decision tree.

  • Gradient and statistical features based prediction system for COVID-19 using chest X-ray images   Order a copy of this article
    by Anurag Jain, Shamik Tiwari, Tanupriya Choudhury, Bhupesh Kumar Dewangan 
    Abstract: As per data available on the WHO website, the count of COVID-19 patients on 20 June 2020 had surpassed 8.7 million globally and around 460,000 people had lost their lives. The most common diagnostic test for COVID-19 detection is the Polymerase Chain Reaction (PCR) test. In highly populated developing countries such as Brazil and India, there has been a severe shortage of PCR test kits. Furthermore, the PCR test is very specific but has low sensitivity, which implies that the test can be negative even when the patient is infected. Moreover, it is expensive too. While efforts to increase the volume and accuracy of PCR testing are in progress, medical practitioners are trying to develop alternative systems using medical imaging in the form of chest radiography or CT scans. In this research work, we have preferred chest X-rays for COVID-19 detection owing to the wide availability of chest X-ray infrastructure all over the world. We have designed a decision support system based on statistical features and edge maps of X-ray images to detect the COVID-19 virus in a patient. Online available datasets of chest X-ray images have been used to train and test decision tree, K-nearest neighbour, random forest, and multilayer perceptron machine learning classifiers. From the experimental results, it has been found that the multilayer perceptron achieved 94% accuracy, the highest among the four classifiers.
    Keywords: COVID-19; chest X-ray; statistical features; image gradient; random forest; KNN; multilayer perceptron; decision tree.

  • Indian COVID-19 time series prediction using Facebook's Prophet model   Order a copy of this article
    by Mamata Garanayak, Goutam Sahu, Mohammad Gouse Baig, Sujata Chakravarty 
    Abstract: The entire world has been facing an unprecedented public health crisis due to the COVID-19 pandemic for the last year. Meanwhile, more than one million people across the world have already died and many more millions are under treatment. Some countries in Europe have begun to experience a second wave of the pandemic too. This has put the entire health infrastructure of countries under severe strain and has led to a downward spiral in the economy. The most worrisome part is the uncertainty as to the spread or arrest of the pandemic. In such a scenario, robust forecasting methods are needed to enable health professionals and governments to make the necessary preparations in accordance with the situation. Artificial intelligence and machine learning techniques are useful tools not only for the collection of accurate data but also for prediction. Studies show that time series forecasting techniques, such as Facebook's Prophet, have shown promising results. In this paper, time series techniques have been used to forecast the numbers of deaths, recoveries and positive cases 60 days ahead. The experimental results demonstrate that machine learning techniques can be beneficial in forecasting the behaviour of the pandemic.
    Keywords: machine learning; Prophet; COVID-19; time series; coronavirus; prediction.

  • Transmission dynamics of COVID-19 outbreak in India and effectiveness of self-quarantine: a phase-wise data-driven analysis   Order a copy of this article
    by Sahil Khan, Md. Wasim Khan, Narendra Kumar, Ravins Dohare, Shweta Sankhwar 
    Abstract: The novel coronavirus disease, referred to as COVID-19, was declared a pandemic by the World Health Organization. During this pandemic, more than 988,172 lives were lost and approximately 7,506,090 active cases were recorded across the world by 25 September 2020. To predict the novel coronavirus transmission dynamics in India, the SQEIHDR mathematical model is proposed. The model is an extension of the basic SEIR mathematical model with additional compartments: self-quarantine (Q), isolation (H) and deceased (D). These help to understand the COVID-19 outbreak in India in a more realistic way and are intended to suppress the rise of transmission. The SQEIHDR model's simulation comprises ten phases (phases 0-9) with different COVID-19 preparedness and response plans. The simulation results show significant changes in the curve of the infected population based on variation in compartment Q, which reveals the efficacy of the imposed as well as proposed preparedness and response plans. The results for different preparedness and response plans highlight the key to reducing the outbreak, i.e. the rate of self-quarantine (Q), which includes general awareness, social distancing and food availability.
    Keywords: COVID-19; mathematical modelling; self-quarantine; transmission dynamics; preparedness; response plan.
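The compartmental dynamics the SQEIHDR model extends can be sketched with a basic SEIR Euler step in which a self-quarantine fraction q scales down transmission. This is an illustrative simplification, not the authors' model; all parameter and population values here are hypothetical.

```python
# Minimal SEIR-style sketch with a self-quarantine fraction q that
# scales down the transmission term (hypothetical parameters).
def seir_step(s, e, i, r, beta, sigma, gamma, q, dt=1.0):
    """One Euler step of SEIR dynamics with quarantine fraction q."""
    n = s + e + i + r
    new_exposed = (1 - q) * beta * s * i / n   # quarantine damps contact
    ds = -new_exposed
    de = new_exposed - sigma * e               # sigma: incubation rate
    di = sigma * e - gamma * i                 # gamma: removal rate
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

state = (9990.0, 0.0, 10.0, 0.0)   # hypothetical initial (S, E, I, R)
for _ in range(100):
    state = seir_step(*state, beta=0.4, sigma=0.2, gamma=0.1, q=0.5)
```

Raising q flattens the infected curve, which is the qualitative effect the abstract attributes to self-quarantine.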

  • COVID-19 outbreak in Orissa: MLR and H-SVR based modelling and forecasting   Order a copy of this article
    by Satyabrata Dash, Hemraj Saini, Sujata Chakravarty 
    Abstract: WHO declared COVID-19 to be a pandemic in early March 2020, and by June it had become a severe threat to the human community in almost every country. The present situation throughout the world is very tense, putting everyone at high risk of infection, which in turn leads to a high mortality rate. The related research community is using technology to try to identify when the pandemic might stop and make the world healthy again. Therefore, in this study, an attempt has been made to analyse and predict the COVID-19 outbreak using Multiple Linear Regression (MLR) and Support Vector Regression (SVR). In this comparative analysis, MLR outperforms SVR. Hence, MLR can be used to predict the COVID-19 outbreak in real-life applications.
    Keywords: novel coronavirus; COVID-19; linear multiple regression; support vector regression.

  • Prediction of COVID-19 epidemic curve in India using the supervised learning approach   Order a copy of this article
    by Shweta Mongia, N. Jaisankar, Sugandha Sharma, Manoj Kumar, Vasudha Arora, Thompson Stephan, Achyut Shankar, Pragya Gupta, Raghav Kachhawaha 
    Abstract: The COVID-19 pandemic, a neo zoonotic infectious disease, has caused high mortality worldwide. The need of the hour is to equip the governments with early detection, prevention, and mitigation of such contagious diseases. In this paper, a supervised learning approach of the polynomial regression model is used for the prediction of COVID-19 cases in terms of the number of Confirmed Cases (CC), Death Cases (DC), and Recovered Cases (RC) in India. As per the prediction model, the epidemic curve will reach its peak on 31 May 2020 when the predicted number of CC (148,276) will be almost equal to the sum of the number of DC (35,050) and RC (114,718) i.e. 149,768. This research is based on the data available till 25 April 2020 considering a strong preventive measure of nation-wide lockdown in India since 24 March 2020. Authors have also predicted death rates and recovery rates. As of 25 April 2020, the death rate stands at 3.068% and the predicted death rate for 1 June 2020 is 2.558%. The recovery rate on 25 April 2020 is 21.97% and it is predicted that by 1 June 2020 this rate will increase to 79%. In addition to this, the approach projected a monthly percentage increase in the number of CC from 1 May 2020 to 1 December 2020. This analysis would help and enable the concerned authorities in bringing effective preventive measures into action in the process of decision making.
    Keywords: supervised learning; polynomial regression model; COVID-19; prediction; epidemic curve.
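Polynomial regression of the kind fitted to the epidemic curve can be sketched via the normal equations. The data below are hypothetical day/case pairs, not the paper's dataset, and the solver is a plain least-squares routine rather than the authors' implementation.

```python
# Sketch of polynomial regression via the normal equations, the kind
# of supervised model fitted to cumulative case counts.
def polyfit(xs, ys, degree):
    """Least-squares polynomial coefficients (constant term first)."""
    m = degree + 1
    # Build the normal equations A c = b for the Vandermonde system.
    a = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(a[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = (b[r] - s) / a[r][r]
    return coeffs

def predict(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# Hypothetical quadratic growth (y = x^2) is recovered exactly:
coeffs = polyfit([0, 1, 2, 3, 4], [0, 1, 4, 9, 16], degree=2)
```

For forecasting, `predict` is simply evaluated at future day indices, which is the extrapolation step such curve-fitting models rely on.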

Special Issue on: Signal and Information Processing in Sensor and Transducer Systems

  • Identification of Hammerstein-Wiener nonlinear dynamic models using conjugate gradient based iterative algorithm   Order a copy of this article
    by Xiangli Li, Lincheng Zhou 
    Abstract: This paper mainly studies the identification of a class of nonlinear dynamic models with Hammerstein-Wiener nonlinearity. Firstly, a special form of the Hammerstein-Wiener polynomial model is constructed by using the key term decomposition technique to separate the model parameters to be estimated. On this basis, an iterative algorithm based on conjugate gradients (CGI) is proposed, which computes a new conjugate vector along the conjugate direction in each iteration step. Because the search direction of the CGI algorithm is conjugate with respect to the Hessian matrix of the cost function, the CGI algorithm can generally obtain faster convergence rates than the gradient based iterative algorithm. Finally, numerical examples are given to demonstrate the effectiveness of the proposed algorithm.
    Keywords: Hammerstein-Wiener model; conjugate gradient; key term decomposition; Hessian matrix; parameter estimation.

  • Multi-sensor temperature and humidity control system of wine cellar based on cooperative control of intelligent vehicle and UAV   Order a copy of this article
    by Yufan Wang 
    Abstract: Red wine has extremely strict requirements on its fermentation and long-term storage environment; rapid changes in cellar temperature cause great damage to the taste of red wine. At present, wine cellars at home and abroad usually adopt purely manual management or lay a large number of sensors for monitoring to solve such problems. However, in the face of fire risks caused by specialised laboratories, excessive costs and a large number of ageing production lines, these approaches clearly cannot meet the needs of wine cellar managers. This paper designs and implements a multi-point data acquisition temperature and humidity adjustment sensor system for the wine cellar under the collaborative control of a smart car and a UAV. The system consists of four independent parts: the intelligent patrol car terminal, the four-rotor UAV auxiliary terminal, the handheld terminal and the intelligent temperature control device terminal, which cooperate with each other to realise the monitoring and control of the environment under closed conditions. Compared with the traditional temperature and humidity collection method, this system uses the UAV and smart car to collect data, which greatly improves the efficiency and accuracy of data collection. The system is equipped with low-power autonomous charging to realise unmanned management. At the same time, the administrator can view and intervene in real-time changes of the indoor environment through the handheld terminal to achieve human-computer interaction. Experimental tests show that this system has strong robustness and adaptability, is accurate, intelligent and efficient, and saves a lot of manpower and material resources.
    Keywords: temperature and humidity acquisition sensor system; DHT11; cooperation; human-computer interaction.

  • Simulation study on identification technology of transmission line potential hazards based on corona discharge characteristics   Order a copy of this article
    by Wei Liu 
    Abstract: The prevention and control of transmission line potential hazards is the guarantee of safe and reliable operation of power grids. At present, the prevention and control of line potential hazards is still based on manual inspection, which has problems of low efficiency and poor reliability. Based on corona discharge theory and experimental simulation, this paper studies the fingerprint characteristics of line discharge signal caused by tree barrier, bird damage and insulator pollution, and puts forward a method of line potential hazard detection and fault identification based on discharge characteristics. The results show that the development of line potential hazard will lead to the discharge process, and the discharge characteristics of different types of potential hazard have obvious differences. The differences are mainly reflected in the main wave width and discharge repetition rate, which can be used to identify the types of potential hazard.
    Keywords: overhead line; potential hazard; corona discharge; simulation analysis; identification.

  • Development and application of PD spatial location system in distributing substation   Order a copy of this article
    by Fang Peng, Hong-yu Zhou, Xiao-ming Zhao 
    Abstract: Partial discharge is an important cause of insulation deterioration of distribution network equipment. Due to the variety of distribution network equipment, the location of discharge source is always a technical difficulty in engineering. In this paper, through the research of UHF PD spatial location technology, a system for spatial location of discharge source in distribution room is developed. The UHF sensor, acquisition and processing module and analysis and diagnosis module for the location of discharge source in distribution room are designed. The results of laboratory test and actual operation show that the system has the advantages of high detection sensitivity, high location accuracy and high operation reliability. It can be used for effective monitoring and timely warning of PD defects in distribution room, which helps to improve the power supply reliability of distribution network system.
    Keywords: distributing substation; partial discharge; spatial location; sensor; online monitoring.

  • Feature matching for multi-beam sonar image sequence using KD-Tree and KNN search   Order a copy of this article
    by Jue Gao 
    Abstract: Feature matching for image sequences generated by multi-beam sonar is a critical step in widespread applications such as image mosaicking, image registration, motion estimation and object tracking. In many cases, feature matching is accomplished by a nearest neighbour algorithm on extracted features, but the global search adopted brings a heavy computational burden. Furthermore, sonar imaging characteristics such as low resolution, low SNR, inhomogeneity, point-of-view changes and other artefacts sometimes lead to poor sonar image quality. This paper presents an approach to feature extraction, K-Dimension Tree (KD-Tree) construction, and the subsequent matching of features in multi-beam sonar images. Initially, the Scale Invariant Feature Transform (SIFT) method is used to extract features. A KD-Tree based on feature location is then constructed. By K Nearest Neighbour (KNN) search, every SIFT feature is matched with K candidates between a pair of consecutive frames. Finally, the Random Sample Consensus (RANSAC) algorithm is used to eliminate wrong matches. The performance of the proposed approach is assessed with measured data, exhibiting reliable results with limited computational burden for the feature-matching task.
    Keywords: feature extraction; feature matching; multi-beam sonar; KD-Tree; KNN.
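The KD-Tree search at the core of the proposed matching step can be sketched as follows. The 2-D points stand in for feature locations and are hypothetical; SIFT extraction, the K-candidate selection and RANSAC filtering are omitted.

```python
# Minimal 2-D KD-Tree nearest-neighbour sketch over feature locations.
def build_kdtree(points, depth=0):
    """Recursively split on alternating axes; node = (point, axis, left, right)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    """Return (point, squared_distance) of the nearest stored point."""
    if node is None:
        return best
    point, axis, left, right = node
    d2 = (point[0] - target[0]) ** 2 + (point[1] - target[1]) ** 2
    if best is None or d2 < best[1]:
        best = (point, d2)
    near, far = (left, right) if target[axis] < point[axis] else (right, left)
    best = nearest(near, target, best)
    # Descend the far side only if the splitting plane could hide a closer point.
    if (target[axis] - point[axis]) ** 2 < best[1]:
        best = nearest(far, target, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
point, dist2 = nearest(tree, (9, 2))
```

The pruning test is what makes the search cheaper than the global comparison the abstract criticises; a K-candidate search would keep the K best entries instead of a single `best`.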

  • A study on ultrasonic process tomography for dispersed small particle system visualization   Order a copy of this article
    by Zhiheng Meng, Jianfei Gu, Yongxin Chou 
    Abstract: The present challenge in ultrasonic process tomography of dispersed small particle systems is that it is hard to obtain an accurate reconstruction algorithm. For more accurate reconstruction, this work proposes an improved GMRES (Generalised Minimal Residual) algorithm based on generalised minimal residual iteration and the mean filtering method. To verify the feasibility of the algorithm for dispersed small particle system visualisation, a linear acoustic attenuation model is developed to obtain the projection data of the ultrasonic array. Then, we compared it with the current mainstream reconstruction algorithms under conditions of little effective information by solving the underdetermined equations. It is shown that this method can achieve high reconstruction precision in numerical simulations and reasonably reflect the cross-section of the dispersed small particle distribution. In the numerical simulations, the imaging accuracy of the improved GMRES algorithm can reach about 90%.
    Keywords: ultrasonic method; dispersed particle; particulate two-phase flow; back projection; iterative algorithm.

  • Distributed fusion algorithm based on maximum internal ellipsoid mechanism   Order a copy of this article
    by Jinliang Cong 
    Abstract: In this paper, a Bar-Shalom-Campo based algorithm is presented to solve for the approximate maximum ellipsoid in the intersection region of covariance ellipsoids. An objective function that can be solved by linear matrix inequalities is designed based on the rotation transformation of matrices. Compared with the classical covariance intersection fusion algorithm, it is less conservative. Moreover, the unknown cross-covariance is approximated as a linear matrix inequality constraint with a bounded Pearson correlation coefficient. With this inequality constraint, the accuracy of the fusion results can be improved. Finally, two simulation examples are given to verify the effectiveness of the proposed algorithm.
    Keywords: distributed sensor network; information fusion; maximum ellipsoid; cross-covariance constraint.

  • Local track to detect for video object detection   Order a copy of this article
    by Biao Zeng, Shan Zhong, Lifan Zhou, Zhaohui Wang, Shengrong Gong 
    Abstract: Existing methods for video object detection generally search for objects through the entire image. However, they suffer from large computational consumption because dozens of similar images must be processed. To relieve this problem, we propose a Local Track to Detect (LTD) framework that detects video objects by predicting the movements of objects in local areas. LTD can automatically determine key frames and non-key frames: the objects in key frames are detected by a single-frame detector, and the objects in non-key frames are efficiently detected by the movement prediction module. LTD also has a Siamese module that predicts whether objects in a key frame and a non-key frame are the same object, to ensure the accuracy of the movement prediction module. Compared with previous work, our method is more efficient and achieves state-of-the-art performance.
    Keywords: video object detection; local detection; detect and track; movement prediction; efficient detection; CNN.

  • Simple interpolation algorithm and its application in power parameter estimation   Order a copy of this article
    by Zhongyou Luo, Shuping Song, Ling Zhang, Puzhi Zhao, Haijiang Zhang 
    Abstract: The computational complexity of interpolation algorithms is a major concern for real-world power harmonic parameter estimation based on the windowed-interpolation fast Fourier transform (FFT) algorithm. A new interpolation algorithm is proposed in this study to estimate the harmonic parameters of a power system. The technique is based on the frequency-domain characteristics of the mainlobe of the weighted cosine window, and its calculation formulas are obtained by employing Newton's divided-difference interpolation polynomial. The validity of the proposed algorithm is confirmed through computer simulations in MATLAB and field tests in a photovoltaic system. The results show that the proposed algorithm has the advantage of low computational effort and can be employed for any cosine window.
    Keywords: FFT; harmonic parameter estimation; cosine window; interpolation; Newton’s divided-difference interpolation.
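Newton's divided-difference interpolation, from which the proposed calculation formulas are obtained, can be sketched as follows. The sample points are illustrative (a simple quadratic), not spectrum values of a cosine window.

```python
# Sketch of Newton's divided-difference interpolation on sample points.
def divided_differences(xs, ys):
    """Return Newton coefficients [f[x0], f[x0,x1], f[x0,x1,x2], ...]."""
    coef = list(ys)
    for j in range(1, len(xs)):
        # Update in place, from the bottom of the difference table up.
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial at x (Horner-like scheme)."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 5.0, 10.0]   # samples of y = x^2 + 1
coef = divided_differences(xs, ys)
```

The low cost comes from the Horner-like evaluation: once the coefficients are known, each interpolated value needs only a handful of multiplications and additions, which is the property the abstract highlights.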

  • Finger knuckle print verification by fusing invariant texture and structure scores   Order a copy of this article
    by Chaa Mourad, Zahid Akhtar, Sehar Uroosa 
    Abstract: Finger Knuckle Print (FKP) biometric traits for person recognition have recently gained much attention from both the research community and industry, owing to their distinctive features and higher usability or user friendliness. In this paper, a reliable and robust personal identification approach using FKP is presented. The proposed framework merges two types of matching scores extracted from structure and texture images. The region covariances algorithm (RCA) has been employed in the presented method to extract the structure and the texture images from each FKP captured image. Gabor filter bank and Kernel Fisher Discriminant (KFD) methods have been used to obtain distinctive feature vectors. Finally, the Cosine Mahalanobis distance similarity metric is used for classification. Experimental analyses were performed on the Hong Kong Polytechnic University (PolyU) FKP database. Experimental results show that our proposed system achieves better results than prior state-of-the-art systems. In addition, fused scores using the weighted sum rule in the proposed framework renders very good performance compared with min, max, and simple sum rules.
    Keywords: biometric system; FKP-based person recognition; Gabor filter; region covariances algorithm; kernel Fisher discriminant.

  • A novel chaotic grey wolf optimisation for high-dimensional and numerical optimisation   Order a copy of this article
    by Mengjian Zhang, Daoyin Long, Dandan Li, Xiao Wang, Tao Qin, Jing Yang 
    Abstract: Aiming at the weak global convergence of current evolutionary algorithms on high-dimensional and numerical optimisation problems, a novel chaotic grey wolf optimisation (NCGWO) algorithm is proposed. Firstly, six one-dimensional chaotic maps are introduced and their mathematical models are modified so that their mapping ranges lie in the interval (0, 1). Secondly, diversity experiments are conducted to compare the chaotic maps; they show that populations initialised by chaotic maps are more diverse than those of the standard GWO algorithm, with the Sine map performing best. Finally, the CSGWO algorithm is proposed, which builds on NCGWO by generating the parameter C with the Sine map. The simulations demonstrate that chaotic maps can improve the performance of the GWO algorithm on high-dimensional and numerical optimisation problems, and that CSGWO outperforms other evolutionary algorithms in both accuracy and convergence speed.
    Keywords: chaotic system; grey wolf optimisation; chaos initialisation; optimisation; high-dimension.
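Chaotic population initialisation with the Sine map (the variant favoured by the experiments) can be sketched as below; the seed x0 and the per-dimension iteration scheme are our assumptions:

```python
import math

def sine_map_population(pop_size, dim, x0=0.7):
    """Initialise a population in (0, 1] by iterating the Sine chaotic map
    x_{k+1} = sin(pi * x_k), sampling one value per dimension."""
    pop = []
    x = x0
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = math.sin(math.pi * x)  # stays in (0, 1] for x in (0, 1)
            individual.append(x)
        pop.append(individual)
    return pop
```

The chaotic sequence covers the unit interval more evenly than many pseudo-random draws, which is the diversity advantage the paper's experiments measure.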

  • Recursive identification of state space systems with colored process noise and measurement noise   Order a copy of this article
    by Fang Zhu, Xuehai Wang 
    Abstract: This paper concerns the modeling and identification of state space systems in which both colored process noise and measurement noise are present. Using state filtering, the state space system with colored process noise is transformed into a model without correlated noise, and a state-filtering-based parameter estimation algorithm is derived by designing a state filter observer with multi-innovation identification. The validity of the proposed algorithm is verified by simulation examples.
    Keywords: parameter estimation; recursive identification; filtering technique; state estimation.
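The paper's filtering-based, multi-innovation estimator is more elaborate; as a hedged stand-in, textbook recursive least squares illustrates the recursive-identification idea of refining parameter estimates one observation at a time:

```python
def recursive_least_squares(phis, ys, dim, lam=1.0):
    """Recursive least squares for y_k = theta^T phi_k + noise.
    phis: list of regressor vectors, ys: scalar outputs, lam: forgetting factor.
    (A simplified stand-in for the paper's state-filtering-based estimator.)"""
    theta = [0.0] * dim
    # Large initial covariance expresses low confidence in theta = 0.
    P = [[(1e6 if i == j else 0.0) for j in range(dim)] for i in range(dim)]
    for phi, y in zip(phis, ys):
        Pphi = [sum(P[i][j] * phi[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(dim))
        K = [v / denom for v in Pphi]                 # gain vector
        err = y - sum(theta[i] * phi[i] for i in range(dim))  # innovation
        theta = [theta[i] + K[i] * err for i in range(dim)]
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return theta
```

Multi-innovation methods generalise the scalar innovation `err` above to a window of recent innovations, which improves estimation under correlated noise.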

  • Research on cable partial discharge detection and location system based on optical fibre timing   Order a copy of this article
    by Jian-jun Zhang, Fang Peng, An-ming Xie, Yang Fei 
    Abstract: Partial discharge (PD) is an important indicator of the running state of a cable. Based on the characteristics and propagation mechanism of cable PD signals, a cable PD detection and location system is developed using optical fibre time synchronisation technology and the double-ended travelling wave location principle. The system offers high detection sensitivity and reliability, with real-time detection, diagnosis and positioning of the cable discharge source. The experimental results show that fibre timing combined with double-ended positioning effectively improves the location accuracy of the cable PD source, reaching 1%, and that the method can meet the requirements for accurately locating PD sources in cables, GIL, and other equipment.
    Keywords: cable; partial discharge; optical fibre timing; double terminal positioning; online monitoring.
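The double-ended travelling-wave principle the system relies on reduces to a one-line formula: with arrival times at the two synchronised terminals, the source position follows directly. A sketch (units and the numeric wave-speed value used in testing are illustrative, not from the paper):

```python
def locate_pd_source(cable_length_m, wave_speed_m_per_us, t_a_us, t_b_us):
    """Double-ended travelling-wave location of a PD source.
    With t_a = x / v and t_b = (L - x) / v for a source at distance x
    from terminal A, the arrival-time difference gives
        x = (L + v * (t_a - t_b)) / 2.
    Accurate fibre-based time synchronisation of t_a and t_b is what
    makes this subtraction meaningful."""
    dt = t_a_us - t_b_us
    return (cable_length_m + wave_speed_m_per_us * dt) / 2.0
```

The 1% accuracy figure in the abstract corresponds to the timing error translated through the wave speed v: a nanosecond-scale synchronisation error maps to a sub-metre position error.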

  • Life-threatening arrhythmias recognition by pulse-to-pulse intervals analysis   Order a copy of this article
    by Lijuan Chou, Yongxin Chou, Jicheng Liu, Shengrong Gong, Kejia Zhang 
    Abstract: Tachycardia, bradycardia, ventricular flutter and ventricular tachycardia are four life-threatening arrhythmias, which are seriously harmful to the cardiovascular system. Therefore, a method for identifying these arrhythmias by pulse-to-pulse interval analysis is proposed in this study. First, noise and interference are removed from the raw pulse signal, and the cleaned pulse signal is split into pulse waves at the pulse troughs, whose first-order difference gives the pulse-to-pulse intervals. Then, fifteen features are extracted from the pulse-to-pulse intervals, and the two-sample Kolmogorov-Smirnov test is used to select the markedly changed features. Finally, we design classifiers for arrhythmia recognition based on the probabilistic neural network (PNN), the back-propagation neural network (BPNN) and random forest (RF). Pulse signals from the international physiological database (PhysioNET) are used as the experimental data. The experimental results show that the RF classifier has the best average classification performance, with a kappa coefficient (KC) of 98.86%.
    Keywords: pulse signal; pulse-to-pulse intervals; life-threatening arrhythmias; intelligent recognition.
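The first-order-difference step, and a few of the kind of time-domain features that can be drawn from the intervals, can be sketched as below (the paper extracts fifteen features; the three shown here are common choices, not necessarily the authors'):

```python
import statistics

def pulse_intervals(trough_times):
    """Pulse-to-pulse intervals: first-order difference of trough times."""
    return [t2 - t1 for t1, t2 in zip(trough_times, trough_times[1:])]

def interval_features(ppi):
    """Illustrative time-domain features of the interval series."""
    mean = statistics.mean(ppi)          # mean interval (inverse of rate)
    sdnn = statistics.pstdev(ppi)        # overall interval variability
    diffs = [b - a for a, b in zip(ppi, ppi[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5  # beat-to-beat variability
    return {"mean": mean, "sdnn": sdnn, "rmssd": rmssd}
```

Tachycardia and bradycardia shift the mean interval, while flutter-like rhythms alter the variability features, which is why such statistics separate the four classes.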

  • Instantaneous frequency enhanced peak detection for sugarcane seed cutting   Order a copy of this article
    by Junfeng Wei, Weidong Tang, Chunming Wen, Longdian Huang 
    Abstract: Peak detection methods are widely used in a variety of areas. This study introduces a peak detection method for sugarcane seed cutting. The envelope of the signal in the discrete time domain is calculated by the Hilbert transform. To increase reliability under noisy conditions, the envelope is enhanced using the instantaneous frequency of the signal, and a multi-rule joint search algorithm then marks the peaks, each of which corresponds to the location of a node. The proposed method works under poor SNR conditions in simulation, and was verified in an experiment detecting sugarcane nodes. Local maxima of the accelerometer data are marked, showing the positions of node rings on the sugarcane surface. The position data can be used as an action signal in cutting machines, or for the analysis of crop growth.
    Keywords: discrete Hilbert transform; instantaneous frequency; peak detection; sugarcane seed cutting.
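A dependency-free sketch of envelope-based peak marking follows. Note the paper computes the envelope with the discrete Hilbert transform and enhances it with instantaneous frequency; this stand-in uses a simple moving-average envelope so the structure of the search step stays visible:

```python
def moving_average_envelope(signal, window):
    """Crude envelope: rectify, then smooth with a moving average.
    (A stand-in for the paper's Hilbert-transform envelope.)"""
    rect = [abs(v) for v in signal]
    half = window // 2
    env = []
    for i in range(len(rect)):
        lo, hi = max(0, i - half), min(len(rect), i + half + 1)
        env.append(sum(rect[lo:hi]) / (hi - lo))
    return env

def find_peaks(env, threshold, min_gap):
    """Mark local maxima above a threshold, at least min_gap samples apart
    (two simple rules standing in for the multi-rule joint search)."""
    peaks = []
    for i in range(1, len(env) - 1):
        if env[i] >= threshold and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks
```

Each surviving index would correspond to a candidate node ring position on the sugarcane surface.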

Special Issue on: Computer Applications in Technology and their Role in Education with Respect to Economic Impact

  • Efficient residential load forecasting using a deep learning approach   Order a copy of this article
    by Rida Mubashar, Mazhar Javed Awan, Muhammad Ahsan, Awais Yasin, Vishwa Pratap Singh 
    Abstract: A reliable and efficient smart grid depends on smart meters, which track electricity usage and provide the accurate, granular information needed for forecasting power loads. Residential load forecasting is indispensable, since smart meters can now be deployed at the residential level to collect residents' historical consumption data. The proposed method is tested and validated on available real-world datasets. LSTM forecasting is then compared with two traditional techniques, ARIMA and exponential smoothing. Real data from 12 houses over a period of three months is used to inspect and validate the accuracy of the load forecasts produced by the three techniques. LSTM models, owing to their greater capacity for memorising large datasets, establish their usefulness in time-series-based prediction.
    Keywords: short term load forecast; residential load; power system planning; LSTM; exponential smoothing; ARIMA; deep learning.
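Of the two baselines LSTM is compared against, exponential smoothing is the simpler; its one-step-ahead forecast can be stated in a few lines (alpha is a free smoothing parameter, not a value from the paper):

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: the smoothed level after the last
    observation is the one-step-ahead forecast. Larger alpha weights
    recent observations more heavily."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level
```

ARIMA adds autoregressive and moving-average terms on top of this kind of level tracking, while the LSTM learns such temporal structure from data rather than from a fixed recurrence.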

  • Fake profile recognition using big data analytics in social media platforms   Order a copy of this article
    by Mazhar Javed 
    Abstract: Online social media platforms today have an enormous number of users. This has led to a rise in fake profiles, which harms both social and business entities, as fraudsters use people's images to create new fake profiles. However, most previously proposed detection methods are outdated and not accurate enough, with an average accuracy of 83%. Our proposed solution is a Spark ML based project that can predict fake profiles with higher accuracy than other existing methods of profile recognition. Our project uses Spark ML libraries, including the Random Forest Classifier, along with plotting tools. We describe our proposed model diagram and present our results in graphical representations such as a confusion matrix, learning curve and ROC plot for better understanding. The findings illustrate that the proposed system achieves 93% accuracy in finding fake profiles on social media platforms, with a 7% false positive rate in which the system misclassifies profiles.
    Keywords: fake profile; social media; big data; machine learning; Spark.
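The reported accuracy and false positive rate come straight from the confusion matrix; a small helper showing the computation (the label convention is our illustration, not from the paper):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy and false positive rate from binary labels
    (1 = fake profile, 0 = genuine)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn_fp = fp + tn
    accuracy = (tp + tn) / len(y_true)
    fpr = fp / tn_fp if tn_fp else 0.0   # genuine profiles flagged as fake
    return accuracy, fpr
```

On imbalanced data (fakes are rare), accuracy alone overstates performance, which is why the ROC plot and false positive rate are reported alongside it.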

  • Towards an enhanced user experience with critical system interfaces in Middle-Eastern countries: a case study of usability evaluation of Saudi Arabia's weather interface system (Arsad)   Order a copy of this article
    by Abdulrahman Khamaj 
    Abstract: Access to weather forecasts is well adopted by the public in Western countries. However, in Saudi Arabia (KSA), the use of weather forecasts for administering safety precautions and planning daily activities is still not at an acceptable level, owing to the lack of easily accessible weather platforms. Recently, the Saudi Presidency of Meteorology and Environment launched the first governmental smartphone weather application, Arsad. However, the usability of Arsad's interface design is still unknown. Through user testing and a questionnaire, this study examined all of Arsad's embedded features and design aspects. The analyses highlighted several usability issues and recommendations to be considered in the redesign phases. This research will contribute to the usability body of knowledge of weather interface systems, as well as offer opportunities for users and providers to work together to enhance the accessibility and usability of weather system interfaces in KSA and other Middle-Eastern countries.
    Keywords: smartphone applications; usability; weather forecasts; user experience.

  • Comparative study of satellite multispectral image data processing with Map Reduce and classification algorithm   Order a copy of this article
    by Ch Rajya Lakshmi, Katta Subba Rao, R. Rajeswara Rao 
    Abstract: Big data has amassed a significant amount of data, available in both structured and unstructured formats. Unstructured data, generated by individuals (e.g. Twitter data) or by sensors (e.g. satellites, video), is difficult to process, with sizes ranging from gigabytes to terabytes and petabytes, and the term 'big data' is being used to describe a growing range of such material. With the right analytical approach, and keeping data quality and size in mind, different trends in unstructured datasets can be analysed and classified. Early warning forecasts based on satellite imagery and radar sensor data are a major real-world problem, and deriving the number of space objects from such data queries is a complex task. To obtain a better understanding of big data, a proper architecture for analysing various classes of satellite imagery patterns using Hadoop technology is proposed. The proposed architecture segregates different classification methods for satellite imagery patterns, and Google's MapReduce with the C4.5 algorithm is proposed for effective classification, improving both time efficiency and scalability as the volume of data increases. The focus of this study is on NASA satellite data, Twitter data, and weather forecasting.
    Keywords: C4.5 algorithm; satellite image; Hadoop; Google’s Map Reduce; big data.
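C4.5 chooses split attributes by gain ratio; a compact single-machine sketch of that criterion follows (the MapReduce distribution of this computation across Hadoop nodes is not shown here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def gain_ratio(rows, labels, attr_index):
    """C4.5 split criterion: information gain divided by split information."""
    total = len(rows)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    cond = sum(len(part) / total * entropy(part)
               for part in partitions.values())
    gain = entropy(labels) - cond
    split_info = entropy([row[attr_index] for row in rows])
    return gain / split_info if split_info else 0.0
```

In a MapReduce setting, the per-attribute class counts feeding these entropy terms are exactly the kind of aggregation the map and reduce phases parallelise.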

  • Management techniques and methods of total quality management implementation in management institutions.   Order a copy of this article
    by Pritidhara Hota, Bhagirathi Nayak, Sunil Mishra, Pratima Sarangi 
    Abstract: This study explored the implementation and barriers of internal stakeholder Total Quality Management (TQM) activities and of various performance measures. In this analysis, we used a methodology-based approach with stakeholders as the unit of the survey. The sample was chosen from the Top Management, Students, Faculty, Non-teaching Staff, Alumni and Principals of Management Institutes in Odisha. We developed a questionnaire with 64 accessible questions covering 10 major factors, with an acceptable response rate of 49.4%. Principal Components Analysis (PCA) and the ANOVA test were performed on the primary research elements. This study showed that different TQM activities significantly influence different performance results. The results showed that lack of employee engagement, employee knowledge and dedication, inadequate structure, and lack of resources were the key challenges that management organisations face with internal stakeholders. Management institutes should continue implementing TQM across all their variables to enhance efficiency.
    Keywords: TQM; stakeholders; ANOVA; PCA.

  • Improving software performance by automatic test cases through genetic algorithm   Order a copy of this article
    by Sudeshna Chakraborty, Vijay Bhanudas Gujar, Tanupriya Choudhury, Bhupesh Dewangan 
    Abstract: Software testing is a vital part of software development. One would like to decrease effort while maximising the number of faults detected, so test case generation is treated as an optimisation problem. The adequacy of a test suite in revealing major faults is a known measure of its sufficiency in routine testing. Generating test cases automatically can decrease cost and working time considerably. Test case data produced without any human intervention using a genetic algorithm is compared with random testing, and we observe that the limitations of random testing are overcome by genetic algorithms. We have implemented these test cases and evaluated them in real-time environments, and the outcomes show good performance.
    Keywords: routine test case generation; correspondence class partitioning; arbitrary testing.

  • Application of hazard identification and risk assessment for reducing occupational accidents in firework industries with specific reference to Sivakasi   Order a copy of this article
    by Indumathi N, Ramalakshmi R 
    Abstract: Occupational accidents should be avoided because they have very negative consequences for both industries and employees. In India, Sivakasi is the second largest fireworks manufacturing sector. Every workplace task that involves a chemical process has the potential to cause an accident, so identifying the hazards is essential to reduce accidents and explosions. Hazard Identification and Risk Assessment (HIRA) can be used as a risk assessment method to help users identify hazards and estimate the risk associated with each one. This study aims to investigate the possible hazards and incidents that may occur in the fireworks industry, and to improve occupational safety and wellness through the techniques of HIRA and the F-test. HIRA classified the industrial risk zones by task as high (43%), medium (36%), and low (21%). Through failure detection analysis, the prevention and mitigation programme achieved an 84% reduction in the risk priority number.
    Keywords: occupational accidents; hazard identification; risk assessment; fireworks industry; safety; chemical hazards.

  • Flight web searches analytics through big data   Order a copy of this article
    by Amna Khalil, Mazhar Javed Awan, Awais Yasin, Vishwa Pratap Singh, Hafiz Muhammad Faisal Shehzad 
    Abstract: Flight search is one of the largest categories of search on the World Wide Web. This study aims to establish an effective prediction model from a huge dataset. The article presents a linear regression model to forecast flight searches using the big data framework's SparkML library and statistics. Experiments on realistic datasets of domestic airports reveal that the suggested model's accuracy is close to 90% using the big data framework. Our research provides an efficient flight web search engine that can be managed through big data.
    Keywords: flights databases; search query; World Wide Web search engines; content-based retrieval.
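The linear-regression core can be sketched without Spark; ordinary least squares for a single predictor is shown below (SparkML's LinearRegression generalises this to many features over distributed data):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b * x.
    Returns the intercept a and slope b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution from the normal equations.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b
```

For flight-search volumes, x could be a time index or seasonal feature and y the daily search count; the fitted line then extrapolates forward as the forecast.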

  • Application of information technology to e-commerce   Order a copy of this article
    by Nasser Binsaif 
    Abstract: Nations are accelerating the application of technology to e-commerce because of its pervasive impact, and the use of computer applications in electronic shopping has become one of the most important pillars of trade. E-commerce is the process of buying and selling services and goods over the internet, with data or money also transferred over this electronic network. In an online business environment, all information is displayed and payment is made online. The current situation and the digital world have embraced the e-commerce facility. This article discusses the potential behaviours of e-commerce in the Kingdom of Saudi Arabia and analyses its prospects there. Fraudulent activities are encountered during transactions and purchases; these difficulties are addressed through security policies, and the technology is promoted through social activities.
    Keywords: e-commerce; business; consumer; security policy; threat; social media.

  • The electrical circuit of a new seven-dimensional system with twenty-one boundaries and the phenomenon of complete synchronisation   Order a copy of this article
    by Khaled Mohammed Al-Hamad, Anas Romaih Obaid, Ahmed S. Al-Obeidi 
    Abstract: One of the most important properties of dynamic systems is the synchronisation phenomenon. In this paper, we obtain nonlinear control results that synchronise two equivalent 7D structures. We used the linearisation and Lyapunov methods as analytical approaches; the linearisation method does not require updating a Lyapunov function, so it achieves the synchronisation phenomena effectively, with better results than the Lyapunov method. Both methods were applied, and we eventually obtained similar expressions for the error of the dynamic system. Digital simulation was applied to the mathematical system with the control and the error dynamics, and the excellent numerical results were close to those of the two analytical methods. We studied three synchronisation phenomena (complete, incomplete, and hybrid) using the Lyapunov and linearisation methods and compared their results. Finally, we applied the current system, presented its new attractor, and compared these results with other similar ones.
    Keywords: chaos; Lyapunov; linearisation; projective synchronisation; nonlinear dynamic system.
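Complete synchronisation can be illustrated on a far simpler system than the paper's 7D one: two identical chaotic logistic maps in a master-slave configuration with linear error feedback (entirely our toy example, not the authors' controller):

```python
def logistic(x, r=3.9):
    """Logistic map in its chaotic regime."""
    return r * x * (1 - x)

def synchronise(x0, y0, coupling, steps):
    """Master-slave coupling of two identical chaotic maps:
        y_{k+1} = f(y_k) + c * (f(x_k) - f(y_k)).
    For sufficiently strong coupling c, the error |x_k - y_k| contracts
    to zero: complete synchronisation. Returns the error history."""
    x, y, errors = x0, y0, []
    for _ in range(steps):
        fx, fy = logistic(x), logistic(y)
        x, y = fx, fy + coupling * (fx - fy)
        errors.append(abs(x - y))
    return errors
```

With c = 0.9 the per-step error factor is at most (1 - c) times the map's maximum slope (0.1 x 3.9 = 0.39 < 1), so the two trajectories lock together despite starting far apart, which is the phenomenon the paper establishes for its 7D system via Lyapunov and linearisation arguments.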

  • An expert system based IoT system for minimisation of air pollution in developing countries   Order a copy of this article
    by Sudan Jha, Sidheswar Routray, Sultan Ahmad 
    Abstract: Carbon dioxide (CO2), nitrogen dioxide (NO2) and sulphur dioxide (SO2) are among the major contributors to environmental pollution in developing countries, so measuring and controlling them is significant for human health. This paper proposes a new approach using the Internet-of-Things (IoT) to measure and control air pollution in developing countries. Various sensors for temperature, humidity, and smoke are used to collect data, which are sent to a central server through an access point. Numerous techniques are applied to the stored data to investigate increases in the levels of pollution, temperature, and other parameters that cause air pollution. If they rise above danger levels, an alert signal is sent across the whole city, and the precautionary measures that citizens need to take are broadcast. The proposed IoT model has shown superior performance compared with related works concerning factors such as CO2 and NO2.
    Keywords: pollution; pollutants; sensors; internet of things; temperature; controller.
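The alerting step reduces to threshold checks on the stored readings; a sketch with illustrative danger levels (the threshold values and key names are ours, not from the paper):

```python
# Illustrative danger thresholds; real deployments would use
# national air-quality standards.
DANGER_LEVELS = {"co2_ppm": 1000.0, "no2_ppb": 100.0, "so2_ppb": 75.0}

def check_readings(readings, danger=DANGER_LEVELS):
    """Return the pollutants whose readings exceed their danger level;
    a non-empty result would trigger the city-wide alert broadcast."""
    return [name for name, value in readings.items()
            if name in danger and value > danger[name]]
```

An expert-system layer, as proposed in the paper, would extend these crisp thresholds with rules combining several parameters (e.g. temperature and humidity alongside pollutant levels).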

  • An empirical study for customer relationship management in the banking sector using machine learning techniques   Order a copy of this article
    by Guru Prasad Dash, Bhagirathi Nayak 
    Abstract: Customer acquisition is necessary for a new bank, while retaining existing customers is more productive and cost-effective. CRM for Indian bankers aims to develop and retain customers, and to view the whole organisational structure as a fully integrated effort to seek, build, and meet the needs of customers. Deposit and innovation capacity is extremely low in rural areas but immense in urban regions, where the majority of potential product schemes are well known. It was assumed that analytics and CRM in the banking sector were appropriate in these circumstances. The study is an analytical survey using machine learning techniques, aimed at investigating the technological progress made by commercial banks and how far public and private sector banks have improved the performance of the financial sector.
    Keywords: CRM; financial sector; machine learning.

  • Data visualization using augmented reality for an education system   Order a copy of this article
    by Sumit Hirve, Pradeep Reddy CH 
    Abstract: Data is present in abundance in today's world, whether in education, e-commerce, defence or many other sectors of the nation, but the real benefit arises when information is extracted from such huge data for use in educational, scientific, or commercial fields. Data visualisation plays an important role in understanding what the data represents while processing it and inferring results from it. Students, the building and budding blocks of the nation, are exposed to abundant data at an early age, so this paper focuses on improving the educational system so that students can gain a proper understanding of concepts by visualising the matter at hand. The main objective of the paper lies in innovating educational technology by introducing augmented reality into the data visualisation process.
    Keywords: augmented reality; 3-D visualisation; Hololens; education system; Unity; Visual Studio.

  • Design of QoS on data collection in wireless sensor network for automation process   Order a copy of this article
    by Ghaida Muttashar Abdulsaheb, Hassan Jaleel Hassan, Osamah Ibrahim Khalaf 
    Abstract: Wireless sensor networks (WSNs) facilitate analysis of the physical world at an unprecedented resolution. A WSN connects many low-cost, low-power sensor nodes that are capable of sensing or actuating and can interact with one another over short distances through a low-power radio. This article provides strategies for enhancing and maintaining Quality of Service (QoS) in WSN communication. To this end, we implemented a classification-based multi-layer WSN stack prototype that takes CDC applications into account. This method decreases the load that control messages impose at the access point, resulting in higher application throughput. In the presence of reasonably large unicast data traffic, the CDC multi-layer stack achieves 85.16% throughput with less than 12.14% delay. While WSN is the transmission technique and automation is the implementation scenario here, the methods can be extended to other WSNs and scenarios with related communication patterns and QoS issues.
    Keywords: WSN; QoS; throughput; end-to-end delay; network traffic.