Forthcoming articles

International Journal of Computer Applications in Technology (IJCAT)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Computer Applications in Technology (105 papers in press)

Regular Issues

  • Simulation and visualisation approach for accidents in chemical plants   Order a copy of this article
    by Feng Ting-Fan, Tan Jing, Liu Jin, Deng Wensheng 
    Abstract: A new general approach that lays the foundation for building a more effective, real-time evacuation system for accidents in chemical plants is presented. In this work, we build the mathematical models and realise automatic grid generation, based on physical models stored in advance, with several algorithms in the jMonkeyEngine environment. Meanwhile, the results of simulations computed with the finite difference method (FDM) are visualised in coupling with the physical models. Taking fire as an example, including fires with single and multiple ignition sources, demonstrates the feasibility of the presented approach. Furthermore, a coarse fire alarm and evacuation system has been developed with multiple SceneNodes and a roam system, which also includes the making and importing of the physical models. Improving the accuracy of the mathematical models, the adaptability and refinement of the grids, and the universality of the evacuation system remain directions for future work.
    Keywords: simulation; chemical accidents; alarm and evacuation system; jMonkeyEngine.
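
    A minimal numerical sketch related to the simulation side of the abstract above: an explicit finite-difference diffusion of a temperature field with two ignition sources. The grid size, diffusivity and boundary handling are illustrative assumptions, not the paper's fire model or its jMonkeyEngine coupling.

```python
# Explicit 2-D finite difference (FTCS) diffusion of a temperature field.
# Values are illustrative only; boundaries are periodic for brevity.
import numpy as np

nx, ny = 60, 60
alpha, dx, dt = 0.1, 1.0, 1.0          # diffusivity, grid spacing, time step
T = np.full((nx, ny), 20.0)            # ambient temperature field (deg C)
sources = [(15, 15), (45, 40)]         # two ignition sources (grid indices)

for step in range(500):
    for i, j in sources:
        T[i, j] = 800.0                # hold source cells at a high temperature
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T = T + alpha * dt * lap           # stable while alpha*dt/dx^2 <= 0.25

print("peak temperature:", T.max(), "mean:", round(T.mean(), 1))
```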

  • Detecting occluded faces in unconstrained crowd digital pictures   Order a copy of this article
    by Chandana Withana, S. Janahiram, Abeer Alsadoon, A.M.S. Rahma 
    Abstract: Face detection and recognition mechanisms are widely used in various multimedia and security devices. There is a significant number of studies on face recognition, particularly in image processing and computer vision. However, significant challenges remain in existing systems owing to limitations of the underlying algorithms. Viola Jones and the Cascade Classifier are considered the best algorithms among existing systems; they can detect faces in an unconstrained crowd scene with half- and full-face detection methods. However, the limitations of these systems affect accuracy and processing time. This project proposes a solution called Viola Jones and Cascade (VJaC), based on a study of current systems, their features and their limitations. The system considers three main factors: processing time, accuracy and training. These factors are tested on different sample images and compared with current systems.
    Keywords: face detection; unconstrained crowd digital pictures; face recognition.
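
    For orientation, a minimal Viola-Jones-style detection sketch using OpenCV's bundled Haar cascade; the proposed VJaC system, its training and its half-face handling are not reproduced here, and the file names are placeholders.

```python
# Baseline Viola-Jones-style face detection with OpenCV's Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("crowd.jpg")                    # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                      # improve contrast before detection

# detectMultiScale returns (x, y, w, h) boxes; tune scaleFactor/minNeighbors
# to trade accuracy against processing time, as the abstract discusses.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                  minSize=(24, 24))
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("crowd_detected.jpg", image)
print(f"{len(faces)} face(s) detected")
```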

  • Real-time robust tracking with part-based and spatio-temporal context   Order a copy of this article
    by Yanxia Wei, Zhen Jiang, Junfeng Xiao, Xinli Xu 
    Abstract: Owing to the significant performance of correlation filters in terms of computational convenience, correlation filter-based trackers have become increasingly popular in the visual object tracking community. However, complete or partial occlusion is one of the major factors that seriously impact tracking performance. To address this issue, we propose a novel tracking algorithm that integrates the results from global and local correlation filters to estimate a more accurate position of the target. We then introduce an occlusion detection mechanism to eliminate the impact of occlusion on the final position of the object. In addition, the proposed tracker employs spatial geometric constraints between the global object and its local patches to preserve the structural integrity of the object. To verify our method, we conduct extensive qualitative and quantitative experiments on challenging benchmark image sequences.
    Keywords: tracking; correlation filter; occlusion; part-based strategy; spatial geometric constraint.

  • Maximum power production operation of doubly fed induction generator wind turbine using adaptive neural network and conventional controllers   Order a copy of this article
    by Hazem Hassan Ali, Ghada Saeed Elbasuony, Nashwa Ahmad Kamal 
    Abstract: Producing maximum power by controlling the direct-axis current of the Rotor Side Converter (RSC) of a Doubly Fed Induction Generator (DFIG) wind turbine is necessary in order to reach the maximum power point quickly and to protect the working parts of the RSC from high current overshoot. An assessment study between adaptive Neural Network (NN) and conventional Proportional Integral (PI) controllers for the control of the RSC direct-axis current is introduced in this paper. An NN controller based on Levenberg-Marquardt backpropagation (LMBP) is designed and trained mainly to control the RSC direct-axis current. The RSC direct-axis current is also estimated using a PI controller that controls the speed of the DFIG according to the optimum tip speed ratio obtained by a genetic algorithm. The simulation results demonstrate that the RSC based on the NN controller is better than the RSC based on the conventional speed regulator in protecting RSC parts from high current overshoot.
    Keywords: DFIG wind turbine; MPPT; optimisation; NN controller; conventional controller.

  • What are students thinking and feeling? Understanding them from social data mining   Order a copy of this article
    by Hua Zhao, Yang Zuo, Chunming Xu, Hengzhong Li 
    Abstract: Students' digital footprints on social media shed light on their personal experiences. Mining these social data is useful for educators to understand students' mood swings and provide corresponding help. However, owing to the sharp increase in social data, analysing these data manually is impossible. In this paper, we focus on Chinese college students and explore a method to better understand them based on social data mining. The method first collects the social data related to students and creates a hierarchical category system based on analysis of the data contents; secondly, it proposes a simple but effective multi-class classification method to classify the data into appropriate categories; finally, it carries out sentiment analysis of each category and then looks into the students' emotion evolution process. Experimental results show that the postgraduate entrance exam, final exams and other professional certificate exams are three prominent concerns of students, and that students express worry about them.
    Keywords: social data mining; classification; sentiment analysis; education.
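
    A minimal sketch of the multi-class classification step using a TF-IDF plus linear SVM pipeline in scikit-learn; the inline posts and category names are invented, and the paper's hierarchical Chinese-language categories and sentiment stage are not shown.

```python
# Tiny multi-class text classification sketch with scikit-learn.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

posts = [
    "so nervous about the postgraduate entrance exam",
    "final exam week is exhausting",
    "just passed my professional certificate test",
    "great basketball game with classmates today",
]
labels = ["entrance_exam", "final_exam", "certificate_exam", "campus_life"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # word and bigram features
    ("svm", LinearSVC()),
])
clf.fit(posts, labels)

print(clf.predict(["worried about the final exam tomorrow"]))
```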

  • Analysis of web browser for digital forensics investigation   Order a copy of this article
    by AlOwaimer AlOwaimer, Shailendra Mishra 
    Abstract: In today's digitalised world, more and more information is moving online and the volume of online data grows day by day, giving rise to the field of data science. The gateway for surfing the internet is the web browser, and such a huge amount of data is vulnerable to people with malicious intentions. Security is compromised, leaving the data exposed. A browser can be exploited by an internal malicious user, and the main hindrance arises when all the browsing data are deleted. A forensic investigation needs to extract all the pieces of evidence, such as history, cookies, URLs, sessions and saved passwords, from the cloud space provided by the browser. The research method is a mix of qualitative and quantitative approaches. A scenario is created in a virtual environment in which there is a victim machine whose browser is exploited. The Dumpzilla and bulk extractor forensics tools were used to capture history, URLs, cookies, sessions, add-ons and extensions, and an SQLite database is used alongside these tools to obtain the required information. The extracted information is analysed to find the malicious user. Two different platforms are used for the authentication and verification of the pieces of evidence collected. The culprit is caught by matching web browsing activities from the victim machine and another machine in the same place as the victim machine.
    Keywords: digital forensics; web browser forensics; forensic investigation; digital evidence.
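
    A small sketch of the kind of query Dumpzilla automates: reading visit history from a Firefox profile's places.sqlite with Python's sqlite3. The profile path is a placeholder, and a forensic copy of the database is assumed rather than the live file.

```python
# Pull recent browsing history from a Firefox profile's places.sqlite.
import sqlite3
from datetime import datetime, timezone

DB = "/evidence/firefox_profile/places.sqlite"   # hypothetical evidence path

con = sqlite3.connect(f"file:{DB}?mode=ro", uri=True)   # read-only connection
rows = con.execute(
    """SELECT url, title, visit_count, last_visit_date
       FROM moz_places
       WHERE last_visit_date IS NOT NULL
       ORDER BY last_visit_date DESC LIMIT 20"""
)
for url, title, visits, last_visit in rows:
    # moz_places stores timestamps in microseconds since the Unix epoch
    ts = datetime.fromtimestamp(last_visit / 1_000_000, tz=timezone.utc)
    print(ts.isoformat(), visits, url, title or "")
con.close()
```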

  • IoT data security with DNA-genetic algorithm using blockchain technology   Order a copy of this article
    by Sultan Saad Alshamrani, Amjath Basha 
    Abstract: The Internet of Things (IoT) is an on-demand technology that is used in different applications and includes different sensors, embedded devices and other objects connected to the internet. IoT devices are mainly designed to gather diverse kinds of data from different sources and to transfer the data in digitalised form. Data security is an important issue in IoT technology, as it affects the privacy of the data. The main objectives of this work are to handle large amounts of IoT data with a cryptographic technique for data transmission with high security; to enhance the anonymity, security and reliability of IoT; to establish consensus and trust over decentralised networks by addressing the difficulties of a trustless environment; and to enable agreement between unique users by using a consensus mechanism in a blockchain in a decentralised manner. A new lightweight encryption technique called DNA-GA (deoxyribonucleic acid genetic algorithm) is proposed for resource-restricted IoT devices, which gives a high level of sensed-data security. Blockchain is another method proposed in this research to store data in the form of blocks using a hash function, where only the respective user can decrypt the data with the generated hash key, so that intruders are not able to access the original data. Several experiments were conducted with different kinds of sensed data for the proposed method, and the results are compared with the existing method. The results show that the proposed method outperforms the existing method in handling a large amount of IoT data by reducing encryption and decryption time and providing a high level of security to protect the data.
    Keywords: DNA; GA; blockchain; IoT; hash function; decentralised network.
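
    An illustrative toy only: a 2-bit-per-base DNA encoding of bytes and a SHA-256 hash-chained block, hinting at how DNA coding and blockchain storage fit together; the actual DNA-GA cipher, its keys and the consensus mechanism are not reproduced here.

```python
# Toy DNA encoding (2 bits per base) plus a SHA-256 hash-chained block.
import hashlib
import json
import time

BASES = "ACGT"   # 2 bits of data per nucleotide

def dna_encode(data: bytes) -> str:
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_decode(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

def make_block(payload: str, prev_hash: str) -> dict:
    block = {"time": time.time(), "payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

reading = b'{"sensor":"temp-01","value":23.4}'       # hypothetical sensed data
chain = [make_block(dna_encode(reading), prev_hash="0" * 64)]
print(chain[0]["hash"], dna_decode(chain[0]["payload"]))
```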

  • An analytical review of texture feature extraction approaches   Order a copy of this article
    by Mohammad Reza Keyvanpour, Shokofeh Vahidian, Zahra Mirzakhani 
    Abstract: Image registration is an essential task in computer vision and image processing, with many applications, for instance in medical systems. Image registration consists of four main steps: feature extraction, feature matching, transform model estimation, and resampling and transformation. The feature extraction step makes the image registration process more accurate. Despite a large number of survey articles on texture feature extraction approaches, a comprehensive classification of the approaches that also identifies the strengths and weaknesses of each is still required. The novelty of this paper is therefore an analytical framework with three major components: a complete classification of texture feature extraction approaches; crucial evaluation criteria; and an analytical and qualitative comparison of the approaches, which simplifies the accurate selection of an approach for the intended application. The framework can also guide the development of texture feature extraction approaches in future research.
    Keywords: image registration; feature extraction approaches; challenges; benefits; analytical framework.

  • Data science based landscape ecology for traditional village landscape protection   Order a copy of this article
    by Tong Liu, Shijun Wang, Zi Wang, Bingxin Li, Sulin Guo, Baowei Wei 
    Abstract: With the revitalisation of rural areas and the advancement of new rural construction, the protection of traditional villages has received more attention. The optimisation of village layout is an essential issue in the protection of traditional villages; it is not only an objective demand of rural development, but also an important way to protect the ecological and landscape patterns of the original villages and avoid damage to them. With the increasing development of traditional villages, their overall scale and shape have undergone profound changes, and the convergence of village landscapes and the destruction of original landscapes occur from time to time. In this new situation, traditional village protection faces new opportunities and challenges. By using landscape ecology theory to guide the preservation of traditional villages, the integrity, authenticity and continuity of traditional villages can be maintained. This article takes Bailong Village in Yueqing Town and Shuinan Village in Shixian Town, Yanbian Prefecture, Jilin Province, as the research objects, and applies the landscape pattern index to quantitatively analyse their overall landscape. On this basis, a village landscape optimisation and preservation strategy is proposed, which has reference value for the preservation and reuse of traditional villages in Jilin Province.
    Keywords: landscape ecology; traditional village protection; landscape protection; landscape pattern index.

  • A new model of transformer operation state evaluation based on analytic hierarchy process and association rule mining   Order a copy of this article
    by Zhenyu Zhou, Haifeng Ye, Huigang Xu 
    Abstract: In order to establish a transformer state evaluation model for power grid operation and maintenance management, a state risk evaluation method based on the analytic hierarchy process (AHP) and association rule mining is proposed, building on the ageing mechanism of the transformer. Based on big data analysis of the actual state quantities of the transformer, the subjective weight coefficients of the different state quantities are determined by the AHP, and the objective weight coefficients of the comprehensive state quantity are determined by association rule mining. The fusion of the subjective and objective weight coefficients is completed using the mean square deviation method. Practical results show that the model can evaluate the operation state of a transformer comprehensively and accurately.
    Keywords: transformer; ageing mechanism; state evaluation model; analytic hierarchy process; association rules mining.
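
    A compact AHP sketch for the subjective-weight step: derive weights for three state quantities from a pairwise comparison matrix via the principal eigenvector and check consistency. The matrix entries are illustrative, and the association-rule (objective) weights and the fusion step are omitted.

```python
# AHP subjective weights from a Saaty-scale pairwise comparison matrix.
import numpy as np

# A[i, j] = relative importance of criterion i over criterion j (illustrative)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                       # subjective weight coefficients

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)  # consistency index
RI = 0.58                             # Saaty's random index for n = 3
print("weights:", w.round(3), "CR:", round(CI / RI, 3))
```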

  • Comprehensive survey of user behaviour analysis on social networking sites   Order a copy of this article
    by Pramod Bide, Sudhir Dhage 
    Abstract: Social networking sites play an important role in every person's life. Users start expressing their emotions online whenever any humanitarian or crisis-like event occurs; many sub-events are stirred up and the internet is flooded with people tweeting or posting their opinions. Identifying user behaviours, their content and their interactions with others can help in event prediction, cross-event detection, user preferences, etc. For these reasons, our research was divided into studying user behaviour with respect to content-centric approaches, probabilistic approaches, and a hybrid incorporating the two. We further investigate the existence of multiple OSNs and how they affect user behaviour. The purpose of this paper is to survey the existing research methodologies and techniques along with discussion and comparative studies. User behaviour analysis is carried out based on content-centric, probabilistic and hybrid approaches. Content-centric analysis deals with analysis of the content posted, which gives rise to various applications such as gender prediction, malicious user detection, real-time user preferences, and the influence of emotional content on users. We observe that in the probabilistic approach, most of the papers reviewed employed clustering mechanisms followed by probability distributions for the analysis of user behaviour.
    Keywords: social media; user behaviour; content centric features; probabilistic features; hybrid features.

  • Moth optimisation algorithm with local search for the permutation flow-shop scheduling problem   Order a copy of this article
    by Anmar Abuhamdah, Malek Alzaqebah, Sana Jawarneh, Ahmad Althunibat, Mustafa Banikhalaf 
    Abstract: This work investigates the use of the Moth-Flame Optimisation (MFO) algorithm in solving the permutation flow-shop scheduling problem and proposes further optimisations. MFO is a population-based approach that simulates the behaviour of real moths by exploring the search space randomly without employing any local search, and it may therefore become stuck in local optima. We therefore propose a Hybrid Moth Optimisation Algorithm (HMOA) that embeds a local search to better exploit the search space. HMOA employs three search procedures to intensify and diversify the search in order to prevent the algorithm from becoming trapped in local optima, and it adaptively selects the search procedure based on improvement ranks. In order to evaluate the performance of MFO and HMOA, we perform a comparison against other approaches drawn from the literature. Experimental results demonstrate that HMOA produces better-quality solutions and outperforms many other approaches on the Taillard benchmark, which is used as the test domain.
    Keywords: flow-shop scheduling; flow-shop scheduling problem; makespan; moth-flame optimisation algorithm; local search; adaptive moth optimisation algorithm.
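
    A sketch of two building blocks such hybrids rely on: the permutation flow-shop makespan evaluation and a simple swap-based local search. The processing-time matrix is invented, and the MFO/HMOA population machinery and adaptive procedure selection are not shown.

```python
# Permutation flow-shop makespan plus a basic swap local search.
import random

def makespan(perm, p):
    """p[j][m] = processing time of job j on machine m (machines in order)."""
    n_machines = len(p[0])
    finish = [0.0] * n_machines          # completion time of the last job on each machine
    for j in perm:
        for m in range(n_machines):
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + p[j][m]
    return finish[-1]

def swap_local_search(perm, p, iters=200):
    best, best_val = list(perm), makespan(perm, p)
    for _ in range(iters):
        i, j = random.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]       # swap two jobs
        val = makespan(cand, p)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

p = [[5, 3, 4], [2, 6, 3], [4, 4, 2], [3, 2, 5]]   # 4 jobs x 3 machines (invented)
print(swap_local_search(list(range(len(p))), p))
```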

  • Detection and mitigation of attacks in SDN-based IoT network using SVM   Order a copy of this article
    by Shailendra Mishra 
    Abstract: Adopting software defined networking (SDN) raises many challenges, including scalability and security, in internet of things (IoT) networks. The centralised SDN controller in an IoT-SDN network is responsible for managing critical network operations. A growing network size increases the load on the controller and brings security challenges, such as cascade failure of controllers, unauthorised access to the controllers, configuration issues, and distributed denial of service (DDoS) attacks. The DDoS attack is one of the most acute threats in the present scenario, and the attacker can exploit vulnerabilities that are located mostly in the control plane. Previous research studies have identified strategies and proposed some solutions. In this research, the attack scenario and the security of a multiple-controller network are simulated and evaluated. The simulation was conducted in the Mininet SDN emulator; the hosting OS was Ubuntu Linux, Wireshark was used for analysing the network traffic, and support vector machines were used to classify the traffic flows. DDoS attacks were detected, and mitigation was performed, using a support vector machine learning-based approach. The results show that the support vector machine's sensitivity, specificity and accuracy are excellent, in the range of 98.7% to 98.8%. The security solutions are fast and effective in mitigating DDoS attacks.
    Keywords: software defined networking; distributed denial of service attack; support vector machine; DDoS mitigation.
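
    A minimal sketch of the classification stage: an RBF-kernel SVM separating benign from DDoS flow records. The synthetic features below stand in for the Mininet/Wireshark-derived flows used in the paper.

```python
# SVM classification of (synthetic) traffic-flow feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Synthetic flow features: [packets/s, bytes/s, flow duration]
benign = rng.normal([50, 4e4, 10], [15, 1e4, 4], size=(300, 3))
ddos = rng.normal([900, 6e5, 1], [200, 1e5, 0.5], size=(300, 3))
X = np.vstack([benign, ddos])
y = np.array([0] * 300 + [1] * 300)          # 1 = attack traffic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te), target_names=["benign", "ddos"]))
```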

  • A neural adaptive level set method for wildland forest fire tracking   Order a copy of this article
    by Aymen Mouelhi, Moez Bouchouicha, Mounir Sayadi, Eric Moreau 
    Abstract: Tracking smoke and fire in videos can provide helpful regional measures to precisely evaluate the damage caused by fires. In security applications, real-time video segmentation of both fire and smoke regions is a crucial operation to avoid disaster. In this paper, we propose a robust tracking method for fire regions in forest wildfire videos using a neural pixel classification approach combined with a nonlinear adaptive level set method based on the Bayesian rule. Firstly, an estimation function is built from chromatic and statistical features using linear discriminant analysis and a trained multilayer neural network in order to obtain a preliminary fire localisation in each frame. This function is used to compute an initial curve and the level set evolution parameters, providing fast, refined fire segmentation in each processed frame. The experimental results of the proposed method prove its accuracy and robustness when tested on a variety of wildfire-smoke scenarios.
    Keywords: fire detection; linear discriminant analysis; neural networks; active contour; level set; Bayesian criterion.

  • Robust tracking of moving hand in coloured video acquired through a simple camera   Order a copy of this article
    by Richa Golash, Yogendra Kumar Jain 
    Abstract: Human-machine interaction using dynamic hand gestures has become an interesting yet challenging area in many respects. A hand is a non-rigid, subtle object that moves with varying speed and along an undefined path. Additionally, real-time backgrounds are not stable. RGB data of a moving hand are sensitive to light variation, camera view, and randomness in behaviour; thus continuous detection and localisation of the hand region in RGB images is strenuous. Some researchers prefer advanced cameras to avoid these problems and some apply deep learning in their techniques. The first solution puts limitations on the environment and increases the cost of the application; the second requires a large database for the initial training of the deep neural network architecture. In this paper, we provide a solution that combines the benefits of scale-invariant feature transform (SIFT) features with the automatic feature extraction mechanism of the region-based convolutional neural network (R-CNN), a deep learning network, for robust tracking of a moving hand in coloured video acquired through a camera that does not have very high resolution. The efficiency of the proposed methodology is 96.84% against a simple background and 94.73% against a complex background. A comparative analysis of the proposed system with contemporary techniques using RGB images shows that initial hand detection using R-CNN followed by tracking using SIFT is capable of tracking hand movement with high accuracy against unconstrained backgrounds. In the future, the method can be implemented to design user-friendly and economical natural user interfaces.
    Keywords: computer vision; deep learning; R-CNN; visual object recognition; feature extraction; scale-invariant feature transform; visual object tracking.
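
    A sketch of the SIFT side of such a pipeline: matching keypoints between a hand template (assumed to come from the R-CNN detector) and the next frame to estimate the new hand position. The file names are placeholders.

```python
# SIFT keypoint matching between a hand template and the next video frame.
import cv2
import numpy as np

template = cv2.imread("hand_roi.png", cv2.IMREAD_GRAYSCALE)   # from the detector
frame = cv2.imread("next_frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# Lowe's ratio test on 2-NN matches keeps only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

if len(good) >= 4:
    pts = np.float32([kp2[m.trainIdx].pt for m in good])
    cx, cy = pts.mean(axis=0)           # rough new hand position in the frame
    print(f"estimated hand centre: ({cx:.1f}, {cy:.1f}) from {len(good)} matches")
else:
    print("too few matches; fall back to re-detection")
```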

  • A service-based software architecture for enabling the storage of electronic health records using blockchain   Order a copy of this article
    by Ítalo Lima, André Araujo, Rychard Souza, Henrique Couto, Valéria Times 
    Abstract: The healthcare sector requires computational solutions with reliable authenticity features for the storage and retrieval of electronic health record data. To address this important issue, this article proposes a service-based software architecture to extract data from different legacy databases, standardise the patients' clinical data requirements, and store the data using different blockchain technologies. To achieve this, the software architecture has been designed to guarantee independence from the data storage technology. In addition, a data meta-schema and a set of mapping rules have been specified to store and organise clinical data following an international health standard. To validate the presented solution, the real-world scenario of a Brazilian healthcare institution has been used to evaluate the data extraction, standardisation and storage capabilities on two blockchain platforms widely used in the information technology market.
    Keywords: health information systems; blockchain; software architecture; electronic health record.

  • AI augmentation in the field of digital image processing   Order a copy of this article
    by Rahul Malik 
    Abstract: We are all well aware that identifying information visually is more effective and efficient for human intelligence. One of the primary modes of interacting with our surroundings and understanding a situation is through images, which can act as a crucial source of information for activities related to human intelligence. For these reasons, the importance of image processing is growing progressively. The fast pace of improving technology, particularly in computing, creates a base for image processing related applications. This paper's primary focus is to achieve a better image processing impact using AI as part of image processing. Image segmentation divides an image into different regions in order to extract and identify its features; such problems can be considered as combinatorial optimisation. Initially, this paper introduces, elaborates and mathematically represents the essential theory of the ant colony algorithm. Furthermore, part of this paper deals with improving the global search by introducing the crowding function of fish into the algorithm. Finally, the improved algorithm is used to segment images to improve the impact of the segmentation process. The results demonstrate the feasibility of the algorithm, a significant improvement in performance, and optimisation while segmenting the images.
    Keywords: computer vision; image processing; ant colony algorithm; digital image; image segmentation; artificial intelligence algorithm.

  • Low-cost thermal explorer robot using a hybrid neural networks and intelligent bug algorithm model   Order a copy of this article
    by Willian Baunier De Melo, David Calhau Jorge, Vinicius Abrão Marques 
    Abstract: Autonomous navigation requires an artificial agent that is able to move independently, adapting to the environment. The robot's sensors analyse the surrounding environment and learn, from successful exploration experiences, to plot the best routes and avoid obstacles. This paper proposes a new navigation algorithm that adapts the intelligent bug algorithm (IBA) and combines it with artificial neural networks. In addition, the approach aims to reduce costs by using low-cost sensors and a proposed thermal measurement system composed of a matrix infrared sensor superposed on a regular camera. The experimental results show that the novel algorithm is efficient: the prototype avoids collisions and manages to optimise the route, and the thermal camera demonstrates accuracy in measuring temperatures and identifying different thermal zones. Moreover, the robot's low cost and simple operation make possible its use in destructive missions in locations totally inhospitable for humans, facilitating its implementation for research and testing.
    Keywords: explorer robot; raspberry pi; neural networks; autonomous navigation; cost reduction.

  • A distributed design of ripple-spreading algorithms for path optimisation problems   Order a copy of this article
    by Tian-Qi Wang, Gong-Peng Zhang, Xiao-Bing Hu, Hongji Yang 
    Abstract: As a relatively new nature-inspired algorithm, the ripple-spreading algorithm (RSA) exhibits some advantageous features when resolving various path optimisation problems (POPs) compared with both traditional deterministic algorithms and evolutionary approaches: e.g., RSA is a multi-agent, bottom-up simulation model that is highly flexible to modification (like many evolutionary approaches), and it can guarantee optimality (like many deterministic algorithms). Towards real applications to large-scale POPs, RSA still needs to improve its computational efficiency. Previously, we took measures that traded optimality for computational efficiency. This paper, for the first time, proposes a way to significantly improve the computational efficiency of RSA without sacrificing optimality. This is done by taking advantage of the multi-agent nature of RSA, i.e., a multi-agent model is naturally friendly to distributed design and parallel computing. Therefore, this paper reports a distributed design of RSA for POPs. Preliminary experimental results clearly demonstrate the effectiveness and efficiency of the new design.
    Keywords: ripple-spreading algorithm; distributed design; parallel computation; path optimisation; computational efficiency.

  • A less computational complexity clustering algorithm based on dynamic K-means for increasing lifetime of wireless sensor networks   Order a copy of this article
    by Anupam Choudhary, Sapna Jain, Abhishek Badholia, Anurag Sharma, Brijesh Patel 
    Abstract: Clustering in wireless sensor networks is a critical issue for network lifetime, energy efficiency, connectivity and scalability. Sensor nodes are capable of collecting data from any geographical region using a routing protocol. This research endeavours to design a clustering algorithm of low computational complexity for hierarchical homogeneous wireless sensor networks to extend network lifetime. It forms an optimal number of clusters and reduces the data communication span of sensor nodes using a dynamic K-means algorithm. Selection of a suitable cluster head is based on the ratio of the remaining energy of a sensor node to its distance from the centre of the cluster. The simulation results prove that the presented algorithm achieves better energy efficiency when compared with other hierarchical homogeneous cluster-based algorithms. It increases the network lifetime, the number of alive nodes per round, the data delivered to the base station, and the time for the first, middle and last nodes to die, for scalable situations in terms of node density and size of the sensing region.
    Keywords: wireless sensor network; sensor node; hierarchical homogeneous cluster-based protocols; cluster head; base station; network lifetime.
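
    A small sketch of the clustering idea: K-means groups the nodes, then each cluster elects as head the node with the highest residual-energy-to-distance ratio, as described in the abstract. Node positions and energies are randomly generated stand-ins.

```python
# K-means clustering of sensor nodes plus energy/distance cluster-head election.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
positions = rng.uniform(0, 100, size=(100, 2))     # 100 nodes in a 100 m field
energy = rng.uniform(0.5, 2.0, size=100)           # residual energy (J)

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(positions)

for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dist = np.linalg.norm(positions[members] - km.cluster_centers_[c], axis=1)
    ratio = energy[members] / np.maximum(dist, 1e-9)   # avoid divide-by-zero
    head = members[np.argmax(ratio)]
    print(f"cluster {c}: {len(members)} nodes, head = node {head}")
```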

  • Reduced-order modelling of parameterised incompressible and compressible unsteady flow problems using deep neural networks   Order a copy of this article
    by Oliviu Sugar-Gabor 
    Abstract: A non-intrusive reduced-order model for nonlinear parametric flow problems is developed. It is based on extracting a reduced-order basis from full-order snapshots via proper orthogonal decomposition and using both deep and shallow neural network architectures to learn the reduced-order coefficients variation in time and over the parameter space. Even though the focus of the paper lies in approximating flow problems of engineering interest, the methodology is generic and can be used for the order reduction of arbitrary time-dependent parametric systems. Since it is non-intrusive, it is independent of the full-order computational method and can be used together with black-box commercial solvers. An adaptive sampling strategy is proposed to increase the quality of the neural network predictions while minimising the required number of parameter samples. Numerical studies are presented for unsteady incompressible laminar flow around a circular cylinder, transonic inviscid flow around a pitching NACA0012 aerofoil and a gust response for a modified NACA0012 in subsonic compressible flow. Results show that the proposed methodology can be used as a predictive tool for unsteady parameter-dependent flow problems.
    Keywords: non-intrusive parameterised reduced-order model; artificial neural networks; proper orthogonal decomposition; incompressible and compressible flow model order reduction.
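
    A minimal non-intrusive ROM sketch: a POD basis from snapshots via the SVD, then a neural-network regression from (time, parameter) to the reduced coefficients. The synthetic snapshot matrix stands in for full-order CFD data, and the adaptive sampling strategy is omitted.

```python
# POD + neural-network regression of reduced coefficients (non-intrusive ROM).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_dof, n_snap = 2000, 200
t = np.linspace(0.0, 1.0, n_snap)
mu = rng.uniform(0.5, 1.5, n_snap)                 # flow parameter per snapshot
x = np.linspace(0.0, 1.0, n_dof)[:, None]
snapshots = np.sin(2 * np.pi * (x - t) * mu) + 0.01 * rng.standard_normal((n_dof, n_snap))

# POD: the leading left singular vectors form the reduced basis U[:, :r]
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 8
coeffs = (U[:, :r].T @ snapshots).T                # reduced coefficients, shape (n_snap, r)

inputs = np.column_stack([t, mu])                  # (time, parameter) pairs
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(inputs, coeffs)

# Predict a new operating point and lift back to full order.
a_new = net.predict([[0.37, 1.1]])
u_new = U[:, :r] @ a_new.ravel()
print("reconstructed field shape:", u_new.shape)
```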

  • Fusion-based Gaussian mixture model for background subtraction from videos   Order a copy of this article
    by T. Subetha, S. Chitrakala, M. Uday Theja 
    Abstract: Human Activity Recognition (HAR) aims at recognising and interpreting the activities of humans from videos, and comprises background subtraction, feature extraction and classification stages. Among these, the background subtraction stage is essential to achieving a good recognition rate when analysing videos. The proposed Fusion-based Gaussian Mixture Model (FGMM) background subtraction algorithm extracts the foreground from videos and is invariant to illumination, shadows and dynamic backgrounds. The proposed FGMM algorithm consists of three stages: background detection, colour similarity, and colour distortion calculation. The Jeffries-Matusita distance measure is used to check whether the current pixel matches the Gaussian distribution, and this value is used to update the background model. A weighted Euclidean-based colour similarity measure is used to eliminate shadows, and a colour distortion measure is adopted to handle illumination variations. The extracted foreground is binarised so that interest points can easily be extracted, and the foreground, represented as white pixels, is stored in the frame. The algorithm is tested on test sets gathered from publicly available benchmark datasets, including the KTH, Weizmann, PETS and change detection datasets. The results prove that the proposed FGMM exhibits better accuracy in foreground detection compared with prevailing approaches.
    Keywords: human activity recognition; Gaussian mixture model; fusion-based Gaussian mixture model; background subtraction.
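
    For reference, the standard GMM background subtractor available in OpenCV (MOG2); the paper's fusion-based FGMM adds colour-similarity and colour-distortion stages that are not shown here, and the video file name is a placeholder.

```python
# Baseline GMM background subtraction with OpenCV's MOG2.
import cv2

cap = cv2.VideoCapture("walking.avi")              # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # MOG2 marks shadows as 127; keep only confident foreground (255) pixels.
    _, binary = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    cv2.imshow("foreground", binary)
    if cv2.waitKey(30) & 0xFF == 27:               # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```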

  • Design and analysis of search group algorithm-based PD-PID controller plus redox flow battery for automatic generation control problem   Order a copy of this article
    by Ramana Pilla, Tulasichandra Sekhar Gorripotu, Ahmad Taher Azar 
    Abstract: The ability of a redox flow battery (RFB) to minimise the tie-line power and frequency deviations of a five-area thermal power system is analysed in the present paper. Initially, a power system network with five areas and a generation rate constraint nonlinearity is designed in the MATLAB/SIMULINK environment. After that, a proportional derivative-proportional integral derivative (PD-PID) controller is evaluated for the proposed system. Finally, the RFB is installed in area-1, area-2, area-3, area-4 and area-5 for dynamic response enhancement. The simulation results show that better transient response characteristics can be obtained by using the PD-PID controller along with the RFB in area-1. A robustness analysis is also performed to show the capability of the proposed method.
    Keywords: dynamic response; generation rate constraint; PD-PID controller; redox flow battery; search group algorithm; transient response.

  • Accurate detection of network anomalies within SNMP-MIB dataset using deep learning   Order a copy of this article
    by Ghazi Al-Naymat, Hanan Hussain, Mouhammd Al-Kasassbeh, Nidal Al-Dmour 
    Abstract: An efficient algorithm to support intrusion detection systems (IDS) is required for identifying unauthorised access that attempts to compromise the confidentiality, integrity and availability of computer networks. Machine learning approaches such as (a) multilayer perceptrons, (b) support vector machines, (c) nearest neighbour classifiers and (d) ensemble classifiers, such as Random Forest (RF), show higher accuracy only when additional feature selection techniques such as InfoGain, ReliefF or genetic search are used. When the data gathered for training and testing are huge, with a large number of features, the extra computation of feature selection results in higher consumption of hardware resources (CPU, memory and bandwidth). On the other hand, another subset of machine learning, Deep Learning (DL), performs feature selection automatically and so overcomes this limitation. In this paper, a deep learning method called the Stacked Autoencoder (SAE) is proposed for detecting seven different types of network anomaly using the SNMP-MIB dataset. The autoencoder is a variant of the neural network that transforms a set of n inputs into a reduced set of m outputs (encoding); these outputs are then processed by the decoding part to reproduce an output of n dimensions, which is intended to be identical to the initial input. Autoencoders are stacked one by one to form a deep SAE. The parameters of the model are selected by trial and error to obtain the best training functions, activation functions, learning rate, etc. The proposed deep learning method attains a high accuracy of 100% and saves the extra computation and resources spent on feature selection. The proposed model is also compared with 22 prominent machine learning techniques from the following categories: (i) decision trees, (ii) discriminant analysis, (iii) support vector machines, (iv) nearest neighbour classifiers and (v) ensemble classifiers. Our model is found to outperform all the other machine learning algorithms in terms of accuracy, precision and recall.
    Keywords: deep learning; DoS; network anomalies; SNMP-MIB; detection.
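
    A minimal stacked-autoencoder sketch in Keras: unsupervised pre-training on the feature vectors, then a softmax head on the learned encoding. The feature count, layer sizes and class count are placeholders rather than the paper's tuned configuration.

```python
# Stacked autoencoder pre-training followed by a classification head (Keras).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features, n_classes = 34, 8                      # placeholders: MIB counters, 7 attacks + normal
X = np.random.rand(1000, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=1000)

inputs = keras.Input(shape=(n_features,))
encoded = layers.Dense(24, activation="relu")(inputs)
encoded = layers.Dense(12, activation="relu")(encoded)      # stacked encoder
decoded = layers.Dense(24, activation="relu")(encoded)
decoded = layers.Dense(n_features, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)   # unsupervised pre-training

# Classification head on top of the learned encoding (shares encoder weights).
clf_out = layers.Dense(n_classes, activation="softmax")(encoded)
classifier = keras.Model(inputs, clf_out)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(X, y, epochs=10, batch_size=32, verbose=0)
```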

  • An interpolation algorithm of B-spline curve based on S-curve acceleration/deceleration with interference pre-treatment   Order a copy of this article
    by Guirong Wang, Qi Wang 
    Abstract: A B-spline curve interpolation algorithm based on S-curve acceleration/deceleration (ACC/DEC) with interference pre-treatment is proposed to achieve a smooth transition of feed-rate and to reduce the impact of acceleration mutation in computer numerical control (CNC) machining. According to the chord error requirement, the algorithm adaptively adjusts the feed-rate at each interpolation point and divides a B-spline curve at velocity cusps. The interference points of the whole curve are found by applying the S-curve ACC/DEC calculation to the velocity cusps of the whole curve in the forward and reverse directions. Then, the feed-rate at the interference points is re-determined to avoid jerk overrun on the curve segments between mutually interfering points, so as to improve the processing stability of the machine tool. The simulation and experimental results demonstrate that the algorithm obtains a smooth transition of feed-rate and acceleration in CNC machining, and ensures that the jerk meets the ACC/DEC limits of the system. A CNC system can use this method for high-precision and high-speed machining of complex products.
    Keywords: S-curve ACC/DEC; interference pre-treatment; piecewise curve; interference points; feed-rate scheduling; CNC machine tools; interpolation algorithm.

  • A new multistable jerk system with Hopf bifurcations, its electronic circuit simulation and an application to image encryption   Order a copy of this article
    by Sundarapandian Vaidyanathan, Irene M. Moroz, Ahmed A. Abd El-Latif, Bassem Abd-El-Atty, Aceng Sambas 
    Abstract: In this work, we announce a new 3-D jerk system and show that it is chaotic and dissipative with the calculation of the Lyapunov exponents of the system. By performing a detailed bifurcation analysis, we observe that the new jerk system exhibits Hopf bifurcations. It is also shown that the new jerk system exhibits multistability behaviour with two coexisting chaotic attractors. An electronic circuit simulation of the jerk system is built using Multisim. Finally, based on the benefits of our proposed chaotic jerk system, we design a new approach to image encryption as a cryptographic application of our chaotic jerk system. The simulation outcomes prove the efficiency of the proposed encryption scheme with high security.
    Keywords: bifurcations; chaos; chaotic systems; circuit design; jerk systems; image encryption.
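
    A toy illustration of the cryptographic application mentioned above: XORing image bytes with a chaotic keystream. A logistic map stands in for the trajectory of the proposed jerk system, and the scheme shown (plain XOR, no permutation stage) is far simpler than the paper's.

```python
# Chaos-based image encryption sketch: XOR with a chaotic keystream.
import numpy as np

def chaotic_keystream(n_bytes, x0=0.612345, r=3.99):
    x, out = x0, np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)              # logistic map iteration
        out[i] = int(x * 256) % 256        # quantise the state to one byte
    return out

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image

ks = chaotic_keystream(image.size)
cipher = image ^ ks.reshape(image.shape)        # encryption
plain = cipher ^ ks.reshape(image.shape)        # decryption with the same key (x0, r)
print("lossless recovery:", bool(np.all(plain == image)))
```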

  • Compensation of variability using median and i-Vector+PLDA for speaker identification of whispering sound   Order a copy of this article
    by Vijay Sardar 
    Abstract: Speaker identification from whispered speech is difficult compared with neutral speech, as voiced phonation is missing in a whisper. The success of a speaker identification system mainly depends on the selection of appropriate audio features. The various available audio features are explored here, and it is shown that timbre features are able to identify the whispering speaker. Only the well-performing, and thus a limited set of, timbre features are selected by a hybrid selection algorithm. The timbre features brightness, roughness, roll-off, MFCC and irregularity, using the CHAIN database, improve the identification outcomes by 5.8% over the baseline system. The framework ought to be robust enough to compensate for intra-speaker and inter-speaker variability, including channel effects. Analysis using median-based timbre features indicates that intra-speaker variability is compensated: the use of median timbre features provides a further enhancement of 1.12% compared with raw timbre features and a further decline in the False Negative Rate (FNR). The use of i-Vector + probabilistic linear discriminant analysis (PLDA) and a Support Vector Machine (SVM, cosine kernel) contributes a relative improvement in accuracy of 8.13%. The reductions in the False Positive Rate (FPR) and the False Negative Rate (FNR) confirm better variability compensation.
    Keywords: whispered speaker; median timbre feature; i-Vector; cosine kernel; support vector machine.

  • A model predictive control strategy for field-circuit coupled model of PMSM   Order a copy of this article
    by Zhiyan Zhang, Pengyao Guo, Yan Liu, Hang Shi, Yingjie Zhu, Hua Liu 
    Abstract: Based on analysis of the mathematical equations and drive circuit of the permanent magnet synchronous motor (PMSM), a model predictive control strategy for the PMSM controller is proposed. The stator current discretisation model and the cost function of the model predictive control are established, and the voltage vector selection is derived. Then, the coupling mechanism among the motor, driver and controller is analysed, and the field-circuit coupled model of a 1 kW PMSM using model predictive control is set up. Next, the starting performance, load characteristics and electromagnetic field of the motor are obtained. Good speed and electromagnetic characteristics verify the effectiveness of the PMSM control strategy and the correctness of the PMSM field-circuit coupled model. Finally, the back-EMF waveforms and their harmonics for the field-circuit coupled model and for the finite-element model without drive circuit and controller are compared and analysed. The simulation results show that the amplitude of the back EMF in both models is basically the same, while the field-circuit coupled model has a high THD value, which better simulates practical conditions.
    Keywords: PMSM; model predictive control; voltage vector selection; field-circuit coupled model.

  • Mechanics of the tubing string for supercritical CO2 fracturing   Order a copy of this article
    by Wenguang Duan, Baojiang Sun, Deng Pan, Hui Cai 
    Abstract: Supercritical CO2 fracturing is one of the most efficient ways of increasing petroleum productivity. The tubing string for fracturing is necessary and plays an important role in the fracturing process. A mechanical model of the tubing string in the well for fracturing is set up, the forces on the tubing string are analysed, and the mechanical formulas are derived. The stresses on the tubing string are calculated and the strength of the tubing string is checked. The running accessibility of the tubing string through the well for fracturing is studied, and the equations for calculating the critical forces on the tubing string that cause sinusoidal buckling and helical buckling are given. Based on the finite element method, a model is set up, and the stress and deformation of the tubing string in the horizontal and deviated well sections are calculated. Results show that, under the given conditions, the tubing string is safe and efficient.
    Keywords: supercritical CO2; fracturing; tubing string; running accessibility; mechanics.

  • Ontology-based broker system for interoperability of federated cloud computing platforms   Order a copy of this article
    by Surachai Huapai, Unnadathorn Moonpen, Thepparit Banditwattanawong 
    Abstract: This paper presents an ontology-based broker system for the interoperability of federated clouds. The system can provision cloud infrastructure resources from different platforms to meet users' requirements for Infrastructure as a Service (IaaS). The system employs an ontology to enable interoperability among heterogeneous IaaS management platforms: OpenStack, Apache CloudStack and VMware ESXi. It provisions appropriate cloud-infrastructure resources from the available platforms based on a vector-space algorithm. Evaluation results on two datasets of non-scheduled and scheduled IaaS user requests show that the system is practical: the average latency to generate REST commands for virtual machine provisioning is less than a second per request and is linearly proportional to the number of provisioned servers.
    Keywords: federated-cloud computing; cloud broker; cloud ontology; infrastructure as a service; interoperability.
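
    A toy sketch of the vector-space matching idea: represent the IaaS request and each platform's available capacity as vectors and pick the feasible platform with the highest cosine similarity. The capacities and scoring rule are illustrative assumptions, not the paper's algorithm.

```python
# Vector-space matching of an IaaS request to a federated-cloud platform.
import numpy as np

platforms = {
    "OpenStack":         np.array([64.0, 256.0, 4000.0]),   # vCPU, RAM GB, disk GB
    "Apache CloudStack": np.array([32.0, 128.0, 8000.0]),
    "VMware ESXi":       np.array([16.0, 512.0, 2000.0]),
}
request = np.array([8.0, 64.0, 500.0])      # requested vCPUs, RAM, disk

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

feasible = {name: cap for name, cap in platforms.items() if np.all(cap >= request)}
best = max(feasible, key=lambda name: cosine(request, feasible[name]))
print("provision on:", best)
```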

  • On the estimation of makespan in runtime systems of enterprise application integration platforms: a mathematical modelling approach   Order a copy of this article
    by Fernando Parahyba, Rafael Z. Frantz, Fabricia Roos-Frantz 
    Abstract: Integration platforms are tools developed to support the modelling, implementation and execution of integration processes, so that data and functionality from applications in software ecosystems can be reused. The runtime system is a key piece of software in an integration platform and is directly related to its performance; makespan is a metric used to measure performance in these systems. In this paper we propose a mathematical model to estimate the makespan of integration processes that run on application integration platforms built on the theoretical task-based model. Our model has been shown to be accurate and viable for assisting software engineers in the configuration and deployment of integration processes on an actual integration platform. The model was validated by means of a set of experiments, which we report in the paper.
    Keywords: enterprise application integration; makespan; runtime system; mathematical modelling; integration platforms.

  • Gradient iterative based kernel method for exponential autoregressive models   Order a copy of this article
    by Jianwei Lu 
    Abstract: Two kernel-method-based gradient iterative algorithms are proposed for exponential autoregressive (ExpAR) models in this study. A polynomial kernel function is utilised to transform the ExpAR model into a linear-parameter model. Since the order of the linear-parameter model is large, a momentum stochastic gradient algorithm and an adaptive step-length gradient iterative algorithm are developed. Both algorithms can estimate the parameters with less computational effort. Finally, a simulation example shows that the proposed algorithms are effective.
    Keywords: ExpAR model; kernel method; linear-parameter model; momentum stochastic gradient algorithm; adaptive step-length; gradient iterative algorithm.
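
    A loose illustrative sketch under stated assumptions: an ExpAR-like series rewritten over polynomial regressors (standing in for the kernel expansion) and fitted with momentum stochastic gradient descent. The basis and step sizes are illustrative choices, not the paper's kernel construction or adaptive step-length rule.

```python
# Momentum stochastic gradient fit of a linear-in-parameters (polynomial) model
# to a simulated ExpAR-type series.
import numpy as np

rng = np.random.default_rng(1)
N = 2000
y = np.zeros(N)
for t in range(2, N):           # simulate an ExpAR(2)-like series
    w = np.exp(-0.5 * y[t - 1] ** 2)
    y[t] = (0.4 + 0.3 * w) * y[t - 1] + (-0.2 + 0.1 * w) * y[t - 2] + 0.05 * rng.standard_normal()

def phi(y1, y2):
    # polynomial regressors standing in for the kernel-expanded model
    return np.array([y1, y2, y1**3, y2**3, y1**2 * y2, y1 * y2**2])

theta = np.zeros(6)
v = np.zeros(6)
eta, beta = 0.05, 0.9           # step length and momentum factor
for t in range(2, N):
    x = phi(y[t - 1], y[t - 2])
    err = y[t] - theta @ x
    v = beta * v + eta * err * x        # momentum update of the gradient step
    theta = theta + v
print("estimated parameters:", theta.round(3))
```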

  • Ontology-based data integration for the internet of things in a scientific software ecosystem   Order a copy of this article
    by Jade Ferreira, José Maria N. David, Regina Braga, Fernanda Campos, Victor Stroele, Leonardo De Aguiar 
    Abstract: The Internet of Things (IoT) enables smart observation of the environment, producing a large amount of heterogeneous data. On the one hand, it allows the remote collection of data, either providing a ready field dataset compilation or serving as a secondary source of information to better analyse the research context. On the other hand, all the raw data generated by disparate sensors need to be integrated to leverage the power of IoT in scientific experiments. This paper proposes an ontology-based data integration architecture that allows data from different sources, formats and semantics to be integrated and organised by a mediated ontology that provides knowledge inference. The architecture is evaluated through use-case testing in a scientific software ecosystem that supports all stages of the experiment life cycle.
    Keywords: ontology; internet of things; data integration; scientific software ecosystem.

  • New media art design in commercial public space   Order a copy of this article
    by Zhigang Wang, Y.E. Wang, Y.U. Sun 
    Abstract: New media art in commercial public space is very beneficial for art communication and commercial transformation. Mass communication awareness can help to maximise the value of new media art and even strengthen people's public awareness. The article examines two aspects, experience design and the impact on people's lifestyle, to understand the influence of new media art in commercial public space. The fundamental aim of experience design is to let people participate in interactive experience activities; the content includes the combination of art and technology, the combination of the public space environment and the form of new media art, and people's experiential and emotional cognition. The new media art of commercial space will build a multi-dimensional cultural consumption place, from the material to the symbolic to the spiritual level, in order to stimulate inner demand and resonance between people and goods and to give deeper cultural significance to consumer activities.
    Keywords: commercial public space; new media art; mass communication.

  • A vision of 6G: technology trends, potential applications, challenges and future roadmap   Order a copy of this article
    by Syed Agha Hassnain Mohsan 
    Abstract: Ongoing research on fifth-generation (5G) networks has exposed many inherent drawbacks of this technology. These limitations of 5G have spurred global research activities focused on the future sixth-generation (6G) technology. The fundamental architecture and performance requirements of 6G are yet to be explored, and academic and industrial synergy is accelerating to conceptualise 6G. The widespread applications of blockchain, the internet of things (IoT), artificial intelligence (AI), augmented reality (AR), virtual reality (VR) and extended reality (XR) have driven the need for an emerging 6G technology. 6G will have a profound impact on ubiquitous connectivity, deep connectivity and intelligent connectivity. We envisage 6G as an ultra-dense, heterogeneous, highly dynamic and innately intelligent network; the current upsurge of diversified mobile networks has thus spurred heated discussion on the evolution of 6G. In this study, we outline a holistic vision of the tenets of 6G, which we expect to shape technological trends through exciting services and applications and to revolutionise several allied technologies. Furthermore, 6G will enable the Internet of Everything (IoE), which will have a profound impact on Quality of Experience (QoE) and Quality of Service (QoS). Integration of IoE and 6G will provide better performance for flying sites, smart cities, robotic communication, vehicular networks and remote surgical operations. In this review, we envision potential applications and challenges of future 6G technologies; the intent of this study is to lay a foundation for out-of-the-box research around 6G applications. In this roadmap review, 6G applications as well as potential challenges are highlighted. We believe this review will help to aggregate research efforts and eliminate technical uncertainties towards breakthrough innovations in 6G.
    Keywords: 5G; 6G; Blockchain; IoT; mobile networks; internet of everything; ubiquitous connectivity.

  • Fast position tracking control of PMSM under the high frequency and variable load   Order a copy of this article
    by Jiafeng Zhang, Jinghua Wang, Yang Liu 
    Abstract: Based on the force and deflection characteristics of the rudder surface of a supercavitating vehicle, the problem of fast position tracking for a PMSM under high-frequency variable load conditions is posed. To address the poor tracking of the "traditional three closed loop" position tracker under high-frequency variable load, and building on feedforward control theory and the "traditional three closed loop" tracker, a position tracking control strategy of "three closed loops + speed-loop fuzzy feedforward compensation + current-loop feedforward compensation" is proposed. First, feedforward compensation of the speed-loop reference input improves the response speed of the system and the fast position tracking accuracy. Second, feedforward compensation of the current-loop reference input effectively overcomes the influence of high-frequency variable loads on position tracking and further improves the tracking accuracy. Theoretical analysis then shows that the two feedforward links do not change the stability of the traditional three-loop position tracker, and the design method for the two feedforward coefficients is given. Finally, three comparative simulation experiments illustrate the effectiveness of speed-loop feedforward compensation, current-loop feedforward compensation and fuzzy control, respectively, in improving the accuracy of PMSM fast position tracking under high-frequency variable load conditions. The simulation results also verify that the proposed position tracking control strategy has better response speed, position tracking accuracy and anti-interference performance than the "traditional three closed loop" and "three closed loop + speed loop feedforward compensation" position trackers.
    Keywords: permanent magnet synchronous motor; high frequency variable load; fast position tracking; three closed-loop control; fuzzy feedforward compensation; load observer.

  • A chaos-enhanced accelerated PSO algorithm in reliable tracking of mobile objects   Order a copy of this article
    by Sahar Teymori, Peyman Babaei 
    Abstract: Object tracking in monitoring applications is one of the key topics in the Internet of Things (IoT). The most important challenges in object tracking concern energy consumption, service quality and reliability. In the proposed approach, given the good performance of the PSO algorithm in global optimisation and its weakness in local optimisation, the PSO algorithm is combined with a chaos operator to overcome this flaw. The purpose of the present research is to improve energy consumption and service quality, and to increase system reliability, in mobile object tracking in WSNs, as well as to improve the performance of the algorithm itself. According to the results of the simulations, the energy consumption of the proposed method is improved owing to the optimal selection of the cluster heads. The proposed method is also stable while improving reliability and increasing the quality of sensor network services.
    Keywords: object tracking; wireless sensor network; chaos theory; particle swarm optimisation.
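
    A compact sketch of chaos-enhanced PSO: standard velocity and position updates with the usual random coefficients replaced by a logistic chaotic map. The sphere function is a placeholder for the tracking/cluster-head objective used in the paper.

```python
# PSO with logistic-map chaotic coefficients on a placeholder objective.
import numpy as np

def objective(x):
    return np.sum(x ** 2, axis=1)          # placeholder fitness (minimise)

rng = np.random.default_rng(0)
n_particles, dim, iters = 30, 5, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

chaos = rng.uniform(0.1, 0.9, (n_particles, dim))   # logistic-map state
w, c1, c2 = 0.72, 1.49, 1.49
for _ in range(iters):
    chaos = 4.0 * chaos * (1.0 - chaos)              # logistic map, r = 4
    r1, r2 = chaos, 1.0 - chaos
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
print("best fitness:", pbest_val.min())
```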

  • Formal specification at model-Level of model-driven engineering using modelling techniques   Order a copy of this article
    by Jnanamurthy HK, Frans Henskens, David Paul, Mark Wallis 
    Abstract: Nowadays, model-driven engineering (MDE) is gaining popularity owing to high-level development leading to faster generation of executable code, which reduces manual intervention. Verification is crucial at different levels of model-based development; model-based development combined with a formal verification process assures that the developed model satisfies the software requirements described in formal specifications. Owing to inadequate knowledge of formal methods (complex mathematical theory), software developers do not adopt formal methods during software development. Several approaches in the literature transform MDE models directly into formal models for formal verification, and these approaches require formal specifications as an additional input to the verification tools. However, these methods have not addressed the problem of formal specifications at the model level. In this paper, we design a modelling framework, using modelling techniques, that allows formal properties to be specified at the model level and automatically extracts formal specifications and formal models from the developed application models for use in formal verification. The proposed method allows full automation and reduces the time of the formal verification process during the development life-cycle. Furthermore, the method reduces the complexity of learning formal specification notations (specifications specified at the model level are automatically converted into formal specifications), which are required as input to verification tools for formal verification.
    Keywords: model-driven development; formal specification; formal verification; temporal logic; model-driven architecture.

  • Design, optimisation and implementation of a DCT/IDCT based image processing system on FPGA   Order a copy of this article
    by Shensheng Tang, Monali Sinare, Yi Zheng 
    Abstract: In this paper, a discrete cosine transform (DCT) and its inverse transform (IDCT) are designed and optimised for FPGA using the Xilinx VIVADO High-Level Synthesis (HLS) tool. The DCT and IDCT algorithms, along with filter logic written in C/C++, are simulated for functional verification, optimised through HLS, and packaged as custom IPs. The IPs are incorporated into a VIVADO project to form an image processing system for hardware validation. The VIVADO design, along with a Xilinx SDK application written in C, is implemented on a Zynq FPGA development board, the Zedboard. A C# GUI is developed to transfer image data to/from the FPGA and display the original and processed images. Experimental results are presented with discussion. The FPGA development method, including the DCT/IDCT IP design, optimisation and implementation via HLS as well as the VIVADO project integration, can be extended to a wider range of FPGA applications.
    Keywords: DCT; IDCT; FPGA; VIVADO HLS; IP; Zedboard; GUI; C/C++; Verilog; C#; Optimisation; C/RTL co-simulation; hardware validation.
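    As background for the DCT/IDCT computation that the abstract moves into HLS IP blocks, the following NumPy sketch implements the orthonormal 2-D DCT-II and its inverse for an 8x8 block; it is a software reference model only, not the authors' C/C++ HLS code.

        import numpy as np

        def dct_matrix(n):
            """Orthonormal DCT-II transform matrix of size n x n."""
            k = np.arange(n).reshape(-1, 1)
            i = np.arange(n).reshape(1, -1)
            c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
            c[0, :] = np.sqrt(1.0 / n)
            return c

        def dct2(block):
            c = dct_matrix(block.shape[0])
            return c @ block @ c.T          # 2-D DCT = row transform then column transform

        def idct2(coeffs):
            c = dct_matrix(coeffs.shape[0])
            return c.T @ coeffs @ c         # orthonormal, so the inverse uses the transpose

        block = np.random.rand(8, 8)        # one 8x8 image block
        assert np.allclose(idct2(dct2(block)), block)   # round-trip check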

  • Interactive smart home technologies for users with visual disabilities: a systematic mapping of the literature   Order a copy of this article
    by Otávio Oliveira, André Freire, Raphael De Bettio 
    Abstract: This paper presents a systematic mapping of the literature concerning interactive technologies for smart homes targeting users with visual disabilities. The analysis stemmed from a search resulting in 265 papers, of which 25 were selected. The results show the main types of interaction mode reported, including voice, gesture, touch, keyboard and ambient sensors. Technological approaches included desktop computers, mobile devices, embedded systems, and stand-alone smart devices. The studies showed important features to aggregate different interaction modalities and provide accessible interfaces in mobile and desktop devices to interact with the home. This paper provides valuable insight into the implications for the design of smart home technologies for users with visual disabilities and shows the significant research gaps to be investigated in the future, including overcoming barriers with legacy inaccessible utilities and methodologies to enhance user research in the area.
    Keywords: smart homes; ambient-assisted living; visually impaired user; blind user.

  • Confirmed Quality Aware Recommendations Using Collaborative Filtering and Review Analysis
    by Seema Nehete, Satish Devane 
    Abstract: Recommendation systems (RS) save users time in their hectic schedules when purchasing products of interest. RS face challenges of data sparsity, cold start and prediction efficiency, and hence the proposed system uses Multi-Kernel Fuzzy C-Means (MKFCM) clustering to group users of similar age, occupation and gender into clusters. The clusters of similar users are optimised using the Fruit Fly (FF) optimisation algorithm, which yields high cluster accuracy and dynamically created subclusters of similar users and their favourite products; this overcomes the sparsity issue and makes the analysis easier. Collaborative Filtering (CF), one of the filtering methods of RS, is used to predict products for target users. The RS gains users' trust by additionally analysing textual reviews using an optimised Artificial Neural Network (ANN) to recommend the highest quality products, so that dual-tested, quality-confirmed products are recommended to the user. Experimentation is performed on the standard MovieLens dataset used by many researchers to prove the efficiency of this RS, and reviews of all users are extracted from online search engines for product quality analysis before recommendation. Experimentation shows higher recall and accuracy than existing recommendation systems.
    Keywords: clustering; recommendation systems; collaborative filtering; artificial neural network.
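    For readers unfamiliar with the collaborative-filtering step mentioned above, here is a toy NumPy sketch that predicts a missing rating from cosine-similar users; the MKFCM clustering, fruit-fly optimisation and review-analysis stages of the proposed system are not reproduced, and the rating matrix is invented.

        import numpy as np

        # Tiny user-item rating matrix (0 = not rated); values are invented for illustration
        R = np.array([[5, 3, 0, 1],
                      [4, 0, 0, 1],
                      [1, 1, 0, 5],
                      [0, 1, 5, 4]], dtype=float)

        def cosine_sim(a, b):
            mask = (a > 0) & (b > 0)                    # compare only co-rated items
            if not mask.any():
                return 0.0
            return float(a[mask] @ b[mask] /
                         (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-12))

        def predict(user, item, k=2):
            """Predict R[user, item] from the k most similar users who rated the item."""
            sims = np.array([cosine_sim(R[user], R[u]) if u != user and R[u, item] > 0 else 0.0
                             for u in range(R.shape[0])])
            top = sims.argsort()[::-1][:k]
            if sims[top].sum() == 0:
                return R[R[:, item] > 0, item].mean()   # fall back to the item mean
            return float(sims[top] @ R[top, item] / sims[top].sum())

        print(predict(user=1, item=1))   # estimate user 1's rating of item 1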

  • A combined solution for flexible control of poultry houses   Order a copy of this article
    by Lucas Schmidt, Dalcimar Casanova, Richardson Ribeiro, Marcelo Teixeira 
    Abstract: In poultry houses, thermal comfort is decisive for maximising the feed conversion rate, a measure of successful production. As there are different automatic control options that result in variable performance indexes, this paper reproduces, tests, and compares two of them: a reactive approach, which applies event-driven methods, and a bio-inspired approach, which is based on artificial intelligence techniques. As each approach adds specific advantages to process control, we combine them into a single framework that gathers their best features. Simulations using real data show that temperature and humidity are reproduced with 97% and 80% precision, respectively. In comparison, the reactive and bio-inspired approaches show accuracies of 90% and 82% for temperature, and 60% and 66% for humidity, respectively. Therefore, we conclude that our approach can improve both reactive and bio-inspired control, standing as a feasible and flexible alternative for the control of poultry houses.
    Keywords: intelligent systems; poultry houses; automatic control; formal modelling.

  • Asynchronous dynamic arbiter for network on chip   Order a copy of this article
    by Abdelkrim Zitouni, Bouraoui Chemli 
    Abstract: In a modern Network on Chip (NoC), communicating blocks are synchronised with different clock rates. However, system performance may present a bottleneck that can be remedied only by considering the notion of communication asynchrony. The implementation of a high-performance asynchronous NoC router requires the design of dynamic arbitration structures to lower packet latency and thus increase throughput, while keeping the dissipated power as small as possible. This paper presents a design approach for asynchronous dynamic arbiters to be implemented in NoC routers. The design flow begins with a State Transition Graph (STG) as the specification model and generates a Quasi-Delay-Insensitive (QDI) arbiter implemented with C-element gates. The designed arbiter communicates with the shared resources using a four-phase (Req/Ack) handshaking protocol. The arbiter's performance has been evaluated through the implementation of an asynchronous 2D-mesh NoC router in FPGA (Virtex 5) and ASIC (28 nm) technologies. Experimental results show that the proposed router exhibits better performance than its counterparts. In the ASIC design, the router achieves low power (3.8 mW), low area (0.009 mm2), low latency (1.53 ns), and high packet throughput (1562 Mflit/s).
    Keywords: asynchronous dynamic arbiter; STG; C-element; NoC router; FPGA/ASIC designs.

  • SCATAA-CT: smart course attendance tracking android application in classroom teaching   Order a copy of this article
    by Saadeh Z. Sweidan, Sondos M. Alshareef, Khalid Darabkh 
    Abstract: Tracking students' attendance manually is an exhausting and time-consuming process for both instructors and students in universities around the world. Nevertheless, instructors are obliged to report students who exceed the allowed absence limit and to take the corresponding disciplinary measures. On the other hand, the number of smartphone users has increased rapidly in the last decade owing to their attractive features and affordable prices. With the fast spread of smartphones, the importance of their related applications (apps) has grown to the point where they have become the most reliable way to provide many services. Today, apps are used in all fields of life, such as social activities, formal government work, and even entertainment, which has motivated us to introduce the Smart Course Attendance Tracking Android Application in Classroom Teaching (SCATAA-CT). This app aims to reduce the time and effort of tracking attendance in large universities where the number of students in a class is high and attendance is mandatory (based on a university's rules and regulations). Using SCATAA-CT is simple: a course instructor generates a Quick Response (QR) code during a lecture and displays it briefly to the students, who in turn scan the code and send attendance requests back to the instructor's device. To add credibility to the scanning process, fingerprint authentication is required, in addition to other restrictions that prevent possible manipulation. Besides generating QR codes, SCATAA-CT allows instructors to show their course details, read attendance reports, cancel lectures, and block/unblock students based on their absences. Students, in turn, can use the app to view the details of their attendance reports. Moreover, SCATAA-CT exploits notifications efficiently to send users important pre-lecture alerts and post-lecture updates. The app was tested in practice in a number of courses during the academic year 2019/2020, where it showed efficiency and credibility in tracking students' attendance. In addition, the involved students were asked to answer an evaluation survey; the results were very positive and included useful feedback for future consideration.
    Keywords: engineering education; attendance tracking; Android application; classroom teaching; fingerprints; biometric authentication.

  • Test and simulation of connection and failure performance of high strength bolts in steel structure   Order a copy of this article
    by Chenzhen Ye, Rongyue Zheng, Jue Zhu 
    Abstract: To study the joint performance and failure characteristics of pretension high strength bolts of fabricated steel structures, the resistance strain gauge method was used instead of the traditional torque method to analyse the loading failure of slip specimens. The refined finite element model and the extended finite element model based on linear elastic fracture mechanics were established and their results compared with the experiments. The results showed that the anti-slip coefficient of the untreated clean rolled Q235 steel conforms to the given value of the code. The finite element simulation results were in good agreement with the test results, and the different failure modes on cover plates or core plate respectively occur in the specimens with different thicknesses of core plates.
    Keywords: high strength bolt; steel structure; anti-slip coefficient; resistance strain gauge method; failure mode.

  • A novel multichannel UART design with FPGA-based implementation   Order a copy of this article
    by Ngoc Pham Thai, Bao Ho Ngoc, Tan Do Duy, Phuc Truong Quang, Ca Phan Van 
    Abstract: The Universal Asynchronous Receiver and Transmitter (UART) is a popular asynchronous serial communication standard. Although its transmission speed is not very high, UART has the advantages of simplicity, ease of implementation and low power consumption. Therefore, UART is still used in various digital modules that do not require high communication speed, such as SIM, Bluetooth and GPS modules. However, communication with many low-speed peripherals can reduce the efficiency of data-bus usage and the processor's performance. In this paper, we propose a multichannel UART design that efficiently uses the Advanced Peripheral Bus (APB) standard data bus in order to simultaneously support multiple transmission data frames with different rates. We then evaluate the performance of our multichannel UART design by means of simulations and practical implementation on field-programmable gate array boards. The evaluation results show that the proposed multichannel UART module ensures stable operation while guaranteeing proper transmission to/from multiple devices following the UART standard with different configurations.
    Keywords: UART; multichannel; AMBA 3 APB; testbench; field-programmable gate array.

  • Sensor Device Scheduling based Cuckoo Algorithm (SeDeSCA) for Enhancing Lifetime of Cluster based Wireless Sensor Networks
    by Mazin Kadhum Hameed, Ali Kadhum Idrees 
    Abstract: Among the more complicated aspects of Wireless Sensor Networks (WSNs) is developing an efficient topology control technique for saving network energy and increasing network lifespan. This study proposes the Sensor Device Scheduling-based Cuckoo Algorithm (SeDeSCA) for enhancing the lifetime of cluster-based WSNs. The SeDeSCA technique consists of two phases: clustering and scheduling. In the first phase, the WSN is partitioned into clusters using the DBSCAN algorithm. The scheduling phase is periodic and composed of three steps: cluster head polling, optimisation-based scheduling decision, and covering. The sensor nodes in each cluster choose their cluster head, and the elected cluster head executes the Cuckoo Algorithm (CA) to select a suitable schedule of sensor nodes that take on the sensing task during the current period. The major aim of the CA-based scheduling is to minimise energy consumption and ensure sufficient coverage of the monitored area while maximising the network lifespan. In the covering step, the area of interest is covered by the sensor nodes scheduled to be active during the period. The simulation results show that the SeDeSCA technique improves the global coverage ratio and prolongs the lifespan of WSNs.
    Keywords: wireless sensor networks; cuckoo algorithm; DBSCAN; lifetime enhancement; scheduling algorithms.
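    A minimal sketch of the clustering phase only, assuming scikit-learn's DBSCAN and randomly placed nodes with random residual energy; the cuckoo-search scheduling and coverage steps of SeDeSCA are only summarised in a comment, not implemented.

        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(0)
        positions = rng.uniform(0, 100, size=(200, 2))   # 200 sensor nodes in a 100 m x 100 m field
        energy = rng.uniform(0.5, 1.0, size=200)         # residual energy per node (illustrative)

        labels = DBSCAN(eps=10.0, min_samples=4).fit_predict(positions)   # phase 1: clustering

        # Phase 2 (simplified): each cluster polls for a head; here the highest-energy member wins.
        # In SeDeSCA the elected head would then run the cuckoo algorithm to pick an active-node
        # schedule that preserves coverage while minimising energy use.
        for c in sorted(set(labels) - {-1}):
            members = np.where(labels == c)[0]
            head = members[np.argmax(energy[members])]
            print(f"cluster {c}: {len(members)} nodes, head = node {head}")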

  • An Analysis of real-time traffic congestion optimization through VTL in VANETs
    by Parul Choudhary, Umang Singh, Rakesh Dweidi 
    Abstract: Traffic congestion is a daunting phenomenon that affects thousands of people worldwide in their everyday lives. Owing to the rapid proliferation of technologies, demand for VANET technology is increasing rapidly, creating an environment in which a virtual traffic light (VTL) can minimise traffic congestion. Replacing conventional physical traffic light systems with VTL can be achieved cost-efficiently across vehicular networks. In this paper, we summarise recent state-of-the-art methods in VANETs by discussing the importance of the virtual traffic light in VANET, its architecture, and real-life applications. Further, the work focuses on the challenges, characteristics, and related domains of allied VANET applications, filling the gaps of existing surveys and covering the latest trends incorporating the concept of VTL. The paper presents the effectiveness of virtual traffic lights by including recent work in real scenarios according to the research findings, and offers a systematic review of current VTL methodologies that promise impactful results in the future. Finally, it comprehensively covers the VANET system and highlights research gaps in VTL that are still left to be explored. This work will support researchers in this domain by analysing the literature on VTL in VANET during the period 2007-2019.
    Keywords: traffic congestion; VANETs; VTL; real-life applications.

Special Issue on: Advanced Big Data and Artificial Intelligence Technologies for Edge Computing

  • Efficient synergetic filtering in big datasets using neural network technique   Order a copy of this article
    by B. Mukunthan 
    Abstract: Recently, deep neural networks have achieved great success in speech recognition, computer vision and natural language processing. A major difficulty in synergetic (collaborative) filtering is handling implicit feedback. In this work we concentrate on neural network based techniques. Although some recent studies have employed deep learning, they have mostly used it to model auxiliary information, such as textual descriptions of items and acoustic features of music. When it comes to the key aspect of synergetic filtering, the interaction between user and item features, they still resort to matrix factorisation and apply an inner product on the latent features of users and items. We present a general framework named Artificial Neural Synergetic Filtering (ANSF) that replaces the inner product with a neural architecture capable of learning an arbitrary interaction function from data. ANSF is generic and can express and generalise matrix factorisation within its framework. To enrich ANSF with non-linearities, we leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on real-world datasets show significant improvement of the proposed ANSF over state-of-the-art techniques. The experimental results also show that using deeper layers of the artificial neural network yields better overall performance.
    Keywords: synergetic filtering; big data; matrix factorisation; deep neural network; multi-layer perceptron.
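    To make the idea of replacing the inner product with a neural architecture concrete, here is a hedged Keras sketch of an MLP over concatenated user and item embeddings in the spirit of neural collaborative filtering; the layer sizes, loss and toy data are assumptions, not the ANSF architecture itself.

        import numpy as np
        import tensorflow as tf

        n_users, n_items, dim = 1000, 500, 16   # illustrative sizes

        user_in = tf.keras.Input(shape=(1,), name="user_id")
        item_in = tf.keras.Input(shape=(1,), name="item_id")
        u = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_users, dim)(user_in))
        v = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_items, dim)(item_in))

        # An MLP over the concatenated latent vectors replaces the plain inner product
        x = tf.keras.layers.Concatenate()([u, v])
        for width in (64, 32, 16):
            x = tf.keras.layers.Dense(width, activation="relu")(x)
        out = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # probability of interaction

        model = tf.keras.Model([user_in, item_in], out)
        model.compile(optimizer="adam", loss="binary_crossentropy")

        # Toy implicit-feedback data: random user/item pairs with 0/1 interaction labels
        users = np.random.randint(0, n_users, size=(4096, 1))
        items = np.random.randint(0, n_items, size=(4096, 1))
        labels = np.random.randint(0, 2, size=(4096, 1))
        model.fit([users, items], labels, epochs=1, batch_size=256, verbose=0)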

Special Issue on: Computational Advances in Healthcare Solutions

  • Unification of firefly algorithm with density-based spatial clustering for segmentation of medical images   Order a copy of this article
    by Bandana Bali, Brij Mohan Singh 
    Abstract: This paper proposes a computer-aided approach for brain image segmentation to identify various characteristics of digital images that are responsible for the identification of brain tumours in MRI images. The proposed Density-Based Spatial Clustering Fused with Firefly (DB-FF) method is based on the density-based spatial clustering and firefly algorithms, which hold significant places among nature-inspired computing techniques. In this research, the solutions of the firefly algorithm are improved by the density-based spatial clustering algorithm, and a soft computing criterion is used as the fitness function. The proposed method has been tested on commonly used images from the Harvard Whole Brain Atlas, and its results have been compared with other standard benchmarks from the survey. The proposed DB-FF method achieved better segmentation as measured by standard segmentation quality metrics, such as normalised peak signal-to-noise ratio, normalised root mean square error and the structural similarity index metric. Matlab has been used for implementation and observation. The results demonstrate that the proposed method has better and more robust performance compared with existing MRI segmentation models.
    Keywords: brain tumour detection; data clustering technique; firefly algorithm; image segmentation.

  • Alzheimer's disease diagnosis based on feature extraction using optimised crow search algorithm and deep learning   Order a copy of this article
    by Sonal Bansal, Aditya Rustagi, Anupam Kumar 
    Abstract: Alzheimer's disease (AD) is a long-lasting, progressive, degenerative cognitive disorder and one of the most common causes of dementia. Dementia leads to a decline in thinking capacity, an inability to handle behavioural and social skills, and disruption of a person's normal functioning. Conventionally, symptoms are assessed and information from a close family member is recorded to analyse the effect of the disease and its stages. Neuroimaging is one of the best methods used by neurologists and doctors for Alzheimer's disease, and MRI is used around the world for diagnosis and to provide insights into the brain and its functioning. With advances in machine learning, its application to medical images such as MRI and CT scans is on the rise and has become a major research discipline among experts and analysts. Existing methods of feature extraction from an image involve CNNs, which provide a large number of features that require considerable computational power and time to evaluate with traditional machine learning or deep learning algorithms. Consequently, we propose an Optimised Crow Search Algorithm (OCSA) for early diagnosis of AD which, when applied to the raw MRI image features, yields a highly representative dense embedding. The mapping learned between this embedding and the image labels correctly diagnosed 98.62% of the AD patient dataset.
    Keywords: Alzheimer’s disease; magnetic resonance images; evolutionary algorithm; feature extraction; intelligent computer-aided diagnosis systems; medical imaging; medical informatics.

  • An intelligent COVID-19 classification model using optimal grey-level co-occurrence matrix features with extreme learning machine   Order a copy of this article
    by Pavan Kumar Paruchuri, V. Gomathy, E. Anna Devi, Shweta Sankhwar, S.K. Lakshmanaprabu 
    Abstract: Early diagnosis of the 2019 novel coronavirus disease (COVID-19) is essential for disease cure and control. Chest computed tomography (CT) images are found to be a reliable, helpful and faster basis for the classification of COVID-19. Since chest CT image diagnosis requires medical experts and considerable time, an automated intelligent model needs to be developed for effective COVID-19 diagnosis. This paper presents a new automated COVID-19 diagnosis model using optimal grey level co-occurrence matrix (GLCM) based feature extraction and extreme learning machine (ELM) based classification. The input chest images undergo preprocessing to improve image quality. Next, the optimal GLCM features are derived by the use of the Elephant Herd Optimisation (EHO) algorithm. Then, the ELM model is applied to perform the classification task. The performance of the OGLCM-ELM model has been validated using a benchmark dataset, and the experimental outcome confirms the superior performance of the proposed model over the compared methods. The proposed OGLCM-ELM model achieved maximum sensitivity of 89.56%, specificity of 90.45%, F-score of 90.13% and accuracy of 90.69%.
    Keywords: COVID-19; disease diagnosis; feature extraction; classification; deep learning.
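    As an indication of what GLCM-based features look like in practice, the NumPy sketch below builds a grey-level co-occurrence matrix for a single pixel offset and computes three classic texture statistics; the EHO-based feature selection and ELM classifier of the proposed model are not shown, and the input is a random stand-in for a preprocessed CT slice.

        import numpy as np

        def glcm(img, dx=1, dy=0, levels=8):
            """Grey-level co-occurrence matrix for one pixel offset, normalised to probabilities."""
            q = (img.astype(float) / img.max() * (levels - 1)).astype(int)   # quantise grey levels
            m = np.zeros((levels, levels))
            h, w = q.shape
            for y in range(h - dy):
                for x in range(w - dx):
                    m[q[y, x], q[y + dy, x + dx]] += 1
            return m / m.sum()

        def glcm_features(p):
            i, j = np.indices(p.shape)
            return {"contrast": float(((i - j) ** 2 * p).sum()),
                    "energy": float((p ** 2).sum()),
                    "homogeneity": float((p / (1.0 + np.abs(i - j))).sum())}

        ct_slice = np.random.randint(0, 256, size=(64, 64))   # stand-in for a preprocessed CT image
        print(glcm_features(glcm(ct_slice)))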

  • Med-Net: a novel approach to ECG anomaly detection using LSTM auto encoders   Order a copy of this article
    by Koustav Dutta, Rasmita Lenka, Soumya Ranjan Nayak, Asimananda Khandual, Akash Kumar Bhoi 
    Abstract: Time series data are generated in various sectors of day-to-day life and play a vital role in medical analysis. In this context, continuous time series signals such as EEG and ECG are the most important. Until now, the heavy reliance on doctors for the manual analysis of these signals for understanding, monitoring and detecting anomalies has been cumbersome. This paper therefore proposes a novel approach to analyse ECG signals and track anomalies using Hybrid Deep Learning Architectures (HDLA). The proposed scheme implements self-supervised pattern recognition using Long Short-Term Memory (LSTM) networks arranged as an auto encoder and decoder. The scheme is tested on the PhysioNet dataset. The model also handles noise associated with ECG-based time series signals, and it achieved good accuracy while addressing overfitting problems.
    Keywords: bio-signals; encoder; decoder; LSTM; ECG; anomaly; time series; reconstruction error.
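    The LSTM encoder-decoder pattern referred to above can be sketched briefly in Keras: the model reconstructs fixed-length ECG windows and flags windows whose reconstruction error exceeds a threshold. Window length, layer sizes, the threshold rule and the random training data are illustrative assumptions, not the Med-Net configuration.

        import numpy as np
        import tensorflow as tf

        win = 140   # samples per ECG window (illustrative)

        inp = tf.keras.Input(shape=(win, 1))
        h = tf.keras.layers.LSTM(32)(inp)                         # encoder compresses the window
        h = tf.keras.layers.RepeatVector(win)(h)                  # repeat the code for each time step
        h = tf.keras.layers.LSTM(32, return_sequences=True)(h)    # decoder unrolls it back in time
        out = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))(h)
        model = tf.keras.Model(inp, out)
        model.compile(optimizer="adam", loss="mae")

        # Self-supervised training on (assumed) normal beats only: input == target
        normal = np.random.randn(256, win, 1).astype("float32")
        model.fit(normal, normal, epochs=1, batch_size=32, verbose=0)

        # Anomaly score = per-window reconstruction error; threshold set from the training errors
        recon = model.predict(normal, verbose=0)
        errors = np.mean(np.abs(recon - normal), axis=(1, 2))
        threshold = errors.mean() + 3 * errors.std()
        print("windows flagged as anomalous:", int((errors > threshold).sum()))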

  • Multimodality medical image fusion based on non-subsampled contourlet transform   Order a copy of this article
    by Velmurugan Subbiah Parvathy, Sivakumar Pothiraj, Jenyfal Sampson 
    Abstract: The fusion of medical images is an essential and effective technique for disease analysis. We provide a Non-Subsampled Contourlet Transform (NSCT) image fusion technique using Neuro Fuzzy with Binary Cuckoo Search (NFBCS) and the Salp Swarm Optimisation (SSO) method. Here we fuse Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images to create a single merged image, which provides a new integrated diagnostic view. Initially, two sets of images, MRI and CT, are considered for the fusion procedure. These pairs of images are decomposed by NSCT to separate the high-frequency and low-frequency components, and mixing (fusion) policies are then used to combine the high and low frequencies. Compared with other existing techniques, the results of the proposed technique show better processing efficiency and deliver better results on subjective and objective evaluation criteria. This is particularly advantageous for accurate clinical analysis.
    Keywords: magnetic resonance imaging; computed tomography; neuro fuzzy; binary cuckoo search; salp swarm optimisation.

  • Efficient detection of supraventricular tachycardia by machine learning techniques   Order a copy of this article
    by Monalisa Mohanty, Asit Subudhi, Mihir Narayan Mohanty 
    Abstract: Supraventricular tachycardia (SVT) refers to an abnormally fast heartbeat that arises from improper electrical activity in the upper chambers of the heart. Although SVT is nowadays regarded as a less hazardous condition, recurrent episodes may degrade the heart muscle over time. Tachycardia usually refers to a heart rate of more than 100 beats per minute, and SVT is a kind of arrhythmia based on an abnormal heartbeat. The electrocardiogram (ECG) is one of the most significant diagnostic tools used to assess the health of the heart. The increasing number of heart patients has driven progress in techniques for automatic detection of the various kinds of cardiac abnormality or arrhythmia, to reduce the pressure on and share the load of physicians. ECG recordings were acquired from the MIT-BIH supraventricular arrhythmia database (SVDB) of the PhysioNet repository. Each record consists of ST, N and VF rhythms with a duration of 30 minutes. Using various techniques, a set of features is extracted for ST, N and VF and finally fed into a classifier, such as a logistic model tree or multilayer perceptron, to classify the ECG signals.
    Keywords: tachycardia; supraventricular tachycardia; arrhythmia; decision tree classifiers.

  • Application and evaluation of classification model to detect autistic spectrum disorders in children   Order a copy of this article
    by Hrudaya Kumar Tripathy, Pradeep Kumar Mallick, Sushruta Mishra 
    Abstract: A child affected by Autism Spectrum Disorder (ASD) faces significant difficulties in social interaction (i.e., communication with language, understanding the emotional states of others, thinking and behaving together, etc.). There is therefore a requirement for a real-time and easy-to-access diagnostic model to identify autism during the initial phase of occurrence, to assist medical experts. Presently, an efficient cure for autism does not exist, so a reliable detection model will help to provide better therapy, thereby supporting autistic children to lead a better life. This work deals with the efficient categorisation of an autistic dataset using various classifiers, such as naive Bayes, neural network, and random forest, with Python as the programming tool and multiple simulations used to determine the algorithm with optimum accuracy. An autistic dataset from the UCI repository was used for this research study. It is used to build a model with which parents of a suspected autistic child can detect autism by providing their answers to particular questions relating to autism characteristics. The implementation results indicated that the random forest classifier gave the optimal performance: a 97.5% classification accuracy rate was achieved, with an RMSE value of 0.676 and a very small execution time of 1.16 s on the autistic dataset. The recorded values were 92.8%, 92.6%, 90.8%, and 91.5% for the mean accuracy, precision, recall, and F-score metrics. Thus, autistic disorder detection using the random forest classifier generated optimal performance.
    Keywords: autism syndrome disorder; naive Bayes; neural network; random forest; logistic regression.

  • IoT implementation strategies amid COVID-19 pandemic   Order a copy of this article
    by Dhawan Singh, Aditi Thakur, Maninder Singh, Amanpreet Sandhu 
    Abstract: The world, at present, is witnessing grave challenges to its established institutions and shared beliefs owing to the outbreak of novel coronavirus. Almost all of our establishments are under threat and unprecedented disruptions are being witnessed across all spheres of life. Besides the medical hunt for discovering the cure, there exists an equally significant need to invent technological solutions for restoring numerous services while considering the restrictions imposed by the pandemic. Therefore, in this research work, we have investigated and analysed the possibilities, opportunities, and applications of IoT technology in the field of food safety and quality control, automatic disinfection, healthcare systems, wearable health devices, and personal hygiene. We have assessed various features of currently available IoT design platforms and standard protocols and proposed feasible and dynamic strategies for their implementations. The efficacy of the system demonstrates an immense possibility for the continuation of IoT-based technology, even after the novel coronavirus scare is over.
    Keywords: COVID-19; disinfection; food safety; internet of things; pandemic; smart technology; wearable health devices.

  • The application of plug-and-play ADMM framework and BM3D denoiser for compressed sensing magnetic resonance image reconstruction   Order a copy of this article
    by Xiaojun Yuan, Mingfeng Jiang, Lingyan Zhu, Yang Li, Yongming Li, Pin Wang, Tie-Qiang Li 
    Abstract: Compressed Sensing Magnetic Resonance Imaging (CS-MRI) is an effective technique to reduce MRI data acquisition time. There is currently growing interest in using the alternating direction method of multipliers (ADMM) for CS-MRI reconstruction. In this paper, we propose a flexible plug-and-play framework that incorporates the block matching 3D (BM3D) denoising algorithm as the prior in the plug-and-play ADMM reconstruction procedure for CS-MRI, termed the BM3D plug-and-play ADMM (BPA) method. We investigated the performance of the proposed BPA method for the reconstruction of highly under-sampled MRI data with two different sampling masks. Compared with other widely used CS-MRI reconstruction methods, such as PANO, BM3D-IT, BM3D-MRI and BM3D-AMP-MRI, the proposed framework can reconstruct highly under-sampled CS-MRI data with improved gains in peak signal-to-noise ratio and structural similarity index measure.
    Keywords: magnetic resonance image reconstruction; plug-and-play ADMM; denoising algorithm; compressed sensing.
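    The plug-and-play ADMM structure mentioned above boils down to three repeated updates: a data-consistency step, a denoiser applied where a proximal operator would normally appear, and a dual update. The NumPy sketch below shows that pattern on a 1-D toy inpainting problem with a simple moving-average smoother standing in for BM3D; it is not the BPA implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 256
        x_true = np.sin(np.linspace(0, 4 * np.pi, n))          # smooth toy signal
        mask = rng.random(n) < 0.4                             # keep 40% of the samples
        y = np.where(mask, x_true + 0.05 * rng.standard_normal(n), 0.0)

        def denoise(v, width=9):
            """Stand-in prior: moving-average smoother (BM3D would be plugged in here)."""
            kernel = np.ones(width) / width
            return np.convolve(v, kernel, mode="same")

        rho = 1.0
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        for _ in range(50):
            # x-update: closed-form data-consistency step for the masked least-squares term
            x = (mask * y + rho * (z - u)) / (mask + rho)
            # z-update: the proximal step is replaced by the plugged-in denoiser
            z = denoise(x + u)
            # u-update: dual ascent on the consensus constraint x = z
            u = u + x - z

        print("reconstruction MSE:", float(np.mean((z - x_true) ** 2)))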

Special Issue on: Intelligent Healthcare Systems for Sustainable Development

  • Dermoscopic image segmentation method based on convolutional neural networks   Order a copy of this article
    by Dang N. H. Thanh, Le Thi Thanh, Ugur Erkan, Aditya Khamparia, V. B. Surya Prasath 
    Abstract: In this paper, we present an efficient dermoscopic image segmentation method based on the linearisation of gamma-correction and convolutional neural networks. Linearisation of gamma-correction is helpful for enhancing low-intensity regions of skin lesion areas, so that postprocessing tasks can work more effectively. The proposed convolutional neural network architecture for the segmentation method is based on the VGG-19 network, and the acquired training results are suitable for applying the semantic segmentation method. Experiments are conducted on the public ISIC-2017 dataset. To assess the quality of the obtained segmentations, we make use of standard overlap-based metrics, such as the Jaccard index and Dice coefficient, along with other measures such as accuracy, sensitivity, and specificity. Moreover, we provide a comparison of our segmentation results with other similar methods. From the experimental results, we infer that our method obtains excellent results in all the metrics and competitive performance over other current, state-of-the-art models for dermoscopic image segmentation.
    Keywords: dermoscopic images; deep CNNs; machine learning; skin lesions; image segmentation; skin cancer.
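    Since the evaluation relies on overlap metrics, a small generic NumPy helper computing the Jaccard index and Dice coefficient from binary masks is given below; it states the standard definitions and is not code from the paper.

        import numpy as np

        def jaccard_dice(pred, gt):
            """Overlap metrics between a predicted binary mask and the ground-truth mask."""
            pred, gt = pred.astype(bool), gt.astype(bool)
            inter = np.logical_and(pred, gt).sum()
            union = np.logical_or(pred, gt).sum()
            jaccard = inter / union if union else 1.0
            dice = 2 * inter / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
            return float(jaccard), float(dice)

        pred = np.zeros((128, 128), dtype=bool); pred[30:90, 30:90] = True
        gt = np.zeros((128, 128), dtype=bool); gt[40:100, 40:100] = True
        print(jaccard_dice(pred, gt))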

  • Prediction of diabetic patients using various machine learning techniques   Order a copy of this article
    by Shalli RANI, Manpreet Kaur, Deepali Gupta, Amit Kumar Manocha 
    Abstract: The growth of technology and the digitisation of several areas have made the world more successful in reaching solutions to remote problems. Large amounts of health records are also available in digital storage, and machine learning plays an important role in uncovering health issues from these digital records and in the diagnosis of various diseases. In this paper, we present an introduction to recommender systems (RS) with respect to diabetic patients after a rigorous review of the existing literature. An experimental analysis is performed in Python with machine learning classifiers, namely logistic regression, averaged perceptron, Bayes point machine, boosted decision tree, neural network, decision forest, two-class support vector machine and locally deep support vector machine, on the Pima Indian Diabetes Database. We conducted an experiment on a 23K diabetic patient dataset. The results from all the classifiers reveal that logistic regression performs best, with an accuracy of 78%, predicting accurate results with a specificity of 92%.
    Keywords: collaborative filtering; diabetic patients; diabetes mellitus; machine learning.

  • Multisensor fusion approach: a case study on human physiological factor-based emotion recognition and classification   Order a copy of this article
    by A. Reyana, P. Vijayalakshmi, Sandeep Kautish 
    Abstract: Human emotion plays an essential role in people's daily life, and the mental state is accompanied by physiological changes. Experts have long regarded monitoring the perception of emotional changes at an early stage as a matter of concern. Within the next few years, emotion recognition and classification is destined to become an important component of human-machine interaction. Today's medical field makes much use of physiological signals for detecting heart sounds and identifying heart diseases; thus parameters such as temperature and heartbeat can identify major health risks. This paper takes a new look at the development of an emotion recognition system using physiological signals. In this context, the signals are obtained from body sensors such as a muscle pressure sensor, heartbeat sensor, accelerometer, and capacitive sensor. The emotions observed are happy (excited), sad, angry, and neutral (relaxed). The results of the proposed system show the following accuracy percentages for the emotional states: happy 80%, sad 70%, angry 90%, and neutral 100%.
    Keywords: emotion; recognition; multisensor fusion; body sensors; mental state.

  • LabVIEW based cardiac risk assessment of fetal ECG signal extracted from maternal abdominal signal   Order a copy of this article
    by Prabhjot Kaur, Lillie Dewan 
    Abstract: In recent years, the inclination toward the automated analysis of the fetal ECG signal has become a trend. Mathematical computational processing of abdominal fetal ECG has proved to be beneficial in the crucial diagnosis of complex cardiac diseases. To arrive at the diagnosis, a cardiologist needs to observe the variations critically in the duration and amplitude of different waves and segments of the ECG. In the case of a fetus, a preliminary diagnosis of these deviations helps to have a valid and appropriate intervention, which may otherwise result in permanent damage to the brain and nervous system. For this reason, the fetal cardiac signal has been efficiently extracted from a composite abdominal signal in this paper. The signal extraction has been accomplished in the LabVIEW environment using the Independent Component Analysis (ICA) approach, implemented after the application of hybrid filters, employed for removing noise and artifacts within the signal taken from PhysioNet Database. By proper selection of the cut-off frequency of filters, the denoised signal is approximately 99% accurate. Statistical features, such as the signal-to-noise ratio, standard deviation, error, and accuracy, have been computed as well as morphological features including heart rate, time and amplitude of QRS complex with a duration of the PR interval, RR interval, and QT interval. Results obtained demonstrate that the implementation of ICA for fetal ECG signal extraction helps to determine fetal heart rate accurately with low computational complexity. The performance of the proposed algorithm has also been explored in the case of twin pregnancy. The estimated heart rate is comparable to the actual heart rate, which validates the algorithm's accuracy. The results also indicate the feasibility of real-time application of data acquisition and analysis.
    Keywords: electrocardiogram; independent component analysis; sinus rhythm; tachycardia; bradycardia; denoising filters; signal-to-noise ratio; standard deviation; accuracy; LabVIEW.

  • Impact of feature extraction techniques on cardiac arrhythmia classification: experimental approach   Order a copy of this article
    by Manisha Jangra, Sanjeev Kumar Dhull, Krishna Kant Singh 
    Abstract: This paper provides a comparative analysis of state-of-the-art feature extraction techniques in the context of ECG arrhythmia classification. In addition, the authors examine a linear heuristic function, the LW-index, as an indirect measure of the separability of feature sets. Seven feature sets are extracted using state-of-the-art feature extraction techniques: temporal features, morphological features, EMD-based features, wavelet transform based features, DCT features, Hjorth parameters, and convolutional features. The feature sets' performance is evaluated using an SVM classifier. The experimental setup is designed to classify ECG signals into four types of arrhythmic beat: normal (N), ventricular ectopic beat (VEB), supraventricular ectopic beat (SVEB) and fusion beat (F). A PSO-based feature selection method is used for dimensionality reduction, with the LW-index as the cost function. The results validate the hypothesis that convolutional features have better discrimination capability compared with other state-of-the-art features. This paper can resolve, for new researchers, questions related to the performance efficacy of individual feature extraction techniques, and offers an inexpensive methodology and measure to indirectly evaluate and compare the performance of feature sets.
    Keywords: ECG; feature extraction; validity index; feature selection; CNN; PSO; DWT; DCT; Hjorth parameters; EMD; temporal features; MIT-BIH database; SVM.

  • IoT-based automatic intravenous fluid monitoring system for smart medical environment   Order a copy of this article
    by Harsha Chauhan, Vishal Verma, Deepali Gupta, Sheifali Gupta 
    Abstract: Over the last few years, hospitals and other healthcare centres have been adopting many sophisticated technologies in order to assure the fast recovery of patients. In almost all hospitals, a caretaker/nurse is responsible for monitoring intravenous fluid levels. Caretakers often forget to change the bottle at the correct time owing to their busy schedules, as a result of which the patient may face the problem of reverse flow of blood towards the bottle. To overcome this critical issue, this paper proposes an IoT-based automatic intravenous fluid monitoring system. The proposed device consists of an Arduino UNO (ATMega328 microcontroller), a liquid crystal display, a solenoid actuator, a force sensitive resistor 0.5, an ESP8266, a buzzer and LED lights. The authors have used the FSR (Force Sensitive Resistor) sensor to monitor the weight of the bottle. With the installation of the proposed device, the staff's need for constant monitoring will be reduced, especially during night hours, thus decreasing the chance of harm to the patient and increasing the accuracy of healthcare in hospitals. The system will also avoid the fatal risk of air embolisms entering the patient's bloodstream, which can lead to immediate death. To analyse the performance of the proposed system, the authors carried out a sample test, taking time as a parameter to analyse how long the intravenous fluid bottle takes to empty. The results show a promising future for the proposed device in enhancing healthcare services.
    Keywords: drip monitoring system; IoT; healthcare; intravenous fluid; wearable electronics; ESP8266; FSR sensor.

  • Artificial intelligence based algorithm to track the probable COVID-19 cases using contact history of virus-infected persons   Order a copy of this article
    by Javed Shaikh, R.S. Singh, Demissie Jobir Gelmecha, Tadesse Hailu Ayane 
    Abstract: Currently, the world is facing major challenges in tackling COVID-19. It has affected many countries of the world in terms of human lives, economy and so many other aspects. Many organisations and scientists are working to find ways in which the spread of the COVID-19 can be minimised. One technology that can be effective in tackling this virus is Artificial Intelligence (AI), which can help in many ways. The foremost requirement of this situation is to find the cases of infection as early as possible so that it will not spread rapidly. In this paper, an AI-based algorithm is proposed for the tracking of probable COVID-19 cases. The algorithm uses the mobile numbers of coronavirus-infected persons as data for forecasting. This technique will find the probable infected cases and help in controlling the rapid spread of the virus. This method will provide information regarding an infected person who had contact with other persons by using a forecasting method. As this is an automated tracking system it will help in finding the probable virus-infected cases in a very short time.
    Keywords: COVID-19; artificial intelligence; machine learning; forecasting methods.

  • Prevention of autopsy by establishing a cause-effect relationship between pulmonary embolism and heart failure using machine learning   Order a copy of this article
    by Naira Firdous, Sushil Bhardwaj, Amjad Hussain Bhat 
    Abstract: This paper presents a cause-effect relationship between heart failure and pulmonary embolism, using machine learning. The proposed method is divided into two parts. The first part includes the establishment of connectivity between the two medical fields, which is done by finding out the relationship between the pulse pressure and the stroke volume. The second phase includes the implementation of machine learning on the above-formed connectivity. A univariate technique of feature selection is performed initially in order to get the most relevant attributes. The overfitting problem has been addressed by formulating an ensemble model using hard and soft voting classifiers. Also, the efficiency has been checked by increasing the number of hidden layers of a neural network.
    Keywords: pulmonary embolism; stroke volume; pulse pressure; systolic; diastolic; overfitting; ensemble classifiers; neural network.

  • Tool-based persona for designing user interfaces in healthcare   Order a copy of this article
    by Hanaa Alzahrani, Reem Alnanih 
    Abstract: Technology devices such as smartphones, tablets, and computers have become an intrinsic part of modern life, as this form of technology has entered all businesses and fields, including healthcare. Health sites (HSs) impact healthcare delivery by using technology to improve healthcare outcomes, reduce costs and errors, and increase patient and information safety. Among the available website builders, none has been developed for healthcare sites or designed based on healthcare personas. This is a challenge when designing a specific HS for a particular target group of users such as doctors. System complexity, and the difficulty doctors have in dealing with such systems, made it necessary to consider personas that help to understand the mental language of the target users, making the whole systemic experience quite human. The purpose of this paper is to create a new health site design (HSD) tool for designing a User Interface (UI) based on User Experience (UX). The tool is designed based on doctors' behaviour, personas and real-life scenarios. The applicability of this tool is explored, as well as its usability, especially for those with no background in web design. The tool was tested by participants, from a design perspective, randomly divided into two groups: a control group, who were asked to follow all the instructions in terms of watching and attending the tutorial session and then perform the tasks; and a study group, who were asked to perform the tasks directly. The study results show that there is no significant difference between the participants in the two groups for effectiveness and efficiency; however, for cognitive load, the study group performed better than the control group. All of the participants were able to complete all the tasks successfully with a minimum amount of time, clicks, and errors. In addition, user satisfaction yielded a score of 84.6 on the System Usability Scale (SUS), placing it in the A grade.
    Keywords: health systems design tool; website builders; user experience; persona; experimental design; usability evaluation; system usability scale.

  • RC-DBSCAN: redundancy controlled DBSCAN algorithms for densely deployed wireless sensor network to prolong the network lifespan   Order a copy of this article
    by Tripti Sharma, Amar Mohapatra, Geetam Singh Tomar 
    Abstract: In a wireless sensor network, the nodes are spatially distributed and spread over application-specific experimental fields. The primary role of these nodes is to gather the information for various intended fields such as sound, temperature, vibration, etc. In this proposed algorithm efforts have been made to prolong the network lifespan by decreasing the nodes' energy consumption by considering the critical issues of dense deployment. Every node will limit its chance of participation in any cluster based on the local sensor density. The network area is divided into high- and low-density regions using the DBSCAN algorithm. The nodes in low-density areas are considered critical because there is very little probability for sensing and broadcasting the redundant data by these nodes. The division of high- and low-density regions by applying DBSCAN helps in sleep management. Sleep management helps in energy optimisation in dense areas and thus prolongs network lifetime with the improved stable region. It has been observed through computer simulation that RC-DBSCAN is more energy-efficient than IC-ACO and LEACH in densely deployed network areas in terms of total data packets received by the base station, prolonged network lifespan and improved stability period.
    Keywords: DBSCAN; WSN; fuzzy; sleep management.

  • Coronary artery disease diagnosis using extra tree support vector machine: ET-SVMRBF   Order a copy of this article
    by Pooja Rani, Rajneesh Kumar, Anurag Jain 
    Abstract: Coronary artery disease (CAD) is a type of cardiovascular disease that can lead to cardiac arrest if not diagnosed timely. Angiography is a standard method adopted to diagnose CAD. This method is an invasive method having certain side-effects. So there is a need for non-invasive methods to diagnose CAD using clinical data. In this paper, the authors propose a methodology ET-SVMRBF (Extra Tree SVM-RBF) to diagnose CAD using clinical data. The Z-Alizadeh Sani CAD dataset available on UCI (University of California, Irvine) has been used for validating this methodology. The class imbalance problem in this dataset has been resolved using SMOTE (Synthetic Minority OverSampling Technique). Relevant features are selected using the extra tree feature selection method. The performances of different classifiers XGBoost (Extreme Gradient Boosting), KNN (K-Nearest Neighbour), SVM-Linear (Support Vector Machine-Linear), and SVM-RBF (Support Vector Machine-Radial Basis Function) on the dataset have been evaluated. GridSearch optimisation method was used for hyperparameter optimisation. Accuracy of 95.16% was achieved by ET-SVMRBF, which is higher than recent existing work in the literature.
    Keywords: coronary artery disease; cardiovascular disease; extra tree; support vector machine; XGBoost; K-nearest neighbour.
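    A hedged scikit-learn sketch of the ET-SVMRBF idea follows: extra-trees importances select the strongest features and an RBF-kernel SVM is tuned by grid search. Synthetic balanced data replace the Z-Alizadeh Sani dataset and the SMOTE step so that the snippet runs stand-alone; the feature count, grid and top-k value are assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import ExtraTreesClassifier
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVC

        # Synthetic stand-in for the clinical dataset (the paper uses Z-Alizadeh Sani + SMOTE)
        X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Step 1: extra-trees feature relevance, keep the top-k features
        et = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        top = np.argsort(et.feature_importances_)[::-1][:10]

        # Step 2: RBF-kernel SVM with hyperparameters tuned by grid search
        grid = GridSearchCV(SVC(kernel="rbf"),
                            {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.1]},
                            cv=5).fit(X_tr[:, top], y_tr)
        print("test accuracy:", grid.score(X_te[:, top], y_te))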

  • Prediction of cardiac disease using online extreme learning machine   Order a copy of this article
    by Sulekha Saxena, Vijay Kumar Gupta, P.N. Hrisheekesha, R.S. Singh 
    Abstract: This paper presents an automated machine learning (ML) algorithm to detect coronary conditions such as congestive heart failure (CHF) and coronary artery disease (CAD). The proposed automated ML pipeline combines nonlinear feature extraction methods, an online sequential extreme learning machine (OS-ELM), and linear discriminant analysis (LDA) as well as generalised discriminant analysis (GDA) as feature reduction algorithms. Dimension reduction of the nonlinear features was done by LDA and GDA with a Gaussian or radial basis function (RBF) kernel, and an OS-ELM binary classifier with a Sigmoid, Hardlim or RBF activation function was used to detect CHF and CAD subjects. For training and validation of the ML model, twelve nonlinear features were extracted from heart rate variability (HRV) signals. The standard HRV databases were obtained from normal young and elderly, CHF and CAD subjects. Numerical experiments were carried out on the CAD-CHF, young-elderly-CAD and young-elderly-CHF sets. The simulation results clearly show that when GDA with a Gaussian or RBF kernel is combined with OS-ELM having a Sigmoid, Hardlim or RBF activation function, the proposed scheme achieves better detection performance than OS-ELM alone. To test the robustness of the proposed method, the classification performance, including accuracy, positive predictive value, sensitivity and specificity, was calculated over 100 trials, achieving average accuracy of 99.77% for young-elderly-CAD and 100% overall performance for CAD-CHF and young-elderly-CHF subjects.
    Keywords: Lempel-Ziv; Poincare plot; OSELM; sample entropy; dimension reduction method; detrended fluctuation analysis.

  • Digitisation of paper-ECG using column-median approach   Order a copy of this article
    by Priyanka Gautam, Ramesh Kumar Sunkaria, Lakhan Dev Sharma 
    Abstract: Usually, ECG (electrocardiogram) signals are recorded on standard grid paper to determine the potential of cardiac disorders in hospitals. In the current technological era, existing paper-ECG records need to be converted into digital form, as this is the most effective way to analyse, process, store and communicate attributes of the ECG (features, quality, etc.) for clinical use. The present work introduces a novel technique for the digitisation of paper-ECG, the column-median approach. The paper uses correlation and heart rate as parameters to validate the proposed methodology, and the accuracy of the heart rate is also calculated to observe the precision of the proposed algorithm. The overall correlation and percentage error obtained over 50 different signals are 0.86 and 0.79%, respectively. The overall accuracy obtained for the 50 different ECG signals is 99.21%, which shows that the methodology works effectively.
    Keywords: paper-ECG; column-median approach; biomedical image processing.
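    The phrase "column-median approach" suggests recovering, for each image column, a representative row of the traced waveform. The NumPy sketch below follows that reading on a synthetic binarised scan and converts pixel rows to millivolts under an assumed grid calibration; it is a guess at the general idea, not the authors' algorithm.

        import numpy as np

        # Synthetic binarised paper-ECG scan: True where the trace ink is
        h, w = 200, 1000
        rows = (100 - 40 * np.sin(np.linspace(0, 6 * np.pi, w))).astype(int)  # fake trace
        scan = np.zeros((h, w), dtype=bool)
        for x in range(w):
            scan[rows[x] - 1: rows[x] + 2, x] = True     # make the trace a few pixels thick

        # Column by column: take the median row index of the ink pixels in that column
        signal_px = np.full(w, np.nan)
        for x in range(w):
            ink = np.flatnonzero(scan[:, x])
            if ink.size:
                signal_px[x] = np.median(ink)

        # Convert pixel coordinates to millivolts with assumed calibration
        # (standard ECG paper: 10 mm = 1 mV; here we assume 5 pixels per mm)
        baseline_px, px_per_mm = 100.0, 5.0
        signal_mv = (baseline_px - signal_px) / (px_per_mm * 10.0)
        print("recovered samples:", np.count_nonzero(~np.isnan(signal_mv)))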

Special Issue on: The Significance of Machine Learning for COVID-19

  • Analysis of some topological nodes using the adaptive control based on 9-D, hypothesis theoretical to COVID-19   Order a copy of this article
    by Abdulsattar Abdullah Hamad, M.Lellis Thivagar, K. Martin Sagayam 
    Abstract: This work is an extension of a new model previously proposed by the same authors, in which the Hamiltonian, synchronisation, Lyapunov exponents, equilibrium, and stability of the proposed model were analysed. In this paper we present a broader analysis aimed at developing receiving network nodes faster. The analysis and study demonstrate how to determine the basic structure and content of the system in theory, in an attempt to identify objects that have a fundamental engineering role for the model; after confirming the performance and results, we can suggest the model for determining the spread of coronavirus.
    Keywords: lu; Hamiltonian; synchronisation; Lyapunov expansion; equilibrium; topological nodes.

  • An ensemble approach to forecast COVID-19 incidences using linear and nonlinear statistical models   Order a copy of this article
    by Asmita Mahajan, Nonita Sharma, Firas Husham AlMukhtar, Monika Mangla, Krishna Pal Sharma, Rajneesh Rani 
    Abstract: Coronavirus disease 2019, also known as COVID-19, is currently a global epidemic. The pandemic has affected more than 100 countries all over the globe and is continuously spreading and endangering the human species. Researchers are perpetually trying to discover a permanent antidote for the virus, but at present no particular medication is available. As a result, health sectors worldwide are experiencing an unexpected rise in cases each day. Hence, it becomes necessary to predict the spread of the disease so as to enable public health sectors to improve their control capabilities and mitigate the spread of infections. This manuscript proposes a stacked ensemble model for accurately forecasting future occurrences of COVID-19. The proposed ensemble model uses Exponential Smoothing (ETS), Autoregressive Integrated Moving Average (ARIMA), and Neural Network Autoregression (NNAR) as the base models. Each base model is trained individually on the disease dataset, and their fitted values are then used to train a Multilayer Perceptron (MLP) meta-model. The stacked model gives better predictions than the other four forecasting models considered, and it is validated that the proposed model outperforms the base models. This validation is established through error metrics such as the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The results conclude that the ensemble model is highly robust and reliable in forecasting future COVID incidences in comparison to other statistical time series models.
    Keywords: COVID-19; pandemic; forecasting; ensemble approach; stacking; autoregressive integrated moving average; exponential smoothing; neural network; multilayer perceptron.
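    To show the stacking mechanics (base forecasters whose fitted values train a neural meta-learner), here is a simplified scikit-learn sketch with naive one-step forecasters standing in for ETS, ARIMA and NNAR; the toy incidence curve and model settings are assumptions, so it demonstrates the ensembling pattern only, not the paper's models or data.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        t = np.arange(120)
        cases = 50 * np.exp(0.03 * t) + rng.normal(0, 20, t.size)   # toy incidence curve

        def base_forecasts(series, k):
            """Three simple one-step-ahead base predictions at index k (stand-ins for ETS/ARIMA/NNAR)."""
            naive = series[k - 1]                                          # last value carried forward
            drift = series[k - 1] + (series[k - 1] - series[k - 8]) / 7.0  # weekly drift
            sma = series[k - 7:k].mean()                                   # 7-day moving average
            return [naive, drift, sma]

        # Build the meta-training set: base predictions -> observed next value
        X = np.array([base_forecasts(cases, k) for k in range(8, 110)])
        y = cases[8:110]
        meta = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

        # Stacked one-step-ahead forecasts on the held-out tail
        X_test = np.array([base_forecasts(cases, k) for k in range(110, 120)])
        print("stacked forecasts:", np.round(meta.predict(X_test), 1))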

  • Simple program for computing objective optical properties of magnetic lenses   Order a copy of this article
    by R.Y. J. Al-Salih, Abdullah E. M. AlAbdulla, Ezaldeen Mahmood Abdalla Alkattan 
    Abstract: This paper describes the basic features of a new program named MELOP (Magnetic Electron Lens Optical Properties), primarily intended to provide a free and simple way to calculate the objective focal properties of rotationally symmetric electron magnetic lenses in the presence of an axial magnetic field distribution. The calculation is done by solving the paraxial ray equation using the fourth-order Runge-Kutta formula. For a specific beam voltage, the program computes the excitation parameter, the object or image plane, the objective principal plane, the objective focal length, the objective magnification, the spherical aberration coefficient, the chromatic aberration coefficient, and the magnetic flux density at the object or image plane. These parameters are solved for the zero, low, high or infinite magnification condition. The program can instantly plot the relative variations of the calculated parameters, and the data can be exported in xlsx or txt file format.
    Keywords: electron lenses design; electron objective focal properties; fourth-order Runge-Kutta formula.
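    A minimal sketch of the numerical core the abstract describes: fourth-order Runge-Kutta integration of the rotationally symmetric paraxial ray equation r'' + (e/(8mV)) B(z)^2 r = 0 through an assumed Glaser bell-shaped field, reading the focal length off a ray that enters parallel to the axis. The beam voltage, field model and excitation are illustrative and the relativistic correction is omitted; this is not MELOP itself.

        import numpy as np

        E_OVER_8M = 1.602e-19 / (8 * 9.109e-31)   # e/(8m) for the electron, C/kg
        V = 100e3                                  # beam voltage in volts (relativistic correction omitted)
        B0, a = 0.3, 5e-3                          # Glaser bell field: peak flux density (T) and half-width (m)

        def B(z):
            return B0 / (1.0 + (z / a) ** 2)       # assumed axial field model

        def deriv(z, y):
            r, rp = y
            return np.array([rp, -E_OVER_8M / V * B(z) ** 2 * r])   # paraxial ray equation r'' = -k B^2 r

        def rk4_ray(z0, z1, n=4000):
            """Trace a ray entering parallel to the axis (r=1, r'=0) with classical RK4."""
            h = (z1 - z0) / n
            z, y = z0, np.array([1.0, 0.0])
            zs, rs = [z], [y[0]]
            for _ in range(n):
                k1 = deriv(z, y)
                k2 = deriv(z + h / 2, y + h / 2 * k1)
                k3 = deriv(z + h / 2, y + h / 2 * k2)
                k4 = deriv(z + h, y + h * k3)
                y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
                z += h
                zs.append(z); rs.append(y[0])
            return np.array(zs), np.array(rs), y

        zs, rs, (r_end, rp_end) = rk4_ray(-5 * a, 5 * a)
        cross = zs[np.argmax(rs < 0)]              # first axis crossing marks the image-side focus
        f = -1.0 / rp_end                          # asymptotic focal length for a unit parallel ray
        print(f"axis crossing at z = {cross*1e3:.2f} mm, focal length ~ {f*1e3:.2f} mm")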

  • The impact of oil exports on consumer imports in the Iraqi economy during the COVID-19 period: a theoretical study   Order a copy of this article
    by Mustafa Kamil Rasheed, Ali Mahdi Abbas Al-Bairmani, Abir Mohammed Jasim Al-Hussaini 
    Abstract: Exports and imports of foreign trade are widely considered the most important contributors to the economic development of society. In particular, when the potential and competitiveness of exports are realised, the result is an import capacity that supports growth and balance in all economic sectors. Foreign exchange revenues come with increasing exports and tend to finance investment projects, as well as encourage the importation of advanced means of production that contribute to increased productivity and economic efficiency; however, this is rarely achieved in developing countries, including Iraq. In spite of the high volume of oil exports, a large proportion of the revenues from these exports goes towards importing consumer goods and hence does not create a stimulating environment for production and investment. On the contrary, it stimulates the investment multiplier in the exporting partner countries, which stimulates their investment activity. The hypothesis of this study is that the direct relationship between oil exports and consumer imports disrupts the economy and output and weakens its performance. The most important finding of the study is that oil exports in Iraq are directly linked with consumer imports, which leads to the loss of the Iraqi economy's financial resources for stimulating economic activity. The study recommends adopting economic diversification to overcome the one-sided nature of the Iraqi economy, as well as the optimal use of financial resources to support the national economy.
    Keywords: oil exports; consumer imports; total exports; production activities; COVID-19.

  • Evaluation of the impact parameters of nano Al2O3 dielectric in wire cut electrical discharge machining in the COVID-19 environment   Order a copy of this article
    by Farook Nehad Abed, Azwan Bin Sapit, Saad Kariem Shather 
    Abstract: This paper focuses on wire electric discharge machining in the COVID-19 environment and can be considered an attempt to develop models of the response variables. Different dielectric liquids are used, one of which contains Al2O3 nanoparticles at a ratio of 2 mg, and the basis of comparison in both cases is the material removal rate in the wire electric discharge machining process, modelled using response surface methodology (RSM). The experimental plan is based on the Box-Behnken design, and the study considers six main parameters. To evaluate the adequacy of the developed model, ANOVA was applied; the test results support the validity and suitability of the developed RSM model. The optimum parameter settings improve work safety in the COVID-19 environment.
    Keywords: wire electric discharge machine; titanium; MRR; RSM; COVID-19.

  • Analysis of convolutional recurrent neural network classifier for COVID-19 symptoms over computerised tomography images   Order a copy of this article
    by Srihari Kannan, N. Yuvaraj, Barzan Abdulazeez Idrees, P. Arulprakash, Vijayakumar Ranganathan, Udayakumar E., P. Dhinakar 
    Abstract: In this paper, a Convolutional Recurrent Neural Network (CRNN) model is designed to classify patients with COVID-19 infection from Computerised Tomography (CT) images. The CRNN pipeline combines input image processing and feature extraction using a CNN with prediction by an RNN, which speeds up the entire process (a schematic sketch follows this entry). The simulation is carried out with a set of 226 CT images, varying the training-testing ratio under 10-fold cross-validation. The accuracy in classifying the image samples increases with more training data. The simulation results show that the proposed method achieves higher accuracy and lower MSE with more training data than other methods.
    Keywords: image classification; COVID-19; medical imaging; convolutional recurrent neural network; 10-fold cross-validation.
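
    A schematic PyTorch sketch of the CNN-plus-RNN arrangement described above (a small CNN extracts spatial features that are read as a sequence by a GRU); the input size, channel counts and training procedure are assumptions, not the authors' architecture:

      import torch
      import torch.nn as nn

      class CRNN(nn.Module):
          """Schematic CNN feature extractor followed by an RNN and a classifier head."""
          def __init__(self, n_classes=2):
              super().__init__()
              self.cnn = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.rnn = nn.GRU(input_size=32 * 56, hidden_size=64, batch_first=True)
              self.fc = nn.Linear(64, n_classes)

          def forward(self, x):                      # x: (batch, 1, 224, 224) CT slice
              f = self.cnn(x)                        # (batch, 32, 56, 56)
              f = f.permute(0, 3, 1, 2).flatten(2)   # treat width as a 56-step sequence
              _, h = self.rnn(f)                     # h: (1, batch, 64)
              return self.fc(h.squeeze(0))           # class logits

      logits = CRNN()(torch.randn(4, 1, 224, 224))   # e.g. COVID-19 vs. non-COVID
      print(logits.shape)                            # torch.Size([4, 2])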

  • An empirical validation of learning from home: a case study of COVID-19 catalysed online distance learning in India and Morocco   Order a copy of this article
    by Gabriel A. Ogunmola, Wegayehu Enbeyele, Wissale Mahdaoui 
    Abstract: The world as we know it has changed over a short period of time with the rise and spread of the deadly novel coronavirus known as COVID-19, and will never be the same again. This study explores the devastating effects of the pandemic and the resulting lockdown, and hence the need to transform the offline classroom into an online classroom. It explores and describes the numerous online teaching platforms, study materials, techniques and technologies being used to ensure that educating students does not stop. Furthermore, it identifies the platforms and technologies that can be used to conduct online examinations in a safe environment devoid of cheating. Additionally, it explores the challenges facing the deployment of online teaching methods. On the basis of a literature review, a framework is proposed to deliver a superior online classroom experience for students, so that the online classroom is as effective as, or even better than, the offline classroom. The identified variables were empirically tested with the aid of a structured questionnaire; 340 respondents were purposively sampled. The results indicate that students prefer online teaching when such sessions are enhanced with multimedia presentations. The study recommends that instructors be trained in the use of technology-enhanced learning if learning from home is to be effective.
    Keywords: COVID-19; online classroom; Zoom; lockdown; MOOC; iCloud; proportional odds model.

  • An empirical study on social contact tracing of COVID-19 from a classification perspective   Order a copy of this article
    by Mohammed Gouse Galety, Elham Tahsin Yasin, Abdellah Behri Awol, Lubab Talib 
    Abstract: The staggering COVID-19 emergency is a pandemic that cannot be resisted while there is no vaccine or cure. This pressing issue requires preventive controls through the creation of awareness and the implementation of a contact tracing process. The purpose of contact tracing is to determine the infected individual, or the people who have had contact with infected people, so that they can be listed and dealt with carefully. Contact tracing is used to reduce infections by describing the infectious disease in detail and to limit the spread of the infection through precautionary measures and awareness creation. Awareness creation demands various tools, and social media networking, which reaches much of the world's population, provides data that can be observed, analysed and interpreted with the support of classifiers and statistical learning methods of Artificial Intelligence (AI). This paper derives the available data, with its analysis, and performs widespread contact tracing through the use of a social media dataset; statistical learning methods of AI are applied to identify COVID-19 infections and to infer the adequate actions and preventive measures needed to reduce the growth of COVID-19 infections through the controller segment of the said method.
    Keywords: COVID-19; infection; preventive controls; awareness creation; social media; contact tracing; artificial intelligence.

  • Analysis of the COVID-19 pandemic and forecasting using machine learning models   Order a copy of this article
    by Ekansh Chauhan, Manpreet Sirswal, Deepak Gupta, Ashish Khanna, Aditya Khamparia 
    Abstract: The coronavirus pandemic is rapid and universal, menacing thousands of lives and all economies. A full analysis of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is imperative as a deciding factor for remedial actions. Machine learning is being used in every sphere to fight the coronavirus, be it understanding the biology of the virus in time, diagnosing patients, or drug and vaccine development. It is also critical to predict the pandemic lifetime so as to decide on opportune and remedial activities. Being able to accurately forecast the fate of an epidemic is a critical but difficult task. In this paper, based on public data available for the world and India, the estimation of pandemic parameters and a ten-days-ahead forecast of coronavirus cases are proposed using Prophet, Polynomial Regression, Auto ARIMA and Support Vector Machine (SVM) (a brief sketch of the polynomial regression forecast follows this entry). The performances of all the models were motivating, and the MAE and RMSE of polynomial regression and SVM were convincingly low. Polynomial regression predicted the highest number of cases for India and the lowest number of cases for the world, which indicates that, according to polynomial regression, daily cases are going to spike in India and decline slightly in the world. Prophet forecast the lowest number of cases for India and the highest number of cases for the world, after SVM. The results of ARIMA are closest to the average of the combined results of the four models. The main limitation is the lack of sufficient data, which creates high uncertainty in the forecast. The four factors, i.e. growth factor, growth ratio, growth rate and the second derivative of the growth of coronavirus cases, in the USA and India are also calculated and compared. Several theories revolving around the origin of the coronavirus are also discussed in this paper. Under optimistic predictions, the results show that the pandemic in some countries is going to terminate soon, while in others it is going to increase at an alarming rate, and the overall rate of growth of coronavirus cases is decreasing in both the USA and India.
    Keywords: COVID-19; machine learning; novel coronavirus; classification; technology.
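
    A minimal sketch of one of the four models (polynomial regression) extrapolating a synthetic cumulative case series ten days ahead with scikit-learn; Prophet, Auto ARIMA and SVM are set up analogously and are not shown:

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      # Hypothetical cumulative confirmed cases for 90 days (synthetic stand-in data).
      days = np.arange(90).reshape(-1, 1)
      cases = 50 * np.arange(90) ** 2 + np.random.normal(0, 500, 90)

      model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
      model.fit(days, cases)

      future = np.arange(90, 100).reshape(-1, 1)      # ten days ahead
      print(model.predict(future).round())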

  • A statistical analysis for COVID-19 as a contact tracing approach and social networking communication management   Order a copy of this article
    by Abdulsattar A. Hamad, Anasuya Swain, Suneeta Satpathy, Saibal Dutta 
    Abstract: The COVID-19 outbreak caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been declared a global pandemic. The first case of coronavirus was detected in Wuhan city of China, and the outbreak was later declared a pandemic by the World Health Organization. As of the first week of August 2020, more than 20 million cases of COVID-19 had been reported globally, resulting in more than 700,000 deaths, while around 12 million people had recovered. The medium of spread of the viral infection is the droplets produced from the nose and mouth by coughing, sneezing and talking, or small droplets that hang in the air. COVID-19 as yet has no vaccine or medication. Preventive measures for this infectious disease may include creating awareness and implementing a contact tracing process. The process of contact tracing is to determine the infected person, or the people who have had contact with infected people, so that they can be listed and treated carefully. The basic aim is to reduce infections through a detailed description of COVID-19 and to minimise the spread of the infection by creating awareness. Awareness creation demands the adoption of different tools, among which social media networking is considered a fruitful medium. Further, different classifiers and statistical learning methods are also used to analyse and interpret the social media networking data. This research study has derived the available information through statistical learning methods of artificial intelligence and successful contact tracing using social media datasets to determine the infected COVID-19 data. In addition, this research also infers the adequate course of action and preventive measures for reducing the growth of COVID-19 infections with the help of the controller's segment of the said method. The present work has also adopted the Natural Language Processing (NLP) method as an aid to process the social network data and find solutions to the inquiries. In addition, the work validates the relationship between social networking and the employment of artificial intelligence techniques as a contact tracing and awareness programme with the help of statistical tools such as regression, correlation coefficients and ANOVA. The main objective of the study is to reduce pandemic infections by spreading awareness and generating detailed descriptive reports about COVID-19 using social media networking as well as artificial intelligence statistical learning methods.
    Keywords: COVID-19; infection; preventive controls; awareness creation; social media; contact tracing; artificial intelligence.

  • The degree of applying electronic learning in the Gifted School of Nineveh in Iraq, and what management provided to the students and its relationship to qualitative education during the COVID-19 pandemic   Order a copy of this article
    by Ahmed S. Al-Obeidi, Nawar A. Sultan, Anas R. Obaid, Abdulsattar A. Hamad 
    Abstract: This paper discusses the most important pillars of e-learning and the distance learning process in the Gifted School of Nineveh. Through this study, we were able to identify the methods of conducting distance education under the information technology system, and the work and learning environment used in e-learning. Increasing the efficiency of the educational institution through distance e-learning is considered, as are the basics of building an e-learning system in various educational institutions. The types of program, and the best-known of them applied to e-learning in educational institutions in general and in the Gifted School in particular, are also discussed, and a comparison is made between the two in terms of the method and accuracy of using these programs.
    Keywords: electronic learning; Gifted School of Nineveh; COVID-19; distance education; hypothetical education.

  • Design and analysis on the molecular level of a biomedical event trigger extraction using recurrent neural network based particle swarm optimisation for COVID-19 research   Order a copy of this article
    by R.N. Devendra Kumar, Arvind Chakrapani, Srihari Kannan 
    Abstract: In this paper, rich extracted feature sets are fed to a deep learning classifier that estimates the optimal extraction of lung-molecule-triggered events for COVID-19 infections. Feature extraction is carried out using a Recurrent Neural Network (RNN), which effectively extracts features from the rich datasets. Secondly, a Particle Swarm Optimisation (PSO) algorithm is used to classify the extracted features of COVID-19 infections. The rule set for the feature extractor is supplied by fuzzy logic. The simulations show that RNN-PSO, the combination of the two algorithms, offers improved performance over other machine learning classifiers.
    Keywords: event triggers; COVID-19; lung molecules; feature extraction; classification; particle swarm optimisation; recurrent neural network.

  • Multivariate economic analysis of the government policies and COVID-19 on the financial sector   Order a copy of this article
    by Monika Mangla, Nonita Sharma, Sourabh Yadav, Vaishali Mehta, Deepti Kakkar, Prabakar Kandukuri 
    Abstract: The whole world is experiencing a sudden pandemic outbreak of COVID-19. In the absence of any specific treatment or vaccine, social distancing has proved to be an effective strategy in containing the outbreak. However, this has disrupted trade, travel and commerce by halting manufacturing industries, closing corporate offices, and stopping all other sundry activities. The alarming pace of the virus spread and the increased uncertainty are quite concerning to the leading financial stakeholders, and have led customers, investors and foreign trading partners to flee from new investments. Global markets plummeted, leading to an erosion of more than US$6 trillion within just one week from 24 to 28 February 2020. During the same week, the S&P 500 index alone experienced a loss of more than $5 trillion in the USA, while the other top 10 companies in the S&P 500 suffered a combined loss of more than $1.4 trillion. This manuscript performs a multivariate analysis of the financial markets during the COVID-19 period and thus relates its impact to the worldwide economy. An empirical evaluation of the effect of containment policies on financial activity, stock market indices, the purchasing managers' index and commodity prices is also carried out. The obtained results reveal that the number of lockdown days, fiscal stimulus and overseas travel bans significantly influence the level of economic activity.
    Keywords: coronavirus; COVID-19; financial sector; forecasting; multivariate analysis; NIFTY indices; pandemic; regression model; stringency index.

  • COVID-19 suspected person detection and identification using thermal imaging based closed circuit television camera and tracking using drone in internet of things   Order a copy of this article
    by Pawan Singh Mehra, Yogita Bisht Mehra, Arvind Dagur, Anshu Kumar Dwivedi, M.N. Doja, Aatif Jamshed 
    Abstract: COVID-19 has emerged as a worldwide health concern, with human-to-human transmission and an incubation period of 2-10 days. It is spread by droplets and by contaminated surfaces such as hands. The only way to detect a person suspected of being infected with COVID-19 without a COVID-19 testing kit is through a thermal scanner. Since the disease is spreading at a rapid rate, not only is it very hard to check or scan every individual manually, but there is also a risk of transmission of COVID-19 to unsuspecting persons. In this paper, we propose a system in which a person suspected of COVID-19 can be easily detected and identified using thermal-imaging-based closed circuit television (CCTV), which automatically scans the people in the vicinity and captures a video/image of the suspected person. The system raises an alarm in the vicinity so that people in that area can distance themselves from each other. The recorded video/image is forwarded to a base station, and information about the suspected person is fetched from the server. Meanwhile, drones are used to track the suspected person until the nodal medical team diagnoses the suspected person for confirmation. The proposed system can contribute significantly to curbing the rate of COVID-19 infection and preventing further spread of this pandemic disease.
    Keywords: coronavirus; COVID-19; face recognition; drone; internet of things; automation; deep learning.

  • Machine learning based classification: an analysis based on COVID-19 transmission electron microscopy images   Order a copy of this article
    by Kalyan Kumar Jena, Sourav Kumar Bhoi, Soumya Ranjan Nayak, Chinmaya Ranjan Pattanaik 
    Abstract: A virus is a type of microorganism which has an adverse effect on human society. Viruses replicate themselves within human cells rapidly. Currently, the effects of very dangerous infectious viruses are a major issue throughout the globe. Coronavirus (CV) is considered one of the most dangerous infectious viruses for the entire world. It is therefore very important to detect and classify this type of virus at an early stage so that preventive measures can be taken as soon as possible. In this work, a machine learning (ML) based approach is used for the type classification of CV, namely alpha CV (ACV), beta CV (BCV) and gamma CV (GCV). The ML-based approach mainly focuses on several classification techniques, such as support vector machine (SVM), Random Forest (RF), AdaBoost (AB) and Decision Tree (DT), applied to several CV images (CVIs) (a brief scikit-learn sketch follows this entry). The performance of these techniques is analysed using classification accuracy as the performance metric. The simulation of this work is carried out using Orange3-3.24.1.
    Keywords: COVID-19; machine learning; TEM CVIs; support vector machine; random forest; AdaBoost; decision tree.
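
    A brief scikit-learn sketch comparing the four classifiers named above by cross-validated accuracy; the feature vectors are random stand-ins for descriptors extracted from the TEM coronavirus images, and Orange3 is not used here:

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC
      from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic stand-in features for ACV/BCV/GCV image descriptors (3 classes).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(150, 64))
      y = rng.integers(0, 3, size=150)

      models = {
          "SVM": SVC(kernel="rbf"),
          "RF": RandomForestClassifier(n_estimators=100),
          "AdaBoost": AdaBoostClassifier(n_estimators=100),
          "DT": DecisionTreeClassifier(),
      }
      for name, clf in models.items():
          acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
          print(f"{name}: {acc:.3f}")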

  • Gradient and statistical features based prediction system for COVID-19 using chest X-ray images   Order a copy of this article
    by Anurag Jain, Shamik Tiwari, Tanupriya Choudhury, Bhupesh Kumar Dewangan 
    Abstract: As per data available on the WHO website, the count of COVID-19 patients on 20 June 2020 had surpassed 8.7 million globally and around 460,000 had lost their lives. The most common diagnostic test for COVID-19 detection is the Polymerase Chain Reaction (PCR) test. In highly populated developing countries such as Brazil and India, there has been a severe shortage of PCR test kits. Furthermore, the PCR test is very specific but has low sensitivity, which implies that the test can be negative even when the patient is infected; moreover, it is expensive. While efforts to intensify the volume and accuracy of PCR testing are in progress, medical practitioners are trying to develop alternative systems using medical imaging in the form of chest radiography or CT scans. In this research work, we have preferred chest X-rays for COVID-19 detection owing to the wide availability of chest X-ray infrastructure all over the world. We have designed a decision support system based on statistical features and edge maps of X-ray images to detect the COVID-19 virus in a patient (a small feature-extraction sketch follows this entry). Openly available online datasets of chest X-ray images have been used to train and test decision tree, K-nearest neighbour, random forest and multilayer perceptron machine learning classifiers. From the experimental results, it has been found that the multilayer perceptron achieved 94% accuracy, the highest among the four classifiers.
    Keywords: COVID-19; chest X-ray; statistical features; image gradient; random forest; KNN; multilayer perceptron; decision tree.
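
    A small sketch of the kind of feature vector described above (intensity statistics plus Sobel edge-map statistics) feeding an MLP classifier; the images and the six features chosen are illustrative stand-ins, not the authors' feature set:

      import numpy as np
      from scipy import ndimage
      from sklearn.neural_network import MLPClassifier

      def features(img):
          """Intensity statistics plus gradient (edge-map) statistics for one X-ray."""
          gx, gy = ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)
          edge = np.hypot(gx, gy)
          return [img.mean(), img.std(), np.percentile(img, 90),
                  edge.mean(), edge.std(), edge.max()]

      # Synthetic stand-ins for normal vs. COVID-19 chest X-rays.
      rng = np.random.default_rng(1)
      imgs = rng.random((60, 128, 128))
      labels = rng.integers(0, 2, 60)

      X = np.array([features(im) for im in imgs])
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, labels)
      print("training accuracy:", clf.score(X, labels))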

  • Indian COVID-19 time series prediction using Facebook's Prophet model   Order a copy of this article
    by Mamata Garanayak, Goutam Sahu, Mohammad Gouse Baig, Sujata Chakravarty 
    Abstract: The entire world has been facing an unprecedented public health crisis due to the COVID-19 pandemic for the last year. Meanwhile, more than one million people across the world have already died and many more millions are under treatment. Some countries in Europe have begun to experience a second wave of the pandemic. This has put the entire health infrastructure of countries under severe strain and has led to a downward spiral in the economy. The most worrisome part is the uncertainty as to the spread or arrest of the pandemic. In such a scenario, robust forecasting methods are needed to enable health professionals and governments to make the necessary preparations for the situation. Artificial intelligence and machine learning techniques are useful tools not only for the collection of accurate data but also for prediction. Studies show that time series forecasting techniques, such as Facebook's Prophet, have produced promising results. In this paper, time series techniques are used to forecast the numbers of deaths, recoveries and positive cases 60 days ahead (a brief Prophet sketch follows this entry). The experimental results demonstrate that machine learning techniques can be beneficial in forecasting the behaviour of the pandemic.
    Keywords: machine learning; Prophet; COVID-19; time series; coronavirus; prediction.
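
    A minimal sketch of a 60-day-ahead Prophet forecast, assuming the open-source prophet package and a dataframe of daily counts with the ds/y columns Prophet expects; the series shown is a placeholder for the real Indian case data:

      import pandas as pd
      from prophet import Prophet   # pip install prophet

      # Hypothetical daily confirmed-case series (replace with the real dataset).
      df = pd.DataFrame({
          "ds": pd.date_range("2020-03-01", periods=200, freq="D"),
          "y": range(200),
      })

      m = Prophet(daily_seasonality=False)
      m.fit(df)
      future = m.make_future_dataframe(periods=60)      # 60 days ahead
      forecast = m.predict(future)
      print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())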

  • Transmission dynamics of COVID-19 outbreak in India and effectiveness of self-quarantine: a phase-wise data-driven analysis   Order a copy of this article
    by Sahil Khan, Md. Wasim Khan, Narendra Kumar, Ravins Dohare, Shweta Sankhwar 
    Abstract: The novel coronavirus disease referred to as COVID-19 was declared a pandemic by the World Health Organization. During this pandemic, more than 988,172 lives had been lost and approximately 7,506,090 active cases had been recorded across the world by 25 September 2020. To predict the transmission dynamics of the novel coronavirus in India, the SQEIHDR mathematical model is proposed. The model is an extension of the basic SEIR mathematical model with additional compartments: self-quarantine (Q), isolation (H) and deceased (D), which help to describe the COVID-19 outbreak in India in a more realistic way and are expected to suppress the rise in transmission (an illustrative compartmental sketch follows this entry). The SQEIHDR model's simulation comprises ten phases (phases 0-9) with different COVID-19 preparedness and response plans. The simulation results show significant changes in the curve of the infected population based on variation in compartment Q, which reveals the efficacy of the imposed as well as proposed preparedness and response plans. The results under different preparedness and response plans highlight the key to reducing the outbreak, i.e. the rate of self-quarantine (Q), which includes general awareness, social distancing and food availability.
    Keywords: COVID-19; mathematical modelling; self-quarantine; transmission dynamics; preparedness; response plan.
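
    The abstract does not give the SQEIHDR equations, so the following is only a toy SEIR-with-quarantine compartmental sketch integrated with scipy's odeint; the compartments, rate constants and initial state are illustrative assumptions, not the authors' model:

      import numpy as np
      from scipy.integrate import odeint

      def sqeir(y, t, beta, q, sigma, gamma, mu):
          """Toy SEIR variant with a self-quarantined compartment Q (illustrative only)."""
          S, Q, E, I, R, D = y
          N = S + Q + E + I + R
          dS = -beta * S * I / N - q * S          # susceptibles infected or self-quarantined
          dQ = q * S                              # self-quarantine removes contacts
          dE = beta * S * I / N - sigma * E
          dI = sigma * E - gamma * I - mu * I
          dR = gamma * I
          dD = mu * I
          return dS, dQ, dE, dI, dR, dD

      t = np.linspace(0, 180, 181)                         # days
      y0 = (1.3e9, 0, 100, 10, 0, 0)                       # illustrative initial state
      out = odeint(sqeir, y0, t, args=(0.4, 0.01, 1 / 5.2, 1 / 10, 0.002))
      print("peak active infections ~", int(out[:, 3].max()))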

  • COVID-19 outbreak in Orissa: MLR and H-SVR based modelling and forecasting   Order a copy of this article
    by Satyabrata Dash, Hemraj Saini, Sujata Chakravarty 
    Abstract: WHO declared COVID-19 a pandemic in early March 2020, and by June it had become a severe threat to the human community in almost every country. The present situation throughout the world is very tense and puts everyone at high risk of infection, which further leads to a high mortality rate. The related research community is using technology to try to identify when the pandemic might stop and the world become healthy again. Therefore, in this study, an attempt has been made to analyse and predict the COVID-19 outbreak using Multiple Linear Regression (MLR) and Support Vector Regression (SVR) (a brief sketch comparing the two follows this entry). In this comparative analysis, MLR outperforms SVR. Hence, MLR can be used to predict the COVID-19 outbreak in real-life applications.
    Keywords: novel coronavirus; COVID-19; linear multiple regression; support vector regression.
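
    A minimal sketch comparing Multiple Linear Regression and Support Vector Regression on lagged values of a synthetic case series with scikit-learn; the lag structure and hyperparameters are assumptions, not the authors' setup:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.svm import SVR
      from sklearn.metrics import mean_squared_error

      # Hypothetical daily case counts; predict today from the previous three days.
      cases = np.cumsum(np.random.poisson(40, 120)).astype(float)
      X = np.column_stack([cases[i:i - 3] for i in range(3)])   # lags t-3, t-2, t-1
      y = cases[3:]
      X_tr, X_te, y_tr, y_te = X[:90], X[90:], y[:90], y[90:]

      for name, model in [("MLR", LinearRegression()), ("SVR", SVR(kernel="rbf", C=100))]:
          model.fit(X_tr, y_tr)
          rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
          print(name, "RMSE:", round(rmse, 2))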

  • Prediction of COVID-19 epidemic curve in India using the supervised learning approach   Order a copy of this article
    by Shweta Mongia, N. Jaisankar, Sugandha Sharma, Manoj Kumar, Vasudha Arora, Thompson Stephan, Achyut Shankar, Pragya Gupta, Raghav Kachhawaha 
    Abstract: The COVID-19 pandemic, a neo zoonotic infectious disease, has caused high mortality worldwide. The need of the hour is to equip the governments with early detection, prevention, and mitigation of such contagious diseases. In this paper, a supervised learning approach of the polynomial regression model is used for the prediction of COVID-19 cases in terms of the number of Confirmed Cases (CC), Death Cases (DC), and Recovered Cases (RC) in India. As per the prediction model, the epidemic curve will reach its peak on 31 May 2020 when the predicted number of CC (148,276) will be almost equal to the sum of the number of DC (35,050) and RC (114,718) i.e. 149,768. This research is based on the data available till 25 April 2020 considering a strong preventive measure of nation-wide lockdown in India since 24 March 2020. Authors have also predicted death rates and recovery rates. As of 25 April 2020, the death rate stands at 3.068% and the predicted death rate for 1 June 2020 is 2.558%. The recovery rate on 25 April 2020 is 21.97% and it is predicted that by 1 June 2020 this rate will increase to 79%. In addition to this, the approach projected a monthly percentage increase in the number of CC from 1 May 2020 to 1 December 2020. This analysis would help and enable the concerned authorities in bringing effective preventive measures into action in the process of decision making.
    Keywords: supervised learning; polynomial regression model; COVID-19; prediction; epidemic curve.

Special Issue on: Signal and Information Processing in Sensor and Transducer Systems

  • Identification of Hammerstein-Wiener nonlinear dynamic models using conjugate gradient based iterative algorithm   Order a copy of this article
    by Xiangli Li, Lincheng Zhou 
    Abstract: This paper mainly studies the identification of a class of nonlinear dynamic models with Hammerstein-Wiener nonlinearity. Firstly, a special form of the Hammerstein-Wiener polynomial model is constructed by using the key term decomposition technique to separate the model parameters to be estimated. On this basis, a conjugate gradient based iterative (CGI) algorithm is proposed, which computes a new conjugate vector along the conjugate direction in each iteration step. Because the search directions of the CGI algorithm are conjugate with respect to the Hessian matrix of the cost function, the CGI algorithm generally obtains faster convergence rates than the gradient based iterative algorithm (a generic sketch follows this entry). Finally, numerical examples are given to demonstrate the effectiveness of the proposed algorithm.
    Keywords: Hammerstein-Wiener model; conjugate gradient; key term decomposition; Hessian matrix; parameter estimation.
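
    A generic sketch of a conjugate gradient based iteration minimising a quadratic least-squares cost over a linear-in-parameters regression form; the key term decomposition that puts the Hammerstein-Wiener model into this form, and the paper's exact update, are not reproduced:

      import numpy as np

      def cg_identify(Phi, y, iters=50):
          """Conjugate-gradient minimisation of J(theta) = ||y - Phi theta||^2.
          Search directions are conjugate w.r.t. the Hessian H = Phi^T Phi."""
          H, b = Phi.T @ Phi, Phi.T @ y
          theta = np.zeros(Phi.shape[1])
          r = b - H @ theta                     # negative gradient direction
          d = r.copy()
          for _ in range(iters):
              alpha = (r @ r) / (d @ H @ d)     # exact line search along d
              theta += alpha * d
              r_new = r - alpha * (H @ d)
              beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves style update
              d = r_new + beta * d
              r = r_new
          return theta

      # Illustrative identification of a linear-in-parameters model.
      rng = np.random.default_rng(0)
      Phi = rng.normal(size=(500, 6))                     # regressor matrix (key terms)
      theta_true = np.array([0.8, -0.5, 0.3, 1.2, -0.7, 0.1])
      y = Phi @ theta_true + 0.01 * rng.normal(size=500)
      print(cg_identify(Phi, y).round(3))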

  • Multi-sensor temperature and humidity control system of wine cellar based on cooperative control of intelligent vehicle and UAV   Order a copy of this article
    by Yufan Wang 
    Abstract: Red wine has extremely strict requirements on its fermentation and long-term storage environment, and rapid changes in cellar temperature will greatly damage the taste of the wine. At present, wine cellars at home and abroad usually adopt purely manual management or lay a large number of sensors for monitoring to solve such problems. However, in the face of fire risks caused by specialised laboratories, excessive costs and a large number of ageing production lines, the needs of wine cellar managers clearly cannot be met. This paper designs and completes a multi-point temperature and humidity acquisition and adjustment sensor system for the wine cellar under the collaborative control of a smart car and a UAV. The system consists of four independent parts: the intelligent patrol car terminal, the four-rotor UAV auxiliary terminal, the handheld terminal and the intelligent temperature control device terminal, which cooperate with each other to monitor and control the environment under closed conditions. Compared with traditional temperature and humidity collection methods, the system uses the UAV and the smart car to collect data, which greatly improves the efficiency and accuracy of data collection. The system is equipped with low-power autonomous charging to realise unmanned management. At the same time, the administrator can view and intervene in the real-time changes of the indoor environment through the handheld terminal to achieve human-computer interaction. Experimental tests show that the system has strong robustness and adaptability, is accurate, intelligent and efficient, and saves a great deal of manpower and material resources.
    Keywords: temperature and humidity acquisition sensor system; DHT11; cooperation; human-computer interaction.

  • Simulation study on identification technology of transmission line potential hazards based on corona discharge characteristics   Order a copy of this article
    by Wei Liu 
    Abstract: The prevention and control of transmission line potential hazards is the guarantee of safe and reliable operation of power grids. At present, the prevention and control of line potential hazards is still based on manual inspection, which suffers from low efficiency and poor reliability. Based on corona discharge theory and experimental simulation, this paper studies the fingerprint characteristics of line discharge signals caused by tree barriers, bird damage and insulator pollution, and puts forward a method of line potential hazard detection and fault identification based on discharge characteristics. The results show that the development of a line potential hazard leads to a discharge process, and that the discharge characteristics of different types of potential hazard differ markedly. The differences are mainly reflected in the main wave width and the discharge repetition rate, which can be used to identify the type of potential hazard.
    Keywords: overhead line; potential hazard; corona discharge; simulation analysis; identification.

  • Development and application of PD spatial location system in distributing substation   Order a copy of this article
    by Fang Peng, Hong-yu Zhou, Xiao-ming Zhao 
    Abstract: Partial discharge is an important cause of insulation deterioration in distribution network equipment. Owing to the variety of distribution network equipment, locating the discharge source has always been a technical difficulty in engineering. In this paper, through research on UHF PD spatial location technology, a system for the spatial location of discharge sources in distribution rooms is developed. The UHF sensor, the acquisition and processing module, and the analysis and diagnosis module for locating discharge sources in the distribution room are designed. The results of laboratory tests and actual operation show that the system has high detection sensitivity, high location accuracy and high operational reliability. It can be used for the effective monitoring and timely warning of PD defects in distribution rooms, which helps to improve the power supply reliability of the distribution network system.
    Keywords: distributing substation; partial discharge; spatial location; sensor; online monitoring.

  • Feature matching for multi-beam sonar image sequence using KD-Tree and KNN search   Order a copy of this article
    by Jue Gao 
    Abstract: Feature matching for image sequences generated by multi-beam sonar is a critical step in widespread applications such as image mosaicking, image registration, motion estimation and object tracking. In many cases, feature matching is accomplished by nearest-neighbour search on extracted features, but the global search adopted brings a heavy computational burden. Furthermore, sonar imaging characteristics such as low resolution, low SNR, inhomogeneity, point-of-view changes and other artefacts sometimes lead to poor sonar image quality. This paper presents an approach to feature extraction, K-Dimension Tree (KD-Tree) construction and subsequent matching of features in multi-beam sonar images. Initially, the Scale Invariant Feature Transform (SIFT) method is used to extract features. A KD-Tree based on feature location is then constructed. By K Nearest Neighbour (KNN) search, every SIFT feature is matched with K candidates between a pair of consecutive frames. Finally, the Random Sample Consensus (RANSAC) algorithm is used to eliminate wrong matches (a brief OpenCV sketch of this pipeline follows this entry). The performance of the proposed approach is assessed with measured data and exhibits reliable results with a limited computational burden for the feature-matching task.
    Keywords: feature extraction; feature matching; multi-beam sonar; KD-Tree; KNN.
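
    An OpenCV sketch of the pipeline described above (SIFT features, FLANN KD-Tree KNN matching with a ratio test, RANSAC rejection of wrong matches); the file names are hypothetical, and the KD-Tree here is built on descriptors rather than the paper's feature locations:

      import cv2
      import numpy as np

      img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
      img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # FLANN KD-Tree index, KNN search with k=2 and Lowe's ratio test.
      flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
      matches = flann.knnMatch(des1, des2, k=2)
      good = [m for m, n in matches if m.distance < 0.7 * n.distance]

      # RANSAC removes remaining wrong matches while estimating a homography.
      src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
      H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
      print(len(good), "ratio-test matches,", int(mask.sum()), "RANSAC inliers")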

  • A study on ultrasonic process tomography for dispersed small particle system visualization   Order a copy of this article
    by Zhiheng Meng, Jianfei Gu, Yongxin Chou 
    Abstract: The present challenge in ultrasonic process tomography of dispersed small particle systems is that it is hard to obtain an accurate reconstruction algorithm. For more accurate reconstruction, this work proposes an improved GMRES (Generalised Minimal Residual) algorithm based on generalised minimal residual iteration and a mean filtering method (a toy sketch follows this entry). To verify the feasibility of the algorithm for dispersed small particle system visualisation, a linear acoustic attenuation model is developed to obtain the projection data of the ultrasonic array. The method is then compared with the current mainstream reconstruction algorithms under conditions of limited effective information by solving the underdetermined equations. It is shown that the method achieves high reconstruction precision in numerical simulations and reasonably reflects the cross-section of the dispersed small particle distribution. In the numerical simulations, the imaging accuracy of the improved GMRES algorithm reaches about 90%.
    Keywords: ultrasonic method; dispersed particle; particulate two-phase flow; back projection; iterative algorithm.
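
    A toy sketch of GMRES-based reconstruction followed by mean filtering, using scipy and assuming a small square, well-conditioned forward model; the paper's improved GMRES variant and linear acoustic attenuation model are not reproduced:

      import numpy as np
      from scipy.sparse.linalg import gmres
      from scipy.ndimage import uniform_filter

      n = 16                                            # toy 16 x 16 reconstruction grid
      rng = np.random.default_rng(0)
      A = np.eye(n * n) + 0.01 * rng.normal(size=(n * n, n * n))   # toy forward model
      x_true = np.zeros((n, n))
      x_true[5:9, 6:10] = 1.0                           # dispersed-particle region
      b = A @ x_true.ravel() + 0.001 * rng.normal(size=n * n)      # simulated projections

      x_rec, info = gmres(A, b, maxiter=200)            # generalised minimal residual solve
      x_img = uniform_filter(x_rec.reshape(n, n), size=3)  # mean filtering smooths artefacts
      print("converged:", info == 0, "max abs error:", float(np.abs(x_img - x_true).max()))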

  • Distributed fusion algorithm based on maximum internal ellipsoid mechanism   Order a copy of this article
    by Jinliang Cong 
    Abstract: In this paper, a Bar-Shalom-Campo based algorithm is presented to solve for the approximate maximum ellipsoid in the intersection region of covariance ellipsoids. An objective function that can be solved by linear matrix inequalities is designed based on the rotation transformation of matrices. Compared with the classical covariance intersection fusion algorithm, it is less conservative. Moreover, the unknown cross-covariance is approximated as a linear matrix inequality constraint with a bounded Pearson correlation coefficient. With this inequality constraint, the accuracy of the fusion results can be improved. Finally, two simulation examples are given to verify the effectiveness of the proposed algorithm.
    Keywords: distributed sensor network; information fusion; maximum ellipsoid; cross-covariance constraint.

  • Local track to detect for video object detection   Order a copy of this article
    by Biao Zeng, Shan Zhong, Lifan Zhou, Zhaohui Wang, Shengrong Gong 
    Abstract: Existing methods for video object detection generally search for objects over the entire image. However, they suffer from a large computational cost because dozens of similar frames must be processed. To relieve this problem, we propose a Local Track to Detect (LTD) framework that detects video objects by predicting the movements of objects in local areas. LTD automatically determines key frames and non-key frames: objects in key frames are detected by a single-frame detector, while objects in non-key frames are efficiently detected by a movement prediction module. LTD also has a Siamese module that predicts whether objects in a key frame and a non-key frame are the same object, which ensures the accuracy of the movement prediction module. Compared with previous work, our method is more efficient and achieves state-of-the-art performance.
    Keywords: video object detection; local detection; detect and track; movement prediction; efficient detection; CNN.

  • Simple interpolation algorithm and its application in power parameter estimation   Order a copy of this article
    by Zhongyou Luo, Shuping Song, Ling Zhang, Puzhi Zhao, Haijiang Zhang 
    Abstract: The computational complexity of interpolation algorithms is a major concern for real-world power harmonic parameter estimation based on the windowed-interpolation fast Fourier transform (FFT) algorithm. A new interpolation algorithm is proposed in this study to estimate the harmonic parameters of a power system. The technique is based on the frequency-domain characteristics of the mainlobe of the weighting cosine window, and its calculation formulas are obtained by employing Newton's divided-difference interpolation polynomial (a generic windowed-FFT sketch follows this entry). The validity of the proposed algorithm is confirmed through computer simulations in MATLAB and field tests in a photovoltaic system. The results show that the proposed algorithm has the advantage of low computational effort and can be employed with any cosine window.
    Keywords: FFT; harmonic parameter estimation; cosine window; interpolation; Newton’s divided-difference interpolation.
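
    A generic sketch of windowed-FFT frequency estimation with a Hann (cosine) window; parabolic interpolation of the spectral peak is used here as a stand-in for the paper's Newton divided-difference formulas:

      import numpy as np

      fs, n = 3200.0, 1024                         # sampling rate (Hz), record length
      t = np.arange(n) / fs
      x = 2.0 * np.cos(2 * np.pi * 50.3 * t)       # fundamental slightly off a bin centre

      w = np.hanning(n)                            # cosine (Hann) window
      X = np.abs(np.fft.rfft(x * w))
      k = int(np.argmax(X[1:])) + 1                # index of the spectral peak

      # Parabolic interpolation of log magnitudes around the peak bin.
      a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
      delta = 0.5 * (a - c) / (a - 2 * b + c)
      print("estimated frequency:", (k + delta) * fs / n, "Hz")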

  • A novel chaotic grey wolf optimisation for high-dimensional and numerical optimisation   Order a copy of this article
    by Mengjian Zhang, Daoyin Long, Dandan Li, Xiao Wang, Tao Qin, Jing Yang 
    Abstract: Aiming at the weakness of current evolutionary algorithms in global convergence for high-dimensional and numerical optimisation problems, a novel chaotic grey wolf optimisation (NCGWO) is proposed for solving high-dimensional optimisation problems. Firstly, six chaotic one-dimensional maps are introduced and their mathematical models are modified so that their mapping ranges lie in the interval (0, 1). Secondly, diversity experiments are conducted to test the chaotic maps. The experiments show that initialising the population with chaotic maps is superior to the standard GWO algorithm, with the Sine map performing best (a brief initialisation sketch follows this entry). Finally, the CSGWO algorithm is proposed on the basis of the NCGWO algorithm, with the parameter C generated by the Sine map. The simulations demonstrate that the performance of the GWO algorithm can be improved by chaotic maps for high-dimensional and numerical optimisation problems, and that the CSGWO algorithm is superior to other evolutionary algorithms, achieving better accuracy and convergence speed.
    Keywords: chaotic system; grey wolf optimisation; chaos initialisation; optimisation; high-dimension.
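
    A short sketch of chaotic population initialisation with one common form of the Sine map, which the abstract reports as the best-performing map; the NCGWO/CSGWO update rules themselves are not reproduced:

      import numpy as np

      def sine_map_population(pop_size, dim, lb, ub, x0=0.7):
          """Initialise a search population with the Sine map x_{k+1} = sin(pi * x_k),
          whose iterates stay in (0, 1) and are then scaled to [lb, ub]."""
          seq = np.empty((pop_size, dim))
          x = x0
          for i in range(pop_size):
              for j in range(dim):
                  x = np.sin(np.pi * x)
                  seq[i, j] = x
          return lb + seq * (ub - lb)

      # e.g. 30 grey wolves in a 100-dimensional search space on [-100, 100]
      wolves = sine_map_population(30, 100, -100.0, 100.0)
      print(wolves.shape, wolves.min().round(2), wolves.max().round(2))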

  • Recursive identification of state space systems with colored process noise and measurement noise   Order a copy of this article
    by Fang Zhu, Xuehai Wang 
    Abstract: This paper concerns the modeling and identification of state space systems in which both colored process noise and measurement noise are encountered. By using state filtering, the state space system with colored process noise is transformed into a model without correlated noise, and a state-filtering based parameter estimation algorithm is derived on the basis of a state filter observer designed using multi-innovation identification. The validity of the proposed algorithm is verified by simulation examples.
    Keywords: parameter estimation; recursive identification; filtering technique; state estimation.

  • Research on cable partial discharge detection and location system based on optical fibre timing   Order a copy of this article
    by Jian-jun Zhang, Fang Peng, An-ming Xie, Yang Fei 
    Abstract: Partial discharge (PD) is an important index reflecting the running state of a cable. According to the characteristics and propagation mechanism of cable partial discharge signals, a cable partial discharge detection and location system based on optical fibre time synchronisation technology and the double-terminal travelling wave location principle is developed (the underlying location formula is sketched after this entry). The system has high detection sensitivity and high reliability, and performs real-time detection, diagnosis and location of the cable discharge source. The experimental results show that the location accuracy for the cable partial discharge source can be effectively improved based on fibre timing and double-terminal positioning technology, and the positioning accuracy can reach 1%. The method studied in this paper can meet the requirements for the accurate location of partial discharge sources in cables, GIL and other equipment.
    Keywords: cable; partial discharge; optical fibre timing; double terminal positioning; online monitoring.
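
    A minimal sketch of the double-terminal travelling-wave location formula underlying such a system, with illustrative values; the optical-fibre time synchronisation and UHF acquisition hardware are not modelled:

      # Double-terminal travelling-wave location: a discharge at distance x from end A
      # of a cable of length L arrives at A after x/v and at B after (L - x)/v, so
      # x = (L + v * (tA - tB)) / 2.  All values below are illustrative only.
      L = 1000.0          # cable length (m)
      v = 1.7e8           # assumed wave propagation velocity in the cable (m/s)
      tA = 2.00e-6        # arrival time at terminal A (s), from fibre-synchronised clocks
      tB = 3.88e-6        # arrival time at terminal B (s)
      x = (L + v * (tA - tB)) / 2.0
      print(f"discharge located {x:.1f} m from terminal A")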

  • Life-threatening arrhythmias recognition by pulse-to-pulse intervals analysis   Order a copy of this article
    by Lijuan Chou, Yongxin Chou, Jicheng Liu, Shengrong Gong, Kejia Zhang 
    Abstract: Tachycardia, bradycardia, ventricular flutter and ventricular tachycardia are four life-threatening arrhythmias that are seriously harmful to the cardiovascular system. Therefore, a method for identifying these arrhythmias by pulse-to-pulse interval analysis is proposed in this study. First, noise and interference are removed from the raw pulse signal, and the clean pulse signal is split into pulse waves at the pulse troughs, whose first-order differences are the pulse-to-pulse intervals. Then, fifteen features are extracted from the pulse-to-pulse intervals, and the two-sample Kolmogorov-Smirnov test is used to select the markedly changed features (a brief sketch of these steps follows this entry). Finally, we design classifiers for arrhythmia recognition using the probabilistic neural network (PNN), the back-propagation neural network (BPNN) and random forest (RF). Pulse signals from the international physiological database (PhysioNet) are used as the experimental data. The experimental results show that the RF classifier has the best average classification performance, with a kappa coefficient (KC) of 98.86%.
    Keywords: pulse signal; pulse-to-pulse intervals; life-threatening arrhythmias; intelligent recognition.
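
    A brief sketch of the signal-processing steps named above: troughs of a synthetic pulse signal, pulse-to-pulse intervals from their first-order difference, a few interval features, and a two-sample Kolmogorov-Smirnov test; the fifteen features and the PNN/BPNN/RF classifiers are not reproduced:

      import numpy as np
      from scipy.signal import find_peaks
      from scipy.stats import ks_2samp

      fs = 100.0                                        # sampling rate (Hz), assumed
      t = np.arange(0, 60, 1 / fs)
      pulse = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # synthetic pulse

      # Troughs = peaks of the inverted signal; their first-order difference
      # gives the pulse-to-pulse intervals (in seconds).
      troughs, _ = find_peaks(-pulse, distance=fs * 0.4)
      ppi = np.diff(troughs) / fs

      # Example interval features and a two-sample KS test between two records.
      feats = {"mean": ppi.mean(), "sdnn": ppi.std(),
               "rmssd": np.sqrt(np.mean(np.diff(ppi) ** 2))}
      ppi_b = ppi * 0.7                                 # stand-in for a tachycardic record
      print(feats, ks_2samp(ppi, ppi_b).pvalue)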

  • Instantaneous frequency enhanced peak detection for sugarcane seed cutting   Order a copy of this article
    by Junfeng Wei, Weidong Tang, Chunming Wen, Longdian Huang 
    Abstract: Peak detection methods are widely used in many areas. This study introduces a peak detection method for sugarcane seed cutting. The envelope of the signal in the discrete time domain is calculated by the Hilbert transform. In order to increase reliability under noisy conditions, the envelope is enhanced using the instantaneous frequency of the signal, and a multi-rule joint search algorithm then marks the peaks, which stand for the locations of nodes (an illustrative sketch follows this entry). The proposed method works under poor SNR conditions in simulation and was verified in an experiment on detecting sugarcane nodes. Local maxima of the accelerometer data are marked, showing the positions of node rings on the sugarcane surface. The position data can be used as an action signal in cutting machines or for the analysis of crop growth.
    Keywords: discrete Hilbert transform; instantaneous frequency; peak detection; sugarcane seed cutting.
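
    An illustrative sketch of envelope and instantaneous frequency computation via the discrete Hilbert transform followed by peak marking; scipy's find_peaks stands in for the paper's multi-rule joint search, and the synthetic bursts stand in for accelerometer data:

      import numpy as np
      from scipy.signal import hilbert, find_peaks

      fs = 1000.0                                   # sampling rate (Hz), assumed
      t = np.arange(0, 2, 1 / fs)
      # Synthetic accelerometer trace: bursts stand in for cutter impacts at node rings.
      sig = np.random.randn(t.size) * 0.1
      for centre in (0.5, 1.1, 1.6):
          sig += np.exp(-((t - centre) ** 2) / 1e-4) * np.sin(2 * np.pi * 120 * t)

      analytic = hilbert(sig)                       # discrete Hilbert transform
      envelope = np.abs(analytic)
      phase = np.unwrap(np.angle(analytic))
      inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)

      # Enhance the envelope where the instantaneous frequency is near the burst band,
      # then mark peaks as candidate node locations.
      weight = np.exp(-((inst_freq - 120) / 40.0) ** 2)
      enhanced = envelope[:-1] * weight
      peaks, _ = find_peaks(enhanced, height=0.5, distance=int(0.2 * fs))
      print("node positions (s):", t[peaks])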