Forthcoming articles

International Journal of Computational Science and Engineering (IJCSE)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with the Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licences.

International Journal of Computational Science and Engineering (68 papers in press)

Regular Issues

  • An intelligent block matching approach for localisation of copy-move forgery in digital images   Order a copy of this article
    by Gulivindala Suresh, Chanamallu Srinivasa Rao 
    Abstract: Block-based Copy-Move Forgery Detection (CMFD) methods work with features from overlapping blocks. As overlapping blocks are involved, thresholds related to similarity and the physical distances are defined to identify the duplicated regions. However, these thresholds are controlled manually in localising the forged regions. In order to overcome this, an intelligent block matching approach for localisation is proposed using Colour and Texture Features (CTF) through Firefly algorithm. Investigation of the proposed CTF method is carried out on a standard database, which achieved an average true detection rate of 0.98 and an average false detection rate of 0.07. The proposed CTF method is robust against brightness change, colour reduction, blurring, contrast adjustment attacks, and additive white Gaussian noise. Performance analysis of the CTF method validates its superiority over other existing methods.
    Keywords: digital forensics; copy-move forgery detection; intelligent block matching; firefly algorithm.
    DOI: 10.1504/IJCSE.2021.10033507
     
  • Discrete stationary wavelet transform and SVD-based digital image watermarking for improved security   Order a copy of this article
    by Rajakumar Chellappan, S. Satheeskumaran, C. Venkatesan, S. Saravanan 
    Abstract: Digital image watermarking plays an important role in digital content protection and security-related applications. Embedding a watermark helps to identify the copyright of an image or the ownership of digital multimedia content. Both greyscale and colour images are used in digital image watermarking. In this work, the discrete stationary wavelet transform and singular value decomposition (SVD) are used to embed a watermark into an image. One colour image and one watermark image are considered here for watermarking. Three-level wavelet decomposition and SVD are applied, and the watermarked image is tested under various attacks, such as noise attacks, filtering attacks and geometric transformations. The proposed work exhibits good robustness against these attacks, and the simulation results show that the proposed approach is better than the existing methods in terms of bit error rate, normalised cross-correlation coefficient and peak signal-to-noise ratio.
    Keywords: digital image watermarking; discrete stationary wavelet transform; wavelet decomposition; singular value decomposition; peak signal to noise ratio.
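    A minimal numerical sketch of the singular-value embedding step is given below (illustrative only: the authors' three-level stationary wavelet decomposition and the extraction procedure are not reproduced, and the sub-band, watermark and strength factor alpha are assumptions).

      import numpy as np

      def embed_svd(cover_band, watermark, alpha=0.05):
          # cover_band: 2-D wavelet sub-band of the host image; watermark: 2-D array of equal shape
          U, S, Vt = np.linalg.svd(cover_band, full_matrices=False)
          _, Sw, _ = np.linalg.svd(watermark, full_matrices=False)
          S_marked = S + alpha * Sw       # perturb the host singular values with the watermark's
          return (U * S_marked) @ Vt      # reconstruct the watermarked sub-band

      band = np.random.rand(64, 64)       # stand-in for a wavelet sub-band
      mark = np.random.rand(64, 64)       # stand-in for the watermark image
      watermarked_band = embed_svd(band, mark)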

  • Disaster Management Using D2D Communication with ANFIS Genetic Algorithm Based CH Selection and Efficient Routing by Seagull Optimization   Order a copy of this article
    by Lithungo K. Murry, R. Kumar, Themrichon Tuithung 
    Abstract: The next-generation networks and public safety strategies in communications are at a crossroads in the effort to deliver the best applications and solutions for tackling disaster management proficiently. Three major challenges and problems are considered in this paper: (i) disproportionate disaster management scheduling between bottom-up and top-down strategies; (ii) greater attention on the disaster emergency reaction phase and the absence of management across the complete disaster management series; and (iii) the deficient arrangement of a long-term reclamation procedure, which results in low stakeholder and community-level resilience. In this paper, a new strategy is proposed for disaster management. A hybrid Adaptive Neuro-Fuzzy Inference Network based Genetic Algorithm (D2D ANFIS-GA) is used for selecting cluster heads, and the Seagull Optimisation Algorithm (SOA) is used for efficient routing. Implementation is done on the MATLAB platform. Performance metrics such as energy use, average battery lifetime, battery lifetime probability, average residual energy, delivery probability, and overhead ratio are used to evaluate the performance. Experimental results are compared with two existing approaches, Epidemic and FINDER, and the proposed approach gives better results.
    Keywords: disaster management; adaptive neuro-fuzzy inference network; residual energy; device-to-device communication; seagull optimisation algorithm.

  • Design and implementation of chicken egg incubator for hatching using IoT   Order a copy of this article
    by Niranjan Lakshmappa, C. Venkatesan, Suhas A R, S. Satheeskumaran, Aaquib Nawaz S 
    Abstract: Egg fertilisation is one of the major factors to be considered in poultry farms. This paper describes a smart incubation system that combines IoT technology with a smartphone to make the system more convenient for the user in monitoring and operating the incubator. The incubator is designed with both the setter and the hatcher in one unit and incorporates both still-air and forced-air incubation; the controller monitors and regulates four factors: temperature, humidity, ventilation and the egg-turning system. For experimental purposes, three different temperatures are set: 36.5°C, 37.5°C and 38°C. The environment is kept the same in all three cases, and the best temperature for the incubation of the chicken eggs is noted.
    Keywords: IoT; poultry farms; embryo; brooder; hatchery; Blynk App.

  • Multi-label software bug categorisation based on fuzzy similarity   Order a copy of this article
    by Rama Ranjan Panda, Naresh Kumar Nagwani 
    Abstract: The quality and cost of the software depend on the timely detection of software bugs. For better quality and low-cost development, the bug fixing time should be minimised which is possible only after understanding the bugs and their root cause. Categorisation of the software bugs helps in their understanding and effective management, and improves various software development activities such as triaging and quick resolution of the reported bugs. Because of the modular software development approach and multi-skilled development teams, it is possible that one software bug can affect multiple modules and there can be multiple developers who can fix the newly reported bugs. Multi-label categorisation of the software bugs can play a significant role in handling this situation, as in practice one bug can belong to multiple categories and there can be multiple developers for a software bug. Fuzzy logic and fuzzy similarity techniques can be very helpful for understanding the belongingness of the software bugs in multiple categories in real-life scenarios. Since most of the software bug attributes are textual in nature, in this paper a multi-label fuzzy similarity based text categorisation technique is presented for effective categorisation of software bugs in multiple categories. In the presented approach, the fuzzy similarity between a pair of software bugs is computed and, based on a user-defined threshold value, the bugs are categorised into multiple categories. Experiments are performed on available benchmark software bug datasets, and the performance of the proposed multi-label classifier technique is evaluated using parameters such as F1 score, BEP and Hamming loss.
    Keywords: software bug mining; software bug classification; fuzzy similarity; multi-label classification; software bug repository.
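    A rough illustration of the threshold-based multi-label assignment follows; a simple token-overlap ratio stands in for the paper's fuzzy similarity measure, and the threshold value is an assumption.

      import re

      def tokens(text):
          return set(re.findall(r"[a-z]+", text.lower()))

      def similarity(a, b):
          # token-overlap ratio used as a stand-in for the fuzzy similarity measure
          ta, tb = tokens(a), tokens(b)
          return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

      def categorise(new_bug, labelled_bugs, threshold=0.3):
          # labelled_bugs: list of (report text, category); a bug may receive several labels
          return {cat for text, cat in labelled_bugs if similarity(new_bug, text) >= threshold}

      print(categorise("crash when saving report to database",
                       [("database save fails with crash", "persistence"),
                        ("report layout broken", "user interface")]))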

  • Development of an intrusion detection system using mining and machine learning techniques to estimate denial of service malware   Order a copy of this article
    by Revathy Ganapathy, Sathish Kumar Palani 
    Abstract: Denial of service is one of the main types of cyber-security attack; it allows intruders to disrupt many services, causing data failures or botnets in the system or network environment and making systems very slow. Consequently, legitimate users being prevented from accessing services in the system is a major issue. Intrusion Detection System (IDS) techniques play a very important role as detection and prevention mechanisms that eradicate the problems caused by hackers in network environments. In this research, we describe different data mining techniques that can be used to handle different kinds of network attack. We present a model that incorporates a skilled, dedicated IDS for classifying attacks in the system. In this paper, three machine learning techniques are used for the classification problem, namely the decision tree classifier, gradient boosting classifier, and k nearest neighbour classifier, and the false negative rate, accuracy, F score and prediction time are measured. Our analysis shows that, among the three algorithms, the decision tree classifier combined with a voting classifier is the best method, with shorter prediction time and better accuracy of 99.86% to 99.9%, giving greater overall performance. We evaluate the solution on the KDD Cup 99 datasets (normal and malicious). The experimental outcome shows a high accuracy level and a short prediction time. Moreover, the relationship between existing approaches and the proposed approach is presented in terms of metrics, namely accuracy, precision and recall, for detecting denial of service attacks and distinguishing malicious from normal data.
    Keywords: denial of service; machine learning techniques; statistical analysis; false negative rate; intrusion detection system.
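    The classifier comparison described above can be sketched with scikit-learn as follows; synthetic data stands in for the KDD Cup 99 features, and the paper's voting step and tuned parameters are omitted.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import accuracy_score, f1_score
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic stand-in for labelled traffic records (normal vs. denial of service)
      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for name, clf in [("decision tree", DecisionTreeClassifier()),
                        ("gradient boosting", GradientBoostingClassifier()),
                        ("k nearest neighbour", KNeighborsClassifier())]:
          pred = clf.fit(X_tr, y_tr).predict(X_te)
          print(name, accuracy_score(y_te, pred), f1_score(y_te, pred))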

  • Performance evaluation of smart cooperative traffic lights in VANETs   Order a copy of this article
    by Láecio Rodrigues, Firmino Neto, Glauber Gonçalves, André Soares, Francisco Airton Silva 
    Abstract: Vehicular Ad-Hoc Network (VANET) is an emerging new type of network, consisting of vehicles as mobile nodes and temporary communication links among these nodes. One of the crucial topics in VANETs is related to how to use traffic lights to optimise vehicle mobility. The traffic lights can work cooperatively to reduce traffic jams by communicating with the vehicles. However, the architecture of smart traffic lights offers challenges related to network latency restrictions and resource constraints. This paper presents a performance evaluation of a cooperative smart traffic light using a stochastic Petri net (SPN) model. The proposed model can calculate the mean response time, resource use, and the number of requests discarded. Three case studies are presented to illustrate how useful the model can be. Besides, we conduct real experiments to validate the proposed model by using micro-controllers (Raspberry Pi) that emulate traffic lights. The model is highly flexible, allowing developers and system administrators to calibrate eighteen parameters.
    Keywords: analytical modelling; mean response time; vehicular ad hoc networks; stochastic Petri net.

  • Blockchain for committing peer-to-peer transactions using distributed ledger technologies   Order a copy of this article
    by Rashmi P. Sarode 
    Abstract: Blockchain consists of networks of successive blocks that are interconnected to each other by references to their former block. These form a chain. Blockchain technology creates a database-like support for creation of digital ledgers, in order to support distributed transactions. The adoption of blockchain in real-world applications poses many challenges. This study aims to understand the method, its characteristics as well as the implementation concepts of blockchain systems in terms of distributed transactions over web resources. The study also examines the current trends and issues in the use of blockchain in many large-scale public utility applications in e-commerce.
    Keywords: blockchain; bitcoin; peer-to-peer transactions; cloud-based databases; web services; e-commerce.
    DOI: 10.1504/IJCSE.2021.10036974
     
  • CEMP-IR: a novel location-aware cache invalidation and replacement policy   Order a copy of this article
    by Ajay Kumar Gupta, Udai Shanker 
    Abstract: Earlier mobile client cache invalidation-replacement policies used in location-based information systems are not appropriate if the path of the client movement changes rapidly. Further, previous cache invalidation-replacement policies show high server overhead in terms of processing costs. Therefore, the objective of this work is to address these challenges by developing a novel, effective approach that predicts the future movement path (FMP) of the user with a mobility Markov chain and an associated matrix; the predicted FMP is then used in a revised spatio-temporal cost estimation of a data item during cache replacement, contributing to an improved cache hit ratio. The predicted FMP is further used in optimal sub-polygon selection to reduce the storage overhead of the valid-scope representation in cache invalidation. A client-server queuing model is used for simulation of CEMP-IR in location-based services (LBS). Analytical results show significant caching performance improvement compared with previous policies, such as Manhattan, FAR, PPRRP, SPMC-CRP, and CEFAB for LBS.
    Keywords: LBS; location-based computing; context prediction; context-aware systems; context-aware mobility.
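    A toy sketch of next-location prediction with a first-order mobility Markov chain is shown below; the cell identifiers and history are invented, and the paper's matrix construction and spatio-temporal cost model are not reproduced.

      import numpy as np

      def transition_matrix(history, n_cells):
          # history: sequence of visited cell ids on the road network
          counts = np.zeros((n_cells, n_cells))
          for cur, nxt in zip(history[:-1], history[1:]):
              counts[cur, nxt] += 1
          rows = counts.sum(axis=1, keepdims=True)
          return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

      def predict_path(P, start, steps):
          path, cell = [start], start
          for _ in range(steps):
              cell = int(np.argmax(P[cell]))   # most probable next cell
              path.append(cell)
          return path

      P = transition_matrix([0, 1, 2, 1, 2, 3, 2, 3], n_cells=4)
      print(predict_path(P, start=1, steps=3))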

  • Evaluation of feature selection techniques on network traffic for comparing model accuracy   Order a copy of this article
    by Prabhjot Kaur, Amit Awasthi, Anchit Bijalwan 
    Abstract: The accuracy and performance of any machine learning model are highly dependent on the number of qualitative features taken into consideration while training the model. The selection of qualitative features depends on a careful choice of feature selection technique. In this study, feature selection is performed using different techniques, such as information gain, gini decrease, chi2 and FCBF, on the same dataset, and subsequently the accuracy is measured. The results show that the FCBF method dramatically reduces the number of features and moderates the accuracy relative to the other feature selection methods.
    Keywords: feature selection; FCBF; network traffic; chi2; gini decrease; information gain.
    DOI: 10.1504/IJCSE.2021.10035715
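    Two of the rankings mentioned above can be reproduced in outline with scikit-learn; synthetic data is used, mutual information stands in for information gain, and FCBF and gini decrease (not available in scikit-learn) are omitted.

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
      from sklearn.preprocessing import MinMaxScaler

      X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)
      X_pos = MinMaxScaler().fit_transform(X)        # chi2 requires non-negative features

      ig_scores = mutual_info_classif(X, y, random_state=0)   # information-gain-style ranking
      chi2_top10 = SelectKBest(chi2, k=10).fit(X_pos, y).get_support(indices=True)
      print("top 10 by chi2:", chi2_top10)
      print("top 10 by mutual information:", ig_scores.argsort()[::-1][:10])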
     
  • A novel method of mental fatigue detection based on CNN and LSTM   Order a copy of this article
    by Shaohan Zhang, Zhenchang Zhang, Zelong Chen, Shaowei Lin, Ziyan Xie 
    Abstract: Mental fatigue is a state that may occur owing to excessive work or long-term stress. Electroencephalogram (EEG) is considered a reliable standard for mental fatigue detection. Existing EEG fatigue detection methods mainly use traditional machine learning models to classify mental fatigue after manual feature extraction. However, manual feature extraction is difficult and complicated, and its quality largely determines the quality of the model. In this article, we collected EEG signals from 30 medical staff. The wavelet threshold denoising method was then applied to the measured EEG signals to denoise the original data, and a method based on a convolutional and long short-term memory (CNN+LSTM) neural network is proposed to determine the fatigue state of medical staff. Extensive experiments on the established dataset clearly demonstrate the advantage of our proposed algorithm over other neural-network-based methods. Compared with the existing DNN, CNN and LSTM, the proposed model can quickly learn the information before and after in the time series, so as to obtain higher classification accuracy.
    Keywords: mental fatigue detection; wavelet threshold denoising; CNN+LSTM.
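    A minimal Keras sketch of the CNN+LSTM arrangement is given below; the window length, channel count and layer sizes are assumptions rather than the authors' configuration.

      from tensorflow.keras import layers, models

      # EEG windows of shape (time_steps, channels) with a binary fatigue label
      def build_cnn_lstm(time_steps=256, channels=8):
          model = models.Sequential([
              layers.Input(shape=(time_steps, channels)),
              layers.Conv1D(32, kernel_size=5, activation="relu"),   # local waveform features
              layers.MaxPooling1D(2),
              layers.LSTM(64),                                       # temporal dependencies
              layers.Dense(1, activation="sigmoid"),
          ])
          model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
          return model

      build_cnn_lstm().summary()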

  • A computational semantic information retrieval model for Vietnamese texts   Order a copy of this article
    by Tuyen Thi-Thanh Do, Dang Tuan Nguyen 
    Abstract: Semantic information retrieval systems for text documents aim at retrieving documents whose semantic content is relevant to the query. The semantic representation of a text can be a vector or a dependency graph, depending on the approach to semantic analysis. This paper proposes a model of semantic information retrieval for Vietnamese to retrieve texts similar to a query. In the proposed system, the semantic analysis identifies the semantic dependency graph of sentences, and the retrieval process computes the relevance of text documents against these semantic dependency graphs. To identify the semantic dependency graph of a sentence, transformation rules are studied and applied to the dependency parse using a lexicon ontology for Vietnamese. For ranking retrieval results, the Jaccard-Tanimoto distance is applied in the ranking function. The evaluation shows that the proposed model has a higher MAP (0.4045) than the BM25 model (0.3825) and the TF.IDF model (0.3688).
    Keywords: semantic information retrieval; lexicon ontology; dependency graph; semantic distance.
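    The Jaccard-Tanimoto ranking score can be illustrated on sets of dependency triples; the triples below are invented and the paper's graph construction is not reproduced.

      def jaccard_tanimoto(query_edges, doc_edges):
          # each graph is represented as a set of (head, relation, dependent) triples
          q, d = set(query_edges), set(doc_edges)
          union = q | d
          return len(q & d) / len(union) if union else 0.0

      sim = jaccard_tanimoto({("bank", "nmod", "river")},
                             {("bank", "nmod", "river"), ("flow", "nsubj", "water")})
      print(sim)   # 0.5; the corresponding distance is 1 - sim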

  • Counterexample generation in CPS model checking based on ARSG algorithm   Order a copy of this article
    by Mingguang Hu, Zining Cao, Fujun Wang, Weiwei Lu 
    Abstract: With the rapid development of software and physical devices, Cyber-Physical Systems (CPS) are widely adopted in many application areas. It is difficult to detect defects in CPS models owing to the complexities involved in the software and physical systems. Counterexample generation in CPS model checking is of interest because it is able to find defects in CPS models efficiently. In many studies, robustness-guided counterexample generation for CPS is investigated with various optimisation methods, which falsify the given properties of a CPS. In this work, we combine the genetic algorithm with an acceptance-rejection technique based on the neighbourhood of the input sequence space and create a novel algorithm, ARSG. We claim that the ARSG algorithm can find counterexamples more quickly and accurately; its idea is similar to 'exploration-exploitation' in reinforcement learning. Finally, we demonstrate the effectiveness of our technique on an automatic transmission controller.
    Keywords: cyber-physical systems; signal temporal logic; counterexample generation; acceptance-rejection technique; genetic algorithm.

  • A hybrid heuristic algorithm for optimising SLA satisfaction in cloud computing   Order a copy of this article
    by Yongxuan Sang, Zhongwen Li, Tien-Hsiung Weng, Bo Wang 
    Abstract: Task scheduling is one of the key techniques for effective and reliable resource usage in cloud computing. In this paper, we design a hybrid heuristic scheduling method that employs Particle Swarm Optimisation (PSO) and least accumulated slack time first to address, respectively, the assignment of tasks to servers and the task scheduling for multi-core servers, in order to maximise service level agreement (SLA) satisfaction with improved resource efficiency for task execution in heterogeneous clouds with deadline constraints. Experimental results show that our method can complete up to 112.5% more tasks than several classical and state-of-the-art task scheduling methods.
    Keywords: cloud computing; hybrid heuristic; SLA; task scheduling; PSO.

  • Detection of denial of service using a cascaded multi-classifier   Order a copy of this article
    by Avneet Dhingra, Monika Sachdeva 
    Abstract: This paper proposes a cascaded multi-classifier Two-Phase Intrusion Detection (TP-ID) approach that can be trained to monitor incoming traffic for any suspicious data. It addresses the issue of efficient detection of intrusion in traffic and further classifies the traffic as DDoS attack or flash event. Features portraying the behaviour of normal, DDoS attack, and flash event are extracted from historical data obtained after merging CAIDA07, SlowDoS2016, CIC-IDS-2017, and World Cup 1998 benchmark datasets available online, along with the commercial dataset for e-shopping assistant website. Information gain is applied to rank and select the most relevant features. TP-ID applies supervised learning algorithms in the two phases. Each phase tests the set of classifiers, the best of which is chosen for building a model. The performance of the system is evaluated using the detection rate, false-positive rate, mean absolute percentage error, and classification rate. The proposed approach classifies the traffic anomalies with a 99% detection rate, 0.43% FPR, and 99.51% classification rate.
    Keywords: intrusion detection; denial of service; DDoS attack; multi-classifier; flash event; detection rate; false-positive rate; network security; machine learning; supervised learning algorithm.

  • Application of convolution neural network in web query session mining for personalised web search   Order a copy of this article
    by Suruchi Chawla 
    Abstract: In this paper, a deep learning Convolution Neural Network (CNN) is applied in web query session mining for effective personalised web search. In this research, CNN extracts high-level continuous clicked document/query concept vector for semantic clustering of documents. The CNN model is trained to generate document/query concept vector based on clickthrough web query session data. Training of CNN is done using backpropagation based on stochastic gradient descent maximising the likelihood of a relevant document given a user search query. During web search, search query concept vector is generated and compared with semantic cluster means to select the most similar cluster for web document recommendations. The experimental results were analysed based on average precision of search results and loss function computed during training of CNN. The improvement in precision of search results as well as decrease in loss value proves CNN to be effective in capturing semantics of web user query sessions for effective information retrieval.
    Keywords: convolution neural network; deep learning; personalised web search; search engines; clustering; information retrieval.

  • Prediction of consumer preference for the bottom of the pyramid using EEG-based deep model   Order a copy of this article
    by Debadrita Panda, Debashis Das Chakladar, Tanmoy Dasgupta 
    Abstract: Emotion detection using electroencephalogram (EEG) signals has gained widespread acceptance in consumer preference studies. It has been observed that emotion classification using brain signals has great potential over rating-based quantitative analysis. In the consumer segment, the Bottom of the Pyramid (BoP) people have also been considered as an essential consumer base. This study aims to classify consumer preferences while BoP consumers visualise advertisements. Four types of consumer preference (most like, like, dislike, most dislike) have been classified while visualising different advertisements. A robust long short-term memory (LSTM)-based deep neural network model has been developed for classifying consumer preferences using the EEG signal. The proposed model has achieved 94.18% classification accuracy and attains a significant improvement of 11.71% and 3.24% in classification accuracy over other machine learning classifiers (support vector machine and random forest, respectively). This study adds a significant contribution to the research domain of consumer behaviour, as it provides a guideline on the preferences of BoP consumers after seeing online advertisements.
    Keywords: neuromarketing; deep learning; EEG; bottom of the pyramid; consumer behaviour.

  • Cost-effective and mobility-aware cooperative resource allocation framework for vehicular service delivery in the vehicular cloud networks   Order a copy of this article
    by Mahfuzulhoq Chowdhury 
    Abstract: Cloud-empowered vehicular networks have been identified as a promising paradigm for improving the latency of vehicular applications by transferring their large computation tasks to clouds for processing. Existing cloud-empowered vehicular networks cannot adequately meet the very low latency requirements of real-time vehicular applications owing to the absence of a suitable resource allocation algorithm. At present, most existing works on resource allocation in vehicular cloud networks do not take into account heterogeneous cloud resources, heterogeneous vehicular services, SLA requirements, vehicle mobility, cost-effectiveness, different transmission strategies, workload processing and data transfer delay along with waiting delay, and joint cloud and network resource allocation. To address these issues, this paper proposes a cost-effective and mobility-aware cooperative resource allocation (CEMC) method that aims to reduce the overall vehicular service result delivery delay for multiple intelligently connected vehicle applications over cloud-empowered vehicular networks. The method considers inter-cluster resource awareness, both workload processing and communication latency, mobility awareness, SLA requirements, different types of power consumption and pricing cost, different network and cloud resources, proper scheduling order, and best-fit resource selection, taking account of cooperation among roadside units, vehicles, cloud servers, and mobile devices. This paper also presents three variants of the proposed framework. The performance of the proposed scheme, along with the variants, is evaluated through service result delivery delay, throughput, service finishing ratio, normalised power consumption cost, net pricing cost, waiting delay, and average time gain. The evaluation results show the superiority of the proposed CEMC framework in terms of delivery delay, throughput, and pricing cost among all the variants of the proposed framework.
    Keywords: vehicular cloud; resource allocation; heterogeneous computing; vehicular service result delivery; best fit allocation; service finishing ratio; communication delay.

  • The FRCK clustering algorithm for determining cluster number and removing outliers automatically   Order a copy of this article
    by Yubin Guo, Yuhang Wu 
    Abstract: The clustering algorithm is one of the most popular unsupervised algorithms for data grouping. The K-means algorithm is a popular clustering algorithm for its simplicity, ease of implementation and efficiency. However, for the K-means algorithm the optimal cluster number is difficult to predict, and it is sensitive to outliers. In this paper, we divide outliers into two types, and then propose a clustering algorithm that removes the two types of outliers and calculates the optimal cluster number in each clustering iteration. The algorithm is a fusion of rough clustering and K-means, abbreviated as the FRCK algorithm. In the FRCK algorithm, outliers are removed precisely, so the optimal cluster number can be determined more accurately, and the quality of the clustering result is improved accordingly. The algorithm is shown to be effective by experiment.
    Keywords: clustering algorithm; number of clusters; outliers; rough clustering.
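    One clustering-then-pruning iteration of the kind described can be sketched as follows; the distance-based outlier rule and its threshold are simplifications, not the FRCK definitions.

      import numpy as np
      from sklearn.cluster import KMeans

      def kmeans_drop_outliers(X, k, z_thresh=2.5):
          # cluster, then drop points far above their cluster's average centroid distance
          km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
          d = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
          keep = np.ones(len(X), dtype=bool)
          for c in range(k):
              mask = km.labels_ == c
              mu, sigma = d[mask].mean(), d[mask].std()
              keep[mask] = d[mask] <= mu + z_thresh * (sigma if sigma > 0 else 1.0)
          return X[keep], km

      X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5, [[30.0, 30.0]]])
      cleaned, model = kmeans_drop_outliers(X, k=2)
      print(len(X) - len(cleaned), "point(s) removed as outliers")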

  • Gene selection and classification combining information gain ratio with fruit fly optimization algorithm for single-cell RNA-seq data   Order a copy of this article
    by Jie Zhang, Junhong Junhong, Xiani Yang, Jianming Liu 
    Abstract: There is a wide range of genes in single-cell data, but some of them are not beneficial to classification. In order to eliminate these redundant genes and select beneficial ones, this study first uses information gain (IG) to select genes coarsely, and then uses a modified fruit fly optimisation algorithm (FOA) to refine the selection of relevant genes from the subsets obtained by IG. The proposed algorithm makes full use of the respective advantages of IG and FOA, and is abbreviated as IGFOA. The algorithm is implemented on multiple scRNA-seq datasets with various numbers of cells and genes, and the obtained results validate that IGFOA can effectively select superior genes and achieve good classification performance.
    Keywords: single-cell; scRNA-seq; gene selection; fruit fly optimisation; information gain.

  • ECC-based lightweight mutual authentication protocol for fog-enabled IoT system using three-way authentication procedure   Order a copy of this article
    by Upendra Verma, Diwakar Bhardwaj 
    Abstract: Internet of Things (IoT) devices may be easily compromised and incapable of defending and securing themselves owing to their resource-constrained nature. Hence, the integration of devices with a resource-rich pool such as fog is required. This integration provides expected growth in delay-sensitive IoT applications. In this context, authentication plays a vital role. In this paper, we propose a new and anonymous mutual authentication protocol for fog-enabled IoT based on elliptic curve cryptography. The proposed protocol achieves mutual authentication between a device and the fog server with the help of a cloud server, called three-way authentication procedure. Security analyses of the proposed protocol show that it is robust against several attacks. Moreover, the performance of the proposed protocol has been evaluated and compared with other related protocols in terms of communication and storage cost. Security analyses and performance analyses reveal that the proposed authentication protocol attains better security than related protocols.
    Keywords: elliptic curve cryptography; mutual authentication; fog server; three-way authentication procedure; delay-sensitive IoT applications.
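    The ECC primitive underlying such a protocol can be sketched with the Python cryptography package; only the Diffie-Hellman exchange and session-key derivation are shown, not the paper's three-way authentication messages.

      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF

      # Each party (IoT device, fog server) holds an EC key pair on the same curve
      device_priv = ec.generate_private_key(ec.SECP256R1())
      fog_priv = ec.generate_private_key(ec.SECP256R1())

      shared_dev = device_priv.exchange(ec.ECDH(), fog_priv.public_key())
      shared_fog = fog_priv.exchange(ec.ECDH(), device_priv.public_key())
      assert shared_dev == shared_fog    # both sides derive the same secret

      session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                         info=b"device-fog session").derive(shared_dev)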

  • Real-time lidar and radar fusion for road-object detection and tracking   Order a copy of this article
    by Wael Farag 
    Abstract: In this paper, a real-time road-object detection and tracking (LR_ODT) method for autonomous driving is proposed. The method is based on the fusion of lidar and radar measurement data, where both sensors are installed on the autonomous car, and a customised Unscented Kalman Filter (UKF) is employed for their data fusion. The merits of both devices are combined using the proposed fusion approach to precisely provide both pose and velocity information for objects moving on roads around the autonomous car. Unlike other detection and tracking approaches, the balanced treatment of both pose estimation accuracy and real-time performance is the main contribution of this work. The proposed technique is implemented in the high-performance language C++ and uses highly optimised math and optimisation libraries for the best real-time performance. Simulation studies have been carried out to evaluate the performance of the LR_ODT for tracking bicycles, cars, and pedestrians. Moreover, the performance of the UKF fusion is compared with that of Extended Kalman Filter (EKF) fusion, showing its superiority. The UKF outperformed the EKF on all test cases and all state variable levels (-24% average RMSE). The employed fusion technique shows an outstanding improvement in tracking performance compared with the use of a single device (-29% RMSE with lidar and -38% RMSE with radar).
    Keywords: sensor fusion; Kalman filter; object detection; object tracking; ADAS; autonomous driving.

  • A novel audio steganography technique integrated with a symmetric cryptography: a protection mechanism for secure data outsourcing   Order a copy of this article
    by Shirole Rashmi, Jyothi K 
    Abstract: Data transfer over the internet is a very common process, and the security of this data is everyone's responsibility. Many techniques are available to secure such confidential data; among them, cryptography and steganography are two major subdisciplines of information security. The aim of this paper is to maintain imperceptibility and provide better security. The goals are achieved through (1) the use of a variable sample selection method for audio samples, which enhances security with better audio quality, and (2) the deployment of one of the most secure algorithms, Blowfish, to protect data from eavesdropping. The performance of the proposed work is compared with the traditional LSB algorithm for audio. The proposed system is the best approach for enhancing the security and retaining the good quality of the cover object.
    Keywords: data hiding; cryptography; blowfish; AES; steganography; LSB; PSNR.
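    A bare-bones sketch of LSB embedding into 16-bit audio samples is given below; the payload is assumed to be already Blowfish-encrypted, and the paper's variable sample selection is simplified to a fixed stride.

      import numpy as np

      def lsb_embed(samples, payload_bits, step=4):
          # hide one payload bit in the least significant bit of every step-th sample
          stego = samples.copy()
          idx = np.arange(0, step * len(payload_bits), step)
          stego[idx] = (stego[idx] & ~1) | payload_bits
          return stego

      samples = np.random.randint(-2**15, 2**15 - 1, size=1024, dtype=np.int16)
      bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.int16)   # ciphertext bits
      stego = lsb_embed(samples, bits)
      recovered = stego[np.arange(0, 4 * len(bits), 4)] & 1
      print(recovered)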

  • Local-constraint transformer network for stock movement prediction   Order a copy of this article
    by Hu Jincheng 
    Abstract: Stock movement prediction is to predict the future movements of stocks for investment, which is challenging both for research and industry. Typically, stock movement is predicted based on financial news. However, existing prediction methods based on financial news directly use models for natural language processing, such as recurrent neural networks and transformer network, which are still difficult to effectively process the key local information in financial news. To address this issue, Local-constraint Transformer Network (LTN) is proposed in this paper for stock movement prediction. LTN leverages transformer network with local constraint to encode the financial news, which can increase the attention weights of key local information. Moreover, since there are more difficult samples in financial news which are hard to learn, this paper further proposes a difficult-sample-balance loss function to train the network. This paper also researches the combination of financial news and stock price data for prediction. Experiments demonstrate that the proposed model outperforms several powerful existing methods on the datasets, and the stock price data can assist to improve the prediction.
    Keywords: stock movement prediction; short-term dependence; transformer network; difficult sample.

  • Constructive system for double-spend data detection and prevention in inter- and intra-block of blockchain   Order a copy of this article
    by Vijayalakshmi Jayaraman, Murugan Annamalai 
    Abstract: Currently, our global financial market faces lots of trouble owing to migration from fiat currency to cryptocurrency and its underlying blockchain technology. Blockchain provides trust in a decentralised way for storing, managing, and retrieving transactions. The double-spending issue arises owing to the erroneous transaction verification mechanism in the blockchain. Research has shown that transaction malleability, such as double-spending, creates millions of bitcoin losses to the owners as well as to a few bitcoin exchanges. This research aims to detect and prevent the double-spending of bitcoins in single and multiple blocks. In this context, double-spend data in a single block is identified using the DPL2A method. Further, the original transaction from the double-spend transaction list is identified using the ACRT method, which acts as a prevention of double-spend in a forthcoming occurrence. Similarly, double-spend data in multiple blocks are identified using MBDTD along with the Cognizant Merkle tree. Finally, a system named F2DP is constructed to detect and prevent the double-spend data in inter- and intra-blocks of the blockchain. The result indicates these methods will act best for double-spend detection and prevention with a limited set of transaction records. Further research is needed to increase the scalability of transaction records.
    Keywords: cryptocurrency; bitcoin; double-spending; UTXO; Merkle.

  • Workflow scheduling in cloud computing environment with classification ordinal optimisation using support vector machine   Order a copy of this article
    by Debajyoti Mukhopadhyay, Vahab Samandi 
    Abstract: Every day, civilisation generates more and more data. The processing cost and performance issues of this massive set of data have become a challenge in distributed computing. Processing multitask workloads for big data in a dynamic environment requires real-time scheduling, with the additional complication of generating optimal schedules in a large search space with high overhead. In this paper, we propose an adaptive workflow management system that uses ordinal optimisation to acquire suboptimal schedules in much less time. We then introduce a prediction-based workflow scheduler model that predicts the execution time of the next incoming workflow by using a support vector machine. We used a real application, the Montage workflow for large-volume image data, and the experimental results show that our classification ordinal optimisation outperforms other existing methods.
    Keywords: cloud computing; workflow scheduling; classification ordinal optimisation; support vector machine; ordinal optimisation; Montage workflow; big data.

  • A local region enhanced multi-objective fireworks algorithm with subpopulation cooperative selection   Order a copy of this article
    by Xiaoning Shen, Xuan You, Yao Huang, Yinan Guo 
    Abstract: A local region enhanced multi-objective fireworks algorithm with subpopulation cooperative selection (LREMOFWA) is proposed for multi-objective optimisation. In LREMOFWA, the ranking based on the non-dominated sorting and the crowding distance is taken as the fitness evaluation indicator. A novel way to calculate the explosion amplitude is designed to enhance the search for the local region. The concept of subpopulation is introduced, and the selection operation is performed by the elites in the archive cooperating with the optimal subpopulation sparks. The differential mutation operator is used to deal with the repeated individuals in the fireworks, which prevents the algorithm from falling into the local optimum. The proposed algorithm is compared with five state-of-the-art algorithms on the WFG test functions. Experimental results show that the proposed algorithm has better performance with respect to the searching accuracy and diversity. It is suitable for solving multi-objective function optimisation problems with various complex characteristics.
    Keywords: multi-objective optimisation; fireworks algorithm; explosion amplitude; cooperative selection; differential mutation.

  • High order random coefficient INAR model and simulations   Order a copy of this article
    by Qingqing Zhan, Yunyan Wang 
    Abstract: This paper develops a high-order INAR model based on random environmental processes. The conditional expectation, conditional variance, and correlation structure of the new process are discussed, Yule-Walker estimators of the parameters in the new model are obtained, and strong consistency of the Yule-Walker estimators is proved. Numerical simulations are conducted to assess the performance of the estimators in finite samples.
    Keywords: random environmental process; generalised signed thinning operator; integer valued time series; Yule-Walker estimation.
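    For orientation, the classical Yule-Walker recipe for an order-p model looks like this; these are the plain autoregressive moment equations, and the paper's random-coefficient INAR estimators differ in detail.

      import numpy as np

      def yule_walker(x, p):
          # sample autocovariances, then solve the p x p Yule-Walker system
          x = np.asarray(x, dtype=float) - np.mean(x)
          n = len(x)
          gamma = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
          R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
          return np.linalg.solve(R, gamma[1:])

      np.random.seed(0)
      series = np.random.poisson(5, size=500)     # toy integer-valued series
      print(yule_walker(series, p=2))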

  • GDC-a-CGI: efficient algorithms for dynamic graph data cleaning and indexing   Order a copy of this article
    by Santhosh Kumar D K, Demain Antony D'Mello 
    Abstract: The era of big data has led graph data collection and analytics to grow rapidly in numerous fields. Data quality and data access are the two decisive factors of performance (accuracy and efficiency) for the graph data analytics model. This paper proposes a Graph Data Cleaning (GDC) technique, which removes erroneous messy data, leading to better data quality. The GDC is a dynamic cleaning technique that facilitates the user to update rules expressions at runtime and support inherit rules from inter-domains. In addition to cleaning, GDC verifies and validates the graph data. The paper presents the Cache-based Graph Indexing (CGI) technique to address data access, which is built using the tree structure CSS-Tree on the Hadoop distributed framework. The CGI is a scalable index construction technique, which builds efficient indexing for an extensive graph dataset. We carried out experiments with different graph data sets and results reveal that the proposed GDC and CGI techniques outperform the state of the art.
    Keywords: data mining; graph data cleaning; graph data indexing; big data; Hadoop; graph data analytics; dynamic cleaning; cache-based indexing.

  • A multi-hop cross-blockchain transaction model based on improved hash-locking   Order a copy of this article
    by Bingrong Dai, Shengming Jiang, Chao Li, Menglu Zhu, Sasa Wang 
    Abstract: Blockchain is a decentralised, trust-free distributed ledger technology that has been applied in various fields such as finance, supply chain, and asset management. However, the network isolation between blockchains has limited their interoperability in asset exchange and business collaboration, since it forms blockchain islands. Cross-blockchain is an important technology aiming to realise the interoperability between blockchains, and has become one of the hottest research topics in this area. This paper proposes a multi-hop cross-blockchain transaction model based on an improved hash-locking consulted by the notary and users. It can solve the security problems in traditional hash-locking, and prevent malicious participants from creating a large number of transactions to block the cross-blockchain system. Moreover, a notary multi-signature scheme is designed to solve the problem of lack of trust in the traditional model. A multi-hop cross-blockchain transaction loop is designed based on the loop detection method of directed graphs. The transaction process of key agreement, asset locking, lock releasing, and security analysis based on the model is discussed in detail. Experiments on cross-blockchain transactions are carried out on an Ethereum private chain and show that the proposed model has good applicability.
    Keywords: blockchain; cross-blockchain; hash-locking; notary schemes; Diffie-Hellman algorithm; multi-hop transaction.
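    The basic hash-lock condition at the heart of such schemes reduces to a preimage check, as sketched below; the notary multi-signature and the multi-hop loop detection of the model are not shown.

      import hashlib, os

      secret = os.urandom(32)                          # chosen by the initiating party
      lock_hash = hashlib.sha256(secret).hexdigest()   # published lock value

      def redeem(preimage, lock_hash):
          # a locked transaction is released only if the preimage hashes to the lock
          return hashlib.sha256(preimage).hexdigest() == lock_hash

      print(redeem(secret, lock_hash))          # True
      print(redeem(os.urandom(32), lock_hash))  # False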

  • Workflow scheduling optimisation for distributed environment using artificial neural networks and reinforcement learning   Order a copy of this article
    by K. Jairam Naik, Mounish Pedagandham, Amrita Mishra 
    Abstract: The objective of this research article is to find an optimal schedule that reduces the makespan of a workflow. Workflow scheduling is enhanced by combining artificial neural network (NN) and reinforcement Q-learning principles. An optimised NN-based scheduling algorithm (WfSo_ANRL), which represents an agent that can effectively schedule tasks among computational nodes, is presented in this article.
    Keywords: workflow scheduling; optimisation; distributed environment; artificial neural network; makespan time; Q-learning.
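    The Q-learning ingredient can be illustrated with the standard tabular update; the state, action and reward encodings here are invented, and the neural-network agent WfSo_ANRL itself is not reproduced.

      import numpy as np

      # states index workflow stages, actions index candidate compute nodes
      n_states, n_actions = 10, 4
      Q = np.zeros((n_states, n_actions))
      alpha, gamma = 0.1, 0.9

      def q_update(s, a, reward, s_next):
          # reward is taken as the negative task execution time, so shorter is better
          Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

      q_update(s=0, a=2, reward=-3.5, s_next=1)
      print(Q[0])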

  • Causal event extraction using causal event element-oriented neural network   Order a copy of this article
    by Kai Xu, Peng Wang, Xue Chen, Xiangfeng Luo, Jianqi Gao 
    Abstract: Causal event extraction plays an important role in natural language processing (NLP) such as question answering, decision making and event prediction. Previous work extracts causal events using template-matching methods, machine-learning methods, or deep-learning methods. However, these methods ignore the guiding role of specific causal patterns on causal event extraction. In this paper, we propose causal event element-oriented neural network (CEEONN) to extract causal events. Firstly, we construct a causal event element knowledge base (CEEKB) from domain casual text. Then we construct a neural network by incorporating both the entire sentence and associated causal patterns into a better semantic representation. With domain-based CEEKB, the proposed CEEONN can be better guided to identify specific causal patterns. Experiments show that CEEONN achieves competitive results compared with previous work.
    Keywords: causal event elements; causal event extraction; causal patterns.

  • Enhancing the energy efficiency by LEACH protocol in the internet of things   Order a copy of this article
    by Meghana Lokhande, Dipti Patil 
    Abstract: The Internet of Things (IoT) is one of the active applications of Wireless Sensor Networks (WSNs), with different objects or devices that can be connected over the internet. Limited battery life is the main concern for WSNs, as it affects network lifetime. Various medical devices and applications have benefitted from machine-to-machine (M2M) connectivity. In tele-robotic surgery, M2M communication between medical devices provides visual assistance to doctors during internal procedures and gives feedback on the progress of the operation from sensors embedded in surgical instruments. Many researchers have worked on reducing energy consumption in M2M networks, presenting energy-efficient routing protocols (EERP) that enhance energy efficiency and prolong sensor node life using the LEACH protocol. LEACH is a hierarchical protocol that elects sensor nodes (SN) as cluster heads (CH) based on current energy; the CH collects and compresses data and sends it to the destination node. The energy of a node dissipates when it receives data or transmits data to the base station. In the medical field, surgeons and patients are located at different places and connected through public networks. To make this communication reliable and robust, this research presents the design of a medical sensor node network with the LEACH protocol. After designing the network, a denial of service or man-in-the-middle attack is performed to analyse its impact on network performance. The system performance is measured using performance parameters. Based on the experimental results, the LEACH protocol can significantly extend the life of the WSN, thus improving energy efficiency. This makes the system more robust and reliable in the emerging area of medical science.
    Keywords: machine-to-machine communication; internet of things; security.
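    The classical LEACH cluster-head election rule that the abstract builds on can be written down directly; p is the desired cluster-head fraction, r the current round, and the bookkeeping of which nodes remain eligible is omitted.

      import random

      def leach_threshold(p, r):
          # T(n) = p / (1 - p * (r mod 1/p)) for nodes not elected CH in the last 1/p rounds
          return p / (1 - p * (r % round(1 / p)))

      def elect_cluster_heads(eligible_nodes, p, r):
          t = leach_threshold(p, r)
          return [n for n in eligible_nodes if random.random() < t]

      print(elect_cluster_heads(list(range(100)), p=0.05, r=3))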

  • Predicting stock price movement using a stack of multi-sized filter maps and convolutional neural networks   Order a copy of this article
    by Yash Thesia, Vidhey Oza, Priyank Thakkar 
    Abstract: This paper explores the use of Convolutional Neural Networks (CNN) to predict the movement of the stock market from a classification perspective. Standard methods of classification yield results with quite low confidence and precision. We, therefore, propose a CNN enhanced by multi-sized feature maps and spatial mapping providing more accurate two-way classification on a set of stocks. We also propose transforming stock indicators and data into a spatial map/image so that they can be processed using CNN. Our model and mapping achieves an average of 80% weighted f1 score for a two-way classification of market movement. A trading strategy is also employed and returns are compared with benchmarks. Our returns from the trading strategy from 2017 to 2020 outperform the previous benchmarks.
    Keywords: convolutional neural network; stock market; stock price movement; Inception; technical indicators.

  • Stacked auto-encoder for Arabic handwriting word recognition   Order a copy of this article
    by Benbakreti Samir, Benouis Mohamed, Roumane Ahmed, Benbakreti Soumia 
    Abstract: Arabic handwriting recognition systems face several challenges, such as very diverse scripting styles, the presence of pseudo-words and the position-dependent shape of a character inside a given word. These characteristics complicate the task of feature extraction. Our proposed solution to this problem is a Stacked Auto-Encoder (SAE) unsupervised learning approach applied to unconstrained Arabic handwritten word recognition. Our strategy consists of an unsupervised pre-training stage, i.e. the SAE, which extracts features layer by layer; then, through fine-tuning, the global system is used for the classification task. By exploiting this, our system gains the advantage of a holistic approach, i.e. without word segmentation. In order to train our model, we have enhanced the NOUN v3 hybrid (i.e. offline and online) database, which contains 9600 handwritten Arabic words and 4800 characters. This work focuses on the offline recognition of Arabic handwritten words using an SAE-based architecture for image classification. Our experimental study shows that, after careful tuning of the main SAE parameters, we obtained good results (98.03%).
    Keywords: Arabic handwriting; offline recognition; deep learning; auto-encoder; SAE.

  • Prior distributions based data augmentation for object detection   Order a copy of this article
    by Ke Sun, Xiangfeng Luo, Liyan Ma, Shixiong Zhu 
    Abstract: Object detection models based on deep convolutional neural networks require extensive labelled data as a training set, while the collection and annotation of these data are laborious and costly. To address the problem, data augmentation methods based on cut-and-paste that can exploit the visual context are widely used. However, these methods either limit the expansion of the instance diversity of the dataset or increase the computational burden. In this paper, we propose a novel data augmentation strategy based on prior distributions, which can be used to guide data augmentation for object detection. On one hand, the method can effectively capture the relationship between the foreground instance and the visual context. On the other hand, it can increase the instance diversity of the original dataset as much as possible. Experimental results show that the performance of popular object detection models can be effectively improved by expanding the original dataset with our method. Compared with the baseline, our method improves by 0.8 percentage points on PASCAL VOC and by 1.1 percentage points on the cross-data test set.
    Keywords: data augmentation; visual context; prior distributions; object detection.

  • Side-path FPN based multi-scale object detection   Order a copy of this article
    by Weixian Wan, Xiangfeng Luo, Liyan Ma, Shaorong Xie 
    Abstract: Multi-scale object detection faces the problem of how to obtain distinguishable features. Feature pyramid network (FPN) is the most typical method to construct a feature pyramid to obtain multi-scale features. FPN is beneficial for multi-scale object detection tasks to improve the mean Average Precision (mAP) of the detectors. However, owing to the lack of feature selection to eliminate redundant information, FPN cannot make full use of multi-scale features. In this paper, side-path FPN is proposed to address this problem. Side-path FPN contains two components: feature alignment and feature fusion. The feature alignment component uses the best operator to extract features. The feature fusion component can enhance features that are helpful for detection and reduce redundant information. With ResNet-50 as the backbone, compared with the original FPN, side-path FPN improves mAP by 1.8 points on the VOC2007 test data set and 1.0 point on the COCO 2017 test data set with MS COCO metrics.
    Keywords: object detection; multiple scale; feature selection.

  • On managing security in smart e-health applications   Order a copy of this article
    by Fiammetta Marulli, Stefano Marrone, Emanuele Bellini 
    Abstract: Distributed machine learning can provide an adaptable yet robust shared environment for the design of trusted AI applications, mainly because centralised remote learning mechanisms lack privacy. Nevertheless, distributed approaches have also been compromised by several attack models (mainly data poisoning): in such a situation, a malicious member of the learning party may inject bad data. As such applications grow in criticality, learning models must address security and protection issues as well as versatility issues. The aim of this paper is to improve these applications by providing extra security features for distributed and federated learning mechanisms: in more detail, the paper examines specific concerns such as the use of blockchain, homomorphic cryptography and meta-modelling techniques to ensure protection as well as other non-functional properties.
    Keywords: federated learning; cloud computing; security in machine learning; adversarial attacks.

  • Attitude control of an unmanned patrol helicopter based on an optimised spiking neural membrane system for use in coal mines   Order a copy of this article
    by Jiachang Xu, Yourui Huang, Ruijuan Zhao, Yu Liu 
    Abstract: For the attitude control of unmanned helicopters used in the intelligent patrolling of coal mines, a spiking neural membrane system is introduced for attitude optimisation control. First, a geometry-based attitude dynamics model suitable for coal mine scenarios is constructed. Second, in accordance with the attitude dynamics model, an extended spiking neural membrane system (ESNMS) is constructed, and an optimised spiking neural membrane system (OSNMS) and accompanying algorithm are designed to optimise the ESNMS. Then, the attitude control performance of the developed system is theoretically analysed. Finally, through the simulation of a semiphysical experimental platform, trajectory tracking is effectively realised. Under normal and wind disturbance conditions, comparisons with the traditional synovium controller (TSC) and linear feedback controller (LFC) show that the performance of the designed OSNMS is greatly improved, and the experimental results show that the OSNMS has good stability and effectiveness.
    Keywords: intelligent coal mining; unmanned helicopter; attitude control; membrane computing; spiking neural membrane system.

  • Transfer learning approach in deep neural networks for uterine fibroid detection   Order a copy of this article
    by Sumod Sundar, Sumathy Subramanian 
    Abstract: A Convolutional Neural Network (CNN) is a deep learning algorithm that takes images as input and automatically extracts features for effective class prediction. Many research attempts in medical imaging diagnosis use deep learning techniques. The performance of a CNN architecture is a major concern when dealing with limited data. Traditional CNN architectures such as AlexNet and GoogLeNet are trained with a large quantity of data (e.g., the ImageNet dataset). Also, CNN architectures such as NiftyNet, UNet, and SegNet are not designed using uterine fibroid images. The idea of transfer learning is used in this work by combining the pre-trained model Inception-V4 and the Support Vector Machine (SVM) classifier for better performance with limited data. The goal of the proposed approach is to efficiently detect the presence of fibroids in uterus MRI images. The features of fibroid-affected uterus images are extracted using the initial layers of Inception-V4 and transferred to the SVM during training. Several combinations of network and classifier are tested, and the performance metrics are evaluated. Experimental validation of the proposed model attained an accuracy of 81.05% with a U-kappa score of 0.402 in predicting fibroid images, a 2.65% accuracy improvement compared with the Fully Connected Neural Network (FCNN) used for fibroid detection.
    Keywords: CNN; transfer learning; deep learning; uterine fibroid; Inception-V4.

  • Numerical treatment and analysis for a class of time-fractional Burgers equation with the Dirichlet boundary conditions   Order a copy of this article
    by A.S.V Ravi Kanth, Neetu Garg 
    Abstract: This paper aims to study a class of time-fractional Burgers equations with Dirichlet boundary conditions in the Caputo sense. The Burgers equation occurs in the study of fluid dynamics, turbulent flows, acoustic waves and heat conduction. We discretise the equation in time by employing the Crank-Nicolson finite difference quadrature formula and then discretise the resulting equations in space using exponential B-splines. A rigorous stability and convergence analysis is carried out. Several test problems are studied to illustrate the efficacy and feasibility of the proposed method, and the numerical simulations confirm the coherence with the theoretical analysis. Comparisons with other existing results in the literature indicate the effectiveness of the method.
    Keywords: exponential B-spline; time-fractional Burgers equation; Caputo fractional derivative.
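    For readers unfamiliar with the notation, the LaTeX snippet below states a commonly used form of the problem class named in the abstract; the exact equation, coefficients and boundary data treated in the paper may differ.

      \begin{align}
        \frac{\partial^{\alpha} u}{\partial t^{\alpha}} + u\,\frac{\partial u}{\partial x}
            &= \nu\,\frac{\partial^{2} u}{\partial x^{2}},
            && x \in (a,b),\; t \in (0,T],\; 0 < \alpha < 1,\\
        u(a,t) = \phi_{1}(t), \quad u(b,t) &= \phi_{2}(t), \qquad u(x,0) = g(x),
      \end{align}
      where the Caputo fractional derivative in time is
      \begin{equation}
        \frac{\partial^{\alpha} u}{\partial t^{\alpha}}(x,t)
          = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} (t-s)^{-\alpha}\,
            \frac{\partial u}{\partial s}(x,s)\, \mathrm{d}s .
      \end{equation}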

  • Multiclass classification using convolution neural networks for plant leaf recognition of ayurvedic plants   Order a copy of this article
    by K.V.N. Rajesh, Lalitha Bhaskari Dhavala 
    Abstract: Ayurveda is the traditional medicine system of India. The ingredients from which ayurvedic medicines are made are mostly herbal and mineral in nature, and there are many herbal home remedies in India for general ailments. This knowledge has been passed down from generation to generation in large joint families, but it is slowly fading away in the current generation of nuclear families, which is often unable to identify even locally available plants. The authors propose using convolutional neural networks to address this problem: images of leaves are used to identify the plant, a case of multiclass classification. A leaf image database is created and a Convolutional Neural Network (CNN) model is built using the Keras deep learning framework with TensorFlow as the backend. The work presented in this paper is part of a larger research effort in this area. The paper explains the developed CNN model and presents the results for six ayurvedic leaves commonly available in and around the city of Visakhapatnam in the state of Andhra Pradesh.
    Keywords: convolution neural networks; multiclass classification; plant leaf recognition; leaf feature extraction.
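    The following is a minimal sketch, under assumed image size and directory layout, of the kind of Keras/TensorFlow CNN described above for six-class leaf classification; the layer sizes and the "leaf_db" path are illustrative, not the authors' exact model.

      import tensorflow as tf
      from tensorflow.keras import layers, models

      NUM_CLASSES = 6          # six ayurvedic leaf species, as in the abstract
      IMG_SIZE = (128, 128)    # assumed input resolution

      # Assumed folder layout: leaf_db/<class_name>/<image>.jpg
      train_ds = tf.keras.utils.image_dataset_from_directory(
          "leaf_db", image_size=IMG_SIZE, batch_size=32, label_mode="categorical")

      model = models.Sequential([
          layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
          layers.Conv2D(32, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Conv2D(64, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Flatten(),
          layers.Dense(128, activation="relu"),
          layers.Dropout(0.3),
          layers.Dense(NUM_CLASSES, activation="softmax"),
      ])

      model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
      model.fit(train_ds, epochs=10)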

  • Improved ELBP descriptors for face recognition   Order a copy of this article
    by Shekhar Karanwal, Manoj Diwakar 
    Abstract: In this work, three novel descriptors are introduced for Face Recognition (FR), namely the Sobel Horizontal Elliptical Local Binary Pattern (SHELBP), the Sobel Vertical ELBP (SVELBP) and the Sobel ELBP (SELBP). All three proposed descriptors extend the work of Nguyen et al. (2012), who proposed three descriptors for FR called HELBP, VELBP and ELBP. In HELBP and VELBP, the horizontal and vertical neighbourhood pixels (aligned elliptically) are compared with the centre pixel to produce the feature vectors, and ELBP is the combined histogram extracted from both descriptors. As this work shows experimentally, the performance of these descriptors is not effective under illumination variations (without pre-processing). To compensate for this, the Sobel operator is applied as an image pre-processing step before feature extraction. The features extracted from the Sobel magnitude and directional gradients alleviate this problem very effectively.
    Keywords: image pre-processing; feature extraction; dimension reduction; classification.
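    As a hedged sketch of the general idea (Sobel pre-processing followed by local-binary-pattern histograms), the snippet below uses scikit-image's circular LBP rather than the elliptical neighbourhoods of the paper; it is illustrative only.

      import numpy as np
      from skimage import data, filters
      from skimage.feature import local_binary_pattern

      def lbp_histogram(image, p=8, r=2):
          """Uniform LBP histogram of a single-channel image."""
          codes = local_binary_pattern(image, P=p, R=r, method="uniform")
          hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
          return hist

      face = data.camera().astype(float)        # placeholder grey image

      # Sobel pre-processing: horizontal gradient, vertical gradient and magnitude.
      grad_h = filters.sobel_h(face)
      grad_v = filters.sobel_v(face)
      magnitude = filters.sobel(face)

      # Descriptor = concatenation of LBP histograms over the Sobel responses,
      # loosely mirroring SHELBP / SVELBP / SELBP (circular, not elliptical, neighbourhoods).
      descriptor = np.concatenate([lbp_histogram(grad_h),
                                   lbp_histogram(grad_v),
                                   lbp_histogram(magnitude)])
      print(descriptor.shape)                   # (30,) for P=8 uniform patterns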

  • Feature reduction of rich features for universal steganalysis using a meta-heuristic approach   Order a copy of this article
    by Ankita Gupta, Rita Chhikara, Prabha Sharma 
    Abstract: The development of content-adaptive steganography has become a challenge for steganalysis and has led researchers towards the extraction of rich feature spaces. The detection of stego images based on Spatial Rich Model (SRM) features and their variants is a promising research area in universal steganalysis. SRM features are extracted as 106 submodels that collectively provide 34,671 features, so one of the most significant challenges in universal steganalysis is feature selection. In this paper, an improved binary particle swarm optimisation, Global and Local Best Particle Swarm Optimisation (GLBPSO), combined with a Fisher Linear Discriminant classifier, is used to identify the relevant feature submodels that improve the efficiency of a steganalyser. A significant reduction rate of more than 70% is achieved by the proposed approach, which helps reduce computational complexity without significantly affecting detection capability. The proposed methodology gives superior results when compared with state-of-the-art algorithms.
    Keywords: steganalysis; spatial rich model; GLBPSO; Fisher linear discriminant; ensemble; submodels; classification accuracy; steganography; meta-heuristic; optimisation.
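    To make the feature-selection idea concrete, here is a minimal, generic binary PSO wrapper around a Fisher-discriminant-style classifier (scikit-learn's LinearDiscriminantAnalysis as a stand-in); it is not the GLBPSO variant of the paper, and the synthetic data merely stands in for SRM submodels.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 40))            # placeholder for SRM feature submodels
      y = rng.integers(0, 2, size=200)          # cover (0) vs stego (1), illustrative labels

      def fitness(mask):
          """Cross-validated accuracy of an FLD-style classifier on the selected features."""
          if mask.sum() == 0:
              return 0.0
          clf = LinearDiscriminantAnalysis()
          return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

      n_particles, n_feat, n_iter = 12, X.shape[1], 20
      pos = rng.integers(0, 2, size=(n_particles, n_feat)).astype(float)   # binary positions
      vel = rng.normal(scale=0.1, size=(n_particles, n_feat))
      pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_fit.argmax()].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_particles, n_feat))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          prob = 1.0 / (1.0 + np.exp(-vel))                     # sigmoid transfer function
          pos = (rng.random((n_particles, n_feat)) < prob).astype(float)
          fit = np.array([fitness(p) for p in pos])
          improved = fit > pbest_fit
          pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
          gbest = pbest[pbest_fit.argmax()].copy()

      print("selected features:", int(gbest.sum()), "accuracy:", fitness(gbest))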

  • Self-similarity single image super-resolution based on blur kernel estimation for texture reconstruction   Order a copy of this article
    by Kawther Aarizou, Abdelhamid Loukil 
    Abstract: Most recent Single Image Super-Resolution (SISR) reconstruction methods adopt simple bicubic downsampling to construct low-resolution (LR) and high-resolution (HR) pairs for training. Such models learn the inverse of an idealised degradation operation, which leads to less realistic SR images: the recovered details are either blurred or not reminiscent of naturally observed textures (Du et al., 2020). Generating an SR image from a single LR image with faithful ground-truth texture and no external information remains an open issue, especially when the degradation model is unknown (i.e., not necessarily bicubic downscaling). To overcome this issue, we design a single-image SR reconstruction framework for real-world scenarios by injecting the image-specific degradation kernel into the training process. Our method combines the advantages of both SISR and Multiple-Image Super-Resolution (MISR) techniques by generating a dataset from the internal statistics of the LR image. A small CNN is trained on this internal dataset and requires no additional or external data. Our method is shown to recover more textural detail in the generated output and outperforms state-of-the-art deep models.
    Keywords: unsupervised single-image super-resolution; internal learning; image-specific super-resolution; data-augmentation; kernel estimation.
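    The following sketch illustrates, under an assumed (hypothetical) estimated blur kernel, how image-specific LR-HR training pairs can be generated from the LR input itself, in the spirit of the internal-statistics approach above; it is not the authors' code.

      import numpy as np
      from scipy.ndimage import convolve

      def degrade(image, kernel, scale=2):
          """Blur with the estimated kernel, then subsample: HR 'parent' -> LR 'child'."""
          blurred = convolve(image, kernel, mode="reflect")
          return blurred[::scale, ::scale]

      def random_crop(image, size, rng):
          y = rng.integers(0, image.shape[0] - size)
          x = rng.integers(0, image.shape[1] - size)
          return image[y:y + size, x:x + size]

      rng = np.random.default_rng(0)
      lr_image = rng.random((128, 128))                 # the single LR input (placeholder)

      # Hypothetical image-specific kernel, e.g. produced by a kernel-estimation step.
      k = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]).astype(float)
      k /= k.sum()

      # Internal dataset: crops of the LR image act as HR targets, and their
      # kernel-degraded versions act as the corresponding LR inputs.
      pairs = []
      for _ in range(64):
          hr_patch = random_crop(lr_image, 48, rng)
          pairs.append((degrade(hr_patch, k), hr_patch))

      print(pairs[0][0].shape, pairs[0][1].shape)       # (24, 24) (48, 48)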

  • Distributed energy management study based on blockchain technology   Order a copy of this article
    by Jingzhao Li, Lei Wang, Xiaowei Qin 
    Abstract: This article proposes a power network management system based on blockchain technology to address the difficulties of distributed energy management and power resource dispatching in smart grids. Ethereum is used as the development platform to build a blockchain for power interaction management that mainly involves distributed energy transactions. The consensus mechanism for electricity generation and consumption in the blockchain is designed on the basis of a modified proof-of-stake (PoS) algorithm, together with a selection function that determines the bookkeeper between the two parties to a transaction. To guarantee the rationality of energy transactions in the grid, the article presents a distributed power dispatching strategy using the K-means clustering algorithm and the particle swarm optimisation (PSO) algorithm. Finally, distributed power matching transactions are completed in the power interaction management blockchain by means of power-operation smart contracts. The experimental results show that the consensus mechanism and power dispatching strategy designed in this paper effectively solve the matching problem in distributed power trading. The application of the power-operation smart contract further improves the success rate of transactions and effectively reduces their time consumption.
    Keywords: smart grid; distributed energy; power network management system; power interactive management blockchain; smart contract.
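    As a hedged illustration of one ingredient of the dispatching strategy described above, the snippet clusters daily prosumer net-load profiles with scikit-learn's K-means; the profiles are synthetic placeholders, and the PSO-based matching step is omitted.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      hours = np.arange(24)

      # Synthetic daily net-load profiles (kW) for 60 prosumers: some are net
      # consumers in the evening, others export solar power around midday.
      consumers = 1.0 + 0.8 * np.exp(-((hours - 19) ** 2) / 8) + 0.1 * rng.normal(size=(30, 24))
      producers = 0.5 - 1.2 * np.exp(-((hours - 13) ** 2) / 10) + 0.1 * rng.normal(size=(30, 24))
      profiles = np.vstack([consumers, producers])

      # K-means groups prosumers with similar generation/consumption behaviour,
      # which a dispatch layer could then match for local energy trading.
      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
      for c in range(2):
          members = np.where(km.labels_ == c)[0]
          print(f"cluster {c}: {len(members)} prosumers, "
                f"mean midday net load {profiles[members, 12:15].mean():.2f} kW")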

  • A genetic algorithm for real time demand side management in smart microgrids   Order a copy of this article
    by Salvatore Venticinque, Massimiliano Diodati 
    Abstract: One of the main drawbacks in the management of renewable resources, including wind and solar energy, is the uncertainty in their behaviour. Demand side management (DSM) shifts household loads from periods with a surplus in consumption to periods with a surplus in photovoltaic production. In this paper, we propose the use of a genetic algorithm to find the schedule of energy loads that best matches the energy production of photovoltaic panels. We aim to optimise self-consumption while satisfying real-time constraints, which allows unforeseen changes to the planned schedule or unpredictable variations in renewable energy production to be addressed. We design specialised genetic operators to accelerate, within the first iterations, convergence towards a local optimum of the solution space, and we evaluate how such improvements affect the optimality of the results.
    Keywords: smart microgrid; optimisation; genetic algorithm; demand side management.
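    Below is a minimal, generic genetic-algorithm sketch for shifting deferrable loads towards hours with photovoltaic surplus; the PV profile, appliance list and operators are illustrative and do not reproduce the specialised operators of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      HOURS = 24
      pv = np.maximum(0, 3.0 * np.sin(np.pi * (np.arange(HOURS) - 6) / 12))  # kW, assumed PV profile
      base_load = np.full(HOURS, 0.4)                                        # kW, non-shiftable load
      appliances = [(2.0, 2), (1.5, 3), (1.0, 1)]                            # (power kW, duration h)

      def fitness(start_hours):
          """Negative energy drawn from the grid (higher means better self-consumption)."""
          load = base_load.copy()
          for (power, dur), start in zip(appliances, start_hours):
              load[start:start + dur] += power
          return -np.sum(np.maximum(0, load - pv))

      def random_individual():
          return np.array([rng.integers(0, HOURS - dur + 1) for _, dur in appliances])

      pop = [random_individual() for _ in range(30)]
      for _ in range(100):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:10]
          children = []
          while len(children) < 20:
              a, b = rng.choice(10, size=2, replace=False)
              cut = rng.integers(1, len(appliances))
              child = np.concatenate([parents[a][:cut], parents[b][cut:]])
              if rng.random() < 0.2:                       # mutation: move one appliance
                  i = rng.integers(len(appliances))
                  child[i] = rng.integers(0, HOURS - appliances[i][1] + 1)
              children.append(child)
          pop = parents + children

      best = max(pop, key=fitness)
      print("start hours:", best, "grid energy (kWh):", -fitness(best))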

  • Empowerment of cluster and grid load-balancing algorithms to support distributed exascale computing systems with high compatibility   Order a copy of this article
    by Faezeh Mollasalehi, Shirin Shahrabi, Elham Adibi, Ehsan Mousavi Khaneghah 
    Abstract: The occurrence of dynamic and interactive events in processes changes both the processes' requirements and the state of the computing elements of the system, and hence the state of the load balancer. This impact may render the load balancer unable to manage the load balancing of the system. On the other hand, the nature of the scientific applications that require distributed exascale systems means that such systems must run both traditional programs and distributed exascale programs. As a result, the load balancer must not only support traditional patterns and mechanisms, but these mechanisms must also be empowered to handle the states caused by events of a dynamic and interactive nature. In this paper, in addition to examining events of a dynamic and interactive nature, a mathematical model is presented to analyse the impact of this concept on the load balancer. The model examines which characteristics a traditional load balancer, as used in cluster and grid computing systems, should support in order to manage the dynamic and interactive nature of processes and their execution in distributed exascale computing systems. Based on the model and on the definition of global activity, the mechanisms used in cluster and grid systems are examined for use in distributed exascale systems. For cluster and grid systems that support the characteristics specified in this article, the load balancer can manage events of a dynamic and interactive nature in 60% of cases, and the mathematical model can be used as the load-balancing mechanism in the distributed exascale system.
    Keywords: distributed exascale computing; load balancing; events of dynamic and interactive nature; load-balancing algorithms; cluster computing; grid computing.

Special Issue on: Recent Advancements in Machine Learning Techniques for Big Data and Cloud Computing

  • Research of the micro grid renewable energy control system based on renewable related data mining and forecasting technology   Order a copy of this article
    by Lin Yue, Yao-jun Qu, Yan-xia Song, Kanae Shunshoku, Jing Bai 
    Abstract: The output power of renewable energy sources fluctuates randomly and is unstable, which has a harmful effect on the stability of renewable power grids and causes a low usage ratio of renewable energy output power. Thus, this paper proposes a method to predict the output power of renewable energy based on data mining technology. Firstly, the renewable generation power prediction accuracy of three different algorithms, linear regression, decision tree and random forest, is obtained and compared. Secondly, by applying the prediction results to the power dispatch control system, grid-connected renewable power can be consumed by the grid-connected load to improve the usage ratio of renewable power. A simulation model and an experimental platform are established to verify and analyse the usefulness of the prediction. The experiments show that the prediction accuracy of the random forest algorithm is the highest. The trend of renewable energy output power over a period can be calculated using data mining technology, and the designed experimental platform can adjust its working state automatically by following instructions derived from the data mining results, which increases the usage ratio of renewable energy output power and improves the stability of the renewable power grid.
    Keywords: data mining; micro grid; renewable energy.
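    The comparison of the three algorithms named in the abstract can be reproduced generically with scikit-learn, as in the hedged sketch below; the synthetic weather-like features and the choice of error metric are assumptions.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(3)
      n = 2000
      irradiance = rng.uniform(0, 1000, n)          # W/m^2, synthetic stand-in for weather data
      temperature = rng.uniform(-5, 35, n)          # degrees C
      wind = rng.uniform(0, 15, n)                  # m/s
      X = np.column_stack([irradiance, temperature, wind])
      power = 0.004 * irradiance + 0.05 * wind**1.5 + rng.normal(0, 0.3, n)   # kW, toy target

      X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.25, random_state=0)

      models = {
          "linear regression": LinearRegression(),
          "decision tree": DecisionTreeRegressor(max_depth=8, random_state=0),
          "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
      }
      for name, model in models.items():
          model.fit(X_tr, y_tr)
          print(f"{name:18s} MAE = {mean_absolute_error(y_te, model.predict(X_te)):.3f} kW")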

  • Research on advertising content recognition based on convolutional neural network and recurrent neural network   Order a copy of this article
    by Xiaomei Liu, Fazhi Qi 
    Abstract: The problem tackled in this paper is the identification of textual advertisements published by users on a medium-sized social networking website. First, the text is segmented; then it is transformed into a sequence tensor using a word-vector representation and fed into a deep neural network. Compared with other neural networks, an RNN is well suited to training samples that form input sequences of varying length. Although the RNN can in theory handle sequential data elegantly, it suffers from the vanishing-gradient problem, so in practice a special RNN variant, the LSTM, is widely used. In the experiments, a convolutional neural network is also used to process the text sequence, treating time as a spatial dimension. Finally, the paper briefly introduces the use of universal language model fine-tuning for text classification.
    Keywords: RNN; LSTM; CNN; word vector; text classification.
    DOI: 10.1504/IJCSE.2021.10034064
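    A minimal Keras sketch of the LSTM-based text classifier outlined above follows; the vocabulary size, sequence length and embedding dimension are assumptions, and the data are random placeholders rather than real posts.

      import numpy as np
      from tensorflow.keras import layers, models

      VOCAB, MAXLEN, EMB = 20000, 100, 128      # assumed vocabulary, sequence length, embedding dim

      # Placeholder data: integer word indices per post, label 1 = advertisement.
      x = np.random.randint(1, VOCAB, size=(512, MAXLEN))
      y = np.random.randint(0, 2, size=(512,))

      model = models.Sequential([
          layers.Embedding(VOCAB, EMB),                         # word-vector lookup
          layers.LSTM(64),                                      # sequence model (mitigates vanishing gradients)
          layers.Dense(1, activation="sigmoid"),                # advertisement / not advertisement
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.fit(x, y, epochs=2, batch_size=32, validation_split=0.1)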
     

Special Issue on: CCPI'20 Smart Cloud Applications, Services and Technologies

  • A big data and cloud computing model architecture for a multi-class travel demand estimation through traffic measures: a real case application in Italy   Order a copy of this article
    by Armando Cartenì, Ilaria Henke, Assunta Errico, Marida Di Bartolomeo 
    Abstract: Big data and cloud computing offer an extraordinary opportunity to implement multipurpose smart applications for the management and control of transport systems. The aim of this paper is to propose a big data and cloud computing model architecture for multi-class origin-destination demand estimation, based on the application of a bi-level transport algorithm using traffic counts on a congested network, and to support sustainable policies at the urban scale. The proposed methodology has been applied to a real case study of travel demand estimation in the city of Naples (Italy), also aiming to verify the effectiveness of a sustainable policy that reduces traffic congestion by about 20% through en-route travel information. The results, although preliminary, suggest the usefulness of the proposed methodology in terms of its ability to estimate travel demand in real time or over pre-fixed time periods.
    Keywords: cloud computing; big data; virtualisation; smart city; internet of things; transportation planning; demand estimation; sustainable mobility; simulation model.

  • A methodology for introducing an energy-efficient component within the rail infrastructure access charges in Italy   Order a copy of this article
    by Marilisa Botte, Ilaria Tufano, Luca D'Acierno 
    Abstract: After the separation of rail infrastructure managers from rail service operators that occurred within the European Union in 1991, the necessity of defining an access charge framework to ensure non-discriminatory access to the rail market arose. Essentially, such a framework has to guarantee an economic balance for infrastructure managers' accounts. Currently, in the Italian context, access charge schemes neglect the actual energy consumption of rail operators and the related traction energy costs borne by infrastructure managers. Therefore, we propose a methodology, integrating cloud-based tasks and simulation tools, for including this aspect within the infrastructure toll, thus making the system more sustainable. Finally, to show the feasibility of the proposed approach, it has been applied to a real Italian rail context, i.e. the Rome-Naples high-speed railway line. The results show that customising the toll access charges by considering the power supply required may generate a virtuous loop, increasing the energy efficiency of rail systems.
    Keywords: cloud-based applications; rail infrastructure access charges; environmental component; energy-saving policies.

  • Edge analytics on resource-constrained devices   Order a copy of this article
    by Sean Savitz, Charith Perera, Omer Rana 
    Abstract: Video and image cameras have become an important type of sensor within the Internet of Things (IoT) sensing ecosystem. Camera sensors can measure our environment at high precision, providing the basis for detecting more complex phenomena than other sensors, e.g. temperature or humidity sensors, can. This comes at a high cost in CPU, memory and storage resources, and requires consideration of various deployment constraints, such as lighting and the height of camera placement. Using benchmarks, this work evaluates object classification on resource-constrained devices, focusing on video feeds from IoT cameras. The models used in this research include MobileNetV1, MobileNetV2 and Faster R-CNN, which can be combined with regression models for precise object localisation. We compare the models by their accuracy in classifying objects and by the demand they impose on the computational resources of a Raspberry Pi.
    Keywords: internet of things; edge computing; edge analytics; resource-constrained devices; camera sensing; deep learning; object detection.
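    A hedged sketch of the kind of on-device benchmark described above is shown below, using the TensorFlow Lite interpreter; the model file name is a placeholder, and input pre-processing will depend on the specific MobileNet variant.

      import time
      import numpy as np
      import tensorflow as tf

      # Placeholder path: any MobileNet-style classification model converted to TFLite.
      interpreter = tf.lite.Interpreter(model_path="mobilenet_v2.tflite")
      interpreter.allocate_tensors()
      inp = interpreter.get_input_details()[0]
      out = interpreter.get_output_details()[0]

      # Dummy frame standing in for a camera capture on the Raspberry Pi.
      frame = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
      if inp["dtype"] == np.float32:
          frame = frame.astype(np.float32) / 255.0

      latencies = []
      for _ in range(50):
          start = time.perf_counter()
          interpreter.set_tensor(inp["index"], frame)
          interpreter.invoke()
          scores = interpreter.get_tensor(out["index"])[0]
          latencies.append(time.perf_counter() - start)

      print(f"top class: {int(np.argmax(scores))}, "
            f"median latency: {1000 * np.median(latencies):.1f} ms")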

  • Traffic control strategies based on internet of vehicles architectures for smart traffic management: centralised vs decentralised approach   Order a copy of this article
    by Houda Oulha, Roberta Di Pace, Rachid Ouafi, Stefano De Luca 
    Abstract: Real-time traffic control is one of the most widely adopted strategies for reducing traffic congestion. However, the effectiveness of this approach is constrained not only by the adopted framework but also by the data. Indeed, computational complexity may significantly affect this kind of application, so the trade-off between effectiveness and efficiency must be analysed and the most appropriate traffic control strategy carefully evaluated. In general, there are three main control approaches in the literature: centralised control, decentralised control and distributed control, which is an intermediate approach. In this paper, the effectiveness of a centralised and a decentralised approach is compared on two network layouts. The results, evaluated not only in terms of a performance index based on total network delay but also in terms of emissions and fuel consumption, highlight that the considered centralised approach outperforms the adopted decentralised one, and this is particularly evident for more complex layouts.
    Keywords: cloud computing; internet of vehicles; transportation; centralised control; decentralised control; emissions; fuel consumption.

  • ACSmI: a solution to address the challenges of cloud services federation and monitoring towards the cloud continuum   Order a copy of this article
    by Juncal Alonso, Maider Huarte, Leire Orue-Echevarria 
    Abstract: The evolution of cloud computing has changed the way in which cloud service providers offer their services and how cloud customers consume them, moving towards the usage of multiple cloud services, in what is called multi-cloud. Multi-cloud is gaining interest with the expansion of IoT, edge computing and the cloud continuum, where distributed cloud federation models are necessary for effective application deployment and operation. This work presents ACSmI (Advanced Cloud Service Meta-Intermediator), a solution that implements a cloud federation supporting the seamless brokerage of cloud services. Technical details addressing the identified shortcomings are presented, including a proof of concept built on JHipster, Java, InfluxDB, Telegraf and Grafana. ACSmI contributes to relevant elements of the European Gaia-X initiative, specifically the federated catalogue, continuous monitoring and certification of services. The experiments show that the proposed solution effectively saves up to 75% of the DevOps teams' effort to discover, contract and monitor cloud services.
    Keywords: cloud service broker; cloud services federation; cloud services brokerage; cloud services intermediation; hybrid cloud; cloud service monitoring; multi-cloud; DevOps; cloud service level agreement; cloud service discovery; multi-cloud service management; cloud continuum.

  • User perception and economic analysis of an e-mobility service: development of an electric bus service in Naples, Italy   Order a copy of this article
    by Ilaria Henke, Assunta Errico, Luigi Di Francesco 
    Abstract: Among sustainable mobility policies, electric mobility seems to be one of the best choices for reaching sustainability goals, but it has limits that could be partially overcome in local public transport. This research presents a methodology to design a new sustainable public transport service that meets users' needs while analysing its economic feasibility. The methodology is then applied to a real case study: renewing an 'old' bus fleet with an electric one charged by a photovoltaic system in the city of Naples (Southern Italy). Its effects on users' mobility choices were assessed through a mobility survey, the bus line and the photovoltaic system were designed, and the economic feasibility of the project was assessed through a cost-benefit analysis. This research lies in the field of smart mobility and new technologies, which increasingly need to store, manage and process the large amounts of data typical of cloud computing and big data applications.
    Keywords: e-mobility; electric bus services; cloud computing; user perception; economic analysis; cost-benefit analysis; photovoltaic system; sustainable mobility policies; sustainable goals; new technologies; local emissions; environmental impacts.

Special Issue on: Intelligent Self-Learning Algorithms with Deep Embedded Clustering

  • Application of virtual numerical control technology in cam processing   Order a copy of this article
    by Linjun Li 
    Abstract: Numerical control (NC) machining is an important processing method in the machinery manufacturing industry. In most cases, as the final processing procedure, NC machining directly determines the quality of the finished product. Cams are key components in many industries, such as the automobile, internal combustion engine and defence industries, and the precision and efficiency of cam processing have a direct impact on the quality, life and energy-saving standard of the engine and related products. This paper takes cam NC grinding as the research object, takes the optimisation and intelligent automation of the processing technology as the goal, and uses virtual NC technology to develop a process-optimisation and NC machining software platform specifically for cam NC grinding. The software platform contains basic technology libraries, including a machine tool library, grinding wheel library, material library, coolant library and process accessory library, as well as process intelligence libraries, including a process example library, meta-knowledge rule library and forecast model library. With the support of these databases, the software platform can realise intelligent optimisation and automatic NC programming of the cam grinding process plan. Because the software platform involves many research topics, this paper mainly focuses on modelling the motion of the NC machining process system, the architecture of the intelligent platform software for cam NC machining, and the virtual NC machining simulation of the process system. The study is therefore of great significance for cam NC grinding.
    Keywords: cam grinding; numerical control grinding; intelligent platform software; process problems; virtual grinding.

  • Research on oil painting effect based on image edge numerical analysis   Order a copy of this article
    by Yansong Zhang 
    Abstract: With the continuous development of non-photorealistic rendering technology, the application of oil-painting effects to images is becoming increasingly common. The traditional oil-painting effect is often not satisfactory enough to meet users' needs. Therefore, this paper investigates an oil-painting effect based on numerical analysis of image edges and constructs a corresponding algorithm for edge analysis and detection. Comparison experiments against a traditional oil-painting algorithm show that the proposed algorithm can accurately analyse and detect image edges, and that the final rendering is more natural and smoother than the traditional oil-painting effect.
    Keywords: image; edge value; analysis; oil painting effect.

  • Research on multimedia and interactive teaching model of college English   Order a copy of this article
    by Zhang Juan 
    Abstract: Since current higher education focuses on cultivating comprehensive practical ability rather than simply inculcating theoretical ideas, English teaching should be reformed in terms of teaching purpose, teaching content and teaching strategy. A multiple interactive English teaching model is constructed to improve the flow of information in the teaching process. Spatial reconstruction is used to extract and retrieve information from multiple teaching resources, to optimise and control the allocation of resources under load-balancing conditions, and to construct a data-mining model of college English teaching resources in an information technology environment. The results of this information processing are used to maximise the enthusiasm and creativity of teachers and students, to continue the development of multimedia network resources and to create a multiple interactive teaching environment, thus providing a platform for students.
    Keywords: information technology environment; college English; multiple interactive teaching mode.

  • Design and application of system platform in piano teaching based on feature comparison   Order a copy of this article
    by Tingting Rao 
    Abstract: Traditional piano teaching is managed mainly by hand, which leads to low management efficiency, confusion and other problems that seriously restrict the development of piano teaching activities. In order to make up for the limitations of piano teaching materials and the shortage of music teachers in some areas, automatic computer-based scoring is introduced into music learning, and a piano-music singing scoring system based on feature comparison is developed. The difference between this scoring system and existing commercial music-scoring systems on the internet lies in its educational orientation, which is mainly reflected in the design and implementation of the feedback evaluation module. The system uses melody feature extraction, similarity comparison and pitch data analysis to perform automatic scoring of singing, locate error positions, estimate the causes of errors and give learners detailed feedback and guidance. The application case study shows that the system has practical value.
    Keywords: piano music teaching material; similarity comparison; learning feedback.
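    As a hedged sketch of the pitch-analysis and similarity-comparison steps mentioned above (not the authors' system), the snippet below extracts fundamental-frequency contours with librosa's pYIN and compares them with dynamic time warping; the file names are placeholders.

      import numpy as np
      import librosa

      def pitch_contour(path):
          """Fundamental-frequency contour (Hz) of a recording; unvoiced frames become 0."""
          y, sr = librosa.load(path, sr=22050)
          f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                       fmax=librosa.note_to_hz("C7"), sr=sr)
          return np.nan_to_num(f0)

      # Placeholder file names: the reference melody and a learner's performance.
      reference = pitch_contour("reference_melody.wav")
      student = pitch_contour("student_take.wav")

      # Dynamic time warping aligns the two contours despite tempo differences;
      # the accumulated cost at the end of the path gives a simple dissimilarity score.
      D, wp = librosa.sequence.dtw(X=reference[np.newaxis, :], Y=student[np.newaxis, :])
      score = D[-1, -1] / len(wp)
      print(f"average per-frame pitch deviation along the alignment: {score:.1f}")
      # Frames with large deviation along the warping path can be flagged
      # back to the learner as likely error positions.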

Special Issue on: Computational Intelligence in Data Science

  • Topologisation of the situation geographical image in the aspect of control of local transport and economic activity   Order a copy of this article
    by Sergei Bidenko, Sergei Chernyi, Yuri Nikolashin, Evgeniy Borodin, Denis Milyakov 
    Abstract: The specific features of cartographic images are considered from the point of view of procedures for assessing the situation in an area of maritime transport activity and spatial planning. The spatial analysis tasks that require a transition from cartographic to topological mapping of geographic reality are highlighted. Existing anamorphic techniques, their classification, and their advantages and disadvantages are reviewed. Models for constructing terrain anamorphoses that topologise the geoimage of the real situation are developed. An algorithm based on an affine transformation, which distorts the boundaries of an area relative to the centre of mass of the region, is proposed and compared with the applicable Gastner-Newman algorithm.
    Keywords: maritime territorial activity; territorial situation; analysis and assessment of the situation; base map; geospace; geoobject; anamorphosing; cartoid; anamorphosis.
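    To illustrate the general kind of centre-of-mass-relative affine distortion mentioned above (a generic sketch, not the authors' algorithm), the snippet rescales polygon vertices about the region's centroid in proportion to an attribute weight.

      import numpy as np

      def distort_about_centroid(boundary, weight):
          """Scale a polygon's vertices about its centroid by a factor derived from `weight`.

          boundary : (n, 2) array of x, y vertices; weight > 1 inflates the region,
          weight < 1 shrinks it, roughly in the spirit of area anamorphosis.
          """
          centroid = boundary.mean(axis=0)
          scale = np.sqrt(weight)              # area scales with the square of the linear factor
          return centroid + scale * (boundary - centroid)

      # A unit-square region whose mapped "importance" (e.g. traffic volume) is 2.25x average.
      square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
      print(distort_about_centroid(square, weight=2.25))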

  • Palm-print recognition based on quality estimation and feature dimension   Order a copy of this article
    by Poonam Poonia, Pawan K. Ajmera 
    Abstract: The exploitation of biometric traits for human identification has become increasingly popular in recent years. Among the widely used biometric traits, the palm-print is an important one because of its acquisition convenience and comparatively high recognition accuracy. This paper proposes a palm-print recognition system based on quality estimation and feature dimensions. Initially, a quality assessment is applied to the extracted region of interest (ROI) images. A Gabor filter is employed to extract palm-print features at various scales and orientations, and kernel-based dimensionality reduction is applied in the full space to reduce the high-dimensional Gabor features. The experiments are conducted on the PolyU, IIT-Delhi and CASIA palm-print databases. The best recognition performance, an Equal Error Rate (EER) of 0.051% and a Recognition Rate (RR) of 98.34%, was achieved on the PolyU database. The experimental results prove the effectiveness of the proposed approach.
    Keywords: palm-print; pre-processing; quality control; dimensionality reduction; feature extraction.
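    A hedged sketch of the Gabor-plus-kernel-dimensionality-reduction pipeline described above is given below using scikit-image and scikit-learn; the filter-bank settings and the use of kernel PCA (as one possible kernel-based reduction) are assumptions, and the images are placeholders.

      import numpy as np
      from skimage import data, transform
      from skimage.filters import gabor
      from sklearn.decomposition import KernelPCA

      def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
          """Mean/variance of Gabor magnitude responses over several scales and orientations."""
          feats = []
          for f in frequencies:
              for k in range(n_orientations):
                  real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orientations)
                  mag = np.hypot(real, imag)
                  feats.extend([mag.mean(), mag.var()])
          return np.array(feats)

      # Placeholder "ROI" images standing in for pre-processed palm-print regions.
      rois = [transform.resize(data.camera()[i*32:i*32+128, i*32:i*32+128], (64, 64))
              for i in range(6)]
      X = np.stack([gabor_features(r) for r in rois])     # (6, 24) Gabor feature matrix

      # Kernel-based dimensionality reduction of the Gabor features.
      X_low = KernelPCA(n_components=3, kernel="rbf", gamma=1e-2).fit_transform(X)
      print(X_low.shape)                                   # (6, 3)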

  • Design and implementation of an efficient and cost-effective deep feature learning model for rice yield mapping   Order a copy of this article
    by Divakar M. Sarith, Elayidom M. Sudheep, R. Rajesh 
    Abstract: Crop yield prediction before harvest is essential to address the instability of crop prices and to ensure food security. Existing approaches to crop yield forecasting focus on survey data and are expensive. Remote-sensing-based crop yield forecasting is a promising alternative, especially in areas where field data are scarce. Recent studies using machine learning and deep learning techniques adopt modern representation learning instead of traditional hand-crafted features, which discarded many of the spectral bands available from satellite imagery. A deep feature learning model using convolutional LSTM cells is used to forecast rice yield from remote-sensing satellite imagery. A convolutional LSTM, with convolutional input and recurrent transformations, directly captures the spatial and temporal features of the input data. Feature selection is performed using principal component analysis to reduce the dimension of the input data without much loss in performance. The results suggest that the learned features are highly informative and that the proposed model performs better than other existing techniques.
    Keywords: precision agriculture; remote sensing; crop yield forecast; deep learning; recurrent neural network; long short term memory; convolutional LSTM network; PCA; MODIS.
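    The convolutional-LSTM idea can be sketched generically in Keras as below; the patch size, band count and sequence length are assumptions, and the data are random placeholders rather than MODIS imagery.

      import numpy as np
      from tensorflow.keras import layers, models

      # Placeholder input: sequences of 8 temporal composites, each a 16x16 patch
      # with 4 spectral bands (sizes are assumptions, not the paper's configuration).
      T, H, W, C = 8, 16, 16, 4
      x = np.random.rand(32, T, H, W, C).astype("float32")
      y = np.random.rand(32, 1).astype("float32")          # yield per sample (toy target)

      model = models.Sequential([
          layers.Input(shape=(T, H, W, C)),
          layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=False),
          layers.GlobalAveragePooling2D(),
          layers.Dense(32, activation="relu"),
          layers.Dense(1),                                  # regression head for yield
      ])
      model.compile(optimizer="adam", loss="mse", metrics=["mae"])
      model.fit(x, y, epochs=2, batch_size=8)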

  • Stock indices price prediction in real time data stream using deep learning with extra-tree ensemble optimisation   Order a copy of this article
    by Monika Arya, Hanumat Sastry G 
    Abstract: Stock price prediction has always been one of the favourite research topics in industry and academia. The patterns of stock markets follow a random walk and are highly volatile. Stock traders usually forecast upcoming trends and decide to buy or sell a stock by performing fundamental or technical analysis. Earlier prediction models using machine learning, ensemble learning, neural networks and deep learning (DL) techniques for stock forecasting are complex and less accurate. We propose a novel DL network with Extra-Tree Ensemble optimisation (DELETE) for predicting stock index price trends in a real-time data stream. We apply the extra-tree ensemble to optimise the cross-entropy loss function and to derive highly predictive Stock Technical Indicators (STIs), thus improving prediction accuracy. In the proposed neuro-computational model, these STIs are supplied as tensors to make computation faster. The experiments were conducted in Python using the deep learning libraries TensorFlow and Keras. For performance evaluation, data from three popular stock indices of the National Stock Exchange (NSE) of India were chosen. The daily prediction model achieved an accuracy of up to 78.9% and an average accuracy of 66.61%, which is up to 30.2% higher than benchmark models. The DL algorithm predicts what is likely to happen to prices, while the optimisation algorithm strengthens such predictions by improving their accuracy. The proposed model performed well in deriving correct monthly trends as well as achieving higher prediction accuracy for daily buy/sell decisions.
    Keywords: deep learning; ensemble learning; extra-tree optimisation; machine learning; neural network; stock indices prediction; predictive model; real time data streams.
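    As a hedged, generic illustration of using an extra-trees ensemble to rank stock technical indicators (STIs) before feeding a deep model, consider the sketch below; the indicators and data are synthetic placeholders, and the ranking-then-network wiring is an assumption rather than the DELETE architecture.

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import ExtraTreesClassifier

      rng = np.random.default_rng(4)
      close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 1000)))   # synthetic closing prices

      # A few simple technical indicators computed from the price series.
      stis = pd.DataFrame({
          "sma_10": close.rolling(10).mean(),
          "sma_30": close.rolling(30).mean(),
          "momentum_5": close.diff(5),
          "volatility_10": close.rolling(10).std(),
      })
      future_up = (close.shift(-1) > close).astype(int).rename("up")   # next-day up/down label
      data = pd.concat([stis, future_up], axis=1).iloc[:-1].dropna()
      X, y = data.drop(columns="up").values, data["up"].values

      # The extra-trees ensemble ranks indicators by importance; the top-ranked ones
      # would then be supplied as input tensors to a deep network.
      forest = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X, y)
      ranking = sorted(zip(stis.columns, forest.feature_importances_),
                       key=lambda kv: kv[1], reverse=True)
      for name, importance in ranking:
          print(f"{name:14s} {importance:.3f}")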

  • A two-stage text detection approach using gradient point adjacency and deep network   Order a copy of this article
    by Tauseef Khan, Ayatullah Faruk Mollah 
    Abstract: Accurate localisation of text in complex scene environments is a pivotal problem in computer vision and image processing research. Although several methods have been reported, one may hardly find any that performs adequately well in the wild, and most reported methods target Latin script; Indian regional scripts, which appear in diverse text patterns and orientations, have not received ample attention. In this paper, an attempt is made to design a simple, robust yet effective text detection method for both scene and computer-generated images in a multi-script, Indian context. At first, a fine-scale edge map is generated from the original image; subsequently, adaptive clustering is applied to form clusters of edge points based on their spatial density. Foreground objects are then extracted with the help of the appropriate cluster boundaries and treated as prospective text proposals. These text proposals are fed to a deep convolutional neural network, which classifies them as text or non-text components. Finally, true text components are aggregated into the localised texts of the original image. The proposed method is evaluated on two popular benchmark datasets, viz. ICDAR 2017-MLT and ICDAR 2013 born-digital images. The results surpass several other state-of-the-art methods, demonstrating the method's strength and usefulness in both scene and born-digital environments.
    Keywords: text detection; candidate text proposal; foreground object classification; text proposal aggregation; deep network.
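    A hedged sketch of the first stage (edge map, density-based clustering of edge points, and bounding boxes as text proposals) is given below with OpenCV and scikit-learn; the thresholds are illustrative and the CNN classification stage is omitted.

      import numpy as np
      import cv2
      from sklearn.cluster import DBSCAN

      # Placeholder scene image: any BGR image containing text would do here.
      image = np.full((240, 320, 3), 255, dtype=np.uint8)
      cv2.putText(image, "SAMPLE", (40, 130), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 0), 3)

      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      edges = cv2.Canny(gray, 100, 200)                 # fine-scale edge map
      points = np.column_stack(np.nonzero(edges))       # (row, col) coordinates of edge points

      # Density-based clustering groups spatially dense edge points into candidate regions.
      labels = DBSCAN(eps=6, min_samples=10).fit_predict(points)

      proposals = []
      for label in set(labels) - {-1}:                  # -1 marks noise points
          cluster = points[labels == label]
          y0, x0 = cluster.min(axis=0)
          y1, x1 = cluster.max(axis=0)
          if (y1 - y0) * (x1 - x0) > 50:                # discard tiny clusters
              proposals.append((x0, y0, x1, y1))        # candidate text proposal box

      print("text proposals:", proposals)
      # Each proposal would next be cropped and passed to a CNN for text / non-text prediction.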

  • RNN-BD: an approach for fraud visualisation and detection using deep learning   Order a copy of this article
    by G. Madhukar Rao, K. Srinivas 
    Abstract: The evolution of banking information systems has considerably increased fraudulent activity, which can have a negative impact on banking financial services. The use of credit cards has increased significantly due to electronic funding, electronic services and e-commerce activities, and the massive amounts of data from credit card transactions constitute big data. Researchers now use machine learning algorithms to detect and analyse fraud in online transactions, and one of the major concerns of the banking industry is the visualisation and detection of credit card fraud. Conventional machine learning techniques only work well when the dataset is small and the model is not complex; deep learning, on the other hand, can process large and complex datasets. The objective of this paper is to visualise and detect credit card fraud by incorporating deep learning and dimensionality reduction techniques. A real dataset is used to assess the effectiveness of the intended work. The results show that our proposed model is more efficient in identifying fraudulent transactions, reducing fraud and income loss, and can be used to protect customer interests.
    Keywords: big data; credit card transaction fraud; deep learning; optimisation; visualisation.

Special Issue on: Cloud Computing and Networking for Intelligent Data Analytics in Smart City

  • Real time ECG signal preprocessing and neuro-fuzzy-based CHD risk prediction   Order a copy of this article
    by S. Satheeskumaran, C. Venkatesan, S. Saravanan 
    Abstract: Coronary heart disease (CHD) is a major chronic disease that is directly responsible for myocardial infarction. Heart rate variability (HRV) has been used for the prediction of CHD risk in human beings. In this work, neuro-fuzzy-based CHD risk prediction is performed after preprocessing and HRV feature extraction. The preprocessing removes high-frequency noise, which is modelled as white Gaussian noise. Real-time ECG signal acquisition, preprocessing and HRV feature extraction are performed using NI LabVIEW and a DAQ board. A 30-second ECG recording was selected for both smokers and non-smokers. Various statistical parameters are extracted from the HRV to predict CHD risk among the subjects. The HRV-extracted signals are classified into normal and CHD-risk subjects using a neuro-fuzzy classifier, whose classification performance is compared with ANN, KNN and decision tree classifiers.
    Keywords: electrocardiogram; Gaussian noise; wavelet transform; heart rate variability; neuro-fuzzy technique; coronary heart disease.
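    To illustrate the HRV-feature-plus-classifier step generically (not the authors' neuro-fuzzy system), the sketch below computes simple time-domain HRV features from RR intervals and trains a KNN classifier, one of the baselines named in the abstract; the RR data and labels are synthetic placeholders.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def hrv_features(rr_intervals_ms):
          """Simple time-domain HRV features from successive RR intervals (in ms)."""
          rr = np.asarray(rr_intervals_ms, dtype=float)
          diffs = np.diff(rr)
          sdnn = rr.std(ddof=1)                        # overall variability
          rmssd = np.sqrt(np.mean(diffs ** 2))         # short-term variability
          pnn50 = 100.0 * np.mean(np.abs(diffs) > 50)  # % of successive differences > 50 ms
          return np.array([rr.mean(), sdnn, rmssd, pnn50])

      rng = np.random.default_rng(5)
      # Synthetic RR series: "low-risk" subjects get higher variability than "risky" ones.
      low_risk = [hrv_features(800 + rng.normal(0, 60, 40)) for _ in range(20)]
      risky = [hrv_features(700 + rng.normal(0, 15, 40)) for _ in range(20)]

      X = np.vstack([low_risk, risky])
      y = np.array([0] * 20 + [1] * 20)                # 1 = CHD-risky (illustrative)

      clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
      print(clf.predict(hrv_features(820 + rng.normal(0, 55, 40)).reshape(1, -1)))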

  • Optimised fuzzy clustering-based resource scheduling and dynamic load-balancing algorithm for fog computing environment   Order a copy of this article
    by Bikash Sarma, R. Kumar, Themrichon Tuithung 
    Abstract: Fog computing, an extension of cloud computing, is an influential and standard tool for running Internet of Things (IoT) applications. At the network edge, IoT applications can be implemented through fog computing, an emerging technology in the cloud computing infrastructure, in which resource scheduling plays a central role. Resource allocation in the fog minimises the load on the cloud. The goals of a load-balancing algorithm are throughput maximisation, optimisation of the available resources, response-time reduction and elimination of the overload of any single resource. This paper proposes an Optimised Fuzzy Clustering Based Resource Scheduling and Dynamic Load Balancing (OFCRS-DLB) procedure for resource scheduling and load balancing in fog computing. For resource scheduling, the paper recommends an enhanced form of Fast Fuzzy C-Means (FFCM) combined with the Crow Search Optimisation (CSO) algorithm. Finally, loads or requests are balanced by applying a scalability decision technique in the load-balancing algorithm. The proposed method is evaluated using standard measures, including response time, processing time, latency ratio, reliability, resource use and energy consumption, and its proficiency is demonstrated by comparison with other evolutionary methods.
    Keywords: fog computing; fast fuzzy C-means clustering; crow search optimisation algorithm; scalability decision for load balancing.
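    To ground the fuzzy-clustering ingredient of the approach described above, here is a minimal, generic fuzzy c-means implementation applied to synthetic task profiles; it is a plain FCM sketch, not the enhanced FFCM/CSO procedure of the paper.

      import numpy as np

      def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
          """Plain fuzzy c-means: returns cluster centres and the membership matrix U."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), n_clusters))
          U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
          for _ in range(n_iter):
              Um = U ** m
              centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
              dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
              inv = dist ** (-2.0 / (m - 1))
              U = inv / inv.sum(axis=1, keepdims=True)
          return centres, U

      # Synthetic fog-task profiles: (CPU demand, bandwidth demand), as placeholders.
      rng = np.random.default_rng(6)
      tasks = np.vstack([rng.normal([0.2, 0.8], 0.05, (40, 2)),
                         rng.normal([0.7, 0.3], 0.05, (40, 2)),
                         rng.normal([0.5, 0.5], 0.05, (40, 2))])

      centres, U = fuzzy_c_means(tasks, n_clusters=3)
      assignment = U.argmax(axis=1)                          # crisp assignment for scheduling
      print("cluster centres:\n", centres.round(2))
      print("tasks per cluster:", np.bincount(assignment))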