Forthcoming and Online First Articles

International Journal of Computational Science and Engineering (IJCSE)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.


Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with the Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except those stated in their respective CC licences.



International Journal of Computational Science and Engineering (85 papers in press)

Regular Issues

  • Web API service recommendation for mashup creation   Order a copy of this article
    by Gejing Xu, Sixian Lian, Mingdong Tang 
    Abstract: A mashup is a type of Web application developed by reusing or combining Web API services, which are popular software components for building various applications. As the number of open Web APIs increases, however, finding suitable Web APIs for mashup creation becomes a challenging issue. To address this issue, a number of Web API service recommendation methods have been proposed. Content-based methods rely on the descriptions of the candidate services and the user's request to make recommendations. Collaborative filtering-based methods use the invocation records of a set of services generated by a set of users to make recommendations. There are also some studies that employ both the descriptions and the invocation records of services. In this paper, we survey the state-of-the-art Web API service recommendation methods and discuss their characteristics and differences. We also present some possible future research directions.
    Keywords: web service; recommendation; collaborative filtering; mashup creation.

  • A novel dual-fusion algorithm of single image dehazing based on anisotropic diffusion and Gaussian filter   Order a copy of this article
    by Kaihan Xiao, Qingshan Tang, Si Liu, Sijie Li, Jiayi Huang, Tao Huang 
    Abstract: Dark channel prior (DCP) is a widely used method in single image dehazing. Here, we propose a novel dual-fusion algorithm for single image dehazing based on anisotropic diffusion and a Gaussian filter to suppress the halo effect and colour distortion of traditional DCP algorithms. Anisotropic diffusion is used for edge-preserving smoothing of the image, and a Gaussian filter is used to smooth local white objects. A dual-fusion strategy is then applied to optimise the atmospheric veil. In addition, the fast explicit diffusion (FED) scheme is used to accelerate the numerical solution of the anisotropic diffusion and reduce time consumption. Subjective and objective evaluation shows that the proposed algorithm effectively suppresses the halo effect and colour distortion and performs well on dehazing evaluation metrics. The proposed algorithm also reduces time consumption by 54.2% compared with DCP with a guided filter. This study provides an effective solution for single image dehazing.
    Keywords: image dehazing; dark channel prior; anisotropic diffusion; fast explicit diffusion; image fusion.
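    A minimal NumPy sketch of the dark channel prior computation that DCP-based dehazing methods such as the one above build on (illustrative only, not the authors' implementation; the patch size and omega value are assumed defaults):

      import numpy as np

      def dark_channel(img, patch=15):
          """Per-pixel minimum over the RGB channels, then a minimum filter over a patch."""
          min_rgb = img.min(axis=2)
          pad = patch // 2
          padded = np.pad(min_rgb, pad, mode='edge')
          h, w = min_rgb.shape
          out = np.empty_like(min_rgb)
          for i in range(h):
              for j in range(w):
                  out[i, j] = padded[i:i + patch, j:j + patch].min()
          return out

      def transmission(img, atmosphere, omega=0.95, patch=15):
          """Coarse transmission map t(x) = 1 - omega * dark_channel(I / A)."""
          return 1.0 - omega * dark_channel(img / atmosphere, patch)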

  • Spam email classification and sentiment analysis based on semantic similarity methods   Order a copy of this article
    by Ulligaddala Srinivasarao, Aakanksha Sharaff 
    Abstract: Electronic mail is widely used for communication, and spam filters are required to save storage and protect against security issues. Various techniques based on NLP methods are used to increase spam detection efficiency. Existing approaches cannot handle unbalanced classes and suffer from lower efficiency owing to irrelevant feature extraction. In this research, sentiment analysis with semantic feature extraction (FE) and hybrid feature selection (FS) techniques is used to increase the efficiency of spam and non-spam detection in email. The sentiment analysis measures the polarity of the input text and is used for email spam classification. Several semantic similarity feature extraction methods are employed, and TF-IDF, information gain and the Gini index are used. The proposed semantic similarity and hybrid FS approach is evaluated with various classifiers. The experimental analysis shows that the combination of the Gini index FS technique, word2vec FE and an SVM classifier achieves the highest performance of 95.17%, while RF with the Gini index and word2vec methods reaches 93.3% accuracy in email spam detection.
    Keywords: artificial neural network; hybrid feature selection; semantic similarity; SVM; TF-IDF.

  • An efficient algorithm for maximum cliques problem on IoT devices   Order a copy of this article
    by Bouneb Zine El Abidine 
    Abstract: This work describes how a maximum clique problem (MCP) algorithm can be run on microcontrollers in a dynamic environment. In practice, many problems can be formalised using MCP and graphs, and our problem is considered in the context of a dynamic environment. MCP is, however, an NP-complete problem, for which suitable solutions must be designed for microcontrollers. Microcontrollers are built for specific purposes and optimised to meet different constraints, such as timing, limited nested recursion depth or no recursion at all owing to recursion stack limitations, power, and RAM limitations. On the other hand, the graph representations and the algorithms reported in the literature to solve the MCP, which are recursive, consume memory and are designed for computers rather than microcontrollers.
    Keywords: MCE; MCP; microcontrollers; IoT; agent coalition; symbolic computation; n queens completion problem; MicroPython.
    DOI: 10.1504/IJCSE.2022.10052588
     
  • A new resource-sharing protocol in the light of a token-based strategy for distributed systems   Order a copy of this article
    by Ashish Singh Parihar, Swarnendu Chakraborty 
    Abstract: One of the most researched areas in distributed systems is mutual exclusion. To avoid an inconsistent system state, processes executing on different processors are not allowed to invoke their critical sections simultaneously for the purpose of resource sharing. As a solution to such resource allocation issues, the token-based strategy is one of the most popular and significant classes of distributed mutual exclusion algorithms. In this article, we propose a novel token-based distributed mutual exclusion algorithm. The proposed solution is scalable and achieves better message complexity than existing solutions. In the proposed method, the number of messages exchanged per critical section invocation is 3(⌈log2 N⌉ - 1), 3⌈(⌈log2(N+1)⌉ - 1)/2⌉ and 6[⌈log2(N+1)⌉ + 2(2^(-⌈log2(N+1)⌉) - 1)] under light, medium and high load, respectively.
    Keywords: distributed system; mutual exclusion; critical section; token-based; resource allocation.

  • Low-loss data compression using deep learning framework with attention-based autoencoder   Order a copy of this article
    by S. Sriram, P. Chitra, V.V. Sankar, S. Abirami, S.j. Rethina Durai 
    Abstract: With the rapid development of media, the representation and compression of data play a vital role in efficient data storage and transmission. Deep learning can help the research objective of compression by exploring technical avenues to overcome the challenges faced by the traditional Windows archiver. The proposed work is an experimental effort to employ deep learning for data compression to achieve a high compression rate with reduced loss. Initially, the work explored multilayer autoencoder models that obtained efficient data compression with a higher compression ratio than the traditional Windows archiver. However, the autoencoders suffered from reconstruction loss. Therefore, an attention mechanism added to the autoencoder is proposed for reducing the reconstruction loss. The objective of the attention mechanism is to reduce the difference between the latent representation of an input at the encoder and its corresponding representation in the decoder, along with the difference between the original input and its reconstructed output. This attention layer is introduced in a multilayer autoencoder that compresses the data with an improved compression ratio and reduced reconstruction loss. The proposed attention-based autoencoder is extensively evaluated on atmospheric and oceanic data obtained from the Centre for Development of Advanced Computing. The validation shows that the proposed attention-based autoencoder compresses the data with an around 89.7% improved compression rate compared with the traditional Windows archiver. The results also demonstrate that the proposed attention mechanism reduces the reconstruction loss by up to 25% more than the multilayer autoencoder.
    Keywords: deep learning; multilayer autoencoder; compression ratio; attention; reconstruction loss.
    DOI: 10.1504/IJCSE.2022.10050669
     
  • Cross-modal correlation feature extraction with orthogonality redundancy reduce and discriminant structure constraint   Order a copy of this article
    by Qianjin Zhao, Xinrui Ping, Shuzhi Su 
    Abstract: Canonical correlation analysis (CCA) is a classic feature extraction method that is widely used in the field of pattern recognition. Its goal is to learn correlation projection directions that maximise the correlation between two sets of variables, but it considers neither the class label information among samples nor the within-modal redundancy of the correlation projection directions. To this end, this paper proposes a class-label-embedded orthogonal correlation feature extraction method. The method embeds label-guided discriminant structure information into correlation analysis to improve the discrimination of correlation features, and within-modal orthogonality constraints are added to further reduce the projection redundancy of the correlation features. Experiments on multiple image databases show that the method is an effective feature extraction approach and provides a new solution for pattern recognition.
    Keywords: feature extraction; correlation analysis theory; discriminative subspace learning; orthogonality redundancy reduce.

  • Hurst exponent estimation using neural network   Order a copy of this article
    by Somenath Mukherjee, Bikash Sadhukhan, Arghya Kusum Das, Abhra Chaudhuri 
    Abstract: The Hurst exponent is used to identify the autocorrelation structure of a stochastic time series, which allows for detecting persistence in time series data. Traditional signal processing techniques work reasonably well in determining the Hurst exponent of a stochastic time series. However, a notable drawback of these methods is their speed of computation. On the other hand, neural networks have repeatedly proven their ability to learn very complex input-output mappings, even in very high dimensional vector spaces. Therefore, an endeavour has been undertaken to employ neural networks to determine the Hurst exponent of a stochastic time series. Unlike previous attempts to solve such problems using neural networks, the proposed architecture can be recognised as the universal estimator of the Hurst exponent for short-range and long-range dependent stochastic time series. Experiments demonstrate that, if sufficiently trained, a neural network can predict the Hurst exponent of any stochastic time series at least 15 times faster than standard signal processing approaches.
    Keywords: neural network; regression; Hurst exponent estimation; stochastic time series.
    DOI: 10.1504/IJCSE.2022.10045166
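    For context, classical rescaled-range (R/S) analysis is one of the standard signal-processing baselines that the neural network above is benchmarked against; a compact sketch (window sizes and chunking are illustrative choices):

      import numpy as np

      def hurst_rs(x, min_chunk=8):
          """Classical rescaled-range (R/S) estimate of the Hurst exponent."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          sizes, rs_vals = [], []
          size = min_chunk
          while size <= n // 2:
              rs_chunks = []
              for start in range(0, n - size + 1, size):
                  chunk = x[start:start + size]
                  dev = np.cumsum(chunk - chunk.mean())
                  s = chunk.std(ddof=1)
                  if s > 0:
                      rs_chunks.append((dev.max() - dev.min()) / s)
              if rs_chunks:
                  sizes.append(size)
                  rs_vals.append(np.mean(rs_chunks))
              size *= 2
          # Slope of log(R/S) against log(window size) approximates H
          return np.polyfit(np.log(sizes), np.log(rs_vals), 1)[0]

      print(hurst_rs(np.random.randn(4096)))  # white noise: H should be close to 0.5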
     
  • Enhancement of classification and prediction accuracy for breast cancer detection using fast convolution neural network with ensemble algorithm   Order a copy of this article
    by Naga Deepti Ponnaganti, Raju Anitha 
    Abstract: Breast cancer is a leading cancer found mostly in women across the world, and cases are more numerous in developing countries, where the disease is often not diagnosed at an early stage. Recent works have compared machine learning algorithms using various techniques, such as ensemble methods and classification. Hence, the requirement now is to develop a technique that gives minimum error and increases accuracy. This paper therefore proposes a neural network in which the classification and prediction of breast cancer are enhanced with maximum accuracy. The novel fast convolution neural network (FCNN) technique is used to enhance classification, and an ensemble algorithm of gradient boosting and adaptive boosting is used to improve prediction accuracy. With this proposed technique and ensemble algorithm, large volumes of data are processed to detect cancer, and the combined boosting algorithm reduces misclassification and improves binary classification. The training and testing of the dataset are done with FCNN, where numerous datasets can be classified for earlier detection of cancer. The simulation results show improved accuracy, prediction class, precision and F1 score.
    Keywords: breast cancer; machine learning algorithms; classification; prediction accuracy; fast convolution neural network; gradient boosting and adaptive boosting.
    DOI: 10.1504/IJCSE.2022.10045291
     
  • Combining DNA sequences and chaotic maps to improve robustness of RGB image encryption   Order a copy of this article
    by Onkar Thorat, Ramchandra Mangrulkar 
    Abstract: In today's world, colour images are generated and stored by organisations for a variety of purposes. Standard encryption schemes, such as AES or DES, are not well suited to the encryption of multimedia data owing to pattern appearance and high computational cost. Many methods have been proposed for the encryption of greyscale images, but only a few methods in the literature specifically target colour images. This paper presents a new method, called the RGB Image Encryption Scheme (RGBIES), for encrypting colour images based on chaotic maps and DNA sequences. RGBIES has three major stages. Stages 1 and 3 propose a powerful scrambling algorithm based on a chaotic logistic map. The intermediate step uses a chaotic Lorenz map, the keys of which are obtained using DNA sequences. Various visual and quantitative analyses are performed that prove the resistance of the method against modern-day attacks.
    Keywords: image encryption; image security; chaotic maps; deoxyribonucleic acid sequences; security analysis; cryptography; diffusion.
    DOI: 10.1504/IJCSE.2022.10050834
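    The chaotic scrambling idea used in stages 1 and 3 above can be illustrated with a simplified logistic-map pixel permutation (the full scheme also uses a Lorenz map and DNA-derived keys, which are omitted here; x0 and r are placeholder key values):

      import numpy as np

      def logistic_sequence(x0, r, n):
          """Iterate the chaotic logistic map x_{k+1} = r * x_k * (1 - x_k)."""
          seq = np.empty(n)
          x = x0
          for i in range(n):
              x = r * x * (1.0 - x)
              seq[i] = x
          return seq

      def scramble(img, x0=0.3141, r=3.99):
          """Permute pixel positions with a logistic-map-driven permutation."""
          flat = img.reshape(-1, img.shape[-1])
          order = np.argsort(logistic_sequence(x0, r, flat.shape[0]))
          return flat[order].reshape(img.shape), order

      def unscramble(scrambled, order):
          flat = scrambled.reshape(-1, scrambled.shape[-1])
          return flat[np.argsort(order)].reshape(scrambled.shape)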
     
  • Improved class-specific vector for biomedical question type Classification   Order a copy of this article
    by Tanu Gupta, Ela Kumar 
    Abstract: Polysemous words in questions make it difficult to label questions correctly. This paper proposes two word-embedding-based approaches for tackling the polysemy problem in biomedical question type classification. In the first approach, a label-independent class vector is drawn using a statistical score of the word for classifying multi-class question datasets, unlike previous work where the class vector is label-dependent. Secondly, the sense vector of a word, interpreted using a context group discrimination algorithm, is concatenated with class-specific word embeddings to increase the discriminative property of the word. In addition, a survey dataset, Covid-S, is introduced in this paper; it is a collection of public queries, myths and doubts about the novel Covid-19 disease. The efficacy of the introduced approach for question classification is evaluated on the BioASQ8b and Covid-S datasets with three well-known evaluation measures: accuracy, micro-average and Hamming loss.
    Keywords: class vector; polysemy; biomedical question classification; sense embedding; Covid-S.

  • Research on big data access control mechanism   Order a copy of this article
    by Yinxia Zhuang, Yapeng Sun, Han Deng, Jun Guo 
    Abstract: The increasing amount of data provides an excellent opportunity for big data analysis. But alongside the convenience provided by big data, we should also consider the security issues behind it. In recent years, the leakage of data resources has become a troublesome problem in the field of big data. Access control technology adds a barrier to the access of data resources, prevents some illegal users from accessing resources, and reduces the problem of resource leakage to a certain extent. This paper summarises the development of access control technologies in the field of big data, including discretionary access control, role-based access control, attribute-based access control and blockchain-based access control. It then summarises the application characteristics of access control technology in the field of big data, and finally looks forward to the development prospects of access control.
    Keywords: big data; access control; security.

  • Combining machine learning and effective feature selection for real-time stock trading in variable time-frames   Order a copy of this article
    by A. K. M. Amanat Ullah, Fahim Imtiaz, Miftah Uddin Md Ihsan, Md. Golam Rabiul Alam, Mahbub Majumdar 
    Abstract: The unpredictability and volatility of the stock market make it challenging to earn a substantial profit using any generalised scheme. Many previous studies tried different techniques to build a machine learning model that can make a significant profit in the US stock market by performing live trading. However, very few studies have focused on the importance of finding the best features for a particular period of trading. Our top approach used feature performance to narrow the features down from a total of 148 to about 30, and the top 25 features were dynamically selected before each training of our machine learning model. It uses ensemble learning with four classifiers, Gaussian naive Bayes, decision tree, logistic regression with L1 regularisation and stochastic gradient descent, to decide whether to go long or short on a particular stock. Our best model performed daily trades between July 2011 and January 2019, generating 54.35% profit. Finally, our work showed that mixtures of weighted classifiers perform better than any individual predictor at making trading decisions in the stock market.
    Keywords: feature selection; feature extraction; stock trading; ensemble learning.
    DOI: 10.1504/IJCSE.2022.10046373
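    A rough scikit-learn sketch of the kind of ensemble described above, with the four named classifiers behind a top-25 feature selection step (the hyperparameters, hard voting and the X_train/y_train placeholders are assumptions; the paper reports weighted mixtures performing best):

      from sklearn.pipeline import Pipeline
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.ensemble import VotingClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.linear_model import LogisticRegression, SGDClassifier

      # X_train, y_train: feature matrix and long/short labels (placeholders)
      model = Pipeline([
          ("select", SelectKBest(f_classif, k=25)),        # keep the top 25 features
          ("vote", VotingClassifier(
              estimators=[
                  ("gnb", GaussianNB()),
                  ("dt", DecisionTreeClassifier(max_depth=5)),
                  ("lr", LogisticRegression(penalty="l1", solver="liblinear")),
                  ("sgd", SGDClassifier()),                # hinge-loss SGD classifier
              ],
              voting="hard")),
      ])
      # model.fit(X_train, y_train); model.predict(X_latest)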
     
  • Privacy-preserving global user profile construction through federated learning   Order a copy of this article
    by Zheng Huo, Teng Wang, Yilin Fan, Ping He 
    Abstract: User profiles are derived from the big data left on the internet through machine learning algorithms. However, the threat of data privacy leakage restricts access to the data sources in centralised machine learning. Federated learning can avoid privacy leakage during the data collection phase. In this study, we propose an algorithm for constructing a privacy-preserving global user profile through federated learning in a vertical data-segmentation scenario. Each participant holds some of the characteristics of the user data; the participants train local clusters on their data using the CLIQUE algorithm and encrypt the cluster parameters with Paillier encryption to protect them from the untrusted aggregator. The aggregator then intersects the cluster parameters without decryption to construct a global user profile. Finally, we conduct experiments on real datasets, and the results verify that the algorithm performs well in terms of precision and effectiveness.
    Keywords: differential privacy; federated learning; CLIQUE algorithm; encryption.

  • Parameter-free marginal Fisher analysis based on L2,1-norm regularisation for face recognition   Order a copy of this article
    by Yu'e Lin, Zhiyuan Ren, Xingzhu Liang, Shunxiang Zhang 
    Abstract: Marginal Fisher analysis is an effective feature extraction algorithm for face recognition, but it is sensitive to the setting of the neighbourhood parameter and does not perform feature selection. To solve these problems, this paper proposes a parameter-free marginal discriminant analysis based on L2,1-norm regularisation (PFMDA/L2,1). The algorithm calculates weights using the cosine distance between samples and dynamically determines the neighbours of each data point, so that no parameters need to be set. To enable feature extraction and feature selection to proceed simultaneously, two optimisation models with the L2,1-norm constraint are presented, and the complete solution of PFMDA/L2,1 is then given. Experimental results on the ORL, YaleB and AR face databases show that the proposed method is feasible and effective.
    Keywords: marginal Fisher analysis; feature extraction; feature selection; parameter-free; L2,1-norm regularisation; cosine distance.

  • Research on credit risk evaluation of B2B platform supply chain financing enterprises based on improved TOPSIS   Order a copy of this article
    by Hong Zhang, Yuan Chen, Xiyue Yan, Han Deng 
    Abstract: This paper establishes a credit evaluation index system for B2B platform supply chain financing enterprises, which consists of four levels: B2B platform, supply chain financing enterprises, core enterprises and supply chain collaboration. Taking 24 samples of supply chain financing enterprises listed on GEM from six well-known B2B platforms in China as the research objects, the credit risk of supply chain financing enterprises on B2B platform is dynamically evaluated by using the improved TOPSIS method incorporating time dimension. The research shows that the strong comprehensive strength of core enterprises, close supply chain collaboration and good credit status of enterprises themselves have a favourable impact on the credit evaluation of supply chain financing of enterprises. A high-quality B2B platform is beneficial for enterprises to carry out supply chain financing and attract more high-quality supply chain enterprises to cooperate.
    Keywords: improved TOPSIS method; TOPSIS; B2B platform; supply chain financing; credit risk.
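    For readers unfamiliar with the underlying method, a minimal classical TOPSIS scoring function is sketched below (the improvement in the paper adds a time dimension that is not shown; the toy decision matrix and weights are made up):

      import numpy as np

      def topsis(matrix, weights, benefit):
          """Classical TOPSIS: matrix is alternatives x criteria, weights sum to 1,
          benefit is True where larger criterion values are better."""
          v = matrix / np.linalg.norm(matrix, axis=0) * weights   # normalise and weight
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.linalg.norm(v - ideal, axis=1)
          d_neg = np.linalg.norm(v - anti, axis=1)
          return d_neg / (d_pos + d_neg)                          # closeness coefficient

      scores = topsis(np.array([[0.8, 120.0], [0.6, 150.0]]),
                      weights=np.array([0.6, 0.4]),
                      benefit=np.array([True, False]))
      print(scores)   # higher closeness = better credit profile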

  • BITSAT: an efficient approach to modern image cryptography   Order a copy of this article
    by Sheel Sanghvi, Ramchandra Mangrulkar 
    Abstract: This paper proposes a new approach towards efficient image cryptography that works using the concepts of bit-plane decomposition and bit-level scrambling. This method does not require the involvement of additional or extra images. Users have the flexibility of choosing (1) any bit-plane decomposition method; (2) logic that runs the key generation block; (3) customisation in the bit-level permutations performed during scrambling. The implementation of the method is simple and free of heavily complex steps and operations. This makes the algorithm applicable in a real-world scenario. The BITSAT algorithm is applied to a variety of images and, consequently, the encrypted images generated showcase a high level of encryption. The analysis and evaluation of the algorithm and its security aspects are performed and described in detail. The paper also presents the application domains of the method. Overall, the results and analysis indicate a positive working scope and suitability for real-life applications.
    Keywords: hybrid image encryption; image cryptography; novel image encryption scheme; bit-level scrambling; bit-plane decomposition.
    DOI: 10.1504/IJCSE.2022.10049691
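    A tiny sketch of the bit-plane decomposition building block named above (the abstract notes that the decomposition method is user-selectable; this shows only the plain binary split):

      import numpy as np

      def bit_planes(gray):
          """Split an 8-bit greyscale image into its 8 binary bit-planes (LSB first)."""
          return [(gray >> b) & 1 for b in range(8)]

      def recombine(planes):
          """Inverse operation: rebuild the 8-bit image from its bit-planes."""
          return sum(p.astype(np.uint8) << b for b, p in enumerate(planes))

      img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
      assert np.array_equal(recombine(bit_planes(img)), img)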
     
  • A GRU-based hybrid global stock price index forecasting model with group decision-making   Order a copy of this article
    by Jincheng Hu, Qingqing Chang, Siyu Yan 
    Abstract: To predict the daily global stock price index more effectively, this study develops a new filtering gate recurrent unit group-based decision-making (FiGRU_G) model that combines a GRU group network and a group decision-making strategy. The proposed FiGRU_G model can effectively overcome a shortcoming of traditional neural network algorithms, namely that the random initialisation of network weights may degrade performance to some extent. The experimental results indicate that the proposed FiGRU_G framework outperforms other competing methods in terms of prediction accuracy and robustness for both the Chinese and international stock markets. In the short-term prediction scenario, the FiGRU_G framework achieves 20% and 19% performance improvements on the evaluation criteria MAPE and SDAPE, respectively, compared with the GRU model in the Chinese stock market. For the international markets, the FiGRU_G framework also achieves 23% and 22% performance improvements, respectively, compared with the GRU model.
    Keywords: stock closing price prediction; deep learning; GRU model; group decision-making.
    DOI: 10.1504/IJCSE.2022.10047524
     
  • Patient reviews analysis using machine learning   Order a copy of this article
    by Bijayalaxmi Panda, Chhabi Rani Panigrahi, Bibudhendu Pati 
    Abstract: In the present situation, web technologies provide opportunities for online communication. Health-related tweets are available in online forums. Doctors and patients share their views in discussion forums, which helps other people seeking similar information. An investigation was performed on patient reviews about different diseases collected from different forums. These reviews are unstructured, and the task is to identify positive and negative tweets. From the dataset collected from figshare, several features were identified from the text provided by patients and converted into numerical form. Specific features are selected from the dataset, and machine learning classification algorithms, such as support vector machine (SVM), Gaussian naïve Bayes and random forest, are applied.
    Keywords: classification; support vector machine; Gaussian naïve Bayes; random forest; feature selection.
    DOI: 10.1504/IJCSE.2022.10050923
     
  • Multiple correlation based decision tree model for classification of software requirements   Order a copy of this article
    by Pratvina Talele, Rashmi Phalnikar 
    Abstract: Recent research in requirements engineering (RE) includes requirements classification and the use of machine learning (ML) algorithms to solve RE problems. The limitation of existing techniques is that they consider only one feature at a time to map the requirements, without considering the correlation between two features, and are therefore biased. To understand these limitations, our study compares and extends ML algorithms for classifying requirements in terms of precision and accuracy. Our literature survey shows that the decision tree (DT) algorithm can identify different requirements and outperforms existing ML algorithms. As the number of features increases, the accuracy of the DT improves by 1.65%. To overcome the limitations of the DT, we propose a multiple correlation coefficient based DT algorithm. Compared with existing ML approaches, the results show that the proposed algorithm can improve classification performance. The accuracy of the proposed algorithm improves by 5.49% compared with the DT algorithm.
    Keywords: machine learning; requirements engineering; decision tree; multiple correlation coefficient.

  • Improved performance on tomato pest classification via transfer learning based deep convolutional neural network with regularisation techniques   Order a copy of this article
    by Gayatri Pattnaik, Vimal K. Shrivastava, K. Parvathi 
    Abstract: Insect pests are a major threat to the quality and quantity of crop yield. Hence, early detection of pests using a fast, reliable and non-chemical method is essential to control infestations. In this paper we therefore focus on tomato pest classification using pre-trained deep convolutional neural networks (CNNs). Four models (VGG16, DenseNet121, DenseNet169 and Xception) were explored with a transfer learning approach. In addition, we adopted two regularisation techniques, namely early stopping and data augmentation, to prevent the models from overfitting and improve their generalisation ability. Among the four models, DenseNet169 achieved the highest classification accuracy of 95.23%. This promising result shows that the DenseNet169 model with transfer learning and regularisation techniques can be used in agriculture for pest management.
    Keywords: agriculture; convolutional neural network; data augmentation; early stopping; pest; regularisation; tomato plants.
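    A hedged Keras sketch of the transfer-learning setup described above (DenseNet169 backbone with data augmentation and early stopping); the input size, class count and hyperparameters are assumptions rather than the authors' settings:

      import tensorflow as tf
      from tensorflow.keras import layers, models

      NUM_CLASSES = 10   # hypothetical number of tomato pest classes

      base = tf.keras.applications.DenseNet169(
          include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
      base.trainable = False   # transfer learning: reuse frozen ImageNet features

      model = models.Sequential([
          tf.keras.Input(shape=(224, 224, 3)),
          layers.RandomFlip("horizontal"),    # data augmentation acts as regularisation
          layers.RandomRotation(0.1),
          base,
          layers.Dense(NUM_CLASSES, activation="softmax"),
      ])
      model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy", metrics=["accuracy"])
      early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
      # model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])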

  • A collaborative filtering recommendation algorithm based on DeepWalk and self-attention   Order a copy of this article
    by Jiaming Guo, Hong Wen, Weihong Huang, Ce Yang 
    Abstract: Graph embedding is one of the vital technologies for solving the problem of information overload in recommendation systems. It can simplify the vector representations of items and accelerate the calculation process. Unfortunately, recommendation systems using graph embedding technology do not consider the deep relationships between items when learning embedding vectors. To solve this problem, we propose a collaborative filtering recommendation algorithm based on DeepWalk and self-attention. The algorithm improves the accuracy of measuring the similarity between items and obtains more accurate embedding vectors. Chronological order and mutual information are used to construct a weighted directed relationship graph, self-attention and DeepWalk are used to generate the embedding vectors, and item-based collaborative filtering is then used to obtain the recommendation lists. The results of experiments and evaluations on three public datasets show that our algorithm outperforms existing ones.
    Keywords: DeepWalk; self-attention; mutual information; collaborative filtering; recommendation algorithm.
    DOI: 10.1504/IJCSE.2022.10050515
     
  • A proxy signcryption scheme for secure sharing of industrial IoT data in fog environment   Order a copy of this article
    by Rachana Patil, Yogesh Patil 
    Abstract: Rapid technical advancements have transformed the industrial segment. The IIoT and Industry 4.0 comprise a complete instrumentation system that includes sensors, positioners, actuators, instruments and processes. Owing to various delays and safety concerns, the industrial process necessitates that specified data be transferred across the internet. In this context, fog computing is potentially helpful as a mediator, as it performs localised processing of data and can therefore be applied to a variety of industrial applications. Furthermore, industrial big data often has to be shared among different applications. Here, we propose an ECC-based proxy signcryption for IIoT (ECC-PSC-IIoT) in a fog computing environment. The proposed scheme provides the features of signature and encryption in a single cycle. The ECC-PSC-IIoT scheme is proven to be secure using the AVISPA tool. Moreover, an extensive performance assessment indicates the competency of the proposed scheme with respect to computation and communication time.
    Keywords: IIoT; elliptic curve cryptography; signcryption; fog computing; proxy signature.
    DOI: 10.1504/IJCSE.2022.10052354
     
  • An aeronautic X-ray image security inspection network for rotation and occlusion   Order a copy of this article
    by Bingshan Su, Shiyong An, Xuezhuan Zhao, Jiguang Chen, Xiaoyu Li, Yuantao He 
    Abstract: Aviation security inspection requires considerable time and human labour. In this paper, we establish a new network for detecting prohibited objects in aeronautic security inspection X-ray images. Objects in X-ray images often appear rotated and overlap heavily with each other. To address rotation and occlusion in X-ray image detection, we construct the De-rotation-and-occlusion Module (DROM), an efficient module that can be embedded into most deep learning detectors. DROM leverages edge, colour and Oriented FAST and Rotated BRIEF (ORB) features to generate an integrated feature map; the ORB features can be extracted quickly and effectively diminish the deviation produced by rotation. Finally, we evaluate DROM on the OPIXRay dataset. Compared with several recent approaches, the experimental results certify that our module improves the performance of the Single Shot MultiBox Detector (SSD) and obtains higher accuracy, which proves the module's application value in practical security inspection.
    Keywords: X-ray image detection; de-rotation-and-occlusion; deep learning; ORB; SSD.

  • ReFIGG: retinal fundus image generation using generative adversarial networks   Order a copy of this article
    by Sharika Sasidharan Nair, M.S. Meharban 
    Abstract: The effective training of deep architectures depends mainly on a large number of well-annotated data samples. This is a problem in the medical field, where it is hard and costly to obtain such images. The tiny blood vessels of the retina are the only part of the human body that can be directly and non-invasively observed in a living person; hence, they can be easily obtained and examined by automatic tools. Fundus imaging is a basic check-up procedure in ophthalmology that provides essential data and makes it easier for doctors to detect various eye-related diseases at early stages. Fundus image generation is challenging to carry out by constructing composite models of the eye structure. In this paper, we overcome the unavailability of medical fundus datasets by synthesising them artificially, adding an encoder-decoder generator model to the existing MCML method of generative adversarial networks for easier, quicker and earlier analysis.
    Keywords: fundus image; generative adversarial networks; encoder-decoder model; image synthesis; deep learning.

  • CBSOACH: design of an efficient consortium blockchain-based selective ownership and access control model with vulnerability resistance using hybrid decision engine   Order a copy of this article
    by Kalyani Pampattiwar, Pallavi Vijay Chavan 
    Abstract: Cloud deployments are prone to vulnerabilities and attacks, mitigated via security patches. However, these patches increase the computational complexity of the deployments, thus reducing their quality-of-service performance. To overcome this limitation and maintain high-security levels, this text proposes the design of the CBSOACH model, which is a novel Consortium Blockchain-based Selective Ownership and Access Control model with vulnerability resistance using a Hybrid decision engine. The model introduces header-level pattern analysis, which processes all incoming traffic using a light-weight rule-based method. Header-level pattern analysis is backed by a consortium blockchain model that allows for efficient ownership control with minimum overheads. Owing to a combination of header-level pattern analysis with consortium blockchain, the model can maintain traceability, trustability, immutability, and distributed computing capabilities. The model can reduce attack probability while maintaining lower delay and high-efficiency ownership transfer. This increases the scalability and usability of the model for large-scale deployments.
    Keywords: cloud; ownership; blockchain; authentication; access control; consortium; attacks; accuracy; delay.
    DOI: 10.1504/IJCSE.2022.10050702
     
  • Blockchain-based secure deduplication against duplicate-faking attack in decentralized storage   Order a copy of this article
    by Jingkai Zhou, Guohua Tian, Jianghong Wei 
    Abstract: Secure client-side deduplication enables a cloud server to save storage space and communication bandwidth efficiently without compromising privacy. However, a potential duplicate-faking attack (DFA) may cause data users to lose their outsourced data. Existing solutions either only detect DFA and fail to avoid data loss, or incur high storage costs. In this paper, we propose a blockchain-based secure deduplication scheme against DFA in decentralised storage. Specifically, we first propose a client-side deduplication protocol in which the server does not need to store additional metadata to check subsequent uploaders, who only need to encrypt the challenged partial blocks instead of the entire file. In addition, we design a battle mechanism based on smart contracts to protect users from losing data: when an uploader detects a DFA, he can apply for a battle with the previous uploader to achieve an effective punishment. Finally, security and performance analyses indicate the practicality of the proposed scheme.
    Keywords: secure data deduplication; duplicate-faking attack; proof of ownership; blockchain; smart contract.

  • Blockchain-based collaborative intrusion detection scheme   Order a copy of this article
    by Tianran Dang, Guohua Tian, Jianghong Wei, Shuqin Liu 
    Abstract: Collaborative intrusion detection is an effective solution for protecting users from various cyber-attacks, among which large-scale trusted sharing and real-time updating of attack instances are the main challenges. However, existing collaborative intrusion detection systems (CIDS) either achieve real-time instance sharing only in a centralised setting or implement large-scale instance sharing through blockchain. In this paper, we propose a novel blockchain-based CIDS scheme. Specifically, we present a reputation-based consensus protocol that incentivises service providers (SPs) to evaluate the attack instances collected from collectors and punishes malicious evaluators. Only trusted attack instances are then published on the blockchain to provide large-scale trusted intrusion detection services. Furthermore, we introduce a redactable blockchain technique to achieve dynamic instance updates, which enables our scheme to provide a real-time intrusion detection service. Finally, we demonstrate the practicality of the proposed scheme through security analysis, theoretical analysis and performance evaluation.
    Keywords: collaborative intrusion detection; blockchain; reputation-based consensus; redactable blockchain.

  • Application of bagging and particle swarm optimisation techniques to predict technology sector stock prices in the era of the Covid-19 pandemic using the support vector regression method   Order a copy of this article
    by Heni Sulastri, Sheila Maulida Intani, Rianto Rianto 
    Abstract: The increase in positive cases of Covid-19 affects not only health and lifestyle but also the economy and the stock market. Technology and digital-sector stocks are predicted to be among the most profitable. Therefore, stock price prediction is required to assess the prospects of future investment. In this study, the stock prices of Multipolar Technologies Ltd (MLPT) were predicted using the support vector regression (SVR) method with the bootstrap aggregation (bagging) technique and particle swarm optimisation (PSO) as SVR optimisation. The prediction results show that applying bagging and PSO techniques to predict technology-sector stock prices reduces the root mean squared error (RMSE) of the SVR from 22.142 to 21.833. Although the impact is not large, it is better to apply the combination of bagging and PSO techniques to SVR than either one alone (SVR / SVR-PSO / SVR-bagging).
    Keywords: bootstraps aggregation; bagging; Covid-19; particle swarm optimisation; prediction; share prices; support vector regression.
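    A minimal scikit-learn sketch of the SVR-with-bagging component described above; the PSO stage is only indicated by a comment, and the data placeholders and hyperparameters are assumptions:

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.ensemble import BaggingRegressor
      from sklearn.metrics import mean_squared_error

      # X_train, y_train, X_test, y_test: lagged closing-price features and targets (placeholders)
      # In the paper, PSO searches over SVR hyperparameters such as C and epsilon;
      # here they are simply fixed values.
      model = BaggingRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.1),
                               n_estimators=25, random_state=0)
      # model.fit(X_train, y_train)
      # rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))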

  • Kidney diseases classification based on SONN and MLP-GA in ultrasound radiography images   Order a copy of this article
    by Anuradha Laishram, Khelchandra Thongam 
    Abstract: A strategy for the robust classification of renal ultrasound images for the identification of three kidney disorders, renal calculus, cortical cyst and hydronephrosis, is presented. Features were retrieved using the intensity histogram (IH), grey level co-occurrence matrix (GLCM) and grey level run length matrix (GLRLM) techniques. Using the extracted features, input samples are created and fed to a hybrid model that combines a self-organising neural network (SONN) and a multilayer perceptron (MLP) trained with a genetic algorithm (GA). The SONN is used to cluster the input patterns into four groups, and an MLP trained with the GA is then employed on each cluster to classify the input patterns. The proposed hybrid method using SONN and MLP-GA classifies the ultrasound images with a precision of 93.9%, recall of 93.0%, F1 score of 93.0% and overall accuracy of 96.8%.
    Keywords: genetic algorithm; grey level co-occurrence matrix; grey level run length matrix; intensity histogram; multilayer perceptron; self-organising neural network; ultrasound images.

  • LightNet: pruned sparsed convolutional neural network for image classification   Order a copy of this article
    by Edna Too 
    Abstract: Deep learning has become one of the most sought-after approaches in the area of artificial intelligence (AI). However, deep learning models pose some challenges in the learning process: training deep learning networks is both computationally and resource intensive, so they cannot be applied on resource-limited devices. Limited research has been done on implementing efficient approaches for real-world problems. This study tries to bridge these gaps towards a system applicable in the real world, especially in the agricultural sector for plant disease management and fruit classification. We introduce a novel architecture called LightNet. LightNet employs two strategies to achieve sparsity of DenseNet: skip connections and a pruning strategy. The result is a small network with reduced parameters and model size. Experimental evaluation reveals that LightNet is more efficient than the DenseNet architecture. The model is evaluated on the real-world datasets PlantVillage and Fruits-360.
    Keywords: image classification; deep learning; convolution neural network; LightNet; ConvNet.

  • Predicting possible antiviral drugs against COVID-19 based on Laplacian regularised least squares and similarity kernel fusion   Order a copy of this article
    by Xiaojun Zhang, Lan Yang, Hongbo Zhou 
    Abstract: COVID-19 has produced a severe impact on global health and wealth. Drug repurposing strategies provide effective ways for inhibiting COVID-19. In this manuscript, a drug repositioning-based virus-drug association prediction method, VDA-LRLSSKF, was developed to screen potential antiviral compounds against COVID-19. First, association profile similarity matrices of viruses and drugs are computed. Second, similarity kernel fusion model is presented to combine biological similarity and association profile similarity from viruses and drugs. Finally, a Laplacian regularised least squares method is used to compute the probability of association between each virus-drug pair. We compared VDA-LRLSSKF with four of the best VDA prediction methods. The experimental results and analysis demonstrate that VDA-LRLSSKF calculated better AUCs of 0.8286, 0.8404, 0.8882 on three datasets, respectively. VDA-LRLSSKF predicted that ribavirin and remdesivir could be underlying therapeutic clues for inhibiting COVID-19 and need further experimental validation.
    Keywords: SARS-CoV-2; VDA-LRLSSKF; drug repurposing; Laplacian regularised least squares; similarity kernel fusion.

  • A modified Brown and Gibson model for cloud service selection   Order a copy of this article
    by Munmun Saha, Sanjaya Kumar Panda, Suvasini Panigrahi 
    Abstract: Cloud computing has been widely accepted in the information technology (IT) industry as it provides on-demand services, lower operational and investment costs, scalability, and many more. Nowadays, small and medium enterprises (SMEs) use the cloud infrastructure for building their applications, which makes their business more agile by using elastic and flexible cloud services. Many cloud service providers (CSPs) have offered numerous services to their customers. However, owing to the vast availability of cloud services and the wide range of CSPs, decision-making for cloud selection or adopting cloud services is not consistently straightforward. This paper proposes a modified Brown and Gibson model (M-BGM) to select the best CSP. We consider both the subjective and objective criteria for the non-quantifiable and quantifiable values, respectively. Here, various decision-makers can express their views about the alternatives. We compare M-BGM with multi-attribute group decision-making (MAGDM) approach and perform a sensitivity analysis to show the robustness.
    Keywords: cloud computing; multi-criteria decision-making; quality of service; cloud service provider; Brown and Gibson model; analytic hierarchy process; Delphi method; decision maker.

  • Short-term load forecasting with bidirectional LSTM-attention based on the sparrow search optimisation algorithm   Order a copy of this article
    by Jiahao Wen, Zhijian Wang 
    Abstract: Short-term power load forecasting has always been a complex problem for distribution networks because of insufficient accuracy and poor training effects. To solve this problem, a bidirectional long short-term memory (BILSTM) prediction model based on attention is proposed to process the collected data, and the observed data characteristics are divided through a pretreatment unit to obtain a training set and a test set. The BILSTM layer is used to model historical load data and daily feature data, enabling the extraction of the internal dynamic change rules of the features. An attention mechanism is used to assign different weights to the hidden BILSTM states through mapping, weighting and parameter-matrix learning, which reduces the loss of historical information and enhances the influence of important information. The sparrow search (SS) algorithm is used to optimise the hyperparameter selection of the model. The test results show that the performance of the proposed method is better than that of traditional prediction models, with the root mean square errors decreasing by (1.18, 1.09, 0.60, 0.54) and (2.11, 0.45, 0.21, 0.11) on different datasets.
    Keywords: short-term load prediction; sparrow search algorithm; neural network; weight assignment; attention mechanism.
    DOI: 10.1504/IJCSE.2022.10049692
     
  • 3DL-PS: an image encryption technique using a 3D logistic map, hashing functions and pixel scrambling techniques   Order a copy of this article
    by Parth Kalkotwar, Rahil Kadakia, Ramchandra Mangrulkar 
    Abstract: With the advancement in technology over the years, the security of data transferred over the internet is a major concern. In this paper, a robust and efficient image encryption scheme has been implemented using a 3D logistic map, SHA-512, and pixel scrambling. A good image encryption scheme should be able to produce two drastically different encrypted images for two original images with minute differences. Chaotic systems have proved to be highly efficient in providing this property, mainly because of their high randomness and volatility depending on the initial conditions. A 3D logistic map can be preferred over a 1D Logistic map owing to its increased encryption complexity, enhanced security, and better chaotic properties. To start the process, two secret keys are generated using two different user-provided keys and the input image, which makes it resistant to classical attacks such as the chosen-plaintext attack and chosen ciphertext attack. Further, it is necessary to change the pixel values of the original image so that it becomes difficult to trace back the original image from the encrypted image. Pixels of the images are altered using the values obtained upon the iteration of the 3D logistic map. In addition to this, two different pixel scrambling techniques are employed to enhance security. Firstly, different fragments of varying sizes are swapped depending on the secret keys generated earlier. Finally, a jumbling technique is used to mix the pixels horizontally and vertically in a completely dynamic way depending on the secret keys. The keyspace of the algorithm is found to be large enough to resist brute force attacks. The encrypted image has been observed and analysed against several attacks such as classical attacks, statistical attacks, and noise resistance. Key sensitivity analysis has also been performed. The results prove that the 3DL-PS algorithm is found to be resistant to several well-known attacks, providing an efficient image encryption scheme that can be used in various real-time applications.
    Keywords: image cryptography; chaotic systems; pixel scrambling; SHA-512; security analysis; classical attacks; 3D logistic map; noise resistance.
    DOI: 10.1504/IJCSE.2022.10049693
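    As a simplified illustration of how a hash-seeded chaotic keystream can drive pixel alteration, in the spirit of the scheme above (a 1D logistic map stands in for the paper's 3D map, and the scrambling and jumbling stages are omitted; the secret is a placeholder):

      import hashlib
      import numpy as np

      def chaotic_keystream(secret: bytes, n: int, r: float = 3.99):
          """SHA-512 of the secret seeds a logistic map whose iterates are quantised to bytes."""
          digest = hashlib.sha512(secret).digest()
          x = int.from_bytes(digest[:8], "big") / 2**64     # seed in [0, 1)
          stream = np.empty(n, dtype=np.uint8)
          for i in range(n):
              x = r * x * (1.0 - x)
              stream[i] = int(x * 256) % 256
          return stream

      def xor_cipher(img: np.ndarray, secret: bytes) -> np.ndarray:
          """XOR an 8-bit image with the keystream; applying it twice decrypts."""
          ks = chaotic_keystream(secret, img.size).reshape(img.shape)
          return img ^ ks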
     
  • Hybrid grasshopper and ant lion algorithms to improve imperceptibility, robustness and convergence rate for the video steganography   Order a copy of this article
    by Sahil Gupta, Naresh Kumar Garg 
    Abstract: The need to secure multimedia content from interception is a prominent research issue. This work proposes an optimised video steganography model that improves imperceptibility and robustness by extracting keyframes and calculating the optimal scaling factor. The squirrel search algorithm (SSA) is used to extract keyframes, since it ensures distinct position-updating processes through Lévy flight and predator features, while the grasshopper optimisation and ant lion optimisation algorithms are hybridised to compute the optimal value of the scaling factor. In terms of imperceptibility and robustness, the simulation results suggest that the proposed approach outperforms existing data-hiding models. It also finds the optimal scaling factor in under 10 iterations, indicating a fast convergence rate.
    Keywords: ant-lion optimisation; grasshopper optimisation; SVD; video steganography; imperceptibility; robustness; PSNR; MSE.
    DOI: 10.1504/IJCSE.2022.10053218
     
  • Human behaviour analysis based on spatio-temporal dual-stream heterogeneous convolutional neural network   Order a copy of this article
    by Qing Ye, Yuqi Zhao, Haoxin Zhong 
    Abstract: At present, there are still many problems to be solved in human behaviour analysis, such as insufficient use of behaviour characteristic information and slow operation rate. We propose a human behaviour analysis algorithm based on spatio-temporal dual-stream heterogeneous convolutional neural network (STDNet). The algorithm improves on the basic structure of the traditional dual-stream network. When extracting spatial information, the DenseNet uses a hierarchical connection method to construct a dense network to extract the spatial feature of the video RGB image. When extracting motion information, BNInception is used to extract temporal features of video optical flow images. Finally, feature fusion is carried out by multi-layer perceptron and sent to Softmax classifier for classification. Experimental results on the UCF101 data set show that the algorithm can effectively use the spatio-temporal feature information in video, reduce the amount of calculation of the network model, and greatly improve the ability to distinguish similar actions.
    Keywords: human behaviour analysis; STDNet; optical flow; feature extraction; dual-stream network.
    DOI: 10.1504/IJCSE.2022.10048568
     
  • High-volume transaction processing in bitcoin lightning network on blockchains   Order a copy of this article
    by Rashmi P. Sarode, Divij Singh, Yutaka Watanobe, Subhash Bhalla 
    Abstract: E-commerce platforms that use blockchain technology must handle a high volume of executing transactions, so these systems are required to be scalable. The Bitcoin Lightning Network (BLN) can execute high volumes of transactions and is scalable owing to the few hops in the network. It is an off-chain payment channel built on top of a blockchain, which speeds up transactions. In this paper, we discuss the BLN and its transaction processing in detail, along with its benefits and applications. Additionally, we discuss alternative networks for payments. We also propose a secure model on the BLN that can be used for any e-commerce platform and compare it with existing applications such as those of Ethereum and Stellar.
    Keywords: lightning network; bitcoin; cryptocurrency; blockchain; Ethereum.

  • Data augmentation using fast converging CIELAB-GAN for efficient deep learning dataset generation   Order a copy of this article
    by Amin Fadaeddini, Babak Majidi, Alireza Souri, Mohammad Eshghi 
    Abstract: Commercial deep learning applications require large training datasets with many samples from different classes. Generative adversarial networks (GANs) are able to create new data samples for training these machine learning models. However, the low speed of training GANs in image and multimedia applications is a major constraint. To address this problem, this paper proposes a fast-converging GAN, called CIELAB-GAN, for synthesising new data samples for image data augmentation. CIELAB-GAN simplifies the training process of GANs by transforming the images to the CIELAB colour space, which requires fewer parameters, and then translates the generated greyscale images into colourised samples using an autoencoder. The experimental results show that CIELAB-GAN has 20% lower computational complexity than state-of-the-art GAN models and can be trained substantially faster. The proposed CIELAB-GAN can be used to generate new image samples for various deep learning applications.
    Keywords: generative adversarial networks; deep learning; data augmentation; image processing.

  • Aerial remote sensing image registration based on dense residual network of asymmetric convolution   Order a copy of this article
    by Ying Chen, Wencheng Zhang, Wei Wang, Jiahao Wang, Xianjing Li, Qi Zhang, Yanjiao Shi 
    Abstract: Existing image registration frameworks pay little attention to important local feature information and part of the global feature information, resulting in low registration accuracy. Asymmetric convolution and dense connection, however, can pay more attention to the key information and shallow information of the image. Therefore, this paper proposes a novel feature extraction module to improve the feature extraction ability and registration accuracy of the model. Asymmetric convolution and dense connection are used to improve the residual structure so that it focuses on both local and global information in the feature extraction stage. In the feature matching stage, bidirectional matching is used to alleviate asymmetric matching results by fusing the two outcomes. Furthermore, a secondary affine transformation is proposed to adequately estimate the real transformation between two images. Compared with several popular algorithms, the proposed method achieves a better registration effect on two public datasets, which has practical significance.
    Keywords: remote sensing image registration; residual network; asymmetric convolution; dense connection; transfer learning; regularization; affine transformation.

  • Non-parametric combination forecasting methods with application to GDP forecasting   Order a copy of this article
    by Wei Li, Yunyan Wang 
    Abstract: This work is devoted to constructing a non-parametric combination prediction method, which can improve the forecasting effect and accuracy to some extent. In order to forecast regional gross domestic product, a non-parametric autoregressive method is introduced into the autoregressive integrated moving average (ARIMA) model, and a combined method of the ARIMA model and the non-parametric autoregressive model is established based on residual correction. Furthermore, the specific prediction steps are given. The empirical results show that the proposed combined model outperforms both the ARIMA model and the non-parametric autoregressive model in terms of regression effect and forecasting accuracy. The combination of parametric and non-parametric models not only provides a method with better applicability and prediction performance for building GDP prediction models, but also provides a theoretical basis for forecasting related economic data in the future. The prediction results show that, during China's 14th Five-Year Plan period, the gross domestic product of Jiangxi Province will increase by 7.01% annually.
    Keywords: GDP; ARIMA model; non-parametric autoregressive model; residual correction; combined model.
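The residual-correction idea summarised above can be sketched as follows. This is an illustrative outline with synthetic data and an assumed ARIMA order, not the authors' specification: fit an ARIMA model, model its residuals non-parametrically, and add the non-parametric residual forecast to the ARIMA forecast.

```python
# Hedged sketch of residual-correction combination forecasting: fit an ARIMA model,
# then model its residuals non-parametrically (a Nadaraya-Watson kernel autoregression
# on the lagged residual) and add the correction to the ARIMA forecast.
# The synthetic series and the (1, 1, 1) order are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(0.07 + 0.02 * rng.standard_normal(60))   # synthetic growth-like series

arima = ARIMA(y, order=(1, 1, 1)).fit()
resid = arima.resid                                     # in-sample residuals

def kernel_ar1(history, bandwidth=0.05):
    """Nadaraya-Watson estimate of the next residual given the last one (non-parametric AR(1))."""
    x, target = history[:-1], history[1:]
    w = np.exp(-0.5 * ((history[-1] - x) / bandwidth) ** 2)
    return float(np.sum(w * target) / (np.sum(w) + 1e-12))

combined_forecast = arima.forecast(1)[0] + kernel_ar1(resid)
print("ARIMA forecast:", arima.forecast(1)[0], "combined:", combined_forecast)
```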

  • Comparative study of point matching method with spectral method on numerical solution electromagnetic problems   Order a copy of this article
    by Mahmoud Behroozifar 
    Abstract: The present study compares the point matching method and the spectral method for solving the integral equations arising in the electromagnetic domain. The point matching method, a traditional approach, is based on basis functions and often results in a singular and ill-posed system of nonlinear equations. To prevent these inconveniences, the physical structure of the object must be altered in some cases; this yields high error in the results and requires high CPU time and memory usage. In most cases this method also converges slowly and leads to a singular and ill-posed system. Consequently, applying the point matching method to this problem yields an approximate solution with low accuracy and a high computational load. As an alternative, we present the spectral method based on Bernstein polynomials (BPs) as a robust candidate. Employing the BPs reduces the problem to a system of algebraic equations. The other merits of the presented method are faster convergence and avoidance of a singular system.
    Keywords: Bernstein polynomials; electrostatic; micro strip; point matching method; spectral method.
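For background, the expansion underlying the spectral method above uses the standard Bernstein basis on [0, 1]; the specific quadrature and collocation choices of the paper are not given in the abstract.

```latex
% Standard Bernstein basis of degree n on [0, 1] and the spectral expansion built on it.
B_{i,n}(x) = \binom{n}{i}\, x^{i} (1 - x)^{n-i}, \qquad i = 0, 1, \dots, n,
\qquad
f(x) \approx \sum_{i=0}^{n} c_i \, B_{i,n}(x)
```

Substituting this expansion into the integral equation and enforcing it at a set of nodes (or testing it against the basis) reduces the problem to an algebraic system for the coefficients c_i, which is the reduction referred to in the abstract.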

  • RCRE: radical-aware causal relationship extraction model oriented in the medical field   Order a copy of this article
    by Xiaoqing Li, Guangli Zhu, Zhongliang Wei, Shunxiang Zhang 
    Abstract: In massive medical texts, the accuracy of causal relationship extraction is relatively low because of a special characteristic of such texts: a high correlation between semantics and radicals. To improve the extraction accuracy, this paper proposes a radical-aware causal relationship extraction model oriented to the medical field. The BERT pre-training model is used to extract character-level features, which contain rich context information. To further capture the semantics of characters, the Word2Vec model is used to extract radical features. Finally, the two sets of features are concatenated and passed into the extraction model to obtain the extraction results. Experimental results show that the proposed model can improve the accuracy of causal relationship extraction in medical texts.
    Keywords: causal relationship extraction; the medical field; radical features; BERT model; Word2Vec model.

  • Detection of computationally intensive functions in a medical image segmentation algorithm based on an active contour model   Order a copy of this article
    by Carlos Gulo, Antonio Sementille, João Tavares 
    Abstract: Common image segmentation methods are computationally expensive, particularly when run on large medical datasets, and require powerful hardware to achieve image-based diagnosis in real time. For a medical image segmentation algorithm based on an active contour model, our work presents an efficient approach that detects computationally intensive functions and adapts the implementation for improved performance. We employ profiling methods that assess algorithm performance taking into account the overall cost of execution, including time, memory access and performance bottlenecks. We apply performance analysis techniques commonly available in traditional computing operating systems, which obviates the need for new setup or measurement techniques and ensures a short learning curve. The article presents guidelines to aid researchers in a) using profiling tools and b) detecting and checking potential optimisation snippets in medical image segmentation algorithms by measuring overall performance bottlenecks.
    Keywords: medical image processing and analysis; profiling tools; performance analysis; high-performance computing.
    DOI: 10.1504/IJCSE.2022.10050929
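As context for the profiling workflow described in the abstract above, the following minimal sketch shows one way to locate computationally intensive functions with tooling that ships with a standard computing environment; cProfile and the toy pipeline are illustrative assumptions, not the profilers or code evaluated in the paper.

```python
# Hedged sketch: locating hot spots with Python's built-in cProfile, sorted by
# cumulative time, as one concrete instance of the profiling workflow described above.
import cProfile
import pstats
import io

def expensive_contour_step(n=200_000):
    # stand-in for a computationally intensive segmentation step
    return sum(i * i for i in range(n))

def segmentation_pipeline():
    for _ in range(10):
        expensive_contour_step()

profiler = cProfile.Profile()
profiler.enable()
segmentation_pipeline()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())   # the top entries flag candidate functions for optimisation
```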
     
  • Spaced retrieval therapy mobile application for Alzheimer's patients: a usability testing   Order a copy of this article
    by Kholoud Aljedaani, Reem Alnanih 
    Abstract: Alzheimer's disease is the most common type of dementia. Statistics predict a sharp increase in patient numbers by 2050. Many applications support patients in their daily activities and help them to engage in society. However, designing an acceptable and usable interface for this type of user is challenging. Spaced retrieval therapy (SRT) is a non-pharmacological therapy for Alzheimer's disease that helps reduce the high cost of treatments. The SRT application helps patients to remember their vital information after a few sessions. In this paper, the authors develop the proposed application, which applies a non-pharmacological therapy to reduce the cost of treatments and help Alzheimer's patients engage in society. The paper presents the findings on its usability. The usability test included 20 older adults divided into two groups (10 healthy and 10 with Alzheimer's). Each group comprised two smaller groups (5 each) to test the two types of interface. A list of tasks was given to both groups during the test, and task completion times and error counts were collected. A post-task questionnaire evaluated the level of difficulty of each task. The results confirmed that the Alzheimer's group needed more time to complete the tasks than the healthy elderly group. Based on the post-task questionnaire, the healthy elderly group found the default user interface simpler than the adapted one, which contrasts with the Alzheimer's patients, who performed faster with the adapted user interface. Two recommendations follow: 1) use voice recognition instead of typing on keyboards, because the typing tasks took the longest time in observation and some Alzheimer's patients could not complete them although they can read and write; 2) thicken the item borders in the menu, because most errors resulted from confusion between the items.
    Keywords: spaced retrieval therapy; Alzheimer patients; usability testing; mobile application; designing user interface.
    DOI: 10.1504/IJCSE.2022.10050973
     
  • Design of heuristic model to improve block-chain-based sidechain configuration   Order a copy of this article
    by Nisha Balani, Pallavi V. Chavan 
    Abstract: Data security is a major concern for any modern-day network deployment. Blockchain resolves security issues to a large extent. Blockchains are nowadays widely accepted for secure transactions and network communications. Since there is no limitation on the amount of data being stored, blockchain-based networks tend to become slow as the length of the main blockchain increases. To overcome this issue, the concept of sidechain is introduced. With sidechains, blockchain systems become faster, and inherit characteristics of blockchain including security, transparency and traceability. This paper proposes a solution for creating context-aware sidechains to increase system performance using a heuristic approach. The proposed algorithm assists in creation of customised sidechains via optimisation of blockchain mining delay using stochastic modelling. It generates a large number of stochastic sidechain combinations, evaluates them on the basis of mining delay, and selects optimal configuration. The proposed model is evaluated on different network conditions by varying network size and traffic density.
    Keywords: blockchain; sidechain; data sharing; fitness; QoS; blockchain mining delay; computation.
    DOI: 10.1504/IJCSE.2022.10050704
     
  • Joint optimisation of feature selection and SVM parameters based on an improved fireworks algorithm   Order a copy of this article
    by Xiaoning Shen, Jiyong Xu, Mingjian Mao, Jiaqi Lu, Liyan Song, Qian Wang 
    Abstract: In order to reduce the redundant features and improve the accuracy in classification, an improved fireworks algorithm for joint optimisation of feature selection and SVM parameters is proposed. A new fitness evaluation method is designed, which can adjust the punishment degree adaptively with the increase of the number of selected features. A differential mutation operator is introduced to enhance the information interaction among fireworks and improve the local search ability of the fireworks algorithm. A fitness-based roulette wheel selection strategy is proposed to reduce the computational complexity of the selection operator. Three groups of comparisons on 14 UCI classification data sets with increasing scales validate the effectiveness of our strategies and the significance of joint optimisation. Experimental results show that the proposed algorithm can obtain a higher accuracy in classification with fewer features.
    Keywords: fireworks algorithm; support vector machines; feature selection; parameter optimisation; joint optimisation.
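A hedged sketch of the joint-optimisation idea described above: a candidate solution encodes a feature subset together with SVM parameters, and its fitness rewards cross-validated accuracy while penalising the number of selected features. The dataset, penalty weight and parameter values are illustrative assumptions; the adaptive penalty, differential mutation and roulette selection of the paper are not reproduced.

```python
# Hedged sketch of the joint-optimisation idea: an individual encodes both a feature
# mask and SVM hyperparameters, and its fitness trades classification accuracy against
# the number of selected features. The penalty weight is an assumption for illustration.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask, C, gamma, penalty=0.01):
    """Higher is better: CV accuracy minus a penalty on the selected-feature ratio."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()
    return acc - penalty * mask.sum() / mask.size

rng = np.random.default_rng(42)
candidate = rng.random(X.shape[1]) > 0.5      # one "firework" position: a random feature subset
print("fitness:", fitness(candidate, C=1.0, gamma="scale"))
```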

  • Statistical analysis for predicting residents travel mode based on random forest   Order a copy of this article
    by Lei Chen, Zhengyan Sun, Shunxiang Zhang, Guangli Zhu, Subo Wei 
    Abstract: Random forest has achieved good results in prediction tasks, but owing to the complexity of travel modes and the uncertainty of random forest, the prediction accuracy of travel mode remains low. To improve the accuracy of prediction, this paper proposes a residents' travel mode prediction method based on random forest. To extract valuable feature information, questionnaire survey data are collected and preprocessed with three appropriate methods. Then, each feature is analysed by a statistical learning method to obtain the important features of transportation selection. Finally, a random forest is constructed to predict residents' choice of transportation. The parameters of the random forest are tuned to achieve higher prediction accuracy of travel mode. The experimental results show that the proposed method effectively improves the prediction accuracy of travel mode.
    Keywords: random forest; residents’ travel mode; statistical analysis.
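A minimal sketch of the prediction step described above, with hypothetical survey features (the actual questionnaire fields and tuned parameters are not given in the abstract):

```python
# Minimal sketch (assumed feature names) of predicting a resident's travel mode with a
# random forest, including the kind of hyperparameters the abstract says were tuned.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical preprocessed survey features; the real questionnaire fields are not
# specified in the abstract.
survey = pd.DataFrame({
    "age": [23, 45, 36, 52, 29, 61, 33, 40],
    "income_level": [1, 3, 2, 3, 1, 2, 2, 3],
    "trip_distance_km": [2.5, 12.0, 6.3, 20.1, 3.8, 8.9, 15.4, 1.2],
    "owns_car": [0, 1, 1, 1, 0, 1, 1, 0],
    "mode": ["walk", "car", "bus", "car", "bike", "bus", "car", "walk"],
})
X, y = survey.drop(columns="mode"), survey["mode"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, max_depth=6, random_state=0)
model.fit(X_train, y_train)
print("feature importances:", dict(zip(X.columns, model.feature_importances_.round(2))))
print("test accuracy:", model.score(X_test, y_test))
```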

  • Wireless optimisation positioning algorithm with the support of node deployment   Order a copy of this article
    by Xudong Yang, Chengming Luo, Luxue Wang, Hao Liu, Lingli Zhang 
    Abstract: Position is one of the basic attributes of an object, and positioning is one of the key technologies for collaborative operation. As a distributed sensing method, wireless sensor networks (WSNs) have become a feasible solution, especially in satellite-signal-denied environments. Considering that node deployment is the basis of target positioning in WSNs, this paper first studies the optimal deployment of wireless nodes and then studies the optimal positioning of mobile targets. Based on the least squares equation, a feature matrix that characterises the positioning error is derived so that the positioning error caused by wireless node deployment is minimised. Following that, the positioning results are refined using particle swarm optimisation, which gives the mobile target a coarse-to-fine accuracy. The results indicate that the proposed algorithm can reduce the influence of network topology on positioning error, which is critical for some location-based applications.
    Keywords: distributed sensing; wireless positioning; node deployment; matrix eigenvalues; particle swarm.

  • CNN-based battlefield classification and camouflage texture generation for real environments   Order a copy of this article
    by Sachi Choudhary, Rashmi Sharma 
    Abstract: It is critical to understand the environment in which the military forces are deployed. For self-defence and greater concealment, they should camouflage themselves. Camouflage is being used by the defence system to hide its personnel and equipment. The industry demands an intelligent system that can categorise the battlefield before generating texture for camouflaging their assets and objects, allowing them to adopt the conspicuous features of the scene. In this study, a CNN-based battlefield classification model has been developed to learn background information and classify the terrain. The study also intended to develop the texture for specific terrain by matching its salient features and boosting the effectiveness of the camouflage. Saliency maps have been used to measure the effectiveness of blending a camouflaged object into an environment.
    Keywords: digital camouflage; terrain classification; battlefield classification; camouflage generation; scene classification; colour clustering; saliency map.
    DOI: 10.1504/IJCSE.2022.10051287
     
  • Investigation on the optimisation of Cholesky decomposition algorithm based on SIMD-DSP   Order a copy of this article
    by Huixiang Li, Huifu Zhang, Anxing Xie, Yonghua Hu, Wei Liang 
    Abstract: With the development of high-performance SIMD-DSP processors, correspondingly efficient algorithms for matrix decomposition play an important role in exploiting the hardware performance of such processors. Cholesky decomposition is a fast decomposition method for symmetric positive definite matrices, which is widely used in matrix inversion and linear equation solving. According to the hardware characteristics of the FT-M7002 processor, in this paper we optimise the algorithm in several ways. If the hardware has on-chip double-buffered memory, a parallel process of DMA transfer and calculation is specially designed, which can hide most of the time cost of data movement and further improve the algorithm's performance. The experimental results on the FT-M7002 processor show that the performance of the optimised algorithm is 3.8~5.64 times that of the serial algorithm, and 1.39~2.14 times that of the TI library function.
    Keywords: Cholesky decomposition; DSP; SIMD.
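For reference, the computation that the optimised kernel accelerates is the standard Cholesky factorisation A = L Lᵀ; the sketch below is a plain unblocked version, with the FT-M7002-specific vectorisation and DMA double buffering left out.

```python
# Reference sketch of the (unblocked) Cholesky decomposition A = L L^T that the
# SIMD-DSP kernel accelerates; hardware-specific DMA double buffering is not reproduced.
import numpy as np

def cholesky_lower(A):
    """Return lower-triangular L with A = L @ L.T for a symmetric positive definite A."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # diagonal entry: square root of the remaining pivot
        L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
        for i in range(j + 1, n):
            # column update below the diagonal
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L

M = np.random.default_rng(1).standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)                     # symmetric positive definite test matrix
L = cholesky_lower(A)
print(np.allclose(L @ L.T, A))                  # True: the decomposition is correct
```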

  • JALNet: joint attention learning network for RGB-D salient object detection   Order a copy of this article
    by Xiuju Gao, Jianhua Cui, Jin Meng, Huaizhong Shi, Songsong Duan, Chenxing Xia 
    Abstract: Existing RGB-D salient object detection (SOD) methods mostly explore the complementary information between depth features and RGB features. However, these methods ignore the bidirectional complementarity between RGB and depth features. In view of this, we propose a joint attention learning network (JALNet) to learn the cross-modal mutual complementary effect between RGB images and depth maps. Specifically, two joint attention learning modules are designed: a cross-modal joint attention fusion module (JAFM) and a joint attention enhancement module (JAEM). The JAFM learns cross-modal complementary information from the RGB and depth features, which strengthens the interaction of information and the complementarity of useful information. At the same time, we utilize the JAEM to enlarge receptive field information to highlight salient objects. We conducted comprehensive experiments on four public datasets, which proved that our proposed JALNet outperforms 16 state-of-the-art (SOTA) RGB-D SOD methods.
    Keywords: salient object detection; depth map; bi-directional complementarity; cross-modal features.

  • Classifying blockchain cybercriminal transactions using hyperparameter tuned supervised machine learning models   Order a copy of this article
    by Rohit Saxena, Deepak Arora, Vishal Nagar 
    Abstract: Bitcoin is a crypto asset with transactions recorded on a decentralised, publicly accessible ledger. The real-world identity of a Bitcoin blockchain owner is masked behind a pseudonym, known as an address. Bitcoin is therefore widely thought to provide a high level of anonymity, which is one of the reasons for its widespread use in criminal operations such as ransomware attacks, gambling, etc. Consequently, the classification and prediction of diverse cybercriminal users' activities and addresses in the Bitcoin blockchain are in demand. This research presents a classification of Bitcoin blockchain user activities and addresses associated with illicit transactions using supervised machine learning (ML). The labelled dataset samples with user activities are prepared using the unlabelled dataset available at the Blockchair repository and the labelled dataset at WalletExplorer, and are trained using classification models from the decision tree, ensemble, Bayesian and instance-based learning families. For balancing the classes of the dataset, the weighted mean and synthetic minority principles are employed. The models' cross-validation (CV) accuracy is assessed. Extra Trees emerged as the best classification model, whereas Gaussian Naïve Bayes performed comparatively poorly.
    Keywords: blockchain; Bitcoin; supervised machine learning; classification; GridSearchCV.
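A hedged sketch of the hyperparameter-tuning step implied by the GridSearchCV keyword: a grid search with cross-validated accuracy over one of the ensemble models named in the abstract. The data below are synthetic placeholders, not Blockchair or WalletExplorer samples.

```python
# Hedged sketch of hyperparameter-tuned classification with GridSearchCV over an
# Extra Trees classifier; features and labels are synthetic, imbalanced placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           n_classes=3, weights=[0.6, 0.3, 0.1], random_state=0)

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10],
              "class_weight": [None, "balanced"]}
search = GridSearchCV(ExtraTreesClassifier(random_state=0), param_grid,
                      cv=5, scoring="accuracy")
search.fit(X, y)

print("best CV accuracy:", round(search.best_score_, 3))
print("best parameters:", search.best_params_)
```

The selected `search.best_estimator_` can then be evaluated on a held-out split or refitted on the full training set.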

  • An improved blind/referenceless image spatial quality evaluator algorithm for image quality assessment   Order a copy of this article
    by Xuesong Li, Jinfeng Pan, Jianrun Shang, Alireza Souri, Mingliang Gao 
    Abstract: Image quality assessment (IQA) methods are generally studied in the spatial or transform domain. Because the BRISQUE algorithm evaluates the quality of an image based only on its natural scene statistics in the spatial domain, frequency features extracted from the modulation transfer function (MTF) are applied to improve its performance. The MTF is estimated with the slanted-edge method. A two-dimensional grey fitting algorithm is utilized to estimate the edge slope more accurately. Then a three-order Fermi function is utilized to fit the preliminarily estimated edge spread function to reduce the influence of aliasing on MTF estimation. Features such as the crucial frequency and the MTF value at the Nyquist frequency are calculated and added to the BRISQUE method to assess image quality. Experimental results on image quality assessment databases show that the proposed method outperforms the BRISQUE method and some other common methods, based on the linear and nonlinear correlation between the image quality assessed by the methods and the subjective values.
    Keywords: image assessment; modulation transfer function; Fermi function; feature extraction.
    DOI: 10.1504/IJCSE.2022.10051266
     
  • Simple and compact finite difference formulae using real and complex variables   Order a copy of this article
    by Yohei Nishidate 
    Abstract: A new set of compact finite difference formulae is derived by simple combinations of the real and complex Taylor series expansions. The truncation error is fourth order in the derived formulae for approximating first- to fourth-order derivatives. Although there exist complex-stencil finite difference formulae with better truncation errors, our formulae are computationally cheaper, requiring only three points for first- to third-order derivatives and four points for fourth-order derivatives. The derived formulae are tested by approximating derivatives of relatively simple and highly nonlinear functions used in other literature. Although the new formulae suffer from subtractive cancellation, it is demonstrated that they outperform finite difference formulae of comparable computational cost for relatively large step sizes.
    Keywords: Taylor series expansion; approximation in the complex domain; finite difference methods; compact finite difference formula; numerical approximation.
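For background on combining real and complex Taylor expansions, the sketch below contrasts the classic complex-step first-derivative approximation with a real central difference on a highly nonlinear test function; the paper's new fourth-order three- and four-point formulae are not reproduced here.

```python
# Hedged illustration of real versus complex Taylor-series differentiation.
# The classic complex-step formula below is background material, not the paper's
# new fourth-order formulae, which combine real and complex evaluations differently.
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """First derivative via f'(x) ~ Im(f(x + i h)) / h; no subtractive cancellation."""
    return np.imag(f(x + 1j * h)) / h

def central_difference(f, x, h=1e-5):
    """Classic real-valued central difference, subject to cancellation for small h."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: np.exp(x) / (np.cos(x) ** 3 + np.sin(x) ** 3)   # a common nonlinear test function
x0 = 1.5
print("complex step :", complex_step_derivative(f, x0))
print("central diff :", central_difference(f, x0))
```

The complex-step form avoids taking the difference of nearly equal numbers, which is why very small step sizes remain usable there, whereas purely real differences degrade.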

  • A generalised incomplete no-equilibria transformation method to construct a hidden multi-scroll system with no-equilibrium   Order a copy of this article
    by Lihong Tang, Zongmei He, Yanli Yao, Ce Yang 
    Abstract: At present, there is a lot of research on multi-scroll chaotic systems with equilibrium points. However, there are few studies on no-equilibrium multi-scroll chaotic systems. This paper proposes a generalised incomplete no-equilibrium transformation method to design no-equilibrium multi-scroll chaotic systems. Firstly, a no-equilibrium chaotic system is constructed by adopting the proposed method. Phase plots and Lyapunov exponents show that the constructed no-equilibrium chaotic system can generate hidden hyperchaotic attractors. Then, a no-equilibrium multi-scroll hyperchaotic system is realized by introducing multi-level logic pulse signals. Theoretical analysis and numerical simulation show that the designed no-equilibrium multi-scroll hyperchaotic system can generate hidden multidirectional multi-double-scroll attractors including 1-D, 2-D, and 3-D hidden multi-scroll hyperchaotic attractors. Finally, an analogue circuit of the no-equilibrium multi-scroll hyperchaotic system is implemented by using commercial electronic elements. Various typical hidden multi-scroll attractors are verified on MULTISIM platform.
    Keywords: no-equilibrium; hidden attractors; multi-scroll; multi-level pulse.

  • Selection of the best hybrid spectral similarity measure for characterising marine oil spills from multi-platform hyperspectral datasets   Order a copy of this article
    by Deepthi Deepthi, Deepa Sankar, Tessamma Thomas 
    Abstract: Marine oil pollution causes major economic crises in major industrial sectors such as fishing, shipping and tourism. It affects marine life even decades afterwards, necessitating very quick detection and remediation. Unfortunately, it is very difficult to detect oil from remote sensing hyperspectral images (HSI), as oil slicks and seawater have nearly identical spectral properties. Therefore, a cohesive and synergistic hybrid spectral similarity measure (HSSM) is identified and recommended in this paper by evaluating the multi-class, multi-platform capability of hyperspectral marine oil spill image classification. HSI procured from spaceborne (Earth Observation (EO-1) Hyperion) and airborne (Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)) platforms are employed here to discriminate marine spectral classes. Statistical parameters such as overall accuracy (OA), kappa, the ROC/PR curves, AUC/PRAUC, the weighted Youden index (Jw), the F1 score and noise performance provided crucial evidence for identifying the best HSSM, the spectral information divergence-chi-square distance (SID-CHI). The stochastic capability of SID in capturing spectral variations among bands and the robustness to noise inherited from CHI are significant for the improved accuracy attained by SID-CHI over other HSSMs. From the observations, it is established that SID-CHI can be used as a novel method for the multi-class and multi-platform classification of marine oil spill hyperspectral datasets.
    Keywords: hybrid spectral similarity measure; hyperspectral image; ROC curve; weighted Youden index; F1 score; optimal cut-off value.

  • A novel fertility intention prediction scheme based on naive Bayes   Order a copy of this article
    by Meijiao Zhang, Lan Yang, Weiping Jiang, Gejing Xu, Guoliang Hu 
    Abstract: In today's society, ageing has developed into a global problem. The ageing of the population is related to the decrease in fertility intention. Therefore, it is meaningful to predict people's fertility intention. In this paper, we propose a fertility intention prediction scheme based on a polynomial naive Bayesian model, called the FPB scheme. In the proposed scheme, we first extract various features from the collected data and divide the entire dataset into three labels according to the level of fertility intention. Then, we use these data to construct a classifier. Next, we use this classifier to design a prediction algorithm to predict fertility intention. Finally, we conduct extensive experiments to evaluate the performance of the proposed scheme. The experimental results show that the proposed FPB scheme has high accuracy and can help families to make accurate fertility decisions.
    Keywords: fertility intention; polynomial naive Bayes; prediction algorithm.
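A minimal sketch of a polynomial (multinomial) naive Bayes classifier over three fertility-intention labels; the feature encoding below is hypothetical, since the collected survey attributes are not listed in the abstract.

```python
# Minimal sketch of multinomial naive Bayes over ordinal survey features with three
# fertility-intention labels. Feature names and values are hypothetical placeholders.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Each row: [age_band, income_band, n_existing_children, housing_score] as ordinal counts.
X = np.array([[2, 1, 0, 1], [3, 2, 1, 2], [1, 1, 0, 0], [4, 3, 2, 3],
              [2, 2, 1, 1], [3, 3, 0, 2], [1, 2, 0, 1], [4, 1, 2, 2]])
y = np.array(["high", "low", "high", "low", "medium", "medium", "high", "low"])

clf = MultinomialNB()
clf.fit(X, y)
print(clf.predict([[2, 2, 0, 1]]))                 # predicted fertility-intention level
print(clf.predict_proba([[2, 2, 0, 1]]).round(2))  # class probabilities
```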

  • SAPNN: self-adaptive probabilistic neural network for medical diagnosis   Order a copy of this article
    by Yibin Xiong, Jun Wu, Qian Wang, Dandan Wei 
    Abstract: Medical diagnosis has always been a hot topic of great concern in the medical field. For this purpose, a self-adaptive probabilistic neural network (SAPNN) is proposed in this paper. Firstly, a hybrid cuckoo search (HCS) algorithm is proposed. Secondly, HCS is used in the probabilistic neural network to adapt the smoothing factor parameters. To evaluate the proposed SAPNN accurately, disease datasets for breast cancer, diabetes and Parkinson's disease were used for testing. Finally, comparison with several other methods showed that the accuracy of SAPNN was the best in all cases, where the accuracy was 97.51%, 96.53%, 75.74% and 96.61%; recall was 97.6%, 99.12%, 79.74% and 88.24%; specificity was 96.15%, 88.88%, 59.03% and 95.31%; and precision was 97.85%, 94.32%, 80.12% and 85%, respectively. The results of the various evaluation indexes show that the proposed SAPNN is a new method that can be applied to medical diagnosis.
    Keywords: ancillary diagnosis of disease; cuckoo search; information sharing; mutation strategy; probabilistic neural network.
    DOI: 10.1504/IJCSE.2022.10053093
     
  • Minimum redundancy maximum relevance and VNS-based gene selection for cancer classification in high-dimensional data   Order a copy of this article
    by Ahmed Bir-Jmel, Sidi Mohamed Douiri, Souad Elbernoussi 
    Abstract: DNA microarray is a technique for measuring the expression levels of a huge number of genes. These levels have a significant impact on cancer classification tasks. In DNA datasets, the number of genes exceeds the number of samples, which makes the presence of irrelevant or redundant genes possible and penalises the performance of classifiers. The development of new methods for gene selection therefore remains an active subject for researchers. In this paper, two hybrid multivariate filters for gene selection, named VNSMI and VNSCor, are presented. The two methods surpass univariate filters by considering the possible interaction between genes through the search for an optimal subset of genes with minimum redundancy and maximum relevance (MRMR). In the first stage of the proposed methods, we use a univariate filter to select the best-ranked genes based on information theory and the Pearson correlation coefficient (PCC). Then, we apply the variable neighbourhood search (VNS) metaheuristic coupled with an innovative stochastic local search (SLS) algorithm to find the final subset of genes that maximises the MRMR objective function. To evaluate the proposed methods, experiments were performed on six well-replicated microarray datasets. The obtained results show that the proposed approach leads to encouraging results in terms of accuracy and the number of selected genes. Improvements are also observed consistently with the 1NN and SVM classifiers.
    Keywords: gene selection; feature selection; cancer classification; VNS; stochastic local search; normalised mutual information; MRMR; DNA microarray.
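The MRMR objective that the VNS/SLS search maximises can be sketched as relevance minus redundancy for a candidate gene subset. The formulation below (mutual information for relevance, absolute correlation for redundancy) is one common instantiation and an assumption on my part; the metaheuristic search itself is omitted.

```python
# Hedged sketch of an MRMR score for a candidate gene subset: mean mutual information
# with the class label (relevance) minus mean pairwise absolute correlation (redundancy).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=100, n_features=50, n_informative=8, random_state=0)

def mrmr_score(X, y, selected):
    """Relevance minus redundancy for the gene subset given by index list `selected`."""
    relevance = mutual_info_classif(X[:, selected], y, random_state=0).mean()
    if len(selected) < 2:
        return relevance
    corr = np.abs(np.corrcoef(X[:, selected], rowvar=False))
    redundancy = (corr.sum() - len(selected)) / (len(selected) * (len(selected) - 1))
    return relevance - redundancy

subset = [0, 3, 7, 12, 21]                    # a candidate subset a VNS move might propose
print("MRMR objective:", round(mrmr_score(X, y, subset), 4))
```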

  • Synthesis and evaluation of the structure of CAM memory by QCA computing technique   Order a copy of this article
    by Nirupma Pathak, Neeraj Kumar Misra, Santosh Kumar 
    Abstract: The lithographically based CMOS technology revolutions of the past few years are long behind us, but the technology used in today's microelectronics faces significant challenges in terms of speed, area and power consumption. The purpose of this article is to design a novel CAM memory in the QCA domain. The article presents a compact structure for the novel CAM memory, based on GDI-CMOS and QCA technology, respectively. Compared with recently reported designs in the literature, the area, latency, majority gate count and cell count of the proposed CAM are reduced by more than 78.57%, 50%, 40% and 67%, respectively. In addition, the clock delay of the CAM cell design is lower than other reported results. This QCA-based CAM structure is not only novel but also very cost-effective for today's nano-devices. The proposed CAM design improves performance while making modern device development simpler and more cost-effective.
    Keywords: CAM memory; QCA; GDI-CMOS; nano-electronics.
    DOI: 10.1504/IJCSE.2022.10053117
     
  • Canopy centre-based fuzzy C-means clustering for enhancement of soil fertility Prediction   Order a copy of this article
    by M. Sujatha, C.D. Jaidhar 
    Abstract: Fertile soil is necessary for plants to develop. Estimating soil parameters over time is crucial for enhancing soil fertility. Sentinel-2's remote sensing technology produces images that can be used to gauge soil parameters. In this study, values for soil parameters such as electrical conductivity, pH, organic carbon and nitrogen are derived using Sentinel-2 data. To increase clustering accuracy, this study proposes canopy centre-based fuzzy C-means clustering and compares it with manual labelling and other clustering techniques, such as canopy density-based, expectation maximisation, farthest-first, k-means and fuzzy C-means clustering. The proposed clustering achieved the highest clustering accuracy of 78.42%. Machine learning-based classifiers, including naive Bayes, support vector machine, decision trees and random forest (RF), were applied to classify soil fertility. On the dataset labelled with the proposed clustering, the RF classifier achieves a high classification accuracy of 99.69% with 10-fold cross-validation.
    Keywords: clustering; classification; machine learning; remote sensing; soil fertility.

  • Research on mobile robot path planning and tracking control   Order a copy of this article
    by Jieyun Yu 
    Abstract: Autonomous navigation of a robot is a promising research domain owing to its extensive applications, in which planning and motion control are the most important and interesting parts. The proposed techniques fall into two main categories: the first focuses on improving model-free adaptive control (MFAC) to meet the extreme performance requirements of the control system, and the second concentrates on the classic artificial potential field (APF) algorithm to deal with limitations such as falling into local minima and the non-reachable goal problem. This paper proposes a novel exponential feedforward-feedback control strategy based on iterative learning control (ILC) and MFAC for reference trajectory tracking, and then introduces a virtual target with an exponential coordinated form to realise local collision avoidance for path planning. Compared with some traditional models, the proposed methods have a faster trajectory convergence rate, lower avoidance error and better safety performance. The simulation results verify that this work can bring meaningful insights to future intelligent navigation research.
    Keywords: trajectory tracking; path planning; model-free adaptive control; artificial potential field; exponential-form virtual target.

  • Texture-based superpixel segmentation algorithm for classification of hyperspectral images   Order a copy of this article
    by Subhashree Subudhi, Ramnarayan Patro, Pradyut Kumar Biswal 
    Abstract: To increase classification accuracy, a variety of feature extraction techniques have been presented. A preprocessing method called superpixel segmentation divides an image into meaningful sub-regions, which simplifies the image. This substantially reduces single-pixel misclassification. In this work, a texture-based superpixel segmentation technique is developed for the accurate classification of Hyperspectral Images (HSI). Initially, the local binary pattern and Gabor filters are employed to extract local and global image texture information. The extracted texture features are then provided as input to the Simple Linear Iterative Clustering (SLIC) algorithm for segmentation map generation. The final classification map is constructed by using a majority vote strategy between the superpixel segmentation map and the pixel-wise classification map. The proposed method was validated on standard HSI datasets. In terms of classification performance, it outperformed other state-of-the-art algorithms. Furthermore, the algorithm may be incorporated into the UAV's onboard camera to automatically classify HSI.
    Keywords: hyperspectral image classification; superpixel segmentation; SLIC; spatial-spectral feature extraction.
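An illustrative sketch (assuming a recent scikit-image, not the authors' full pipeline) of the core idea above: compute a texture map, stack it with intensity, and hand the multichannel result to SLIC so that the superpixels respect texture boundaries; Gabor features and the final majority-vote classification step are omitted.

```python
# Illustrative sketch of texture-aware superpixels: a local binary pattern map is stacked
# with intensity and fed to SLIC as a multichannel image (requires scikit-image >= 0.19).
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern
from skimage.segmentation import slic

gray = data.camera()                                     # stand-in for one HSI band (uint8)
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")

# Stack intensity and texture as a two-channel "image" for SLIC.
stack = np.dstack([gray / 255.0, lbp / lbp.max()])
segments = slic(stack, n_segments=200, compactness=0.1, channel_axis=-1)
print("number of superpixels:", len(np.unique(segments)))
```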

  • FedCluster: a global user profile generation method based on vertical federated clustering   Order a copy of this article
    by Zheng Huo, Ping He, Lisha Hu 
    Abstract: Federated learning can serve as a basis to solve the data island problem and data privacy leakage problem in distributed machine learning. This paper proposes a privacy-preserving algorithm referred to as FedCluster, to construct a global user profile via vertical federated clustering. The traditional k-medoids algorithm was then extended to the federated learning architecture to construct the user profiles on vertical segmented data. The main interaction parameter between the participants and the server was the distance matrix from each point to the k medoids. Differential privacy was adopted to protect the privacy of the participant data during the exchange of training parameters. We conducted experiments on a real-world dataset. The results revealed that the precision of FedCluster reached 81.87%. The runtime exhibited a linear increase with an increase in the dataset size and the number of participants, which indicates a high performance in terms of precision and effectiveness.
    Keywords: federated learning; footrule distance; k-medoids clustering; order-preserving encryption.

  • Value chain for smart grid data: a brief review   Order a copy of this article
    by Feng Chen, Huan Xu, Jigang Zhang, Guiyu Li 
    Abstract: Smart grids are now crucial infrastructures in many countries. They build two-way communication between customers and utility enterprises. Since power and energy are associated with human activities, smart grid data are extremely valuable, and these data are already being used in some areas. As a novel asset, the value of smart grid data needs quantitative measurement for evaluation and pricing. To achieve this, it is essential to analyse the overall process of value creation, which can help to calculate costs and discover potential applications. The process can be effectively revealed by building a data value chain for smart grid data, which illustrates the data flow and clarifies the data sources, analytics, utilisation and monetisation. This article provides a three-step data value chain for smart grid data and expounds on each step. It also reviews various methods and some challenges associated with smart grid data.
    Keywords: smart grid; data value chain; data collection; data analysis; data monetisation.

  • A search pattern based on the repeated motion vector components for the fast block matching motion estimation in temporal coding   Order a copy of this article
    by Awanish Mishra, Narendra Kohli 
    Abstract: Block-based motion estimation is routinely used to reduce temporal redundancy in video. However, a substantial reduction in the computational complexity of motion estimation remains an open problem. In this paper, a search pattern approach is proposed to estimate the motion of blocks efficiently. The proposed algorithm estimates the motion based on the maximum frequency of the magnitude and direction of the available motion vector components. Motion vector components with higher frequency have a greater probability of providing an early estimate of the matching block. In this iterative process, the search is terminated as soon as the matching block is found. To demonstrate the enhanced performance of the proposed approach, a comprehensive analysis is carried out, and the comparison shows that the new approach outperforms recent motion estimation approaches. The proposed approach improves the best-case complexity to a single search per block for dynamic blocks, and it improves the average-case complexity because of the early termination of the process.
    Keywords: motion estimation; block matching; search parameter; source frame; reference frame.

  • Pyramid hierarchical network for multispectral pan-sharpening   Order a copy of this article
    by Zenglu Li, Xiaoyu Guo, Songyang Xiang 
    Abstract: Pan-sharpening aims to fuse high spatial-resolution panchromatic images (PAN) and low spatial-resolution multispectral images (MS) into high spatial-resolution multispectral images (HRMS). We propose a pyramid hierarchical multi-spectral fusion network, called PH-Net, which can automatically fuse MS images and PAN images to generate corresponding HRMS images. The architecture is based on the U-Net network. First, a multi-level receptive field is realised by constructing an input pyramid. Then, hierarchical features are extracted from the encoder, decoder, and input pyramid. Finally, the rich hierarchical features are used to calculate the residual error between the MS image and the corresponding HRMS image. The learned residual error is inserted into the MS image to obtain the final high spatial-resolution multispectral image. To demonstrate the effectiveness of each component in the network architecture, we conducted an ablation study. In addition, thanks to the design of the multi-layer architecture, model training does not require a large dataset, which greatly improves the training speed and significantly improves the generalizability and ease of deployment of this work in the field of remote sensing images. Through qualitative and quantitative experiments, we proved that the proposed method is superior to current advanced methods.
    Keywords: pan-sharpening; image fusion; pyramid attention; multispectral image; deep learning.
    DOI: 10.1504/IJCSE.2022.10053377
     
  • Clustering ensemble by clustering selected weighted clusters   Order a copy of this article
    by Arko Banerjee, Suvendu Chandan Nayak, Chhabi Rani Panigrahi, Bibudhendu Pati 
    Abstract: Owing to the fact that no single clustering approach is capable of producing the optimal result for any given data, the notion of clustering ensembles has emerged, which attempts to extract a novel and robust consensus clustering from a given ensemble of base clusterings of the data. While forming the consensus, weights can be assigned to the base clusterings or their constituent clusters to prioritise those that accurately represent the underlying structure of the data. In this paper, we present a novel method of cluster selection from base clusterings and subsequently merging selected clusters into the desired number of clusters in order to build a high-quality consensus clustering without gaining access to the internal distribution of data points. The method has been shown to work well with a wide range of data and to be better than many well-known clustering methods.
    Keywords: clustering ensemble; weighted clustering; entropy; cluster selection.

  • Cell counting via attentive recognition network   Order a copy of this article
    by Xiangyu Guo, Jinyong Chen, Guisheng Zhang, Guofeng Zou, Qilei Li, Mingliang Gao 
    Abstract: Accurate cell counting in biomedical images is a fundamental yet challenging task for disease diagnosis. The early manual cell counting methods are mainly based on detection and regression, which are time-consuming and prone to errors. Benefitting from the advent of deep learning, convolutional neural network (CNN)-based cell counting has become the mainstream method. Despite the outstanding performance of CNN-based cell counting methods, the complex tissue background in medical images still hinders the accuracy of cell counting. In this paper, to solve the problem of complex tissue background and improve the performance of cell counting, an attentive recognition network (ARNet) is built. Specifically, the ARNet is composed of five convolution blocks and a channel attention (CA) module. The convolution blocks are employed to extract the basic features, and the CA module is introduced to suppress the complex background by recalibrating the weight of each channel to pay more attention to cells. Subjective and objective experiments on synthetic bacterial cells dataset and modified bone marrow dataset prove that the proposed ARNet outperforms the mainstream methods in accuracy and stability.
    Keywords: healthcare; cell counting; attention mechanism; convolutional neural network.
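The channel attention (CA) module described above follows the general squeeze-and-excitation pattern; the sketch below is a generic version of such a block in PyTorch, with layer sizes chosen for illustration rather than taken from ARNet.

```python
# A generic squeeze-and-excitation style channel attention block, as a hedged sketch of
# the CA module described in the abstract (the exact ARNet layer sizes are not given).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                 # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                      # recalibrate channels to emphasise cells

features = torch.randn(2, 64, 32, 32)                    # e.g. output of the convolution blocks
print(ChannelAttention(64)(features).shape)              # torch.Size([2, 64, 32, 32])
```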

  • Information fusion and emergency knowledge graph construction of urban rail transit   Order a copy of this article
    by Guangyu Zhu, Rongzheng Yang, Jiaxin Fan, Wei Yun, Bo Wu, Qi Wu 
    Abstract: The core of an intelligent emergency system for urban rail transit (URT) is a knowledge system for operation and emergency management. This paper proposes a construction model for a URT emergency knowledge graph combined with information fusion. Firstly, the scheme layer of the knowledge graph is designed in a top-down style, which defines the knowledge framework, entity types and entity relationships of the knowledge graph. Secondly, an entity extraction model based on adversarial training and BERT is proposed to extract knowledge from emergency record texts. The information fusion method is used to normalise the knowledge extracted from multi-source data and thus complete the construction of the data layer. Finally, the Neo4j graph database is used to store and manage the data, and the URT emergency knowledge graph is constructed. Experiments show that the proposed extraction model outperforms mainstream models in terms of F1 value, with improvements of 7.52%, 1.87% and 1.31%, respectively. In addition, the URT emergency knowledge graph built with this method can better fuse multi-source information and provide better basic support for the construction of a URT intelligent emergency system.
    Keywords: intelligent emergency; knowledge graph; knowledge extraction; information fusion; urban rail emergency knowledge graph.
    DOI: 10.1504/IJCSE.2022.10053044
     
  • A bibliometric analysis of the application of deep learning in economics, econometrics, and finance   Order a copy of this article
    by Arash Salehpour, Karim Samadzamini 
    Abstract: This research examined deep learning applications in economics, econometrics and finance. Two hundred and fifty articles indexed in the Scopus database and published between 2013 and 2022 were gathered using a bibliometric technique. The data were analysed using several programs (RStudio, Excel and Biblioshiny), and the most prominent scientific players were highlighted in terms of countries, organisations, publications, papers and authors. Our research found that the number of publications has increased since 2019. China and the United States contributed most to the literature. The most significant findings and discussions came from the following analyses: estimation of share prices, asset management price fluctuations and liquidity, forecasting of bankruptcies, evaluation of credit risk, risk assessment, commodity price trend analysis, citation analysis, thematic evolution and the thematic map. Our findings offer practical recommendations on how deep learning may be incorporated into decision-making processes by market participants, particularly those working in fintech and finance.
    Keywords: deep learning; bibliometrics; economics; econometrics; finance.

  • Self-supervised learning with split batch repetition strategy for long-tail recognition   Order a copy of this article
    by Zhangze Liao, Liyan Ma, Xiangfeng Luo, Shaorong Xie 
    Abstract: Deep neural networks cannot be applied well to balanced test sets when the training data follow a long-tailed distribution. Existing works improve model performance in long-tail recognition by changing the training strategy, expanding the data and optimising the model structure. However, they tend to use supervised approaches when training the model representations, which makes it difficult for the model to learn the features of the tail classes. In this paper, we use self-supervised representation learning (SSRL) to enhance the model's representations and design a three-branch network to merge SSRL with decoupled learning. Each branch adopts different learning goals to enable the model to learn balanced image features from the long-tailed data. In addition, we propose a split batch repetition strategy for long-tailed datasets to improve the model. Our experiments on the Imbalance CIFAR-10, Imbalance CIFAR-100 and ImageNet-LT datasets outperform existing similar methods. The ablation experiments prove that our method performs better on more imbalanced datasets. All experiments demonstrate the effectiveness of incorporating the self-supervised representation learning model and the split batch repetition strategy.
    Keywords: long-tail recognition; self-supervised learning; decoupled learning; image classification; deep learning; neural network; computer vision.

  • SLIC-SSA: an image segmentation method based on superpixel and sparrow search algorithm   Order a copy of this article
    by Hao Li, Hong Wen, Jia Li, Lijun Xiao 
    Abstract: Clustering algorithms are widely used in image segmentation owing to their universality. However, methods based on clustering algorithms are sensitive to noise and easily fall into local optima. To address these issues, we propose an image segmentation method (SLIC-SSA) based on the superpixel method and the sparrow search algorithm. Firstly, a presegmentation result is obtained by the superpixel method; owing to the use of local spatial information, the influence of noise can be reduced. Then, a clustering algorithm based on the sparrow search algorithm is performed on the superpixel image to complete the segmentation. To improve the quality of the results, a chaotic strategy is used to initialise the population, and a fitness function is proposed to ensure similarity within clusters and difference between clusters. Experiments on real images show that the proposed method obtains better results than comparative methods while reducing time consumption.
    Keywords: clustering; image segmentation; sparrow search; superpixel; swarm intelligence optimisation.
    DOI: 10.1504/IJCSE.2023.10053888
     
  • SMedia: social media data analysis for emergency detection and its type identification   Order a copy of this article
    by Sarmistha Nanda, Chhabi Rani Panigrahi, Bibudhendu Pati, Prasant Mohapatra 
    Abstract: Owing to the advancement of technology, social media can spread information very fast. People post information about themselves or about an event in the proximity of any emergency. However, proper analysis of social media data is necessary to address the challenges of emergency detection and emergency-type identification. Early identification, along with proper action, is essential to minimise the loss due to the occurrence of any type of emergency. In this work, the authors used keyword-based tweet data to detect emergencies. First, the emergency tweets were classified using the proposed HDLed model, and the accuracy obtained from the experimental study was 88%, which was higher than that of existing algorithms such as the convolutional neural network (CNN), bidirectional long short-term memory (Bi-LSTM), long short-term memory (LSTM) and gated recurrent unit (GRU). Next, the type of emergency was detected using baseline multiclass classifiers such as naïve Bayes.
    Keywords: social media; text classification; CNN; bidirectional LSTM; emergency.

  • Integration of statistical parameters based color-texture descriptors for radar remote sensing image retrieval applications
    by Naushad Varish, Sambidi Rohan Reddy, Nadimpalli Gautham Sashi Varma, Priyanka Singh 
    Abstract: In this paper, a novel image retrieval method based on color-texture contents for radar remote sensing applications is proposed, where global properties-based color contents are extracted from different numbers of groups of histograms of the color image planes, and local properties-based texture contents are derived from the block-level GLCM of an image plane. The integration of color and texture contents yields a low-dimensional feature that reduces the overall computational overhead and increases the retrieval speed. To give importance to the feature components, suitable weights are applied to both the color and texture contents. The obtained feature information describes the radar image effectively, and the similarity measure plays a significant role in achieving better performance. This work compares eight similarity metrics to select the best one for the retrieval process. To validate the suggested method, experiments on two image datasets are performed, and decent retrieval results are attained with the rich color-texture contents.
    Keywords: remote sensing image retrieval; statistical parameters; gray-level co-occurrence matrix; feature descriptors; min-max; similarity measures.
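A hedged sketch of the descriptor construction outlined above: block-level GLCM texture properties concatenated with per-channel colour histograms. The block size, grey-level quantisation, distances, angles and histogram bins are illustrative choices, not the paper's settings.

```python
# Hedged sketch of an integrated colour-texture descriptor: per-channel colour histograms
# plus GLCM texture properties computed on one image block.
import numpy as np
from skimage import data
from skimage.feature import graycomatrix, graycoprops

img = data.astronaut()                                   # stand-in for a remote sensing image
gray = (img.mean(axis=2) / 4).astype(np.uint8)           # quantise to 64 grey levels

block = gray[:64, :64]                                   # one image block
glcm = graycomatrix(block, distances=[1], angles=[0, np.pi / 2], levels=64,
                    symmetric=True, normed=True)
texture = np.hstack([graycoprops(glcm, prop).ravel()
                     for prop in ("contrast", "homogeneity", "energy", "correlation")])

colour = np.hstack([np.histogram(img[..., c], bins=8, range=(0, 255))[0] for c in range(3)])
feature_vector = np.hstack([colour / colour.sum(), texture])   # integrated colour-texture descriptor
print(feature_vector.shape)
```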

Special Issue on: CCPI'20 Smart Cloud Applications, Services and Technologies

  • A big data and cloud computing model architecture for a multi-class travel demand estimation through traffic measures: a real case application in Italy   Order a copy of this article
    by Armando Cartenì, Ilaria Henke, Assunta Errico, Marida Di Bartolomeo 
    Abstract: Big data and cloud computing are an extraordinary opportunity to implement multipurpose smart applications for the management and control of transport systems. The aim of this paper is to propose a big data and cloud computing model architecture for multi-class origin-destination demand estimation based on the application of a bi-level transport algorithm using traffic counts on a congested network, and also to propose sustainable policies at the urban scale. The proposed methodology has been applied to a real case study of travel demand estimation within the city of Naples (Italy), also aiming to verify the effectiveness of a sustainable policy in terms of reducing traffic congestion by about 20% through en-route travel information. The obtained results, although preliminary, suggest the usefulness of the proposed methodology in terms of its ability to estimate traffic demand in real time or over pre-fixed time periods.
    Keywords: cloud computing; big data; virtualisation; smart city; internet of things; transportation planning; demand estimation; sustainable mobility; simulation model.

  • A methodology for introducing an energy-efficient component within the rail infrastructure access charges in Italy   Order a copy of this article
    by Marilisa Botte, Ilaria Tufano, Luca D'Acierno 
    Abstract: After the separation of rail infrastructure managers from rail service operators within the European Union in 1991, the necessity arose of defining an access charge framework to ensure non-discriminatory access to the rail market. Basically, it has to guarantee an economic balance for infrastructure manager accounts. Currently, in the Italian context, access charge schemes neglect the actual energy consumption of rail operators and the related costs of energy traction for infrastructure managers. Therefore, we propose a methodology, integrating cloud-based tasks and simulation tools, for including such an aspect within the infrastructure toll, thus making the system more sustainable. Finally, to show the feasibility of the proposed approach, it has been applied to a real Italian rail context, i.e. the Rome-Naples high-speed railway line. The results show that customising the toll access charges by considering the power supply required may generate a virtuous loop with an increase in the energy efficiency of rail systems.
    Keywords: cloud-based applications; rail infrastructure access charges; environmental component; energy-saving policies.

  • Edge analytics on resource-constrained devices   Order a copy of this article
    by Sean Savitz, Charith Perera, Omer Rana 
    Abstract: Video and image cameras have become an important type of sensor within the Internet of Things (IoT) sensing ecosystem. Camera sensors can measure our environment at high precision, providing the basis for detecting more complex phenomena in comparison with other sensors e.g. temperature or humidity. This comes at a high computational cost on the CPU, memory and storage resources, and requires consideration of various deployment constraints, such as lighting and height of camera placement. Using benchmarks, this work evaluates object classification on resource-constrained devices, focusing on video feeds from IoT cameras. The models that have been used in this research include MobileNetV1, MobileNetV2 and Faster R-CNN, which can be combined with regression models for precise object localisation. We compare the models by using their accuracy for classifying objects and the demand they impose on the computational resources of a Raspberry Pi.
    Keywords: internet of things; edge computing; edge analytics; resource-constrained devices; camera sensing; deep learning; object detection.

  • Traffic control strategies based on internet of vehicles architectures for smart traffic management: centralised vs decentralised approach   Order a copy of this article
    by Houda Oulha, Roberta Di Pace, Rachid Ouafi, Stefano De Luca 
    Abstract: In order to reduce traffic congestion, real-time traffic control is one of the most widely adopted strategies. However, the effectiveness of this approach is constrained not only by the adopted framework but also by data. Indeed, the computational complexity may significantly affect this kind of application, thus the trade-off between the effectiveness and the efficiency must be analysed. In this context, the most appropriate traffic control strategy to be adopted must be accurately evaluated. In general, there are three main control approaches in the literature: centralised control, decentralised control and distributed control, which is an intermediate approach. In this paper, the effectiveness of a centralised and a decentralised approach is compared and applied to two network layouts. The results, evaluated not only in terms of performance index with reference to the network total delay but also in terms of emissions and fuel consumption, highlight that the considered centralised approach outperforms the adopted decentralised one and this is particularly evident in the case of more complex layouts.
    Keywords: cloud computing; internet of vehicles; transportation; centralised control; decentralised control; emissions; fuel consumption.

  • ACSmI: a solution to address the challenges of cloud services federation and monitoring towards the cloud continuum   Order a copy of this article
    by Juncal Alonso, Maider Huarte, Leire Orue-Echevarria 
    Abstract: The evolution of cloud computing has changed the way in which cloud service providers offer their services and how cloud customers consume them, moving towards the usage of multiple cloud services, in what is called multi-cloud. Multi-cloud is gaining interest with the expansion of IoT, edge computing and the cloud continuum, where distributed cloud federation models are necessary for effective application deployment and operation. This work presents ACSmI (Advanced Cloud Service Meta-Intermediator), a solution that implements a cloud federation supporting the seamless brokerage of cloud services. Technical details addressing the identified shortcomings are presented, including a proof of concept built on JHipster, Java, InfluxDB, Telegraf and Grafana. ACSmI contributes to relevant elements of the European Gaia-X initiative, specifically the federated catalogue, continuous monitoring and certification of services. The experiments show that the proposed solution effectively saves up to 75% of the DevOps team's effort to discover, contract and monitor cloud services.
    Keywords: cloud service broker; cloud services federation; cloud services brokerage; cloud services intermediation; hybrid cloud; cloud service monitoring; multi-cloud; DevOps; cloud service level agreement; cloud service discovery; multi-cloud service management; cloud continuum.
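    The monitoring part of the stack can be illustrated with a small sketch that queries metrics collected by Telegraf into InfluxDB and checks them against a service-level threshold; the bucket, measurement and service names, the token and the 95% threshold are assumptions and do not reflect ACSmI's actual schema.

      # Sketch: checking a monitored cloud service metric, stored by Telegraf in
      # InfluxDB, against an assumed SLA threshold. All identifiers are placeholders.
      from influxdb_client import InfluxDBClient

      client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="acsmi")  # placeholders
      query = '''
      from(bucket: "service_metrics")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "availability" and r.service == "provider-a-vm")
        |> mean()
      '''
      tables = client.query_api().query(query)

      SLA_THRESHOLD = 95.0  # assumed availability SLA, in percent
      for table in tables:
          for record in table.records:
              value = record.get_value()
              status = "OK" if value >= SLA_THRESHOLD else "SLA breach"
              print(f"availability over last hour: {value:.2f}% -> {status}")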

  • User perception and economic analysis of an e-mobility service: development of an electric bus service in Naples, Italy   Order a copy of this article
    by Ilaria Henke, Assunta Errico, Luigi Di Francesco 
    Abstract: Among sustainable mobility policies, electric mobility seems to be one of the best choices for reaching sustainable goals, but it has limits that could be partially overcome in local public transport. This research presents a methodology to design a new sustainable public transport service that meets users' needs while analysing its economic feasibility. The methodology is then applied to a real case study: renewing an 'old' bus fleet with an electric one charged by a photovoltaic system in the city of Naples (Southern Italy). Its effects on users' mobility choices were assessed through a mobility survey, the bus line and the photovoltaic system were designed, and the economic feasibility of the project was assessed through a cost-benefit analysis (see the illustrative sketch below). This research is placed in the field of smart mobility and new technologies, which increasingly need to store, manage and process the large amounts of data typical of cloud computing and big data applications.
    Keywords: e-mobility; electric bus services; cloud computing; user perception; economic analysis; cost-benefit analysis; photovoltaic system; sustainable mobility policies; sustainable goals; new technologies; local emissions; environmental impacts.
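    The cost-benefit step can be illustrated with a minimal sketch computing a net present value and a benefit/cost ratio; the discount rate, investment and yearly flows are placeholders rather than figures from the study.

      # Minimal cost-benefit sketch for a bus-fleet renewal project.
      DISCOUNT_RATE = 0.03               # assumed social discount rate
      investment = 5_000_000             # year-0 cost (EUR), placeholder
      annual_benefits = [600_000] * 15   # yearly benefits over an assumed 15-year horizon
      annual_costs = [150_000] * 15      # yearly operating costs, placeholder

      def present_value(flows, rate):
          """Discount a list of yearly flows (years 1..n) back to year 0."""
          return sum(f / (1 + rate) ** (t + 1) for t, f in enumerate(flows))

      pv_benefits = present_value(annual_benefits, DISCOUNT_RATE)
      pv_costs = investment + present_value(annual_costs, DISCOUNT_RATE)

      print(f"NPV:                {pv_benefits - pv_costs:,.0f} EUR")
      print(f"benefit/cost ratio: {pv_benefits / pv_costs:.2f}")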

Special Issue on: ICNC-FSKD 2021 Cutting-edge High-Performance Computing and Artificial Intelligence Technologies for Medical E-Diagnosis

  • Research on tracking of moving objects based on depth feature detection   Order a copy of this article
    by Guocai Zuo, Xiaoli Zhang, Jing Zheng 
    Abstract: Under complex conditions such as illumination change, target rotation and background clutter, tracking drift or tracking failure may occur. Convolutional neural networks (CNNs) can achieve robust target tracking in such complex scenes. Therefore, this paper proposes CNNT, a target tracking algorithm based on a convolutional neural network. A CNN deep learning model is used to extract deep features from the sample to complete the target detection task, and the kernel correlation filter (KCF) tracking algorithm is then used to complete the target tracking (see the illustrative sketch below). We train the Visual Geometry Group (VGG) deep learning model on massive image data, extract the deep features of tracking targets through the trained VGG model, and use these features for target detection. Experimental results show that, compared with other algorithms such as KCF, the CNNT algorithm achieves more robust target tracking in complex scenes involving illumination change, target rotation and background clutter.
    Keywords: target tracking; deep learning; convolutional neural network.
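    The feature-extraction step described above can be sketched with a pretrained VGG16 backbone from torchvision standing in for the authors' trained VGG model; the chosen layer cut-off and preprocessing are illustrative assumptions, and the resulting feature map would be handed to the KCF stage.

      import torch
      import torchvision.models as models
      import torchvision.transforms as T

      # Pretrained VGG16 backbone as a stand-in for the paper's trained VGG model.
      vgg_features = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

      preprocess = T.Compose([
          T.ToTensor(),  # HxWx3 uint8 array (or PIL image) -> 3xHxW float tensor in [0, 1]
          T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
      ])

      def deep_features(patch_rgb):
          """Deep feature map for an image patch around the target, to be fed to the KCF stage."""
          x = preprocess(patch_rgb).unsqueeze(0)   # add batch dimension: 1x3xHxW
          with torch.no_grad():
              fmap = vgg_features[:16](x)          # up to conv3_3 + ReLU, an illustrative cut-off
          return fmap.squeeze(0)                   # C x H' x W'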

  • FCAODNet: a fast freight train image detection model based on embedded FCA   Order a copy of this article
    by Longxin Zhang, Peng Zhou, Miao Wang, Chengkang Weng, Xiaojun Deng 
    Abstract: Fault detection in freight train images suffers from problems such as low detection accuracy and slow detection speed. To address the slow detection speed of train image fault detection, a lightweight object detection model, the fast channel attention network (FCAODNet), is proposed in this study. FCAODNet consists of four modules: a feature extraction network (FEN), a lightweight multiscale feature fusion (LMFF) module, a prediction across scales (PAS) module and a decoding module (see the structural sketch below). The FEN extracts image features, the LMFF module fuses them, the PAS module predicts the location of the target object, and the decoding module produces the final prediction. The FEN of FCAODNet adopts CSPDarknet53-tiny, and the designed LMFF module embeds two FCA modules to improve detection accuracy. Experiments on train datasets and public datasets show that FCAODNet outperforms other state-of-the-art models in detection speed while retaining good detection accuracy and robustness.
    Keywords: attention mechanism; fault detection; freight train; object detection.
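    The four-module pipeline can be sketched structurally as follows; only the data flow (FEN, LMFF, PAS, decoding) follows the abstract, while the layer contents are placeholders and not the actual FCAODNet architecture.

      import torch
      import torch.nn as nn

      class FCAODNetSketch(nn.Module):
          """Structural stand-in: FEN -> LMFF -> PAS -> decoding."""
          def __init__(self, num_classes=4):
              super().__init__()
              # Stand-in for the CSPDarknet53-tiny feature extraction network (FEN).
              self.fen = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
              # Stand-in for the lightweight multiscale feature fusion (LMFF) with FCA modules.
              self.lmff = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
              # Prediction across scales (PAS): per-cell class scores plus box terms.
              self.pas = nn.Conv2d(64, num_classes + 5, 1)

          def forward(self, x):
              raw = self.pas(self.lmff(self.fen(x)))
              return self.decode(raw)

          def decode(self, raw):
              # Decoding module: would turn raw grid predictions into boxes and scores.
              return raw

      model = FCAODNetSketch()
      print(model(torch.randn(1, 3, 416, 416)).shape)   # torch.Size([1, 9, 208, 208])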

  • Novel freight train image fault detection and classification models based on CNN   Order a copy of this article
    by Longxin Zhang, Yang Hu, Tianyu Chen, Hong Wen, Peng Zhou, Wenliang Zeng 
    Abstract: Freight train detection systems (TFDS) are deployed at railway stations to detect the status of freight train components, but TFDS is only used to collect, transmit and store images of those components. To detect faults in images of typical freight train components, namely the cut-out cock, top rod, locking plate and angle cock, a multiclass freight train (MFT) fault recognition model is proposed in this study. First, an object detection model is designed to reduce the dependence on colour and texture, and a bounding-box regression method is used to select candidate boxes. Second, a fault classification model is proposed to classify the images segmented by object detection and to screen out the images identified as the same fault by the object detection model. The test image dataset of our experiment comes from the China Railway Guangzhou Group Co., Ltd. Experimental results show that the mean accuracy rate (mAR) of the MFT model for typical faults reaches 92.55% (see the illustrative sketch below), which is 9.83% higher than that of the traditional machine learning method, and 5.37% and 3.83% higher than those of the faster region-based convolutional neural network (Faster R-CNN) and Mask R-CNN, respectively; the model also has good anti-interference ability against image rotation and noise. In addition, the mAR of MFT on the Modified National Institute of Standards and Technology (MNIST) public dataset reaches 94.60%, again showing good recognition performance.
    Keywords: convolutional neural network; deep learning; fault detection; freight train fault; image classification.
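    A minimal sketch of how a per-class recognition accuracy and its mean (mAR) over the four component classes might be computed is shown below; the counts are placeholders, not data from the paper, and the paper's exact mAR definition may differ.

      # Per-class recognition accuracy and its mean over the four component classes.
      results = {                      # class -> (correctly recognised, total test images), placeholders
          "cut-out cock":  (181, 195),
          "top rod":       (172, 188),
          "locking plate": (167, 180),
          "angle cock":    (175, 190),
      }

      per_class = {c: hit / total for c, (hit, total) in results.items()}
      mAR = sum(per_class.values()) / len(per_class)

      for c, acc in per_class.items():
          print(f"{c:<14} {acc:.2%}")
      print(f"mAR            {mAR:.2%}")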