Forthcoming and Online First Articles

International Journal of Computational Science and Engineering (IJCSE)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Open Access: Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Computational Science and Engineering (88 papers in press)

Regular Issues

  • Application of virtual numerical control technology in cam processing   Order a copy of this article
    by Linjun Li 
    Abstract: Numerical control (NC) machining is an important processing method in the machinery manufacturing industry. In most cases, as the final processing procedure, NC machining directly determines the quality of the finished product. Because cams are key components in many sectors, such as the automotive, internal combustion engine and defence industries, the precision and efficiency of cam processing have a direct impact on the quality, service life and energy-saving standard of engines and related products. This paper takes cam NC grinding as the research object, with the optimisation and intelligent automation of the machining process as the goal, and uses virtual NC technology to develop a process intelligent optimisation and NC machining software platform specifically for cam NC grinding. The software platform contains basic technology libraries such as a machine tool library, grinding wheel library, material library, coolant reservoir library and process accessory library, as well as process intelligence libraries such as a process example library, meta-knowledge rule library and forecast model library. With the support of these databases, the platform can realise intelligent optimisation of the cam grinding process plan and automatic NC machining programming. Because the platform involves many research topics, this paper focuses mainly on modelling the motion of the NC machining process system, the architecture of the intelligent platform software for cam NC machining, and virtual NC machining simulation of the process system. The study is therefore of practical significance for cam grinding.
    Keywords: cam grinding; numerical control grinding; intelligent platform software; process problems; virtual grinding.

  • Research on oil painting effect based on image edge numerical analysis   Order a copy of this article
    by Yansong Zhang 
    Abstract: With the continuous development of non-photorealistic rendering technology, oil-painting effects are increasingly applied to images. The traditional oil-painting effect is no longer sufficient to meet users' needs. Therefore, this paper presents research on oil-painting effects based on image edge numerical analysis, and constructs a corresponding algorithm for image edge numerical analysis and detection. Comparison experiments with a traditional oil-painting algorithm show that the proposed algorithm can accurately analyse and detect image edges, and that its final rendering effect is more natural and smoother than that of the traditional algorithm.
    Keywords: image; edge value; analysis; oil painting effect.

  • Research on multimedia and interactive teaching model of college English   Order a copy of this article
    by Zhang Juan 
    Abstract: Since current higher education focuses on cultivating comprehensive practical ability rather than simply inculcating theoretical ideas, college English teaching should be reconsidered in terms of teaching purpose, teaching content and teaching strategy. A multiple interactive English teaching model is constructed to improve the use of information in the teaching process. Spatial reconstruction is used to extract and retrieve information from multiple teaching resources, the allocation of resources is optimised and controlled under load-balancing conditions, and a data-mining model of college English teaching resources in an information technology environment is constructed. The results of this information processing are used to maximise the enthusiasm and creativity of teachers and students, to continue the development of multimedia network resources and to create a multiple interactive teaching environment, thereby providing a platform for students.
    Keywords: information technology environment; college English; multiple interactive teaching mode.

  • Design and application of system platform in piano teaching based on feature comparison   Order a copy of this article
    by Tingting Rao 
    Abstract: Traditional piano teaching is managed mainly by hand, but it suffers from low management efficiency, management confusion and other problems that seriously restrict the development of piano teaching activities. In order to make up for the limitations of piano music teaching materials and the shortage of music teachers in some areas, computer-based automatic scoring is introduced into music learning, and a piano music singing scoring system based on feature comparison is developed. The difference between this scoring system and existing commercial music scoring systems on the internet lies in its educational orientation, which is mainly reflected in the design and implementation of the feedback evaluation module. The system uses melody feature extraction, similarity comparison and pitch data analysis to perform automatic singing scoring, locate errors, estimate their causes and give learners detailed feedback and guidance. An application case study shows that the system has practical value.
    Keywords: piano music teaching material; similarity comparison; learning feedback.

  • Recommendation service for hotel applications on blockchain   Order a copy of this article
    by Meng-Yen Hsieh, Pei-Wei Wang, Chih-Hong Kao 
    Abstract: Recommendation mechanisms that process users' data are available on cloud computing platforms to enhance the modelling of user preferences. Using recommendation APIs available in cloud computing, our work focuses on developing hotel or lodging web applications with a trust-based recommendation service. The recommendation service, which incorporates trust relationships among users, is further refined to reduce the problems of cold-start users and rating-data sparsity. Additionally, a blockchain service supports an online room-booking service. We propose an architecture for hotel or lodging applications composed of the required modules. A prototype is built from the proposed modules on a cluster platform and a blockchain net. The experimental results show that the trust-based recommender of the prototype achieves better accuracy than general recommenders that use only explicit rating data. A smart contract for the online room-booking service is implemented, executed and evaluated on a blockchain test net.
    Keywords: recommendation; booking; trust; blockchain.

  • Service recommendation through graph attention network in heterogeneous information networks   Order a copy of this article
    by Fenfang Xie, Yangjun Xu, Angyu Zheng, Liang Chen, Zibin Zheng 
    Abstract: Recommending suitable services to users autonomously has become the key to solving the problem of service information overload. Existing recommendation algorithms have some limitations, either discarding the side information of the node, or ignoring the information of the intermediate node, or omitting the feature information of the neighbour nodes, or not modelling the pairwise attentive interaction between users and services. To solve the above-mentioned limitations, this paper proposes a service recommendation approach by leveraging the graph attention network (GAT) and co-attention mechanism in heterogeneous information networks (HINs). Specifically, different types of meta-path are first constructed, and a feature expression is learned for each node in HINs. Then, the feature information of mashups/services is aggregated by the co-attention mechanism. Finally, the multi-layer perceptron (MLP) is applied to recommend suitable services for users. Experiments on a real-world dataset illustrate that the proposed method outperforms other state-of-the-art comparison methods.
    Keywords: service recommendation; graph attention network; co-attention mechanism; heterogeneous information network.

  • Constrained-based power management algorithm for green cloud computing   Order a copy of this article
    by Sanjib Kumar Nayak, Sanjaya Kumar Panda, Satyabrata Das 
    Abstract: In green cloud computing (GCC), power management provides many advantages, such as reducing costs, protecting the environment and improving system efficiency. It is adopted in various facilities, such as datacentres, which are backed by non-renewable energy (NRE) sources. These sources are not only costly, but also have a drastic impact on the environment. This paper introduces a constrained-based power management algorithm that considers four NRE and renewable energy (RE) power supplies of the datacentres, namely grid, photovoltaics (PV), wind and battery, to fulfil the cumulative load power demand of submitted user requests (URs). The URs are fulfilled in the order of PV, wind and battery, and finally grid. The simulation is carried out with NRE sources, RE sources and both, over ten instances. The simulation results are compared using overall cost, URs assigned to NRE and URs assigned to RE to show the performance of the proposed algorithm in the three scenarios.
    Keywords: green cloud computing; power management; non-renewable energy; renewable energy; fossil fuel; solar energy; wind energy; load balancing.
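    The allocation order described in this abstract (PV, then wind, then battery, with the grid covering any shortfall) can be illustrated with a minimal Python sketch; the function name, source names and capacities below are hypothetical and are not taken from the paper.

      # Greedy sketch of the PV -> wind -> battery -> grid fulfilment order.
      def allocate_load(demand_kw, pv_kw, wind_kw, battery_kw):
          """Split a load demand across sources in the stated priority order."""
          allocation = {}
          remaining = demand_kw
          for name, capacity in (("pv", pv_kw), ("wind", wind_kw), ("battery", battery_kw)):
              used = min(remaining, capacity)
              allocation[name] = used
              remaining -= used
          allocation["grid"] = remaining  # the grid is assumed unconstrained
          return allocation

      if __name__ == "__main__":
          print(allocate_load(demand_kw=120.0, pv_kw=50.0, wind_kw=30.0, battery_kw=20.0))
          # -> {'pv': 50.0, 'wind': 30.0, 'battery': 20.0, 'grid': 20.0}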

  • KH-FC: krill herd-based fractional calculus algorithm for text document clustering using MapReduce structure   Order a copy of this article
    by Priyanka Shivaprasad More, Dr. Baljit Singh Saini 
    Abstract: In this paper, krill herd-based fractional calculus (KH-FC) using the MapReduce framework is proposed for effective text document clustering. Stop word removal and stemming are applied in the pre-processing step, which removes redundant information and reduces the size of the data, further enhancing clustering accuracy. Furthermore, term frequency (TF) and inverse document frequency (IDF) are employed for extracting significant features. Finally, the developed KH-FC model is used for clustering the text documents. The KH-FC algorithm is constructed by integrating the FC concept into the KH technique. In this method, pre-processing and feature extraction are performed in the mapper phase, whereas the clustering process is executed in the reducer phase. The performance of the developed approach is evaluated using metrics such as accuracy, Jaccard coefficient and F-measure, on which the KH-FC approach obtains 0.983, 0.936 and 0.967, respectively.
    Keywords: text document clustering; fractional calculus; krill herd algorithm; term frequency–inverse document frequency; Jaccard similarity.

  • Intelligent recommendation of personalised tourist routes based on improved discrete particle swarm   Order a copy of this article
    by Jie Luo, Xilian Duan 
    Abstract: In order to overcome the problems of low accuracy and long processing time in traditional personalised travel route recommendation methods, this paper proposes an intelligent recommendation method for personalised tourist routes based on an improved discrete particle swarm algorithm. This method analyses the key problems of tourism recommendation according to personalised tourism characteristics, collects information on tourists' interests, and establishes a model of tourist interest. On this basis, the discrete particle swarm optimisation algorithm is improved and used to select personalised travel routes, and the selection results are recommended to tourists, so as to realise intelligent recommendation of personalised travel routes. The experimental results show that the recommendation accuracy of this method is between 82.5% and 96.9%, and the recommendation time is always less than 0.5 s, enabling accurate and rapid recommendation of personalised tourist routes.
    Keywords: discrete particle swarm; personalised travel route; intelligent recommendation; passenger interest.

  • Web API service recommendation for mashup creation   Order a copy of this article
    by Gejing Xu, Sixian Lian, Mingdong Tang 
    Abstract: Mashup refers to a sort of web application developed by reusing or combining Web API services, which are very popular software components for building various applications. As the number of open Web APIs increases, however, finding suitable Web APIs for mashup creation becomes a challenging issue. To address this issue, a number of Web API service recommendation methods have been proposed. Content-based methods rely on the descriptions of the candidate services and the user's request to make recommendations. Collaborative filtering-based methods use the invocation records of a set of services generated by a set of users. There are also studies employing both the descriptions and invocation records of services. In this paper, we survey the state-of-the-art Web API service recommendation methods and discuss their characteristics and differences. We also present some possible future research directions.
    Keywords: web service; recommendation; collaborative filtering; mashup creation.

  • A novel dual-fusion algorithm of single image dehazing based on anisotropic diffusion and Gaussian filter   Order a copy of this article
    by Kaihan Xiao, Qingshan Tang, Si Liu, Sijie Li, Jiayi Huang, Tao Huang 
    Abstract: Dark channel prior (DCP) is a widely used method in single image dehazing technology. Here, we propose a novel dual-fusion algorithm of single image dehazing based on anisotropic diffusion and a Gaussian filter to suppress the halo effect and colour distortion of traditional DCP algorithms. Anisotropic diffusion is used for edge-preserving smoothing of images, and a Gaussian filter is used to smooth local white objects. A dual-fusion strategy is conducted to optimise the atmospheric veil. In addition, the fast explicit diffusion (FED) scheme is used to accelerate the numerical solution of the anisotropic diffusion and reduce time consumption. The subjective and objective evaluation of the experiment shows that the proposed algorithm can effectively suppress the halo effect and colour distortion, and has good dehazing performance on evaluation metrics. The proposed algorithm also reduces the time consumption by 54.2% compared with DCP with a guided filter. This study provides an effective solution for single image dehazing.
    Keywords: image dehazing; dark channel prior; anisotropic diffusion; fast explicit diffusion; image fusion.
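    As background for the dark channel prior that this entry builds on, the sketch below computes the standard dark channel and a simple atmospheric-light estimate with NumPy and SciPy; it is a generic textbook implementation, not the proposed dual-fusion algorithm, and the patch size and top fraction are assumed values.

      # Standard dark channel prior building blocks (He et al.); illustrative only.
      import numpy as np
      from scipy.ndimage import minimum_filter

      def dark_channel(image_rgb, patch_size=15):
          """image_rgb: HxWx3 float array in [0, 1]; returns the HxW dark channel."""
          per_pixel_min = image_rgb.min(axis=2)                  # min over colour channels
          return minimum_filter(per_pixel_min, size=patch_size)  # min over a local patch

      def estimate_atmospheric_light(image_rgb, dark, top_fraction=0.001):
          """Average the brightest dark-channel pixels to estimate the airlight."""
          flat_dark = dark.ravel()
          n_top = max(1, int(flat_dark.size * top_fraction))
          idx = np.argpartition(flat_dark, -n_top)[-n_top:]
          return image_rgb.reshape(-1, 3)[idx].mean(axis=0)

      if __name__ == "__main__":
          img = np.random.rand(64, 64, 3)
          dc = dark_channel(img)
          print(dc.shape, estimate_atmospheric_light(img, dc))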

  • Robust pedestrian detection using scale and illumination invariant mask R-CNN   Order a copy of this article
    by Ujwalla Gawande, Kamal Hajari, Yogesh Golhar 
    Abstract: In this paper, we address the challenging problem of detecting pedestrians under variations in scale and illumination. Pedestrians appearing with such variations exhibit diverse features, which strongly affects the performance of recent pedestrian detection methods. We propose a new robust approach, a scale- and illumination-invariant Mask R-CNN (SII Mask R-CNN) framework, to overcome these challenges. The first phase of the proposed framework handles illumination variation by performing a logarithmic transformation and adaptive illumination enhancement. In addition, the non-subsampled contourlet transform is used to decompose the image into multi-resolution components. Finally, we convolve the image with multi-scale masks to find corresponding points that are illumination- and scale-invariant. Extensive evaluations on pedestrian benchmark databases illustrate the effectiveness and robustness of the proposed framework. The experimental results show notable performance improvements in pedestrian detection compared with state-of-the-art approaches.
    Keywords: deep learning; pedestrian detection; computer vision; neural network; CNN.
    DOI: 10.1504/IJCSE.2022.10045688
     
  • Spam email classification and sentiment analysis based on semantic similarity methods   Order a copy of this article
    by Ulligaddala Srinivasarao, Aakanksha Sharaff 
    Abstract: Electronic mail is widely used for communication, and spam filters are required to save storage and protect against security issues. Various techniques based on NLP methods are used to increase spam detection efficiency. Existing approaches cannot handle unbalanced classes and suffer lower efficiency owing to irrelevant feature extraction. In this research, sentiment analysis-based semantic feature extraction (FE) and hybrid feature selection (FS) techniques are used to increase the efficiency of spam and non-spam detection in email. The sentiment analysis measures the polarity of the input text and is used for email spam classification. Different semantic similarity feature extraction methods are used, together with TF-IDF, information gain and the Gini index. The proposed semantic similarity and hybrid FS approaches are evaluated with various classifiers. The experimental analysis shows that the combination of Gini index FS, word2vec FE and an SVM classifier achieves the highest accuracy of 95.17%, while random forest with the Gini index and word2vec achieves 93.3% accuracy in email spam detection.
    Keywords: artificial neural network; hybrid feature selection; semantic similarity; SVM; TF-IDF.
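    A minimal TF-IDF plus SVM spam baseline, one of the feature/classifier combinations named in the abstract, can be sketched with scikit-learn as below; the tiny inline dataset is purely illustrative, and the semantic-similarity and hybrid feature selection steps of the paper are not reproduced.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      emails = [
          "Win a free prize now, click here",
          "Meeting rescheduled to Monday morning",
          "Cheap loans available, limited offer",
          "Please find the project report attached",
      ]
      labels = [1, 0, 1, 0]  # 1 = spam, 0 = non-spam

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
      model.fit(emails, labels)
      print(model.predict(["Claim your free prize today"]))  # expected: [1]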

  • An efficient algorithm for maximum cliques problem on IoT devices   Order a copy of this article
    by Bouneb Zine El Abidine 
    Abstract: This work describes how a maximum clique problem (MCP) algorithm can be run on microcontrollers in a dynamic environment. In practice, many problems can be formalised using the MCP and graphs, and our problem is considered in the context of a dynamic environment. The MCP is, however, a tricky NP-complete problem, for which suitable solutions must be designed for microcontrollers. Microcontrollers are built for specific purposes and optimised to meet various constraints, such as timing, limited nested recursion depth or no recursion at all owing to recursion stack limitations, power, and RAM limitations. On the other hand, the graph representations and the algorithms reported in the literature to solve the MCP are recursive, consume memory and are designed for computers rather than microcontrollers.
    Keywords: MCE; MCP; microcontrollers; IoT; agent coalition; symbolic computation; n-queens completion problem; MicroPython.

  • A new resource-sharing protocol in the light of a token-based strategy for distributed systems   Order a copy of this article
    by Ashish Singh Parihar, Swarnendu Chakraborty 
    Abstract: One of the most highly researched areas in distributed systems is mutual exclusion. To avoid any inconsistent state of a system, processes executing on different processors are not allowed to invoke their critical sections simultaneously for the purpose of resource sharing. As a solution to such resource allocation issues, the token-based strategy for distributed mutual exclusion algorithms is one of the most popular and significant classes of solution in this field. In this article, we propose a novel token-based distributed mutual exclusion algorithm. The proposed solution is scalable and achieves better message complexity than existing solutions. In the proposed method, the number of messages exchanged per critical section invocation is 3(⌈log2 N⌉ - 1), 3⌊(⌈log2(N+1)⌉ - 1)/2⌋ and 6[⌈log2(N+1)⌉ + 2(2^(-⌈log2(N+1)⌉) - 1)] under light, medium and high load, respectively.
    Keywords: distributed system; mutual exclusion; critical section; token-based; resource allocation.
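    The message counts quoted above can be tabulated directly; the short script below evaluates the three expressions for a few system sizes (the bracket placement follows the reconstruction in the abstract and should be treated as indicative only).

      from math import ceil, floor, log2

      def light_load(n):
          return 3 * (ceil(log2(n)) - 1)

      def medium_load(n):
          return 3 * floor((ceil(log2(n + 1)) - 1) / 2)

      def high_load(n):
          return 6 * (ceil(log2(n + 1)) + 2 * (2 ** (-ceil(log2(n + 1))) - 1))

      for n in (15, 63, 255):
          print(n, light_load(n), medium_load(n), round(high_load(n), 2))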

  • Low-loss data compression using deep learning framework with attention-based autoencoder   Order a copy of this article
    by S. Sriram, P. Chitra, V.V. Sankar, S. Abirami, S.j. Rethina Durai 
    Abstract: With the rapid development of media, the representation and compression of data play a vital role in efficient data storage and transmission. Deep learning can support data compression by offering technical avenues that overcome the challenges faced by the traditional Windows archiver. The proposed work is an experimental effort to employ deep learning for data compression to achieve a high compression rate with reduced loss. Initially, the work explored multilayer autoencoder models that achieved efficient data compression with a higher compression ratio than the traditional Windows archiver. However, the autoencoders suffered from reconstruction loss. Therefore, an attention mechanism added to the autoencoder is proposed for reducing the reconstruction loss. The objective of the attention mechanism is to reduce the difference between the latent representation of an input at the encoder and its corresponding representation in the decoder, along with the difference between the original input and its reconstructed output. This attention layer, introduced into a multilayer autoencoder, capably compresses the data with improved compression ratio and reduced reconstruction loss. The proposed attention-based autoencoder is extensively evaluated on atmospheric and oceanic data obtained from the Centre for Development of Advanced Computing. The validation shows that the proposed attention-based autoencoder can compress the data with around 89.7% improvement in compression rate compared with the traditional Windows archiver. The results also demonstrate that the proposed attention mechanism reduces the reconstruction loss by up to 25% more than the multilayer autoencoder.
    Keywords: deep learning; multilayer autoencoder; compression ratio; attention; reconstruction loss.
    DOI: 10.1504/IJCSE.2022.10050669
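    The multilayer autoencoder baseline that the paper augments with an attention term can be sketched in a few lines of TensorFlow/Keras; the layer sizes and input dimension below are assumptions, and the attention loss itself is not reproduced here.

      import numpy as np
      import tensorflow as tf

      INPUT_DIM, CODE_DIM = 256, 32  # assumed dimensions of a flattened data record

      inputs = tf.keras.Input(shape=(INPUT_DIM,))
      h = tf.keras.layers.Dense(128, activation="relu")(inputs)
      code = tf.keras.layers.Dense(CODE_DIM, activation="relu", name="latent")(h)  # compressed form
      h = tf.keras.layers.Dense(128, activation="relu")(code)
      outputs = tf.keras.layers.Dense(INPUT_DIM, activation="linear")(h)

      autoencoder = tf.keras.Model(inputs, outputs)
      autoencoder.compile(optimizer="adam", loss="mse")  # plain reconstruction loss

      data = np.random.rand(1000, INPUT_DIM).astype("float32")
      autoencoder.fit(data, data, epochs=2, batch_size=64, verbose=0)
      print("reconstruction MSE:", float(autoencoder.evaluate(data, data, verbose=0)))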
     
  • Cross-modal correlation feature extraction with orthogonality redundancy reduce and discriminant structure constraint   Order a copy of this article
    by Qianjin Zhao, Xinrui Ping, Shuzhi Su 
    Abstract: Canonical correlation analysis (CCA) is a classic feature extraction method that is widely used in the field of pattern recognition. Its goal is to learn correlation projection directions that maximise the correlation between two sets of variables, but it does not consider the class label information among samples or the within-modal redundancy of the correlation projection directions. To this end, this paper proposes a class-label-embedded orthogonal correlation feature extraction method. The method embeds label-guided discriminant structure information into correlation analysis to improve the discrimination of correlation features, and within-modal orthogonality constraints are added to further reduce the projection redundancy of the correlation features. Experiments on multiple image databases show that the method is an effective feature extraction approach and provides a new solution for pattern recognition.
    Keywords: feature extraction; correlation analysis theory; discriminative subspace learning; orthogonality redundancy reduce.
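    For readers unfamiliar with the classical CCA that this method extends, a baseline run with scikit-learn looks as follows; the label-embedding and orthogonality constraints proposed in the paper are not part of this sketch, and the synthetic data is illustrative.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 20))                    # modality 1 features
      Y = X[:, :10] + 0.1 * rng.normal(size=(100, 10))  # correlated modality 2 features

      cca = CCA(n_components=5)
      X_c, Y_c = cca.fit_transform(X, Y)
      # Correlation of the first pair of canonical variates
      print(np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1])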

  • Hurst exponent estimation using neural network   Order a copy of this article
    by Somenath Mukherjee, Bikash Sadhukhan, Arghya Kusum Das, Abhra Chaudhuri 
    Abstract: The Hurst exponent is used to identify the autocorrelation structure of a stochastic time series, which allows for detecting persistence in time series data. Traditional signal processing techniques work reasonably well in determining the Hurst exponent of a stochastic time series. However, a notable drawback of these methods is their speed of computation. On the other hand, neural networks have repeatedly proven their ability to learn very complex input-output mappings, even in very high dimensional vector spaces. Therefore, an endeavour has been undertaken to employ neural networks to determine the Hurst exponent of a stochastic time series. Unlike previous attempts to solve such problems using neural networks, the proposed architecture can be recognised as the universal estimator of the Hurst exponent for short-range and long-range dependent stochastic time series. Experiments demonstrate that, if sufficiently trained, a neural network can predict the Hurst exponent of any stochastic time series at least 15 times faster than standard signal processing approaches.
    Keywords: neural network; regression; Hurst exponent estimation; stochastic time series.
    DOI: 10.1504/IJCSE.2022.10045166
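    A classical rescaled-range (R/S) estimator, the kind of signal-processing baseline the neural network is benchmarked against, can be written in NumPy as below; this is a generic textbook implementation, not the authors' code.

      import numpy as np

      def hurst_rs(series, min_window=8):
          """Estimate the Hurst exponent by rescaled-range analysis."""
          series = np.asarray(series, dtype=float)
          n = len(series)
          window_sizes, rs_values = [], []
          w = min_window
          while w <= n // 2:
              rs = []
              for start in range(0, n - w + 1, w):
                  chunk = series[start:start + w]
                  z = np.cumsum(chunk - chunk.mean())  # mean-adjusted cumulative sum
                  r = z.max() - z.min()                # range
                  s = chunk.std()                      # standard deviation
                  if s > 0:
                      rs.append(r / s)
              if rs:
                  window_sizes.append(w)
                  rs_values.append(np.mean(rs))
              w *= 2
          slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
          return slope  # the Hurst exponent estimate

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          print(hurst_rs(rng.normal(size=4096)))  # close to 0.5 for white noise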
     
  • Enhancement of classification and prediction accuracy for breast cancer detection using fast convolution neural network with ensemble algorithm   Order a copy of this article
    by Naga Deepti Ponnaganti, Raju Anitha 
    Abstract: Breast cancer is a leading cancer found mostly in women across the world, and cases are more numerous in developing countries, where the disease is often not diagnosed in its early stages. Recent works have compared machine learning algorithms using various techniques, such as ensemble methods and classification. The requirement now is to develop a technique that minimises error and increases accuracy. This paper therefore proposes a neural network in which the classification and prediction of breast cancer are enhanced with maximum accuracy. The novel technique of a fast convolution neural network (FCNN) is used for enhancing classification, and an ensemble algorithm of gradient boosting and adaptive boosting is used for improving prediction accuracy. With this proposed technique, large datasets are processed for cancer detection, and the combined boosting algorithm reduces misclassification and improves binary classification. Training and testing of the dataset are carried out with the FCNN, where numerous datasets can be classified for earlier detection of cancer. The simulation results show improved accuracy, prediction class, precision and F1 score.
    Keywords: breast cancer; machine learning algorithms; classification; prediction accuracy; fast convolution neural network; gradient boosting and adaptive boosting.
    DOI: 10.1504/IJCSE.2022.10045291
     
  • Combining DNA sequences and chaotic maps to improve robustness of RGB image encryption   Order a copy of this article
    by Onkar Thorat, Ramchandra Mangrulkar 
    Abstract: In today's world, colour images are generated and stored by organisations for a variety of purposes. Standard encryption schemes, such as AES or DES, are not well suited to the encryption of multimedia data owing to pattern appearance and high computational cost. Many methods have been proposed for the encryption of greyscale images; however, only a few methods in the literature are designed specifically for encrypting colour images. This paper presents a new method, called the RGB Image Encryption Scheme (RGBIES), for encrypting colour images based on chaotic maps and DNA sequences. RGBIES has three major stages. Stages 1 and 3 propose a powerful scrambling algorithm based on a chaotic logistic map. The intermediate stage uses a chaotic Lorenz map, the keys of which are obtained using DNA sequences. Various visual and quantitative analyses demonstrate the resistance of the method against modern-day attacks.
    Keywords: image encryption; image security; chaotic maps; deoxyribonucleic acid sequences; security analysis; cryptography; diffusion.
    DOI: 10.1504/IJCSE.2022.10050834
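    The chaotic-logistic-map scrambling used in stages 1 and 3 of RGBIES can be illustrated with the short NumPy sketch below; the key values (x0, r) are arbitrary examples, and the DNA-keyed Lorenz stage is not shown.

      import numpy as np

      def logistic_sequence(x0, r, length):
          """Iterate x_{n+1} = r * x_n * (1 - x_n) to produce a chaotic sequence."""
          seq = np.empty(length)
          x = x0
          for i in range(length):
              x = r * x * (1.0 - x)
              seq[i] = x
          return seq

      def scramble(channel, x0=0.3571, r=3.99):
          """Permute the pixels of one colour channel using the chaotic sequence."""
          flat = channel.ravel()
          perm = np.argsort(logistic_sequence(x0, r, flat.size))
          return flat[perm].reshape(channel.shape), perm

      def unscramble(scrambled, perm):
          flat = np.empty_like(scrambled.ravel())
          flat[perm] = scrambled.ravel()
          return flat.reshape(scrambled.shape)

      if __name__ == "__main__":
          img = np.arange(16, dtype=np.uint8).reshape(4, 4)
          enc, perm = scramble(img)
          assert np.array_equal(unscramble(enc, perm), img)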
     
  • Improved class-specific vector for biomedical question type classification   Order a copy of this article
    by Tanu Gupta, Ela Kumar 
    Abstract: Polysemous words in questions make it problematic to label questions correctly. This paper proposes two word-embedding-based approaches for tackling the polysemy problem in biomedical question type classification. In the first approach, a label-independent class vector is derived using a statistical score of each word for classifying a multi-class question dataset, unlike previous work where the class vector is label-dependent. Secondly, the sense vector of a word, interpreted using a context group discrimination algorithm, is concatenated with class-specific word embeddings to increase the discriminative property of the word. In addition, a survey dataset, Covid-S, is introduced in this paper, which is a collection of public queries, myths and doubts about the novel Covid-19 disease. The efficacy of the introduced approach for question classification is evaluated on the BioASQ8b and Covid-S datasets with three well-known evaluation measures: accuracy, micro-average and Hamming loss.
    Keywords: class vector; polysemy; biomedical question classification; sense embedding; Covid-S.

  • Research on big data access control mechanism   Order a copy of this article
    by Yinxia Zhuang, Yapeng Sun, Han Deng, Jun Guo 
    Abstract: The increasing amount of data provides an excellent opportunity for big data analysis. However, alongside the convenience provided by big data, the associated security issues must also be considered. In recent years, the leakage of data resources has become a troublesome problem in the field of big data. Access control technology adds a barrier to the access of data resources, prevents some illegal users from accessing resources, and reduces resource leakage to a certain extent. This paper summarises the development of access control technologies in the field of big data, including discretionary access control, role-based access control, attribute-based access control and blockchain-based access control. It then summarises the application characteristics of access control technology in the field of big data, and finally looks forward to the development prospects of access control.
    Keywords: big data; access control; security.

  • Combining machine learning and effective feature selection for real-time stock trading in variable time-frames   Order a copy of this article
    by A. K. M. Amanat Ullah, Fahim Imtiaz, Miftah Uddin Md Ihsan, Md. Golam Rabiul Alam, Mahbub Majumdar 
    Abstract: The unpredictability and volatility of the stock market make it challenging to earn a substantial profit using any generalised scheme. Many previous studies have tried different techniques to build a machine learning model that can make a significant profit in the US stock market by performing live trading. However, very few studies have focused on the importance of finding the best features for a particular trading period. Our top approach used feature performance to narrow the features down from a total of 148 to about 30, and the top 25 features were then selected dynamically before each training of the machine learning model. It uses ensemble learning with four classifiers, Gaussian naive Bayes, decision tree, logistic regression with L1 regularisation and stochastic gradient descent, to decide whether to go long or short on a particular stock. Our best model performed daily trades between July 2011 and January 2019, generating a 54.35% profit. Finally, our work showed that mixtures of weighted classifiers perform better than any individual predictor at making trading decisions in the stock market.
    Keywords: feature selection; feature extraction; stock trading; ensemble learning.
    DOI: 10.1504/IJCSE.2022.10046373
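    A hard-voting ensemble over the four classifiers named in the abstract can be assembled with scikit-learn as sketched below; the synthetic features stand in for the dynamically selected top-25 features, and the hyperparameters are assumptions.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import VotingClassifier
      from sklearn.linear_model import LogisticRegression, SGDClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=500, n_features=25, random_state=0)

      ensemble = VotingClassifier(
          estimators=[
              ("gnb", GaussianNB()),
              ("dt", DecisionTreeClassifier(max_depth=5)),
              ("lr", LogisticRegression(penalty="l1", solver="liblinear")),
              ("sgd", SGDClassifier(loss="hinge", random_state=0)),
          ],
          voting="hard",  # hard voting, since hinge-loss SGD has no predict_proba
      )
      ensemble.fit(X, y)
      print("long/short signal:", ensemble.predict(X[:5]))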
     
  • Privacy-preserving global user profile construction through federated learning   Order a copy of this article
    by Zheng Huo, Teng Wang, Yilin Fan, Ping He 
    Abstract: User profiles are derived from big data left on the internet through machine learning algorithms. However, threats of data privacy leakages restrict access to the sources of data in centralised machine learning. Federated learning can avoid privacy leakage during the data collection phase. In this study, we propose an algorithm for constructing a privacy-preserving global user profile through federated learning in a vertical data-segmentation scenario. Each participant has some of the characteristics of user data, they train the local clusters on their data using the CLIQUE algorithm, and carefully encrypt the cluster parameters using Paillier encryption to protect the cluster parameters from the untrusted aggregator. The aggregator then makes intersections over the cluster parameters without decryption to construct a global user profile. Finally, we conduct experiments on real datasets, and the results verify that the algorithm shows good performance in terms of precision and effectiveness.
    Keywords: differential privacy; federated learning; CLIQUE algorithm; encryption.

  • Parameter-free marginal Fisher analysis based on L2,1-norm regularisation for face recognition   Order a copy of this article
    by Yu'e Lin, Zhiyuan Ren, Xingzhu Liang, Shunxiang Zhang 
    Abstract: Marginal Fisher analysis is an effective feature extraction algorithm for face recognition, but it is sensitive to the setting of the neighbourhood parameter and does not perform feature selection. To solve these problems, this paper proposes a parameter-free marginal discriminant analysis based on L2,1-norm regularisation (PFMDA/L2,1). The algorithm calculates weights using the cosine distance between samples and dynamically determines the neighbours of each data point, so that no parameters need to be set. To enable feature extraction and feature selection to proceed simultaneously, two optimisation models with the L2,1-norm constraint are presented, and the complete solution for PFMDA/L2,1 is then given. Experimental results on the ORL, YaleB and AR face databases show that the proposed method is feasible and effective.
    Keywords: marginal Fisher analysis; feature extraction; feature selection; parameter-free; L2,1-norm regularisation; cosine distance.

  • Research on credit risk evaluation of B2B platform supply chain financing enterprises based on improved TOPSIS   Order a copy of this article
    by Hong Zhang, Yuan Chen, Xiyue Yan, Han Deng 
    Abstract: This paper establishes a credit evaluation index system for B2B platform supply chain financing enterprises, which consists of four levels: B2B platform, supply chain financing enterprises, core enterprises and supply chain collaboration. Taking 24 samples of supply chain financing enterprises listed on the GEM from six well-known B2B platforms in China as the research objects, the credit risk of supply chain financing enterprises on B2B platforms is dynamically evaluated using an improved TOPSIS method that incorporates the time dimension. The research shows that the strong comprehensive strength of core enterprises, close supply chain collaboration and the good credit status of the enterprises themselves have a favourable impact on the credit evaluation of supply chain financing. A high-quality B2B platform is beneficial for enterprises carrying out supply chain financing and attracts more high-quality supply chain enterprises to cooperate.
    Keywords: improved TOPSIS method; TOPSIS; B2B platform; supply chain financing; credit risk.
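    For reference, a generic (non-time-weighted) TOPSIS ranking can be computed in NumPy as below; the criteria values and weights are invented, and the paper's time-dimension extension is not reproduced.

      import numpy as np

      def topsis(matrix, weights, benefit):
          """matrix: alternatives x criteria; benefit[j] is True for benefit criteria."""
          m = matrix / np.linalg.norm(matrix, axis=0)  # vector normalisation
          v = m * weights                              # weighted normalised matrix
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.linalg.norm(v - ideal, axis=1)
          d_neg = np.linalg.norm(v - anti, axis=1)
          return d_neg / (d_pos + d_neg)               # closeness coefficient

      scores = topsis(
          matrix=np.array([[0.7, 120.0, 0.6], [0.9, 80.0, 0.8], [0.5, 150.0, 0.7]]),
          weights=np.array([0.5, 0.2, 0.3]),
          benefit=np.array([True, False, True]),  # lower is better for the second criterion
      )
      print(scores.argsort()[::-1])  # alternatives ranked from best to worst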

  • BITSAT: an efficient approach to modern image cryptography   Order a copy of this article
    by Sheel Sanghvi, Ramchandra Mangrulkar 
    Abstract: This paper proposes a new approach towards efficient image cryptography that works using the concepts of bit-plane decomposition and bit-level scrambling. This method does not require the involvement of additional or extra images. Users have the flexibility of choosing (1) any bit-plane decomposition method; (2) logic that runs the key generation block; (3) customisation in the bit-level permutations performed during scrambling. The implementation of the method is simple and free of heavily complex steps and operations. This makes the algorithm applicable in a real-world scenario. The BITSAT algorithm is applied to a variety of images and, consequently, the encrypted images generated showcase a high level of encryption. The analysis and evaluation of the algorithm and its security aspects are performed and described in detail. The paper also presents the application domains of the method. Overall, the results and analysis indicate a positive working scope and suitability for real-life applications.
    Keywords: hybrid image encryption; image cryptography; novel image encryption scheme; bit-level scrambling; bit-plane decomposition.
    DOI: 10.1504/IJCSE.2022.10049691
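    The bit-plane decomposition that BITSAT starts from can be sketched in NumPy as below; key generation and the bit-level scrambling itself are not shown.

      import numpy as np

      def bit_planes(channel):
          """Return eight binary planes of a uint8 channel, LSB (index 0) to MSB (index 7)."""
          return [(channel >> k) & 1 for k in range(8)]

      def recombine(planes):
          """Invert the decomposition by shifting each plane back to its position."""
          return sum(plane << k for k, plane in enumerate(planes)).astype(np.uint8)

      if __name__ == "__main__":
          img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
          planes = bit_planes(img)
          assert np.array_equal(recombine(planes), img)
          print([int(p.sum()) for p in planes])  # number of set bits per plane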
     
  • A GRU-based hybrid global stock price index forecasting model with group decision-making   Order a copy of this article
    by Jincheng Hu, Qingqing Chang, Siyu Yan 
    Abstract: To predict the global stock price index daily more effectively, this study develops a new filtering gate recurrent unit group-based decision-making (FiGRU_G) model that combines GRU group network and group decision-making strategy. This proposed FiGRU_G model can effectively overcome the shortcoming of traditional neural network algorithms that the random initialisation of network weights may cause worse performance to some extent. The experimental results indicate visually that the proposed FiGRU_G framework outperforms other competing methods in terms of prediction accuracy and robustness for both Chinese and international stock markets. In the short-term prediction scenario, the FiGRU_G framework achieves 20% and 19% performance improvements on evaluation criteria MAPE and SDAPE, respectively, compared with the GRU model in the Chinese stock market. For the international markets, this FiGRU_G framework also achieves 23% and 22% performance improvements, respectively, compared with the GRU model.
    Keywords: stock closing price prediction; deep learning; GRU model; group decision-making.
    DOI: 10.1504/IJCSE.2022.10047524
     
  • Patient reviews analysis using machine learning   Order a copy of this article
    by Bijayalaxmi Panda, Chhabi Rani Panigrahi, Bibudhendu Pati 
    Abstract: In the present situation, web technologies provide opportunities for online communication, and health-related tweets are available in online forums. Doctors and patients share their views in discussion forums, which helps other people seeking similar information. An investigation was performed on patient reviews regarding different diseases collected from different forums. These reviews are unstructured, and the aim is to identify positive and negative posts. The dataset collected from figshare was processed to convert several features of the text provided by patients into numerical form. Specific features are selected from the dataset, and machine learning classification algorithms, such as support vector machine (SVM), Gaussian naïve Bayes and random forest, are applied to classify the reviews.
    Keywords: classification; support vector machine; Gaussian naïve Bayes; random forest; feature selection.
    DOI: 10.1504/IJCSE.2022.10050923
     
  • Parameter identification and SOC estimation for power battery based on multi-timescale double Kalman filter algorithm   Order a copy of this article
    by Likun Xing, Mingrui Zhan, Min Guo, Liuyi Ling 
    Abstract: Accurate modelling and state of charge (SOC) estimation are of great significance for improving the efficiency and extending the service life of power batteries. A joint extended Kalman filter (EKF)-unscented Kalman filter (UKF) algorithm for online identification of battery model parameters and SOC estimation is proposed, in order to solve the problem that time-varying internal parameters result in inaccurate SOC estimation. Based on the second-order RC equivalent circuit model, the UKF is used for online parameter identification at the macroscopic time scale, and the EKF is used for estimating the lithium battery SOC at the microscopic time scale. The experimental results show that the mean absolute error (MAE) and root mean square error (RMSE) of the estimated SOC are significantly reduced by the proposed method compared with the conventional SOC estimation method in which the parameters are identified offline. The SOC estimation results demonstrate the accuracy and robustness of the joint EKF-UKF algorithm.
    Keywords: state of charge; multi-timescale; online parameter identification; double Kalman filter algorithm.

  • Multiple correlation based decision tree model for classification of software requirements   Order a copy of this article
    by Pratvina Talele, Rashmi Phalnikar 
    Abstract: Recent research in requirements engineering (RE) includes requirements classification and the use of machine learning (ML) algorithms to solve RE problems. The limitation of existing techniques is that they consider only one feature at a time to map the requirements, without considering the correlation between two features, and are therefore biased. To address these limitations, our study compares and extends ML algorithms for classifying requirements in terms of precision and accuracy. Our literature survey shows that the decision tree (DT) algorithm can identify different requirements and outperforms existing ML algorithms. As the number of features increases, the accuracy obtained using the DT improves by 1.65%. To overcome the limitations of the DT, we propose a multiple correlation coefficient-based DT algorithm. Compared with existing ML approaches, the results show that the proposed algorithm improves classification performance; its accuracy is 5.49% higher than that of the DT algorithm.
    Keywords: machine learning; requirements engineering; decision tree; multiple correlation coefficient.

  • Improved performance on tomato pest classification via transfer learning based deep convolutional neural network with regularisation techniques   Order a copy of this article
    by Gayatri Pattnaik, Vimal K. Shrivastava, K. Parvathi 
    Abstract: Insect pests are a major threat to the quality and quantity of crop yields. Hence, early detection of pests using a fast, reliable and non-chemical method is essential to control infestations. In this paper, we therefore focus on tomato pest classification using pre-trained deep convolutional neural networks (CNNs). Four models (VGG16, DenseNet121, DenseNet169 and Xception) were explored with a transfer learning approach. In addition, we adopted two regularisation techniques, early stopping and data augmentation, to prevent the models from overfitting and improve their generalisation ability. Among the four models, DenseNet169 achieved the highest classification accuracy of 95.23%. This promising result shows that the DenseNet169 model with transfer learning and regularisation techniques can be used in agriculture for pest management.
    Keywords: agriculture; convolutional neural network; data augmentation; early stopping; pest; regularisation; tomato plants.
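    A transfer-learning setup in the spirit of this entry, an ImageNet-pretrained DenseNet169 backbone with a new head, augmentation layers and early stopping, is sketched below in TensorFlow/Keras; the class count, image size and hyperparameters are assumptions, and this is not the authors' implementation.

      import tensorflow as tf

      NUM_CLASSES = 10  # assumed number of tomato pest classes

      base = tf.keras.applications.DenseNet169(
          include_top=False, weights="imagenet", input_shape=(224, 224, 3))
      base.trainable = False  # keep the pretrained features frozen

      model = tf.keras.Sequential([
          tf.keras.layers.RandomFlip("horizontal"),   # data augmentation
          tf.keras.layers.RandomRotation(0.1),
          base,
          tf.keras.layers.GlobalAveragePooling2D(),
          tf.keras.layers.Dropout(0.3),
          tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])

      early_stop = tf.keras.callbacks.EarlyStopping(
          monitor="val_loss", patience=5, restore_best_weights=True)
      # model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])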

  • A collaborative filtering recommendation algorithm based on DeepWalk and self-attention   Order a copy of this article
    by Jiaming Guo, Hong Wen, Weihong Huang, Ce Yang 
    Abstract: Graph embedding is one of the vital technologies for solving the problem of information overload in recommendation systems. It can simplify the vector representations of items and accelerate the calculation process. Unfortunately, recommendation systems using graph embedding technology do not consider the deep relationships between items when learning embedding vectors. To solve this problem, we propose a collaborative filtering recommendation algorithm based on DeepWalk and self-attention. This algorithm can enhance the accuracy of measuring the similarity between items and obtain more accurate embedding vectors. Chronological order and mutual information are used to construct a weighted directed relationship graph. Self-attention and DeepWalk are used to generate embedding vectors. Then item-based collaborative filtering is used to obtain recommendation lists. The results of experiments and evaluations on three public datasets show that our algorithm outperforms existing ones.
    Keywords: DeepWalk; self-attention; mutual information; collaborative filtering; recommendation algorithm.
    DOI: 10.1504/IJCSE.2022.10050515
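    A minimal DeepWalk-style step, truncated random walks over an item graph fed to skip-gram Word2Vec (gensim), is sketched below; the toy graph is invented, and the weighted directed graph construction and self-attention stage of the paper are not reproduced.

      import random
      from gensim.models import Word2Vec

      graph = {  # toy item graph: item -> list of neighbouring items
          "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B"],
      }

      def random_walk(start, length=10):
          walk = [start]
          for _ in range(length - 1):
              neighbours = graph[walk[-1]]
              if not neighbours:
                  break
              walk.append(random.choice(neighbours))
          return walk

      walks = [random_walk(node) for node in graph for _ in range(20)]
      model = Word2Vec(walks, vector_size=16, window=3, sg=1, min_count=1, epochs=10)
      print(model.wv.most_similar("A", topn=2))  # items embedded closest to item A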
     
  • A proxy signcryption scheme for secure sharing of industrial IoT data in fog environment   Order a copy of this article
    by Rachana Patil, Yogesh Patil 
    Abstract: Rapid technical advancements have transformed the industrial segment. The IIoT and Industry 4.0 comprise a complete instrumentation system of sensors, positioners, actuators, instruments and processes. Owing to various delays and safety concerns, industrial processes necessitate that specified data be transferred across the internet. Considering this, fog computing is potentially helpful as a mediator, as it performs localised processing of data so that it can be applied to a variety of industrial applications. Furthermore, industrial big data often needs to be shared among different applications. Here, we propose an ECC-based proxy signcryption scheme for the IIoT (ECC-PSC-IIoT) in a fog computing environment. The proposed scheme provides the features of signature and encryption in a single cycle. The ECC-PSC-IIoT scheme is proven secure using the AVISPA tool. Moreover, extensive performance assessment indicates the competency of the proposed scheme with respect to computation and communication time.
    Keywords: IIoT; elliptic curve cryptography; signcryption; fog computing; proxy signature.

  • An aeronautic X-ray image security inspection network for rotation and occlusion   Order a copy of this article
    by Bingshan Su, Shiyong An, Xuezhuan Zhao, Jiguang Chen, Xiaoyu Li, Yuantao He 
    Abstract: Aviation security inspection requires substantial time and human labour. In this paper, we establish a new network for detecting prohibited objects in aeronautic security inspection X-ray images. Objects in X-ray images often appear rotated and overlap heavily with each other. In order to handle rotation and occlusion in X-ray image detection, we construct the De-rotation-and-occlusion Module (DROM), an efficient module that can be embedded into most deep learning detectors. DROM leverages edge, colour and Oriented FAST and Rotated BRIEF (ORB) features to generate an integrated feature map; the ORB features can be extracted quickly and effectively diminish the deviation produced by rotation. Finally, we evaluate DROM on the OPIXray dataset. Compared with several recent approaches, the experimental results show that our module improves the performance of the Single Shot MultiBox Detector (SSD) and obtains higher accuracy, which proves the module's application value in practical security inspection.
    Keywords: X-ray image detection; de-rotation-and-occlusion; deep learning; ORB; SSD.
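    ORB keypoints and descriptors, the rotation-robust cue that DROM fuses with edge and colour features, can be extracted with OpenCV as below; the random array stands in for an X-ray image, and the fusion itself is not shown.

      import cv2
      import numpy as np

      image = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in image

      orb = cv2.ORB_create(nfeatures=500)
      keypoints, descriptors = orb.detectAndCompute(image, None)

      print("keypoints:", len(keypoints))
      if descriptors is not None:
          print("descriptor matrix:", descriptors.shape)  # (n_keypoints, 32) binary descriptors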

  • ReFIGG: retinal fundus image generation using generative adversarial networks   Order a copy of this article
    by Sharika Sasidharan Nair, M.S. Meharban 
    Abstract: The effective training of deep architectures depends mainly on a large amount of well-annotated data. This is a problem in the medical field, where such images are hard and costly to obtain. The tiny blood vessels of the retina are the only part of the human body that can be observed directly and non-invasively in a living person; hence, they can be easily imaged and examined by automatic tools. Fundus imaging is a basic check-up procedure in ophthalmology that provides essential data to make it easier for doctors to detect various eye-related diseases at early stages. Fundus image generation is challenging to carry out by constructing composite models of the eye structure. In this paper, we overcome the unavailability of medical fundus datasets by synthesising them artificially, adding an encoder-decoder generator model to the existing MCML method of generative adversarial networks for easier, quicker and earlier analysis.
    Keywords: fundus image; generative adversarial networks; encoder-decoder model; image synthesis; deep learning.

  • CBSOACH: design of an efficient consortium blockchain-based selective ownership and access control model with vulnerability resistance using hybrid decision engine   Order a copy of this article
    by Kalyani Pampattiwar, Pallavi Vijay Chavan 
    Abstract: Cloud deployments are prone to vulnerabilities and attacks, which are mitigated via security patches. However, these patches increase the computational complexity of the deployments, thus reducing their quality-of-service performance. To overcome this limitation and maintain high security levels, this paper proposes the design of the CBSOACH model, a novel consortium blockchain-based selective ownership and access control model with vulnerability resistance using a hybrid decision engine. The model introduces header-level pattern analysis, which processes all incoming traffic using a lightweight rule-based method. Header-level pattern analysis is backed by a consortium blockchain model that allows for efficient ownership control with minimal overheads. Owing to the combination of header-level pattern analysis with the consortium blockchain, the model can maintain traceability, trustability, immutability and distributed computing capabilities. The model can reduce attack probability while maintaining lower delay and high-efficiency ownership transfer. This increases the scalability and usability of the model for large-scale deployments.
    Keywords: cloud; ownership; blockchain; authentication; access control; consortium; attacks; accuracy; delay.
    DOI: 10.1504/IJCSE.2022.10050702
     
  • Blockchain-based secure deduplication against duplicate-faking attack in decentralized storage   Order a copy of this article
    by Jingkai Zhou, Guohua Tian, Jianghong Wei 
    Abstract: Secure client-side deduplication enables a cloud server to efficiently save storage space and communication bandwidth without compromising privacy. However, a potential duplicate-faking attack (DFA) may cause data users to lose their outsourced data. Existing solutions either only detect the DFA and fail to avoid data loss, or have high storage costs. In this paper, we propose a blockchain-based secure deduplication scheme against DFA in decentralised storage. Specifically, we first propose a client-side deduplication protocol in which the server does not need to store additional metadata to check subsequent uploaders, who only need to encrypt the challenged partial blocks instead of the entire file. In addition, we design a battle mechanism based on a smart contract to protect users from losing data. When an uploader detects a DFA, he can apply for a battle with the previous uploader to enforce an effective punishment. Finally, security and performance analyses indicate the practicality of the proposed scheme.
    Keywords: secure data deduplication; duplicate-faking attack; proof of ownership; blockchain; smart contract.

  • Blockchain-based collaborative intrusion detection scheme   Order a copy of this article
    by Tianran Dang, Guohua Tian, Jianghong Wei, Shuqin Liu 
    Abstract: The collaborative intrusion detection technique is an effective solution to protect users from various cyber-attacks, among which the large-scale trusted sharing and real-time updating of attack instances are the main challenges. However, existing collaborative intrusion detection systems (CIDS) either can only achieve real-time instance sharing in a centralised setting or implement large-scale instance sharing through blockchain. In this paper, we propose a novel blockchain-based CIDS scheme. Specifically, we present a reputation-based consensus protocol, which incentivises service providers (SPs) to evaluate the attack instances collected from collectors and punishes malicious evaluators. Only trusted attack instances are then published on the blockchain to provide large-scale trusted intrusion detection services. Furthermore, we introduce a redactable blockchain technique to achieve dynamic instance updates, which enables our scheme to provide a real-time intrusion detection service. Finally, we demonstrate the practicality of the proposed scheme through security analysis, theoretical analysis and performance evaluation.
    Keywords: collaborative intrusion detection; blockchain; reputation-based consensus; redactable blockchain.

  • Application of bagging and particle swarm optimisation techniques to predict technology sector stock prices in the era of the Covid-19 pandemic using the support vector regression method   Order a copy of this article
    by Heni Sulastri, Sheila Maulida Intani, Rianto Rianto 
    Abstract: The increase in positive cases of Covid-19 has affected not only health and lifestyle, but also the economy and the stock market. Technology and digital sector stocks are predicted to be among the most profitable. Therefore, stock price prediction is required in order to assess the prospects of future investment. In this study, the stock prices of Multipolar Technologies Ltd (MLPT) were predicted using the support vector regression (SVR) method with the bootstrap aggregation (bagging) technique and particle swarm optimisation (PSO) as SVR optimisation. The results of the prediction process show that applying bagging and PSO to predicting technology sector stock prices can reduce the root mean squared error (RMSE) of the SVR from 22.142 to 21.833. Although the impact is not large, it is better to apply the combination of bagging and PSO to SVR than either technique alone (SVR, SVR-PSO or SVR-bagging).
    Keywords: bootstrap aggregation; bagging; Covid-19; particle swarm optimisation; prediction; share prices; support vector regression.
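    A bagged-SVR configuration corresponding to the SVR-bagging variant in the abstract can be sketched with scikit-learn as below; the synthetic series stands in for the MLPT closing prices, the hyperparameters are assumptions, and the PSO tuning step is omitted.

      import numpy as np
      from sklearn.ensemble import BaggingRegressor
      from sklearn.metrics import mean_squared_error
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      t = np.arange(300, dtype=float)
      prices = 100 + 0.1 * t + 5 * np.sin(t / 10) + rng.normal(0, 1, t.size)

      X, y = t.reshape(-1, 1)[:-30], prices[:-30]            # simple time index feature
      X_test, y_test = t.reshape(-1, 1)[-30:], prices[-30:]  # held-out final window

      model = BaggingRegressor(SVR(C=10.0, epsilon=0.1, gamma="scale"),
                               n_estimators=10, random_state=0)
      model.fit(X, y)
      rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
      print(f"RMSE on the held-out window: {rmse:.3f}")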

  • Kidney diseases classification based on SONN and MLP-GA in ultrasound radiography images   Order a copy of this article
    by Anuradha Laishram, Khelchandra Thongam 
    Abstract: A strategy for robust classification of renal ultrasound images for the identification of three kidney disorders, renal calculus, cortical cyst and hydronephrosis, is presented. Features were retrieved using intensity histogram (IH), grey level co-occurrence matrix (GLCM) and grey level run length matrix (GLRLM) techniques. Using the extracted features, input samples are created and fed to a hybrid model that combines a self-organising neural network (SONN) and a multilayer perceptron (MLP) trained with a genetic algorithm (GA). The SONN is used to cluster the input patterns into four groups, and an MLP trained with the genetic algorithm is then employed on each cluster to classify the input patterns. The proposed hybrid method using SONN and MLP-GA classifies the ultrasound images with a precision of 93.9%, recall of 93.0%, F1 score of 93.0% and overall accuracy of 96.8%.
    Keywords: genetic algorithm; grey level co-occurrence matrix; grey level run length matrix; intensity histogram; multilayer perceptron; self-organising neural network; ultrasound images.

  • LightNet: pruned sparsed convolutional neural network for image classification   Order a copy of this article
    by Edna Too 
    Abstract: Deep learning has become one of the most sought-after approaches in artificial intelligence (AI). However, deep learning models pose some challenges in the learning process: training deep networks is both computationally and resource intensive, so they cannot be applied on resource-limited devices. Limited research has been done on implementing efficient approaches for real-world problems. This study tries to bridge this gap towards a system applicable in the real world, especially in the agricultural sector for plant disease management and fruit classification. We introduce a novel architecture called LightNet, which employs two strategies to achieve sparsity in DenseNet: skip connections and a pruning strategy. The result is a small network with reduced parameters and model size. Experimental evaluation reveals that LightNet is more efficient than the DenseNet architecture. The model is evaluated on the real-world PlantVillage and Fruits-360 datasets.
    Keywords: image classification; deep learning; convolution neural network; LightNet; ConvNet.
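    The pruning half of the LightNet recipe can be illustrated with PyTorch's built-in utilities. This is a minimal sketch of magnitude pruning on a single convolution layer, not the authors' architecture; the DenseNet-style skip connections are omitted and the layer sizes and pruning ratio are assumptions.
    ```python
    # Zero out the 50% smallest-magnitude weights of one conv layer, then make
    # the sparsity permanent.
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)
    prune.l1_unstructured(conv, name="weight", amount=0.5)
    prune.remove(conv, "weight")

    sparsity = (conv.weight == 0).float().mean().item()
    print(f"fraction of zero weights: {sparsity:.2f}")   # ~0.50
    ```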

  • Predicting possible antiviral drugs against COVID-19 based on Laplacian regularised least squares and similarity kernel fusion   Order a copy of this article
    by Xiaojun Zhang, Lan Yang, Hongbo Zhou 
    Abstract: COVID-19 has had a severe impact on global health and wealth. Drug repurposing strategies provide effective ways to inhibit COVID-19. In this manuscript, a drug repositioning-based virus-drug association (VDA) prediction method, VDA-LRLSSKF, was developed to screen potential antiviral compounds against COVID-19. First, association profile similarity matrices of viruses and drugs are computed. Second, a similarity kernel fusion model is presented to combine biological similarity and association profile similarity of viruses and drugs. Finally, a Laplacian regularised least squares method is used to compute the association probability of each virus-drug pair. We compared VDA-LRLSSKF with four of the best VDA prediction methods. The experimental results and analysis demonstrate that VDA-LRLSSKF achieved better AUCs of 0.8286, 0.8404 and 0.8882 on three datasets, respectively. VDA-LRLSSKF predicts that ribavirin and remdesivir could be potential therapeutic candidates for inhibiting COVID-19 and need further experimental validation.
    Keywords: SARS-CoV-2; VDA-LRLSSKF; drug repurposing; Laplacian regularised least squares; similarity kernel fusion.
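    The two building blocks named in the abstract, similarity kernel fusion and Laplacian regularised least squares, can be sketched in NumPy. This is not the VDA-LRLSSKF implementation: the fusion here is a simple equal-weight average of a biological similarity matrix and a Gaussian association-profile kernel, and the toy matrices are random stand-ins.
    ```python
    # F = (I + lambda*L)^{-1} Y is the closed-form Laplacian regularised least
    # squares solution used here; matrix names and fusion weights are assumptions.
    import numpy as np

    def profile_kernel(Y):
        """Gaussian kernel over association profiles (rows of Y)."""
        sq = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        gamma = Y.shape[0] / (Y ** 2).sum()
        return np.exp(-gamma * sq)

    def laprls(S, Y, lam=1.0):
        """Score associations from similarity matrix S and known associations Y."""
        L = np.diag(S.sum(axis=1)) - S                 # graph Laplacian
        return np.linalg.solve(np.eye(len(S)) + lam * L, Y)

    rng = np.random.default_rng(0)
    Y = (rng.random((30, 12)) < 0.1).astype(float)     # toy virus-drug associations
    S_bio = rng.random((30, 30)); S_bio = (S_bio + S_bio.T) / 2
    S = 0.5 * S_bio + 0.5 * profile_kernel(Y)          # similarity kernel fusion
    scores = laprls(S, Y)                              # predicted association scores
    print(scores.shape)                                # (30, 12)
    ```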

  • A modified Brown and Gibson model for cloud service selection   Order a copy of this article
    by Munmun Saha, Sanjaya Kumar Panda, Suvasini Panigrahi 
    Abstract: Cloud computing has been widely accepted in the information technology (IT) industry as it provides on-demand services, lower operational and investment costs, scalability and more. Nowadays, small and medium enterprises (SMEs) use cloud infrastructure to build their applications, which makes their business more agile through elastic and flexible cloud services. Many cloud service providers (CSPs) offer numerous services to their customers. However, owing to the vast availability of cloud services and the wide range of CSPs, decision-making for selecting or adopting cloud services is not always straightforward. This paper proposes a modified Brown and Gibson model (M-BGM) to select the best CSP. We consider subjective and objective criteria for non-quantifiable and quantifiable values, respectively. Here, various decision-makers can express their views about the alternatives. We compare M-BGM with a multi-attribute group decision-making (MAGDM) approach and perform a sensitivity analysis to show its robustness.
    Keywords: cloud computing; multi-criteria decision-making; quality of service; cloud service provider; Brown and Gibson model; analytic hierarchy process; Delphi method; decision maker.

  • Short-term load forecasting with bidirectional LSTM-attention based on the sparrow search optimisation algorithm   Order a copy of this article
    by Jiahao Wen, Zhijian Wang 
    Abstract: Short-term power load forecasting for distribution networks remains a complex problem because existing models suffer from insufficient accuracy and poor training performance. To solve this problem, a bidirectional long short-term memory (BILSTM) prediction model based on attention was proposed to process the collected data, and the observed data characteristics were divided by a pretreatment unit to obtain training and test sets. The BILSTM layer was used to model historical load data and daily feature data, enabling the extraction of the internal dynamic change rules of the features. An attention mechanism was used to assign different weights to the hidden BILSTM states through mapping, weighting and parameter-matrix learning, which reduced the loss of historical information and enhanced the influence of important information. The sparrow search (SS) algorithm was used to optimise the hyperparameter selection of the model. The test results showed that the proposed method outperformed traditional prediction models, with root mean square errors decreasing by (1.18, 1.09, 0.60, 0.54) and (2.11, 0.45, 0.21, 0.11) on different datasets.
    Keywords: short-term load prediction; sparrow search algorithm; neural network; weight assignment; attention mechanism.
    DOI: 10.1504/IJCSE.2022.10049692
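    The core network described above can be sketched in PyTorch. The sizes below are placeholders and the sparrow search hyperparameter optimisation is not reproduced; this only shows a BiLSTM whose hidden states are weighted by a learned attention layer before the load prediction.
    ```python
    # Minimal BiLSTM-with-attention regressor for windows of shape
    # (batch, seq_len, n_feat); hidden_size and n_feat are assumptions.
    import torch
    import torch.nn as nn

    class BiLSTMAttention(nn.Module):
        def __init__(self, n_feat=8, hidden_size=64):
            super().__init__()
            self.lstm = nn.LSTM(n_feat, hidden_size, batch_first=True,
                                bidirectional=True)
            self.att = nn.Linear(2 * hidden_size, 1)   # scores each time step
            self.out = nn.Linear(2 * hidden_size, 1)   # predicts the next load value

        def forward(self, x):
            h, _ = self.lstm(x)                        # (batch, seq_len, 2*hidden)
            w = torch.softmax(self.att(h), dim=1)      # attention weights over time
            context = (w * h).sum(dim=1)               # weighted sum of hidden states
            return self.out(context).squeeze(-1)

    model = BiLSTMAttention()
    x = torch.randn(16, 24, 8)                         # 16 windows of 24 time steps
    print(model(x).shape)                              # torch.Size([16])
    ```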
     
  • 3DL-PS: an image encryption technique using a 3D logistic map, hashing functions and pixel scrambling techniques   Order a copy of this article
    by Parth Kalkotwar, Rahil Kadakia, Ramchandra Mangrulkar 
    Abstract: With the advancement in technology over the years, the security of data transferred over the internet is a major concern. In this paper, a robust and efficient image encryption scheme is implemented using a 3D logistic map, SHA-512 and pixel scrambling. A good image encryption scheme should produce two drastically different encrypted images for two original images with minute differences. Chaotic systems have proved highly effective in providing this property, mainly because of their high randomness and their sensitivity to initial conditions. A 3D logistic map is preferred over a 1D logistic map owing to its increased encryption complexity, enhanced security and better chaotic properties. To start the process, two secret keys are generated using two different user-provided keys and the input image, which makes the scheme resistant to classical attacks such as the chosen-plaintext and chosen-ciphertext attacks. Further, the pixel values of the original image must be changed so that it becomes difficult to trace back the original image from the encrypted image. Pixels of the image are altered using the values obtained by iterating the 3D logistic map. In addition, two different pixel scrambling techniques are employed to enhance security. Firstly, different fragments of varying sizes are swapped depending on the secret keys generated earlier. Finally, a jumbling technique mixes the pixels horizontally and vertically in a completely dynamic way depending on the secret keys. The keyspace of the algorithm is found to be large enough to resist brute-force attacks. The encrypted image has been analysed against several attacks, including classical and statistical attacks, as well as for noise resistance, and key sensitivity analysis has also been performed. The results show that the 3DL-PS algorithm is resistant to several well-known attacks, providing an efficient image encryption scheme that can be used in various real-time applications.
    Keywords: image cryptography; chaotic systems; pixel scrambling; SHA-512; security analysis; classical attacks; 3D logistic map; noise resistance.
    DOI: 10.1504/IJCSE.2022.10049693
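    The keying and substitution ideas can be illustrated with a short sketch. This is not the 3DL-PS algorithm itself: the coupled 3-D logistic map below is one common variant from the chaotic-encryption literature, the parameters are typical values, and the fragment-swapping and jumbling stages are omitted.
    ```python
    # SHA-512 of a user key plus the image seeds the map; its iterates are
    # quantised to bytes and XOR-ed with the pixels (substitution only).
    import hashlib
    import numpy as np

    def chaotic_stream(key: bytes, image: np.ndarray, n: int) -> np.ndarray:
        h = hashlib.sha512(key + image.tobytes()).digest()
        x, y, z = [int.from_bytes(h[i:i + 8], "big") / 2**64 for i in (0, 8, 16)]
        a, b, c = 3.77, 0.015, 0.010                   # typical parameter ranges
        out = np.empty(n, dtype=np.uint8)
        for i in range(n):
            x, y, z = (a * x * (1 - x) + b * y * y * x + c * z**3,
                       a * y * (1 - y) + b * z * z * y + c * x**3,
                       a * z * (1 - z) + b * x * x * z + c * y**3)
            out[i] = int(x * 255) & 0xFF
        return out

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
    ks = chaotic_stream(b"secret-key", img, img.size).reshape(img.shape)
    cipher = img ^ ks                                           # pixel substitution
    assert np.array_equal(cipher ^ ks, img)                     # XOR is invertible
    ```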
     
  • Hybrid grasshopper and ant lion algorithms to improve imperceptibility, robustness and convergence rate for the video steganography   Order a copy of this article
    by Sahil Gupta, Naresh Kumar Garg 
    Abstract: The need to secure multimedia content from being intercepted is a prominent research issue. This work proposes an optimised video steganography model that improves imperceptibility and robustness by extracting keyframes and calculating the optimal scaling factor. The Squirrel Search Algorithm (SSA) is used to extract keyframes, since it ensures distinct position-updating processes through Levy flight and predator features, whilst the grasshopper optimisation and ant lion optimisation algorithms are hybridised to compute the optimal value of the scaling factor. In terms of imperceptibility and robustness, the simulation results suggest that the proposed approach outperforms existing data-hiding models. It also finds the optimal scaling factor in under ten iterations, indicating a very fast convergence rate.
    Keywords: ant-lion optimisation; grasshopper optimisation; SVD; video steganography; imperceptibility; robustness; PSNR; MSE.

  • Human behaviour analysis based on spatio-temporal dual-stream heterogeneous convolutional neural network   Order a copy of this article
    by Qing Ye, Yuqi Zhao, Haoxin Zhong 
    Abstract: At present, many problems in human behaviour analysis remain to be solved, such as insufficient use of behaviour characteristic information and slow processing speed. We propose a human behaviour analysis algorithm based on a spatio-temporal dual-stream heterogeneous convolutional neural network (STDNet). The algorithm improves on the basic structure of the traditional dual-stream network. When extracting spatial information, a DenseNet uses a hierarchical connection method to construct a dense network that extracts the spatial features of the video RGB images. When extracting motion information, BNInception is used to extract temporal features of the video optical flow images. Finally, feature fusion is carried out by a multilayer perceptron and sent to a Softmax classifier for classification. Experimental results on the UCF101 dataset show that the algorithm can effectively use the spatio-temporal feature information in video, reduce the computation of the network model, and greatly improve the ability to distinguish similar actions.
    Keywords: human behaviour analysis; STDNet; optical flow; feature extraction; dual-stream network.
    DOI: 10.1504/IJCSE.2022.10048568
     
  • High-volume transaction processing in bitcoin lightning network on blockchains   Order a copy of this article
    by Rashmi P. Sarode, Divij Singh, Yutaka Watanobe, Subhash Bhalla 
    Abstract: E-commerce platforms that use blockchain technology must handle a high volume of transactions, so these systems need to be scalable. The Bitcoin Lightning Network (BLN) can execute high volumes of transactions and is scalable owing to the few hops in the network. It is an off-chain payment channel built on top of a blockchain that speeds up transactions. In this paper, we discuss the BLN and its transaction processing in detail, along with its benefits and applications. Additionally, we discuss alternative networks for payments. We also propose a secure model on the BLN that can be used for any e-commerce platform and compare it with existing applications such as those of Ethereum and Stellar.
    Keywords: lightning network; bitcoin; cryptocurrency; blockchain; Ethereum.

  • Data augmentation using fast converging CIELAB-GAN for efficient deep learning dataset generation   Order a copy of this article
    by Amin Fadaeddini, Babak Majidi, Alireza Souri, Mohammad Eshghi 
    Abstract: Commercial deep learning applications require large training datasets with many samples from different classes. Generative Adversarial Networks (GANs) can create new data samples for training such machine learning models. However, the low training speed of GANs in image and multimedia applications is a major constraint. To address this problem, this paper proposes a fast-converging GAN, called CIELAB-GAN, for synthesising new data samples for image data augmentation. CIELAB-GAN simplifies the training process of GANs by transforming the images into the CIELAB colour space, which requires fewer parameters. CIELAB-GAN then translates the generated greyscale images into colourised samples using an autoencoder. The experimental results show that CIELAB-GAN has 20% lower computational complexity than state-of-the-art GAN models and can be trained substantially faster. The proposed CIELAB-GAN can be used to generate new image samples for various deep learning applications.
    Keywords: generative adversarial networks; deep learning; data augmentation; image processing.

  • Aerial remote sensing image registration based on dense residual network of asymmetric convolution   Order a copy of this article
    by Ying Chen, Wencheng Zhang, Wei Wang, Jiahao Wang, Xianjing Li, Qi Zhang, Yanjiao Shi 
    Abstract: Existing image registration frameworks pay less attention to important local feature information and part of the global feature information, resulting in low registration accuracy. However, asymmetric convolution and dense connections can pay more attention to the key information and shallow information of the image. Therefore, this paper proposes a novel feature extraction module to improve the feature extraction ability and registration accuracy of the model. Asymmetric convolution and dense connections are used to improve the residual structure so that it focuses on both local and global information in the feature extraction stage. In the feature matching stage, bidirectional matching is used to alleviate asymmetric matching results by fusing the two outcomes. Furthermore, a secondary affine transformation is proposed to adequately estimate the real transformation between two images. Compared with several popular algorithms, the proposed method achieves a better registration effect on two public datasets, which has practical significance.
    Keywords: remote sensing image registration; residual network; asymmetric convolution; dense connection; transfer learning; regularisation; affine transformation.

  • Non-parametric combination forecasting methods with application to GDP forecasting   Order a copy of this article
    by Wei Li, Yunyan Wang 
    Abstract: This work is devoted to constructing a non-parametric combination forecasting method, which can improve forecasting effectiveness and accuracy to some extent. In order to forecast regional gross domestic product (GDP), a non-parametric autoregressive method is introduced into the autoregressive integrated moving average (ARIMA) model, and a combined method of the ARIMA model and the non-parametric autoregressive model is established based on residual correction. The specific prediction steps are also proposed. The empirical results show that the proposed combined model outperforms both the ARIMA model and the non-parametric autoregressive model in terms of regression effect and forecasting accuracy. The combination of parametric and non-parametric models not only provides a method with better applicability and prediction performance for building GDP forecasting models, but also provides a theoretical basis for forecasting related economic data in the future. The prediction results show that, during China's 14th Five-Year Plan period, the gross domestic product of Jiangxi Province will increase by 7.01% annually.
    Keywords: GDP; ARIMA model; non-parametric autoregressive model; residual correction; combined model.
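    The combination idea can be sketched as follows: fit an ARIMA model, then correct its forecast with a non-parametric (here Nadaraya-Watson) autoregression of the residuals. The orders, bandwidth and simulated series are illustrative assumptions, not the GDP data or settings of the paper.
    ```python
    # ARIMA forecast plus a kernel-regression estimate of the next residual.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    y = np.cumsum(rng.normal(0.07, 0.02, size=40)) + 1.0   # toy log-GDP-like series

    fit = ARIMA(y, order=(1, 1, 1)).fit()
    e = fit.resid[1:]                                      # in-sample residuals

    def nw_predict(e_prev, lagged, target, h=0.05):
        """Nadaraya-Watson estimate of E[e_t | e_{t-1} = e_prev]."""
        w = np.exp(-0.5 * ((lagged - e_prev) / h) ** 2)
        return float((w * target).sum() / (w.sum() + 1e-12))

    correction = nw_predict(e[-1], e[:-1], e[1:])          # predicted next residual
    forecast = float(fit.forecast(steps=1)[0]) + correction
    print(round(forecast, 4))
    ```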

  • Comparative study of point matching method with spectral method on numerical solution electromagnetic problems   Order a copy of this article
    by Mahmoud Behroozifar 
    Abstract: The present study compares the point matching method and the spectral method for solving integral equations arising in the electromagnetic domain. The point matching method, a traditional approach, is based on basis functions and most of the time results in a singular and ill-posed system of nonlinear equations. To prevent these inconveniences, the physical structure of the object must be altered in some cases, which yields a high error in the results and requires high CPU time and memory usage. Also, in most cases this method converges slowly and leads to a singular and ill-posed system. Consequently, applying the point matching method to this problem yields an approximate solution with low accuracy and a high computation volume. As an alternative, we present a spectral method based on Bernstein polynomials (BPs) as a robust candidate. Employing the BPs reduces the problem to a system of algebraic equations. The other merits of the presented method are faster convergence and avoidance of a singular system.
    Keywords: Bernstein polynomials; electrostatic; micro strip; point matching method; spectral method.
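    The spectral ingredient can be illustrated by expanding a smooth function in the Bernstein basis B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i) on [0,1] and solving a small collocation system, which is the kind of algebraic system the method reduces the integral equation to. The target function below is an arbitrary example, not one from the paper.
    ```python
    # Bernstein-basis collocation on [0, 1]; n and the nodes are assumptions.
    import numpy as np
    from math import comb

    def bernstein_matrix(t, n):
        """Columns are B_{0,n}(t), ..., B_{n,n}(t)."""
        return np.column_stack([comb(n, i) * t**i * (1 - t)**(n - i)
                                for i in range(n + 1)])

    n = 10
    t = np.linspace(0, 1, n + 1)                # collocation nodes
    A = bernstein_matrix(t, n)
    f = np.exp(t) * np.sin(3 * t)               # function to approximate
    c = np.linalg.solve(A, f)                   # Bernstein coefficients

    tt = np.linspace(0, 1, 200)
    err = np.abs(bernstein_matrix(tt, n) @ c - np.exp(tt) * np.sin(3 * tt)).max()
    print(f"max error on [0, 1]: {err:.2e}")
    ```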

  • RCRE: radical-aware causal relationship extraction model oriented in the medical field   Order a copy of this article
    by Xiaoqing Li, Guangli Zhu, Zhongliang Wei, Shunxiang Zhang 
    Abstract: In massive medical texts, the accuracy of causal relationship extraction is relatively low because of a special characteristic of this domain: the high correlation between semantics and radicals. To improve the extraction accuracy, this paper proposes a radical-aware causal relationship extraction model oriented to the medical field. The BERT pre-training model is used to extract character-level features, which contain rich context information. To further capture the semantics of characters, the Word2Vec model is used to extract radical features. Finally, the two kinds of features are concatenated and passed into the extraction model to obtain the extraction results. Experimental results show that the proposed model can improve the accuracy of causal relationship extraction in medical texts.
    Keywords: causal relationship extraction; the medical field; radical features; BERT model; Word2Vec model.

  • Detection of computationally intensive functions in a medical image segmentation algorithm based on an active contour model   Order a copy of this article
    by Carlos Gulo, Antonio Sementille, João Tavares 
    Abstract: Common image segmentation methods are computationally expensive, particularly when run on large medical datasets, and require powerful hardware to achieve image-based diagnosis in real time. For a medical image segmentation algorithm based on an active contour model, our work presents an efficient approach that detects computationally intensive functions and adapts the implementation for improved performance. We employ profiling methods that assess algorithm performance taking into account the overall cost of execution, including time, memory access and performance bottlenecks. We apply performance analysis techniques commonly available in traditional operating systems, which obviates the need for new setup or measurement techniques and ensures a short learning curve. The article presents guidelines to aid researchers in a) using profiling tools and b) detecting and checking potential optimisation snippets in medical image segmentation algorithms by measuring overall performance bottlenecks.
    Keywords: medical image processing and analysis; profiling tools; performance analysis; high-performance computing.
    DOI: 10.1504/IJCSE.2022.10050929
     
  • Spaced retrieval therapy mobile application for Alzheimer's patients: a usability testing   Order a copy of this article
    by Kholoud Aljedaani, Reem Alnanih 
    Abstract: Alzheimer's disease is the most common type of dementia, and statistics predict a sharp increase in patient numbers by 2050. Many applications support patients in their daily activities and help them to engage in society. However, designing an acceptable and usable interface for this type of user is challenging. Spaced Retrieval Therapy (SRT) is a non-pharmacological therapy for Alzheimer's disease that helps reduce the high cost of treatment. The SRT application helps patients to remember their vital information after a few sessions. In this paper, the authors develop the proposed application, which applies a non-pharmacological therapy to reduce the cost of treatments and help Alzheimer's patients engage in society, and present the findings on its usability. The usability test included 20 older adults divided into two groups (10 healthy and 10 with Alzheimer's). Each group comprises two smaller groups (5 each) to test the two types of interface. A list of tasks was given to both groups during the test, and the task times and error counts were collected. A post-task questionnaire evaluated the level of difficulty of each task. The results confirmed that the Alzheimer's group needed more time to complete the tasks than the healthy elderly group. Based on the post-task questionnaire, the healthy elderly group finds the default user interface simpler than the adapted one, in contrast with the Alzheimer's patients, who performed faster with the adapted user interface. Two recommendations follow: 1) use voice recognition instead of typing on keyboards, because the typing tasks took the longest time in observation and some Alzheimer's patients could not complete them although they can read and write; 2) thicken the item borders in the menu, because most errors result from confusion between the items.
    Keywords: spaced retrieval therapy; Alzheimer's patients; usability testing; mobile application; designing user interface.
    DOI: 10.1504/IJCSE.2022.10050973
     
  • Design of heuristic model to improve block-chain-based sidechain configuration   Order a copy of this article
    by Nisha Balani, Pallavi V. Chavan 
    Abstract: Data security is a major concern for any modern-day network deployment. Blockchain resolves security issues to a large extent and is nowadays widely accepted for secure transactions and network communications. Since there is no limitation on the amount of data being stored, blockchain-based networks tend to become slow as the length of the main blockchain increases. To overcome this issue, the concept of the sidechain is introduced. With sidechains, blockchain systems become faster while inheriting the characteristics of blockchain, including security, transparency and traceability. This paper proposes a solution for creating context-aware sidechains to increase system performance using a heuristic approach. The proposed algorithm assists in the creation of customised sidechains via optimisation of blockchain mining delay using stochastic modelling. It generates a large number of stochastic sidechain combinations, evaluates them on the basis of mining delay, and selects the optimal configuration. The proposed model is evaluated under different network conditions by varying network size and traffic density.
    Keywords: blockchain; sidechain; data sharing; fitness; QoS; blockchain mining delay; computation.
    DOI: 10.1504/IJCSE.2022.10050704
     
  • Joint optimisation of feature selection and SVM parameters based on an improved fireworks algorithm   Order a copy of this article
    by Xiaoning Shen, Jiyong Xu, Mingjian Mao, Jiaqi Lu, Liyan Song, Qian Wang 
    Abstract: In order to reduce redundant features and improve classification accuracy, an improved fireworks algorithm for the joint optimisation of feature selection and SVM parameters is proposed. A new fitness evaluation method is designed, which adjusts the punishment degree adaptively as the number of selected features increases. A differential mutation operator is introduced to enhance the information interaction among fireworks and improve the local search ability of the fireworks algorithm. A fitness-based roulette wheel selection strategy is proposed to reduce the computational complexity of the selection operator. Three groups of comparisons on 14 UCI classification datasets of increasing scale validate the effectiveness of our strategies and the significance of joint optimisation. Experimental results show that the proposed algorithm can obtain higher classification accuracy with fewer features.
    Keywords: fireworks algorithm; support vector machines; feature selection; parameter optimisation; joint optimisation.

  • Statistical analysis for predicting residents' travel mode based on random forest   Order a copy of this article
    by Lei Chen, Zhengyan Sun, Shunxiang Zhang, Guangli Zhu, Subo Wei 
    Abstract: Random forest has achieved good results in prediction tasks, but owing to the complexity of travel modes and the uncertainty of random forest, the prediction accuracy for travel mode is low. To improve the prediction accuracy, this paper proposes a residents' travel mode prediction method based on random forest. To extract valuable feature information, questionnaire survey data are collected and preprocessed with three appropriate methods. Then, each feature is analysed by a statistical learning method to obtain the important features for transportation selection. Finally, a random forest is constructed to predict residents' selection of transportation mode. The parameters of the random forest are modified and improved to achieve higher prediction accuracy. The experimental results show that the proposed method effectively improves the prediction accuracy of travel mode.
    Keywords: random forest; residents’ travel mode; statistical analysis.

  • Wireless optimisation positioning algorithm with the support of node deployment   Order a copy of this article
    by Xudong Yang, Chengming Luo, Luxue Wang, Hao Liu, Lingli Zhang 
    Abstract: Position is one of the basic attributes of an object and a key enabler of its collaborative operation. As a distributed sensing method, wireless sensor networks (WSNs) have become a feasible positioning solution, especially in environments where satellite signals are denied. Considering that node deployment is the basis of target positioning in WSNs, this paper first studies the optimal deployment of wireless nodes and then the optimal positioning of mobile targets. Based on the least squares equation, a feature matrix that characterises the positioning error is derived so that the positioning error caused by wireless node deployment is minimised. Following that, the positioning results are refined using particle swarm optimisation, which gives the mobile target coarse-to-fine accuracy. The results indicate that the proposed algorithm can reduce the influence of network topology on positioning error, which is critical for some location-based applications.
    Keywords: distributed sensing; wireless positioning; node deployment; matrix eigenvalues; particle swarm.

  • CNN-based battlefield classification and camouflage texture generation for real environments   Order a copy of this article
    by Sachi Choudhary, Rashmi Sharma 
    Abstract: It is critical to understand the environment in which military forces are deployed. For self-defence and greater concealment, they should camouflage themselves; defence systems use camouflage to hide personnel and equipment. The industry demands an intelligent system that can categorise the battlefield before generating texture for camouflaging assets and objects, allowing them to adopt the salient features of the scene. In this study, a CNN-based battlefield classification model has been developed to learn background information and classify the terrain. The study also develops textures for specific terrains by matching their salient features, boosting the effectiveness of the camouflage. Saliency maps have been used to measure the effectiveness of blending a camouflaged object into an environment.
    Keywords: digital camouflage; terrain classification; battlefield classification; camouflage generation; scene classification; colour clustering; saliency map.

  • Investigation on the optimisation of Cholesky decomposition algorithm based on SIMD-DSP   Order a copy of this article
    by Huixiang Li, Huifu Zhang, Anxing Xie, Yonghua Hu, Wei Liang 
    Abstract: With the development of high-performance SIMD-DSP processors, correspondingly efficient algorithms for matrix decomposition play an important role in exploiting the hardware performance of such processors. Cholesky decomposition is a fast decomposition method for symmetric positive definite matrices, widely used in matrix inversion and the solution of linear equations. In this paper, we optimise the algorithm in several ways according to the hardware characteristics of the FT-M7002 processor. If the hardware has on-chip double-buffered memory, the parallel process of DMA transfer and calculation is specially designed, which can hide most of the time cost of data movement and further improve the algorithm's performance. The experimental results on the FT-M7002 processor show that the performance of the optimised algorithm is 3.8~5.64 times that of the serial algorithm, and 1.39~2.14 times that of the TI library function.
    Keywords: Cholesky decomposition; DSP; SIMD.
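    For reference, a plain (unblocked) Cholesky factorisation A = L L^T looks as follows in NumPy. The FT-M7002-specific SIMD vectorisation and DMA double-buffering described in the abstract are not shown; the inner products over previous columns are the parts such hardware accelerates.
    ```python
    # Column-by-column Cholesky factorisation of a symmetric positive definite A.
    import numpy as np

    def cholesky(A):
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        L = np.zeros_like(A)
        for j in range(n):
            # diagonal entry: sqrt of the remaining Schur-complement element
            L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
            # update of the column below the diagonal
            L[j + 1:, j] = (A[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
        return L

    M = np.random.default_rng(0).normal(size=(6, 6))
    A = M @ M.T + 6 * np.eye(6)                 # symmetric positive definite test matrix
    L = cholesky(A)
    print(np.allclose(L @ L.T, A))              # True
    ```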

  • JALNet: joint attention learning network for RGB-D salient object detection   Order a copy of this article
    by Xiuju Gao, Jianhua Cui, Jin Meng, Huaizhong Shi, Songsong Duan, Chenxing Xia 
    Abstract: Existing RGB-D salient object detection (SOD) methods mostly explore the complementary information between depth features and RGB features. However, these methods ignore the bi-directional complementarity between RGB and depth features. From this view, we propose a joint attention learning network (JALNet) to learn the cross-modal mutual complementary effect between RGB images and depth maps. Specifically, two joint attention learning modules are designed, namely a cross-modal joint attention fusion module (JAFM) and a joint attention enhance module (JAEM). The JAFM learns cross-modal complementary information from the RGB and depth features, which strengthens the interaction of information and the complementarity of useful information. At the same time, we utilise the JAEM to enlarge receptive field information to highlight salient objects. We conducted comprehensive experiments on four public datasets, which proved that our proposed JALNet outperforms 16 state-of-the-art (SOTA) RGB-D SOD methods.
    Keywords: salient object detection; depth map; bi-directional complementarity; cross-modal features.

  • Classifying blockchain cybercriminal transactions using hyperparameter tuned supervised machine learning models   Order a copy of this article
    by Rohit Saxena, Deepak Arora, Vishal Nagar 
    Abstract: Bitcoin is a crypto asset whose transactions are recorded on a decentralised, publicly accessible ledger. The real-world identity of a Bitcoin blockchain owner is masked behind a pseudonym, known as an address. Bitcoin is therefore widely thought to provide a high level of anonymity, which is one of the reasons for its widespread use in criminal operations such as ransomware attacks, gambling, etc. Consequently, classification and prediction of the activities and addresses of diverse cybercriminal users in the Bitcoin blockchain are in demand. This research presents a classification of Bitcoin blockchain user activities and addresses associated with illicit transactions using supervised machine learning (ML). Labelled samples of user activities are prepared from the unlabelled dataset available in the Blockchair repository and the labelled dataset at WalletExplorer, and are trained using classification models from the decision tree, ensemble, Bayesian and instance-based learning families. To balance the classes of the dataset, weighted mean and synthetic minority oversampling principles are employed. The models' cross-validation (CV) accuracy is assessed. Extra Trees emerged as the best classification model, whereas Gaussian Naive Bayes performed the worst.
    Keywords: blockchain; Bitcoin; supervised machine learning; classification; GridSearchCV.

  • An improved blind/referenceless image spatial quality evaluator algorithm for Image Quality Assessment   Order a copy of this article
    by Xuesong Li, Jinfeng Pan, Jianrun Shang, Alireza Souri, Mingliang Gao 
    Abstract: Image quality assessment (IQA) methods are generally studied in the spatial or transform domain. Because the BRISQUE algorithm evaluates image quality based only on natural scene statistics in the spatial domain, frequency features extracted from the modulation transfer function (MTF) are applied to improve its performance. The MTF is estimated using the slanted-edge method. A two-dimensional grey fitting algorithm is utilised to estimate the edge slope more accurately. A third-order Fermi function is then used to fit the preliminary estimated edge spread function and reduce the influence of aliasing on MTF estimation. Features such as the crucial frequency and the MTF value at the Nyquist frequency are calculated and added to the BRISQUE method to assess image quality. Experimental results on image quality assessment databases illustrate that the proposed method outperforms the BRISQUE method and some other common methods, based on the linear and nonlinear correlation between the image quality assessed by the methods and the subjective scores.
    Keywords: image assessment; modulation transfer function; Fermi function; feature extraction.

  • Simple and compact finite difference formulae using real and complex variables   Order a copy of this article
    by Yohei Nishidate 
    Abstract: A new set of compact finite difference formulae is derived by simple combinations of the real and complex Taylor series expansions. The truncation error is fourth-order in the derived formulae for approximating first- to fourth-order derivatives. Although there exist complex-stencil finite difference formulae with better truncation errors, our formulae are computationally cheaper, requiring only three points for first- to third-order derivatives and four points for fourth-order derivatives. The derived formulae are tested on approximating derivatives of relatively simple and highly nonlinear functions used in other literature. Although the new formulae suffer from subtractive cancellation, it is demonstrated that they outperform finite difference formulae of comparable computational cost for relatively large step sizes.
    Keywords: Taylor series expansion; approximation in the complex domain; finite difference methods; compact finite difference formula; numerical approximation.
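    The real/complex Taylor-series idea behind such formulae is easiest to see in the classical complex-step approximation f'(x) ≈ Im f(x + ih)/h, which involves no subtraction and hence no cancellation, unlike a real central difference. The sketch below shows only this standard formula, not the fourth-order three- and four-point formulae derived in the paper; the test function is a common one from the complex-step literature.
    ```python
    # Complex-step first derivative versus a real central difference.
    import numpy as np

    def f(x):
        return np.exp(x) / (np.cos(x) ** 3 + np.sin(x) ** 3)

    x0, h = 1.5, 1e-20
    complex_step = np.imag(f(x0 + 1j * h)) / h            # no subtractive cancellation
    central_diff = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6   # limited by cancellation

    print(complex_step, central_diff)
    ```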

  • A generalised incomplete no-equilibria transformation method to construct a hidden multi-scroll system with no-equilibrium   Order a copy of this article
    by Lihong Tang, Zongmei He, Yanli Yao, Ce Yang 
    Abstract: At present, there is a lot of research on multi-scroll chaotic systems with equilibrium points, but few studies on no-equilibrium multi-scroll chaotic systems. This paper proposes a generalised incomplete no-equilibria transformation method to design no-equilibrium multi-scroll chaotic systems. Firstly, a no-equilibrium chaotic system is constructed by adopting the proposed method. Phase plots and Lyapunov exponents show that the constructed no-equilibrium chaotic system can generate hidden hyperchaotic attractors. Then, a no-equilibrium multi-scroll hyperchaotic system is realised by introducing multi-level logic pulse signals. Theoretical analysis and numerical simulation show that the designed no-equilibrium multi-scroll hyperchaotic system can generate hidden multidirectional multi-double-scroll attractors, including 1-D, 2-D and 3-D hidden multi-scroll hyperchaotic attractors. Finally, an analogue circuit of the no-equilibrium multi-scroll hyperchaotic system is implemented using commercial electronic elements, and various typical hidden multi-scroll attractors are verified on the MULTISIM platform.
    Keywords: no-equilibrium; hidden attractors; multi-scroll; multi-level pulse.

  • Selection of the best hybrid spectral similarity measure for characterising marine oil spills from multi-platform hyperspectral datasets   Order a copy of this article
    by Deepthi Deepthi, Deepa Sankar, Tessamma Thomas 
    Abstract: Marine oil pollution causes major economic crises in major industrial sectors such as fishing, shipping and tourism, and affects marine life even decades afterwards, necessitating very quick detection and remediation. Unfortunately, it is very difficult to detect oil from remote-sensing hyperspectral images (HSI), as oil slicks and seawater have nearly identical spectral properties. Therefore, a cohesive and synergistic hybrid spectral similarity measure (HSSM), evaluated for the multi-class, multi-platform classification of hyperspectral marine oil spill images, is identified and recommended in this paper. HSI procured from spaceborne (Earth Observation-1 (EO-1) Hyperion) and airborne (Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)) platforms are employed to discriminate marine spectral classes. Statistical parameters such as overall accuracy (OA), Kappa, ROC/PR curves, AUC/PRAUC, the weighted Youden index (Jw), the F1 score and noise performance provide crucial evidence for identifying the best HSSM, the spectral information divergence-chi-square distance (SID-CHI). The stochastic capability of SID in capturing spectral variations among bands and the robustness to noise inherited from CHI are significant for the improved accuracy attained by SID-CHI over other HSSMs. From the observations, it is established that SID-CHI can be used as a novel method for the multi-class, multi-platform classification of marine oil spill hyperspectral datasets.
    Keywords: hybrid spectral similarity measure; hyperspectral image; ROC curve; weighted Youden index; F1 score; optimal cut-off value.

  • A novel fertility intention prediction scheme based on naive Bayes   Order a copy of this article
    by Meijiao Zhang, Lan Yang, Weiping Jiang, Gejing Xu, Guoliang Hu 
    Abstract: In today's society, ageing has developed into a global problem, and the ageing of the population is related to the decrease in fertility intention. It is therefore meaningful to predict people's fertility intention. In this paper, we propose a fertility intention prediction scheme based on a polynomial naive Bayesian model, called the FPB scheme. In the proposed scheme, we first extract various features from the data we collected and divide the entire dataset into three labels according to the level of fertility intention. Then, we use these data to construct a classifier. Next, we use this classifier to design a prediction algorithm for fertility intention. Finally, we conduct extensive experiments to evaluate the performance of the proposed scheme. The experimental results show that the proposed FPB scheme has high accuracy and can help families to make accurate fertility decisions.
    Keywords: fertility intention; polynomial naive Bayes; prediction algorithm.

  • SAPNN: self-adaptive probabilistic neural network for medical diagnosis   Order a copy of this article
    by Yibin Xiong, Jun Wu, Qian Wang, Dandan Wei 
    Abstract: Medical diagnosis has always been a hot topic of great concern in the medical field. For this purpose, a self-adaptive probabilistic neural network (SAPNN) is proposed in this paper. Firstly, a hybrid cuckoo search (HCS) algorithm is proposed. Secondly, HCS is used in the probabilistic neural network to adapt the smoothing factor parameters. To evaluate the proposed SAPNN accurately, disease datasets for breast cancer, diabetes and Parkinson's disease were used for testing. Finally, comparison with several other methods showed that the accuracy of SAPNN was the best in all cases, with accuracy of 97.51%, 96.53%, 75.74% and 96.61%; recall of 97.6%, 99.12%, 79.74% and 88.24%; specificity of 96.15%, 88.88%, 59.03% and 95.31%; and precision of 97.85%, 94.32%, 80.12% and 85%, respectively. The results of the various evaluation indexes show that the proposed SAPNN is a new method that can be applied to medical diagnosis.
    Keywords: ancillary diagnosis of disease; cuckoo search; information sharing; mutation strategy; probabilistic neural network.

  • Minimum redundancy maximum relevance and VNS-based gene selection for cancer classification in high-dimensional data   Order a copy of this article
    by Ahmed Bir-Jmel, Sidi Mohamed Douiri, Souad Elbernoussi 
    Abstract: DNA microarray is a technique for measuring the expression levels of a huge number of genes; these levels have a significant impact on cancer classification tasks. In DNA datasets, the number of genes exceeds the number of samples, which makes the presence of irrelevant or redundant genes likely and penalises the performance of classifiers. For that reason, the development of new methods for gene selection is an active research subject. In this paper, two hybrid multivariate filters for gene selection, named VNSMI and VNSCor, are presented. The two methods surpass univariate filters by considering the possible interactions between genes through the search for an optimal subset of genes with minimum redundancy and maximum relevance (MRMR). In the first stage of our proposed methods, we use a univariate filter to select the best-ranked genes based on information theory and the Pearson correlation coefficient (PCC). Then, we apply the variable neighbourhood search (VNS) metaheuristic coupled with an innovative stochastic local search (SLS) algorithm to find the final subset of genes that maximises the MRMR objective function. To evaluate the proposed method, experiments were performed on six well-replicated microarray datasets. The obtained results show that the proposed approach leads to encouraging results in terms of accuracy and the number of selected genes, and consistent improvements are observed with the 1NN and SVM classifiers.
    Keywords: gene selection; feature selection; cancer classification; VNS; stochastic local search; normalised mutual information; MRMR; DNA microarray.
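    The MRMR criterion that both filters optimise can be sketched with a plain greedy pass: repeatedly add the gene whose mutual information with the class (relevance) minus its mean mutual information with already selected genes (redundancy) is largest. The VNS/SLS search of the paper is not reproduced, and the data below are random stand-ins for gene expression values.
    ```python
    # Greedy MRMR-style selection of 5 features on toy data.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 40))                    # 60 samples, 40 "genes"
    y = (X[:, 0] + X[:, 1] > 0).astype(int)          # class driven by genes 0 and 1

    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < 5:
        scores = {}
        for j in set(range(X.shape[1])) - set(selected):
            redundancy = np.mean([mutual_info_regression(X[:, [j]], X[:, s],
                                                         random_state=0)[0]
                                  for s in selected])
            scores[j] = relevance[j] - redundancy    # MRMR difference criterion
        selected.append(max(scores, key=scores.get))
    print(selected)
    ```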

  • Synthesis and evaluation of the structure of CAM memory by QCA computing technique   Order a copy of this article
    by Nirupma Pathak, Neeraj Kumar Misra, Santosh Kumar 
    Abstract: The lithographically based CMOS technology revolutions of the past few years are long behind us, and the technology used in today's microelectronics faces significant challenges in terms of speed, area and power consumption. The purpose of this article is to design a novel content-addressable memory (CAM) in the QCA domain. The article presents a compact structure for the novel CAM based on GDI-CMOS and QCA technology, respectively. Compared with recently reported designs in the literature, the area, latency, majority gate count and cell count of the proposed CAM are reduced by more than 78.57%, 50%, 40% and 67%, respectively. In addition, the clock delay of the CAM cell design is lower than other reported results. This cutting-edge QCA-based CAM structure is not only unique but also very cost-effective for today's nano-devices. The proposed CAM design improves performance while making modern device development simpler and more cost-effective.
    Keywords: CAM memory; QCA; GDI-CMOS; nano-electronics.

  • Canopy centre-based fuzzy C-means clustering for enhancement of soil fertility prediction   Order a copy of this article
    by M. Sujatha, C.D. Jaidhar 
    Abstract: For plants to develop, fertile soil is necessary, and estimating soil parameters over time is crucial for enhancing soil fertility. Sentinel-2's remote sensing technology produces images that can be used to gauge soil parameters. In this study, values for soil parameters such as electrical conductivity, pH, organic carbon and nitrogen are derived using Sentinel-2 data. To increase clustering accuracy, this study proposes canopy centre-based fuzzy C-means clustering and compares it with manual labelling and other clustering techniques, such as canopy density-based, expectation maximisation, farthest-first, k-means and fuzzy C-means clustering. The proposed clustering achieved the highest clustering accuracy of 78.42%. Machine learning-based classifiers, including naive Bayes, support vector machine, decision trees and random forest (RF), were applied to classify soil fertility. On the dataset labelled by the proposed clustering, the RF classifier achieves a high classification accuracy of 99.69% with 10-fold cross-validation.
    Keywords: clustering; classification; machine learning; remote sensing; soil fertility.

  • A comprehensive understanding of popular machine translation evaluation metrics   Order a copy of this article
    by Md. Adnanul Islam, Md. Saddam Hossain Mukta 
    Abstract: Machine translation is one of the pioneering applications of natural language processing and artificial intelligence. Automatic evaluation of the translation performance of machine translators is one of the most challenging tasks, as manual evaluation of large volumes of document translations is infeasible in practice. Thus, to facilitate automatic evaluation of translation performance, several metrics have been introduced and widely utilised. Although these metrics cannot match the quality of human evaluation, they are popularly employed for the automatic evaluation of translation quality across multifarious application domains. This article discusses three such widely used evaluation metrics, BLEU, METEOR and TER, with relevant details, demonstrating step-by-step calculations. The main novelty of this article lies in the consideration of several example translations to present and clarify the calculation process of these three popular evaluation metrics for measuring the performance or quality of machine translation. Moreover, the article presents a comparative analysis of the three metrics using two different datasets to reveal their similarities and differences in behaviour.
    Keywords: evaluation metrics; translation performance; bi-lingual evaluation understudy; BLEU; METEOR; translation edit rate; TER; machine translation.
    DOI: 10.1504/IJCSE.2021.10044073
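    As a small illustration of one of the three metrics discussed, sentence-level BLEU can be computed with NLTK's modified n-gram precision and smoothing; the reference and candidate sentences below are invented for the example.
    ```python
    # Sentence-level BLEU with smoothing (NLTK).
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = [["the", "cat", "sat", "on", "the", "mat"]]
    candidate = ["the", "cat", "is", "on", "the", "mat"]

    score = sentence_bleu(reference, candidate,
                          smoothing_function=SmoothingFunction().method1)
    print(f"BLEU = {score:.3f}")
    ```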
     
  • A named entity recognition method towards product reviews based on BiLSTM-attention-CRF   Order a copy of this article
    by Shunxiang Zhang, Haiyang Zhu, Hanqing Xu, Guangli Zhu, Kuan-Ching Li 
    Abstract: Named entity recognition (NER) for product reviews aims to identify domain-dependent named entities (e.g., organisation names, product names, etc.) from product reviews. Owing to the fragmented and unstructured nature of product reviews, traditional methods find it difficult to capture domain feature information and dependencies precisely. To solve this problem, we propose an NER method for product reviews based on BiLSTM-attention-CRF. Firstly, three kinds of features (character, word and part of speech) are integrated into the feature representation of the texts; the final feature vector is obtained through training, mapping and linking the selected features. Then, the BiLSTM network is built to extract text features, and the attention mechanism is adopted to strengthen the capture of local features. Finally, CRF is applied to annotate and identify the entities. Compared with existing models, the proposed method is demonstrated to effectively recognise named entities from product reviews.
    Keywords: named entity recognition; NER; product reviews; BiLSTM; attention; conditional random field; CRF.

  • Efficient and non-interactive ciphertext range query based on differential privacy   Order a copy of this article
    by Peirou Feng, Qitian Sheng, Jianfeng Wang 
    Abstract: Differentially private range query schemes satisfy differential privacy by adding or deleting records while creating the index, which suffers from data loss in query results due to negative noise. Recently, Sahin et al. (2018) proposed a differentially private index with overflow arrays in ICDE 2018, which ensures the integrity of query results. However, this scheme has two drawbacks: 1) some private information (e.g., query requests or frequencies) may be leaked because queries are performed over a plaintext index; 2) the overflow arrays bring extra storage overhead. To this end, we present a non-interactive ciphertext range query scheme based on differential privacy and comparable encryption. Our scheme protects query privacy, since queries are performed over ciphertext based on comparable encryption. Besides, the experimental results show that the proposed scheme saves storage overhead.
    Keywords: range query; differential privacy; short comparable encryption.
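    The baseline idea such schemes build on can be sketched with the Laplace mechanism: perturb histogram bin counts with Laplace noise of scale 1/ε so that count-style range queries satisfy ε-differential privacy. The comparable-encryption layer and the overflow handling of Sahin et al. are not shown; the bin widths and ε below are assumptions.
    ```python
    # Noisy histogram for differentially private range counts.
    import numpy as np

    rng = np.random.default_rng(0)
    values = rng.integers(0, 100, size=1000)            # toy attribute values
    bins = np.arange(0, 101, 10)                        # index over [0, 100)
    counts, _ = np.histogram(values, bins=bins)

    eps = 0.5
    noisy = counts + rng.laplace(scale=1.0 / eps, size=counts.size)

    def range_count(lo_bin, hi_bin):
        """Noisy count of records whose value falls in bins [lo_bin, hi_bin)."""
        return float(noisy[lo_bin:hi_bin].sum())

    print(range_count(2, 5), int(counts[2:5].sum()))    # noisy vs. true answer
    ```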

  • Joint training with the edge detection network for salient object detection   Order a copy of this article
    by Gu Zongyun, Kan Junling, Ma Chun, Wang Qing, Li Fangfang 
    Abstract: The U-shaped network has great advantages in object detection tasks. However, most previous salient object detection studies still suffer from inaccurate predictions caused by unclear object boundaries. Considering the complementarity of the information between the salient object and the salient edge, we designed a new network to perform joint training with the edge detection task effectively in three steps. Firstly, we add a prediction branch on the bottom-up pathway to capture the edges of salient objects. Secondly, salient object features, global context, integrated low-level details and high-level semantic information are extracted by progressive fusion. Finally, the salient edge feature is concatenated with the salient object feature in the last layer of the top-down pathway. Since the salient edge feature contains much edge and location information, this feature fusion locates salient objects more accurately. The results of experiments on five benchmark datasets demonstrate that the proposed approach achieves competitive performance.
    Keywords: deep learning; salient object detection; SOD; U-shape architecture; edge detection; feature pyramid network.
    DOI: 10.1504/IJCSE.2022.10045026
     
  • Application of deep learning approach for recognition of voiced Odia digits   Order a copy of this article
    by Prithviraj Mohanty, Jyoti Prakash Sahoo, Ajit Kumar Nayak 
    Abstract: Automatic speech recognition in a regional language like Odia is a challenging field of research, and voiced Odia digit recognition helps in designing automatic voice dialler systems. In this study, a deep learning approach is used for the recognition of voiced Odia digits. The spectrogram representations of the voiced samples are given as input to the deep learning models after feature extraction using MFCC. Various performance metrics are obtained from several experiments with different epoch sizes and variations in the dataset using the train-validate-test ratio. Experimental outcomes reveal that the CNN model provides an improved accuracy of 91.72% at an epoch size of 500 with a split ratio of 80-10-10, compared to the other two models that use VSL and DNN. The reported outcomes also reveal that the proposed CNN model has better average recognition accuracy than contemporary models such as HMM and SVM.
    Keywords: automatic speech recognition; ASR; convolutional neural network; CNN; deep neural network; DNN; MFCC; HMM; SVM; spectrogram.
    DOI: 10.1504/IJCSE.2022.10047843
     
  • Optimisations of four imputation frameworks for performance exploring based on decision tree algorithms in big data analysis problems   Order a copy of this article
    by Jale Bektaş, Turgay Ibrikci 
    Abstract: How to treat missing values is a problem commonly confronted in big data analysis, and various applications have therefore been developed around imputation strategies. This study focuses on four imputation frameworks proposing novel perspectives based on expectation-maximisation (EM), self-organising map (SOM), k-means and multilayer perceptron (MLP). Initially, several transformation steps, such as normalisation, standardisation, interquartile range and wavelet transforms, were applied. Then, the imputed datasets were analysed using decision tree algorithms (DTAs) with optimised parameters. These analyses showed that DTAs were not strikingly affected by any data transformation technique except the interquartile range. Even though the dataset contains a missing-value ratio of 33.73%, the EM imputation framework provided a performance increase of 0.42% to 3.14%. DTAs based on the C4.5 and NBTree algorithms were more stable for all large imputed datasets. Furthermore, realistic performance measurement of any preprocessing experiment based on C4.5 can be proposed to avoid time complexity.
    Keywords: preprocessing; data mining; multiple imputation; decision tree classifier; machine learning; big data analytics.

  • A new approach based on generalised multiquadric and compactly supported radial basis functions for solving two-dimensional Volterra-Fredholm integral equations   Order a copy of this article
    by Dalila Takouk, Rebiha Zeghdane, Belkacem Lakehali 
    Abstract: This article describes a numerical scheme to solve two-dimensional nonlinear Volterra-Fredholm integral equations (IEs). The method estimates the solution using compactly supported radial basis functions and compares it with the approximation obtained using generalised multiquadric radial basis functions with the optimal strategy for the exponent β. Integrals appearing in the solution procedure are approximated using shifted Legendre-Gauss-Lobatto nodes and weights. The method is mathematically simple and truly meshless, and it can be used for high-dimensional problems because it does not require any cell structures. Finally, numerical experiments are given to show and test the applicability of the presented approach and confirm the theoretical analysis.
    Keywords: Volterra-Fredholm integral equations; two-dimensional integral equations; generalised multiquadric radial basis functions; compactly supported radial basis functions; interpolation method; shifted Legendre-Gauss-Lobatto nodes and weights.

  • Cyber-security threats and vulnerabilities in 4G/5G network enabled systems   Order a copy of this article
    by Shailendra Mishra 
    Abstract: In the era of 4G and 5G networks, many cyber-security issues have arisen, and security is an important issue that should not be overlooked. The security system consists of standardisation, network policies, network arrangement and network positioning. This study investigates the challenges related to cyber-security threats and vulnerabilities within 4G/5G networks and how they affect their use. The primary and secondary data analysis indicates that significant regulatory intervention is required to prevent serious cyber-security issues within these networks, which also affect users' privacy. Compared with the current intrusion detection system, the new intrusion detection system has better QoS. Intrusion detection rates can vary based on the number of nodes arriving in the network. Support vector machines are used to detect and prevent intrusions within a network and to identify attackers inside it. In terms of cyber-attack detection and mitigation, the solutions are fast and effective.
    Keywords: 4G and 5G enabled networks; IoT cyber-security; threats; vulnerabilities; machine learning; intrusion detection system.
    DOI: 10.1504/IJCSE.2022.10050323
     
  • Machine learning-based land usage identification using Haralick texture features of aerial images with Kekre's LUV colour space   Order a copy of this article
    by Sudeep D. Thepade, Shalakha Vijaykumar Bang, Rik Das, Zahid Akhtar 
    Abstract: Gathering useful insights about our planet Earth - its natural, man-made, physical and biological structures - is an engrossing study. Earth observation, besides being intuitive, also helps to mitigate the adverse impacts of human civilisation on the planet. Techniques for observing the Earth's surface include Earth surveying techniques, remote-sensing technology, etc. The properties measured by remote sensing motivate the study of land usage identification, which refers to the purpose for which land is used. The rapid increase in population and the immense growth in infrastructure and technology have led to massive urbanisation, posing a great number of challenges. Knowledge of land usage identification will help in developing strategies to tackle issues such as the depletion of forest areas, urban encroachment and the monitoring of natural disasters. This paper presents a more robust approach to land usage identification that extracts Haralick texture features from input aerial images of the Earth, considering their representation in two different colour spaces, namely RGB and Kekre-LUV. Comparing the results obtained with different machine learning classification algorithms, an ensemble of simple logistic and random forest classifiers yields the highest classification accuracy. (A minimal GLCM feature-extraction sketch follows this entry.)
    Keywords: grey level co-occurrence matrix; GLCM; random forest; simple logistic regression; land usage identification; remote sensing.
    DOI: 10.1504/IJCSE.2022.10045551
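
    A minimal sketch of the texture feature-extraction step, assuming scikit-image; the Kekre-LUV conversion (a linear transform of the RGB channels) is assumed to be applied per channel beforehand and is not reproduced here.

```python
# Illustrative GLCM/Haralick-style feature extraction with scikit-image.
# The Kekre-LUV colour conversion is assumed to happen before this step;
# this function only handles one already-converted 8-bit channel.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(channel_u8):
    """GLCM-based texture descriptors for one 8-bit image channel."""
    glcm = graycomatrix(channel_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-in for one channel of an aerial image tile.
rng = np.random.default_rng(0)
tile = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
features = texture_features(tile)   # vectors like this feed the ensemble classifier
print(features.shape)
```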
     

Special Issue on: CCPI'20 Smart Cloud Applications, Services and Technologies

  • A big data and cloud computing model architecture for a multi-class travel demand estimation through traffic measures: a real case application in Italy   Order a copy of this article
    by Armando Cartenì, Ilaria Henke, Assunta Errico, Marida Di Bartolomeo 
    Abstract: Big data and cloud computing offer an extraordinary opportunity to implement multipurpose smart applications for the management and control of transport systems. The aim of this paper is to propose a big data and cloud computing model architecture for multi-class origin-destination demand estimation based on the application of a bi-level transport algorithm using traffic counts on a congested network, also with a view to proposing sustainable policies at the urban scale. The proposed methodology has been applied to a real case study of travel demand estimation for the city of Naples (Italy), also aiming to verify the effectiveness of a sustainable policy that reduces traffic congestion by about 20% through en-route travel information. The results, although preliminary, suggest that the proposed methodology can estimate travel demand in real time or over pre-fixed time periods. (A toy count-based demand-correction step is sketched after this entry.)
    Keywords: cloud computing; big data; virtualisation; smart city; internet of things; transportation planning; demand estimation; sustainable mobility; simulation model.
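
    A toy, one-shot illustration of correcting a prior origin-destination matrix with link counts, assuming SciPy; the assignment proportions are fixed by hand here, whereas the paper's bi-level algorithm re-computes them via assignment on the congested network.

```python
# Toy GLS-style correction of a prior OD demand vector using observed link counts.
# The assignment proportions P are held fixed; a bi-level scheme would update them.
import numpy as np
from scipy.optimize import lsq_linear

P = np.array([[0.8, 0.1, 0.0],    # share of each OD pair using counted link 1
              [0.2, 0.7, 0.3],    # ... link 2
              [0.0, 0.2, 0.9]])   # ... link 3
counts = np.array([950.0, 1200.0, 800.0])   # observed link flows (veh/h)
prior = np.array([1000.0, 1100.0, 700.0])   # prior OD demand (veh/h)

w = 0.5   # weight of the prior relative to the counts
A = np.vstack([P, np.sqrt(w) * np.eye(3)])
b = np.concatenate([counts, np.sqrt(w) * prior])
result = lsq_linear(A, b, bounds=(0, np.inf))   # non-negative corrected demand
print("corrected OD demand:", np.round(result.x, 1))
```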

  • A methodology for introducing an energy-efficient component within the rail infrastructure access charges in Italy   Order a copy of this article
    by Marilisa Botte, Ilaria Tufano, Luca D'Acierno 
    Abstract: After the separation of rail infrastructure managers from rail service operators within the European Union in 1991, the need arose to define an access charge framework ensuring non-discriminatory access to the rail market. Essentially, it has to guarantee an economic balance for infrastructure managers' accounts. Currently, in the Italian context, access charge schemes neglect the actual energy consumption of rail operators and the related traction-energy costs borne by infrastructure managers. Therefore, we propose a methodology, integrating cloud-based tasks and simulation tools, for including this aspect within the infrastructure toll, thus making the system more sustainable. Finally, to show the feasibility of the proposed approach, it has been applied to a real Italian rail context, i.e., the Rome-Naples high-speed railway line. The results show that customising the access charges by considering the power supply required may generate a virtuous circle, increasing the energy efficiency of rail systems. (An illustrative form of such a toll component is sketched after this entry.)
    Keywords: cloud-based applications; rail infrastructure access charges; environmental component; energy-saving policies.
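
    Purely as an illustration of what an energy-related charge component could look like, a hedged sketch follows; the notation below is an assumption, not the paper's formulation.

```latex
% Purely illustrative form of an access charge with an energy component;
% symbols are assumptions, not the paper's notation.
% Base charge plus an energy term priced at the unit traction-energy cost p_E:
C_{\mathrm{toll}} = C_{\mathrm{base}} + p_{E}\, E(s, m)
% where E(s, m) is the traction energy estimated by simulation for a given
% service s and rolling stock m.
```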

  • Edge analytics on resource-constrained devices   Order a copy of this article
    by Sean Savitz, Charith Perera, Omer Rana 
    Abstract: Video and image cameras have become an important type of sensor within the Internet of Things (IoT) sensing ecosystem. Camera sensors can measure our environment with high precision, providing the basis for detecting more complex phenomena than other sensors, e.g., temperature or humidity sensors, can. This comes at a high computational cost in CPU, memory and storage resources, and requires consideration of various deployment constraints, such as lighting and the height of camera placement. Using benchmarks, this work evaluates object classification on resource-constrained devices, focusing on video feeds from IoT cameras. The models used in this research include MobileNetV1, MobileNetV2 and Faster R-CNN, which can be combined with regression models for precise object localisation. We compare the models in terms of their object classification accuracy and the demand they place on the computational resources of a Raspberry Pi. (A minimal on-device latency benchmark is sketched after this entry.)
    Keywords: internet of things; edge computing; edge analytics; resource-constrained devices; camera sensing; deep learning; object detection.
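
    A minimal on-device latency benchmark in the spirit of the evaluation above, assuming TensorFlow Lite; the model file, input shape and float input type are placeholders rather than the paper's exact setup, and on a Raspberry Pi the lighter tflite-runtime package would typically replace full TensorFlow.

```python
# Minimal latency benchmark for an on-device image classifier with TensorFlow Lite.
# The model file and float32 input are assumptions, not the paper's configuration.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_v2.tflite")  # hypothetical model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Random stand-in for a camera frame, shaped to the model's expected input.
frame = np.random.default_rng(0).random(tuple(inp["shape"]), dtype=np.float32)

timings = []
for _ in range(50):
    start = time.perf_counter()
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])   # class scores
    timings.append(time.perf_counter() - start)
print(f"median inference time: {1000 * np.median(timings):.1f} ms")
```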

  • Traffic control strategies based on internet of vehicles architectures for smart traffic management: centralised vs decentralised approach   Order a copy of this article
    by Houda Oulha, Roberta Di Pace, Rachid Ouafi, Stefano De Luca 
    Abstract: Real-time traffic control is one of the most widely adopted strategies for reducing traffic congestion. However, the effectiveness of this approach is constrained not only by the adopted framework but also by the available data. Indeed, computational complexity may significantly affect this kind of application, so the trade-off between effectiveness and efficiency must be analysed and the most appropriate traffic control strategy must be carefully evaluated. In general, there are three main control approaches in the literature: centralised control, decentralised control and distributed control, which is an intermediate approach. In this paper, the effectiveness of a centralised and a decentralised approach is compared on two network layouts. The results, evaluated not only in terms of a performance index based on total network delay but also in terms of emissions and fuel consumption, show that the considered centralised approach outperforms the adopted decentralised one, and this is particularly evident in the case of more complex layouts. (A toy local signal-setting step is sketched after this entry.)
    Keywords: cloud computing; internet of vehicles; transportation; centralised control; decentralised control; emissions; fuel consumption.
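
    As a toy illustration of a decentralised (isolated-intersection) signal-setting step, the sketch below applies Webster's classical cycle-length and green-split formulas; it is a textbook example, not one of the controllers compared in the paper.

```python
# Toy decentralised signal-setting step: Webster's optimal cycle and green splits
# for one isolated intersection (textbook formulas, not the paper's controller).
def webster_settings(flow_ratios, lost_time_s=12.0):
    """flow_ratios: critical flow / saturation flow ratio per signal stage."""
    Y = sum(flow_ratios)
    assert Y < 1.0, "intersection oversaturated"
    cycle = (1.5 * lost_time_s + 5.0) / (1.0 - Y)            # optimal cycle length [s]
    effective_green = cycle - lost_time_s
    greens = [effective_green * y / Y for y in flow_ratios]  # split greens by demand
    return cycle, greens

cycle, greens = webster_settings([0.35, 0.25])
print(f"cycle = {cycle:.1f} s, greens = {[round(g, 1) for g in greens]} s")
```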

  • ACSmI: a solution to address the challenges of cloud services federation and monitoring towards the cloud continuum   Order a copy of this article
    by Juncal Alonso, Maider Huarte, Leire Orue-Echevarria 
    Abstract: The evolution of cloud computing has changed the way in which cloud service providers offer their services and cloud customers consume them, moving towards the use of multiple cloud services in what is called multi-cloud. Multi-cloud is gaining interest with the expansion of IoT, edge computing and the cloud continuum, where distributed cloud federation models are necessary for effective application deployment and operation. This work presents ACSmI (Advanced Cloud Service Meta-Intermediator), a solution that implements a cloud federation supporting the seamless brokerage of cloud services. Technical details addressing the identified shortcomings are presented, including a proof of concept built on JHipster, Java, InfluxDB, Telegraf and Grafana. ACSmI contributes to relevant elements of the European Gaia-X initiative, specifically the federated catalogue and the continuous monitoring and certification of services. The experiments show that the proposed solution saves up to 75% of the effort DevOps teams spend discovering, contracting and monitoring cloud services. (A minimal monitoring-metric example is sketched after this entry.)
    Keywords: cloud service broker; cloud services federation; cloud services brokerage; cloud services intermediation; hybrid cloud; cloud service monitoring; multi-cloud; DevOps; cloud service level agreement; cloud service discovery; multi-cloud service management; cloud continuum.
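
    A minimal, hypothetical example of pushing a service-level metric into an InfluxDB/Grafana monitoring stack like the one named above, assuming the influxdb-client Python package; the measurement name, tags and connection settings are placeholders, and this is not part of ACSmI itself.

```python
# Hypothetical example: write a service-availability metric to InfluxDB
# (for later visualisation in Grafana). All names and credentials are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="demo-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One data point: measured availability of a federated cloud service instance.
point = (
    Point("cloud_service_availability")   # hypothetical measurement name
    .tag("provider", "provider-a")        # hypothetical tags
    .tag("service", "object-storage")
    .field("uptime_percent", 99.95)
)
write_api.write(bucket="monitoring", record=point)
client.close()
```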

  • User perception and economic analysis of an e-mobility service: development of an electric bus service in Naples, Italy   Order a copy of this article
    by Ilaria Henke, Assunta Errico, Luigi Di Francesco 
    Abstract: Among sustainable mobility policies, electric mobility appears to be one of the best options for reaching sustainability goals, but it has limits that can be partially overcome in local public transport. This research presents a methodology for designing a new sustainable public transport service that meets users' needs while assessing its economic feasibility. The methodology is then applied to a real case study: renewing an 'old' bus fleet with an electric one charged by a photovoltaic system in the city of Naples (Southern Italy). Its effects on users' mobility choices were assessed through a mobility survey, the bus line and the photovoltaic system were designed, and the economic feasibility of the project was evaluated through a cost-benefit analysis. This research falls within the field of smart mobility and new technologies, which increasingly need to store, manage and process the large amounts of data typical of cloud computing and big data applications. (A minimal net-present-value calculation is sketched after this entry.)
    Keywords: e-mobility; electric bus services; cloud computing; user perception; economic analysis; cost-benefit analysis; photovoltaic system; sustainable mobility policies; sustainable goals; new technologies; local emissions; environmental impacts.
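
    A minimal net-present-value calculation of the kind used in a cost-benefit analysis; all monetary figures and the discount rate below are placeholders, not the paper's appraisal values.

```python
# Minimal cost-benefit sketch: net present value of an electric-bus project.
# All figures are hypothetical placeholders, not the paper's appraisal values.
def npv(cash_flows, rate):
    """cash_flows[t] = benefits - costs in year t (t = 0 is the investment year)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

capex = -2_500_000            # buses + photovoltaic plant (hypothetical, EUR)
annual_net_benefit = 320_000  # energy savings + external benefits (hypothetical, EUR/year)
flows = [capex] + [annual_net_benefit] * 15   # 15-year appraisal horizon
print(f"NPV at 3% social discount rate: {npv(flows, 0.03):,.0f} EUR")
```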