Forthcoming articles

International Journal of Computational Science and Engineering (IJCSE)

These articles have been peer-reviewed and accepted for publication but are pending final changes; they are not yet published and may not appear here in their final order of publication until they are assigned to issues. The content therefore conforms to our standards, but the presentation (e.g. typesetting and proofreading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles may be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.


International Journal of Computational Science and Engineering (62 papers in press)

Regular Issues

  • Advances in the enumeration of foldable self-avoiding walks   Order a copy of this article
    by Christophe Guyeux, Jean-Claude Charr, Jacques Bou Abdo, Jacques Demerjian 
    Abstract: Self-Avoiding Walks (SAWs) have been studied for a long time owing to their intrinsic importance and the many application fields in which they operate. A new subset of SAWs, called foldable SAWs, was recently discovered when investigating two different SAW manipulations embedded within existing Protein Structure Prediction (PSP) software. Since then, several attempts have been made to find out more about these walks, including counting them. However, counting foldable SAWs has proved to be a difficult task, and current supercomputers fail to count foldable SAWs of length exceeding ~30 steps. In this article, we present new progress in this enumeration, both theoretical (mathematics) and practical (computer science). A lower bound on the number of foldable SAWs is first derived by studying a better-known special subset called prudent SAWs. The triangular and hexagonal lattices are then investigated for the first time, leading to new results on the enumeration of foldable SAWs on such lattices. Finally, a parallel genetic algorithm has been designed to discover new non-foldable SAWs of length ~100 steps, and the results obtained with this algorithm are promising.
    Keywords: self-avoiding walks; foldable SAWs; prudent SAWs; genetic algorithm.
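
    The enumeration the abstract describes quickly becomes intractable; as a point of reference, plain SAWs on the square lattice can be counted by exhaustive depth-first search for small lengths. The sketch below counts all SAWs; foldability, as defined in the paper, is not modelled here:

    ```python
    def count_saws(n):
        """Count self-avoiding walks of n steps on the square lattice Z^2
        by exhaustive depth-first search. Feasible only for small n."""
        moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        def extend(pos, visited, steps_left):
            if steps_left == 0:
                return 1
            total = 0
            for dx, dy in moves:
                nxt = (pos[0] + dx, pos[1] + dy)
                if nxt not in visited:  # self-avoidance constraint
                    visited.add(nxt)
                    total += extend(nxt, visited, steps_left - 1)
                    visited.remove(nxt)
            return total

        return extend((0, 0), {(0, 0)}, n)
    ```

    The counts grow roughly like mu^n with mu ≈ 2.638 on this lattice, which is why exhaustive enumeration stalls quickly and lower bounds via better-understood subsets such as prudent SAWs become attractive.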

  • Distributed nested streamed models of tsunami waves   Order a copy of this article
    by Kensaku Hayashi, Alexander Vazhenin, Andrey Marchuk 
    Abstract: This research focuses on designing a high-speed scheme for tsunami modelling using nested computing. Computations are carried out on a sequence of grids covering geographical areas at different resolutions, each embedded within another. This decreases the total number of calculations by excluding unimportant coastal areas from the process. The paper describes a distributed streaming computational scheme that allows flexible reconfiguration of heterogeneous computing resources with a variable set of modelling zones. Computations are implemented by distributing these areas over modelling components and by synchronising the transfer of boundary data between them. Results of numerical modelling experiments are also presented.
    Keywords: tsunami modelling; nested grids; distributed systems; coarse-grained parallelisation; streaming computing; communicating processes; process synchronisation; task parallelism; programming model; component-based software engineering.

  • E-commerce satisfaction based on synthetic evaluation theory and neural networks   Order a copy of this article
    by Jiayin Zhao, Yong Lu, H.A.O. Ban, Ying Chen 
    Abstract: The rapid development of e-commerce has extended the role of satisfaction to ever more fields, and customers' opinions have become essential to the success of related companies. E-commerce satisfaction, as a key factor affecting the performance of e-commerce enterprises, has become a research hotspot in academia. This paper proposes a synthetic evaluation model of satisfaction and logistics performance based on a fuzzy synthetic model and a dynamic weighted synthetic model, respectively. A modified ACSI analysis method based on the structural equation model is also proposed for comparison with the synthetic method. Beyond this, we have also evaluated consumer satisfaction based on review data. Corresponding suggestions are given for the operation of e-commerce enterprises.
    Keywords: e-commerce satisfaction; fuzzy synthetic model; structural equation model; neural networks.

  • On the build and application of bank customer churn warning model   Order a copy of this article
    by Wangdong Jiang, Yushan Luo, Ying Cao, Guang Sun, Chunhong Gong 
    Abstract: In view of the customer churn problem faced by banks, this paper uses the Python language to clean and select from an original dataset of real bank customer data, gradually condensing its 626 customer features to 77. Then, based on the pre-processed bank data, the paper uses logistic regression, a decision tree and a neural network to establish three bank customer churn warning models and compares them. The results show that the accuracy of all three models in predicting churned customers is above 92%. Finally, based on the logistic regression model, which had the better evaluation results, the paper analyses the characteristics of churned customers and gives the bank management suggestions for dealing with them.
    Keywords: bank customer; churn warning model; logistic regression; customer churn.
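
    One of the three models the abstract compares, logistic regression, can be sketched with a stdlib-only gradient-descent trainer on a toy one-feature churn signal; the real model would be fitted on the 77 selected customer features, so the data and names here are purely illustrative:

    ```python
    import math

    def train_logistic(xs, ys, lr=0.5, epochs=2000):
        """Tiny logistic-regression trainer (one feature plus bias) by batch
        gradient descent; a toy stand-in for a churn-warning model."""
        w, b = 0.0, 0.0
        for _ in range(epochs):
            gw = gb = 0.0
            for x, y in zip(xs, ys):
                p = 1 / (1 + math.exp(-(w * x + b)))  # predicted churn probability
                gw += (p - y) * x                     # gradient of log-loss w.r.t. w
                gb += (p - y)                         # gradient w.r.t. b
            w -= lr * gw / len(xs)
            b -= lr * gb / len(xs)
        return w, b

    def predict(w, b, x):
        """Label a customer as churned (1) or retained (0)."""
        return 1 if 1 / (1 + math.exp(-(w * x + b))) >= 0.5 else 0
    ```

    On a separable toy feature such as `xs = [0, 1, 2, 3]` with labels `[0, 0, 1, 1]`, the learned decision boundary settles near 1.5 and all four points are classified correctly.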

  • Efficient deep convolutional model compression with an active stepwise pruning approach   Order a copy of this article
    by Shengsheng Wang, Chunshang Xing, Dong Liu 
    Abstract: Deep models are structurally tremendous and complex, making them hard to deploy on embedded hardware with restricted memory and computing power. Although existing compression methods can prune deep models effectively, they suffer from several issues, such as the multiple iterations needed in the fine-tuning phase, difficulty in controlling the pruning granularity, and the numerous hyperparameters that need to be set. In this paper, we propose an active stepwise pruning method based on a logarithmic function, which needs only three hyperparameters and a few epochs. We also propose a recovery strategy to repair incorrect pruning, thus ensuring the prediction accuracy of the model. Pruning and repairing alternate in a cyclic process along with updating the weights in each layer. Our method can prune the parameters of MobileNet, AlexNet, VGG-16 and ZFNet by a factor of 5.6.
    Keywords: deep convolutional model; model compression; active stepwise pruning; parameter repairing; pruning intensity; logarithmic function.

  • Forecasting yield curve of Chinese corporate bonds   Order a copy of this article
    by Maojun Zhang 
    Abstract: Forecasting the yield curve of corporate bonds is an important issue for corporate bond pricing and risk management. In this paper, the dynamic Nelson-Siegel model is used to fit the yields of corporate bonds in China, and the AR model is used to forecast the yield curve. It is found that the Nelson-Siegel model fitted to the yields of corporate bonds with different credit ratings is not only very effective but can also indicate the long-term, medium-term and short-term dynamic features of the yield curve. Moreover, the linear AR(1) model might be more suitable than the nonlinear AR(1) model.
    Keywords: corporate bonds; yield curve; Nelson-Siegel model; AR(1) model.
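
    The Nelson-Siegel curve the abstract fits has a standard closed form, y(τ) = β₀ + β₁·(1−e^(−λτ))/(λτ) + β₂·((1−e^(−λτ))/(λτ) − e^(−λτ)), where β₀ captures the long-term level, β₁ the short-term slope and β₂ the medium-term curvature; a minimal sketch (the parameter values used below are illustrative, not the paper's estimates):

    ```python
    import math

    def nelson_siegel(tau, beta0, beta1, beta2, lam):
        """Nelson-Siegel yield at maturity tau (in years).
        beta0: long-term level; beta1: short-term slope;
        beta2: medium-term curvature; lam: decay rate."""
        x = lam * tau
        loading = (1 - math.exp(-x)) / x          # shared slope/curvature loading
        return beta0 + beta1 * loading + beta2 * (loading - math.exp(-x))
    ```

    Two sanity checks follow directly from the formula: as τ → 0 the yield tends to β₀ + β₁ (the short end), and as τ → ∞ it tends to β₀ (the long end).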

  • Review on blockchain technology and its application to the simple analysis of intellectual property protection   Order a copy of this article
    by Wei Chen, Kun Zhou, Weidong Fang, Ke Wang, Fangming Bi, Biruk Assefa 
    Abstract: Blockchain is a widely used decentralised infrastructure. Blockchain technology offers properties such as network decentralisation and the unforgeability of block data. It has therefore developed rapidly in recent years, many organisations are involved, and expectations for its applications are generally optimistic. This paper systematically introduces the background and development status of blockchain, and analyses the operating mechanism, characteristics and possible application scenarios of blockchain technology from a technical perspective. Finally, blockchain technology applied to intellectual property protection is taken as an example to study domestic and foreign cases and analyse existing problems. This review article aims to assist the application of blockchain technology.
    Keywords: blockchain; bitcoin; operating mechanism; intellectual property.

  • An improved Sudoku-based data hiding scheme using greedy method   Order a copy of this article
    by Chin-Chen Chang, Guo-Dong Su, Chia-Chen Lin 
    Abstract: Inspired by Chang et al.'s scheme, an improved Sudoku-based data hiding scheme is proposed here. The major idea of our improved scheme is to find an approximate optimal solution of Sudoku using the greedy method instead of through a brute-force search for an optimal solution. The found approximate optimal solution is then used to offer satisfactory visual stego-image quality with a lower execution time during the embedding procedure. Simulation results confirmed that the average stego-image quality is enhanced by around 90.51% compared with Hong et al.'s scheme, with relatively less execution time compared with a brute-force search method.
    Keywords: data hiding; Sudoku; greedy method; brute-force search method; approximate optimal solution.

  • A novel domain adaption approach for neural machine translation   Order a copy of this article
    by Jin Liu, Xiaohu Tian, Jin Wang, Arun Kumar Sangaiah 
    Abstract: Neural machine translation has been widely adopted in modern machine translation as it brings state-of-the-art performance on large-scale parallel corpora. For real-world applications, high-quality translation of text in a specific domain is crucial. However, the performance of general neural machine translation models drops when they are applied to a specific domain. To alleviate this issue, this paper presents a novel machine translation method that explores both a model fusion algorithm and log-linear interpolation. The proposed method can improve the performance of the in-domain translation model while preserving or even improving the performance of the out-of-domain translation model. The paper carries out extensive experiments on the proposed translation model using the public United Nations corpus. The BLEU (Bilingual Evaluation Understudy) scores on the in-domain and out-of-domain corpora reach 30.27 and 43.17, respectively, which shows a certain improvement over existing methods.
    Keywords: neural machine translation; model fusion; domain adaption.

  • Formation path of customer engagement in virtual brand community based on back propagation neural network algorithm   Order a copy of this article
    by Lin Qiao, Mengmeng Song, Rob Law 
    Abstract: The formation path of customer engagement in a virtual brand community, which concerns customers' non-transactional behaviour, has become an increasingly popular topic in the marketing field. This paper introduces an approach that integrates structural equation modelling and a back propagation artificial neural network to identify the motivating factors (interactivity, information quality, and convenience) that influence the perceived information value and social value of a virtual brand community and customer engagement. Our experiment shows that, with perceived value playing a mediating role in the influence of interactivity and information quality on customer engagement, interactivity is positively associated with customer engagement in the virtual brand community. This study aims to provide meaningful implications for companies' effective use of brand fan pages.
    Keywords: neural network algorithm; virtual brand community; brand fan page; perceived value.

  • The mining method of trigger words for food nutrition matching   Order a copy of this article
    by Shunxiang Zhang 
    Abstract: Rational food nutrition matching plays a dual role in health and diet for humans. Trigger words related to food nutrition matching help classify food nutrition matching into two types: reasonable and unreasonable nutrition matching. This paper proposes a mining method of trigger words for food nutrition matching. First, a food information frequency vector is extracted from the number of food names, the number of nutrition ingredients and the number of matching effects in a sentence. By judging whether each component of the food information frequency vector is 0 or not, sentences unrelated to food nutrition matching can be filtered out. Then, two food verb-noun joint probability matrices are constructed: the columns of the first matrix are food names and its rows are verbs; the columns of the second matrix are nutrition ingredients and matching effects and its rows are verbs. By comparing the row mean values of the two matrices, whether a verb is a trigger word can be judged. Lastly, given the commendatory and derogatory probabilities of the trigger word, food nutrition matching can be classified into the two types by naive Bayes. The experiments show that the proposed method effectively detects trigger words related to food nutrition matching.
    Keywords: food nutrition matching; food information frequency vector; food verb-noun joint probability matrix.

  • Estimating capacity-oriented availability in cloud systems   Order a copy of this article
    by Jamilson Dantas, Rubens Matos, Jean Teixeira, Eltton Túlio, Paulo Maciel 
    Abstract: Over the years, many companies have employed cloud computing to support their services and optimise their infrastructure usage. The provisioning of high availability and high processing capacity is a significant challenge when planning a cloud computing infrastructure. Even when the system is available, a part of the resources may not be offered owing to partial failures in just a few of the many components in an IaaS cloud. The dynamic behaviour of virtualised resources requires special attention to the effective amount of capacity that is available to users, so the system can be correctly sized. Therefore, the estimation of capacity-oriented availability (COA) is an important activity for cloud infrastructure providers to analyse the cost-benefit tradeoff among distinct architectures and deployment sizes. This paper presents a strategy to evaluate the COA of virtual machines in a private cloud infrastructure. The proposed strategy aims to provide an efficient and accurate computation of COA, by means of closed-form equations. We compare our approach with the use of models such as continuous time Markov chains, considering execution time and values of metrics obtained with both approaches.
    Keywords: capacity-oriented availability; closed-form equation; cloud computing; continuous time Markov chain.
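
    For the special case of n identical, independently failing VMs, capacity-oriented availability admits a simple closed form via the binomial distribution over up-states. The sketch below illustrates this style of closed-form computation; the paper's own equations for its specific architectures may differ:

    ```python
    from math import comb

    def coa(n, availability):
        """Capacity-oriented availability for n identical, independent VMs:
        the expected fraction of total capacity that is up, computed from
        the binomial state probabilities (a closed-form alternative to
        solving a continuous time Markov chain)."""
        return sum(i * comb(n, i) * availability**i * (1 - availability)**(n - i)
                   for i in range(n + 1)) / n

    def prob_capacity_at_least(n, k, availability):
        """Probability that at least k of the n VMs are simultaneously up."""
        return sum(comb(n, i) * availability**i * (1 - availability)**(n - i)
                   for i in range(k, n + 1))
    ```

    With independent identical VMs the expected fraction collapses to the per-VM availability itself; the interesting planning question is then `prob_capacity_at_least`, i.e. how often the cloud can actually serve a given demand.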

  • DDoS attack detection method based on network abnormal behaviour in big data environment   Order a copy of this article
    by Jing Chen, Xiangyan Tang, Jieren Cheng, Fengkai Wang, Ruomeng Xu 
    Abstract: The distributed denial of service (DDoS) attack is a rapidly growing problem with the fast development of the internet. Existing DDoS attack detection methods suffer from time delays and low detection rates. This paper presents a DDoS attack detection method based on network abnormal behaviour in a big data environment. Based on the characteristics of flood attacks, the method filters network flows to leave only many-to-one flows, reducing the interference from normal flows and improving detection accuracy. We define the network abnormal feature value (NAFV) to reflect the state changes of the old and new IP addresses of many-to-one network flows. Finally, a DDoS attack detection method based on NAFV real-time series is built to identify the abnormal network flow states caused by DDoS attacks. The experiments show that, compared with similar methods, this method has a higher detection rate, a lower false alarm rate and a lower missing rate.
    Keywords: DDoS; time series; ARIMA; big data; forecast.
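
    The abstract does not give the exact NAFV formula, so the sketch below is only a simplified stand-in: it applies the many-to-one filter and then counts previously unseen source IPs per time window, the kind of signal whose sudden spikes a flood detector would track:

    ```python
    from collections import defaultdict

    def many_to_one(flows, threshold=3):
        """Keep only flows whose destination is targeted by at least
        `threshold` distinct sources in this window (many-to-one filter)."""
        by_dst = defaultdict(set)
        for src, dst in flows:
            by_dst[dst].add(src)
        hot = {d for d, srcs in by_dst.items() if len(srcs) >= threshold}
        return [(s, d) for s, d in flows if d in hot]

    def nafv_window(flows, known_sources):
        """Simplified abnormal-feature value for one window: the number of
        never-before-seen source IPs among the filtered flows. The history
        set is updated in place. (The paper's NAFV definition may differ.)"""
        new = {s for s, _ in flows if s not in known_sources}
        known_sources.update(new)
        return len(new)
    ```

    In a flood, many spoofed or fresh sources converge on one victim, so the value jumps in the first windows of the attack and the time-series model flags the deviation.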

  • A parallel adaptive-resolution hydraulic flood inundation model for flood hazard mapping   Order a copy of this article
    by Wencong Lai, Abdul Khan 
    Abstract: There is a growing demand for improved high-resolution flood inundation modelling of large-scale watersheds for sustainable planning and management. In this work, a parallel adaptive-resolution hydraulic flood inundation model is proposed for large-scale unregulated rivers. The model uses the best publicly available topographic data and streamflow statistics from the USGS. An adaptive triangular mesh is generated with fine resolution (~30 m) around streams and coarse resolution (~200 m) away from streams. The river flood-peak discharges are estimated using the regression equations from the National Streamflow Statistics (NSS) Program based on watershed and climate characteristics. The hydraulic simulation is performed using a discontinuous Galerkin solver for the 2D shallow-water flow equations. The hydraulic model is run in parallel, with the global domain partitioned using stream links and stream lengths. The proposed model is used to predict flooding in the Muskingum River Basin and the Kentucky River Basin. The simulated inundation maps are compared with FEMA maps and evaluated using three statistical indices. The results demonstrate that the model is capable of predicting flooding maps for large-scale unregulated rivers with acceptable accuracy.
    Keywords: flood inundation; flood mapping; unregulated rivers.

  • The clothing image classification algorithm based on the improved Xception model   Order a copy of this article
    by Zhuoyi Tan, Yuping Hu, Dongjun Luo, Man Hu, Kaihang Liu 
    Abstract: This paper proposes a clothing image classification algorithm based on an improved Xception model. Firstly, the last fully connected layer of the original network is replaced with another fully connected layer to recognise eight classes instead of 1000. Secondly, the network adopts both the Exponential Linear Unit (ELU) and the Rectified Linear Unit (ReLU) as activation functions, which improves its nonlinearity and learning characteristics. Thirdly, to enhance the anti-disturbance capability of the network, we employ the L2 regularisation method. Fourthly, we perform data augmentation on the training images to reduce over-fitting. Finally, the learning rate is set to zero in the layers of the first two modules of the network, and the network is fine-tuned. The experimental results show that the top-1 accuracy of the proposed algorithm is 92.19%, which is better than the state-of-the-art models Inception-v3, Inception-ResNet-v2 and Xception.
    Keywords: clothing image classification; transfer learning; deep convolutional neural network; Xception.

  • A hidden Markov model to characterise motivation level in MOOCs learning   Order a copy of this article
    by Yuan Chen, Dongmei Han, Lihua Xia 
    Abstract: A participant's learning in massive open online courses (MOOCs) relies highly on motivation. However, how to characterise motivation level is an open question. This study establishes a hidden Markov model to characterise motivation level and examines the model on data from MOOCs in China. The empirical results characterise two motivation levels, high and low. Based on these two levels, further analysis reveals differences in participants' learning behaviours with respect to learning activity and continuous learning. The hidden Markov model proposed in this study contributes to the theoretical grounding of current MOOCs. It also has important operational implications for MOOCs.
    Keywords: massive open online courses; motivation level; hidden Markov model; learning behaviours.
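
    The likelihood computations behind fitting such a hidden Markov model rest on the forward algorithm. A minimal two-state sketch, with the states standing in for "high" and "low" motivation and illustrative (not fitted) parameters:

    ```python
    def forward_likelihood(init, trans, emit, obs):
        """Forward algorithm: P(observation sequence) under a discrete HMM.
        init[s]      - initial probability of hidden state s
        trans[p][s]  - transition probability from state p to state s
        emit[s][o]   - probability of emitting observation o in state s
        obs          - list of observation indices (coded learning activities)"""
        n = len(init)
        alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
        for o in obs[1:]:
            # propagate forward probabilities one step and absorb the emission
            alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                     for s in range(n)]
        return sum(alpha)
    ```

    The forward recursion gives the same likelihood as summing over all hidden state paths explicitly, but in time linear in the sequence length, which is what makes EM-style fitting on many learner trajectories practical.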

  • A short text conversation generation model combining BERT and context attention mechanism   Order a copy of this article
    by Huan Zhao, Jian Lu, Jie Cao 
    Abstract: The standard Seq2Seq neural network model tends to generate general and safe responses (e.g., "I don't know") regardless of the input in the field of short-text conversation generation. To address this problem, we propose a novel model that combines the standard Seq2Seq model with the BERT module (a pre-trained model) to improve the quality of responses. Specifically, the encoder of the model is divided into two parts: one is the standard Seq2Seq encoder, which generates a context attention vector; the other is the improved BERT module, which encodes the input sentence into a semantic vector. Through a fusion unit, the vectors generated by the two parts are then fused to generate a new attention vector. Finally, the new attention vector is transmitted to the decoder. In particular, we describe two ways to acquire the new attention vector in the fusion unit. Empirical results from automatic and human evaluations demonstrate that our model significantly improves the quality and diversity of the responses.
    Keywords: Seq2Seq; short text conversation generation; BERT; attention mechanism; fusion unit.

  • Implicit emotional tendency recognition based on disconnected recurrent neural networks   Order a copy of this article
    by Yiting Yan, Zhenghong Xiao, Zhenyu Xuan, Yangjia Ou 
    Abstract: Implicit sentiment orientation recognition classifies the emotions expressed in text. The development of the internet has diversified the information presented by text data. In most cases, text information is positive, negative, or neutral. However, inaccurate word segmentation, the lack of a standard and complete sentiment lexicon, and the negation of words make implicit emotion recognition difficult. Text data also contain rich, fine-grained information and have thus become a difficult research topic in natural language processing. This study proposes a hierarchical disconnected recurrent neural network to overcome the lack of emotional information in implicit sentiment sentence recognition. The network encodes the words and characters in a sentence using the disconnected recurrent neural network and fuses the context information of the implicit sentiment sentence through the hierarchical structure. Using this context information, a capsule network constructs different fine-grained context information to extract high-level features and provide additional semantic information for emotion recognition, improving the accuracy of implicit emotion recognition. Experimental results prove that the model is better than some current mainstream models. The F1 value reaches 81.5%, which is 2 to 3 percentage points higher than those of the current mainstream models.
    Keywords: hierarchical interrupted circulation network; implicit emotion; capsule network; sentiment orientation identification.

  • Greedy algorithm for image quality optimisation based on turtle-shell steganography   Order a copy of this article
    by Guo-Hua Qiu, Chin-Feng Lee, Chin-Chen Chang 
    Abstract: Information hiding, also known as data hiding, is an emerging field that combines multiple theories and technologies. In recent years, Chang, Liu et al. have proposed new data-hiding schemes based on Sudoku, a turtle shell, etc. These schemes have their own advantages in terms of visual quality and embedding capacity. However, the reference matrices used in these schemes are not optimal. Based on the characteristics of these schemes, Jin et al. (2017) employed particle swarm optimisation to select the reference matrix and achieved approximately optimal results in reducing the distortion of the stego-image, but at high computational complexity. In this paper, a turtle-shell matrix optimisation scheme is proposed using a greedy algorithm. The experimental results show that our greedy algorithm is better than the particle swarm optimisation scheme at finding a near-optimal matrix and achieving better stego-image quality, and it also outperforms the particle swarm optimisation scheme in terms of computational cost and efficiency.
    Keywords: data hiding; turtle-shell steganography; particle swarm optimisation; greedy algorithm.

  • Client-side ciphertext deduplication scheme with flexible access control   Order a copy of this article
    by Ying Xie, Guohua Tian, Haoran Yuan, Chong Jiang, Jianfeng Wang 
    Abstract: Data deduplication with fine-grained access control has been applied in practice to realise data sharing and reduce storage space. However, many existing schemes achieve only server-side deduplication, which greatly wastes network bandwidth, especially when the transmitted data are large. Moreover, few existing schemes consider attribute revocation, without which forward and backward secrecy cannot be guaranteed. To address these problems, this paper introduces a client-side ciphertext deduplication scheme with more flexible access control. Specifically, we divide data owners into different domains and distribute the corresponding domain keys to them through a secure channel, achieving proof-of-ownership (PoW) verification in client-side deduplication. Besides, we realise attribute revocation through proxy re-encryption technology, which does not require presetting the maximum number of clients at system initialisation. Security and performance analysis shows that our scheme achieves the desired security requirements while realising efficient client-side deduplication and attribute revocation.
    Keywords: client-side deduplication; flexible access control; attribute revocation; random tag.
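
    The client-side deduplication flow the scheme builds on can be sketched with plain hashes: the client sends a short content tag, and if the tag is already stored the server issues a nonce-based proof-of-ownership challenge instead of accepting a redundant upload. This toy version omits the scheme's encryption, domain keys and attribute-based access control:

    ```python
    import hashlib
    import os

    def content_tag(data: bytes) -> str:
        """Deduplication tag: a hash of the (already encrypted) content."""
        return hashlib.sha256(data).hexdigest()

    def pow_response(data: bytes, nonce: bytes) -> str:
        """Proof-of-ownership response: hash of a fresh server nonce with the
        full content, so knowing only the short tag is not enough."""
        return hashlib.sha256(nonce + data).hexdigest()

    class ToyDedupServer:
        """Toy server side: stores content under its tag and, for duplicates,
        issues a challenge the real owner can answer without re-uploading."""
        def __init__(self):
            self.store = {}

        def upload(self, data: bytes) -> str:
            t = content_tag(data)
            self.store[t] = data
            return t

        def check_and_challenge(self, t: str):
            if t not in self.store:
                return None                          # unknown tag: full upload needed
            nonce = os.urandom(16)
            return nonce, pow_response(self.store[t], nonce)
    ```

    A second client holding the same file answers the challenge locally and is merely linked to the stored copy; this saves the upload bandwidth that a server-side-only scheme would still spend.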

  • An efficient memetic algorithm using approximation scheme for solving nonlinear integer bilevel programming problems   Order a copy of this article
    by Yuhui Liu, Hecheng Li, Huafei Chen, Jing Huang 
    Abstract: Nonlinear integer bilevel programming problems (NIBLPPs) are a kind of mathematical model with a hierarchical structure and are known to be strongly NP-hard. In general, such problems are extremely hard to solve because they are non-convex and non-differentiable, especially when integer constraints are involved. In this paper, a memetic algorithm based on a simplified branch and bound method and an interpolation technique is developed to solve NIBLPPs. Firstly, the leader's variable values are taken as individuals in the population; for each individual in the initial population, a simplified branch and bound method is adopted to obtain the follower's optimal solutions. Then, to reduce the computational cost of repeatedly solving the follower's problems for the many offspring generated during evolution, the interpolation method is applied to approximate the solutions to the follower's problem for each individual in the population. In addition, among these approximated points, only potentially better points are chosen to undergo the further optimisation procedure, so as to obtain precise optimal solutions to the follower's problems. The simulation results show that the proposed memetic algorithm is efficient in dealing with NIBLPPs.
    Keywords: nonlinear integer bilevel programming problem; memetic algorithm; branch and bound method; interpolation function; optimal solutions.

  • LMA: label-based multi-head attentive model for long-tail web service classification   Order a copy of this article
    by Guobing Zou, Hao Wu, Song Yang, Ming Jiang, Bofeng Zhang, Yanglan Gan 
    Abstract: With the rapid growth of web services, service classification is widely used to facilitate service discovery, selection, composition and recommendation. Although there is much research on service classification, little work focuses on the long-tail problem, i.e. improving the accuracy of categories that have fewer services. In this paper, we propose a novel label-based attentive model, LMA, with a multi-head structure for long-tail service classification. It learns various word-label subspace attentions with a multi-head mechanism and concatenates them to obtain high-level service features. To demonstrate the effectiveness of LMA, extensive experiments are conducted on 14,616 real-world services in 80 categories crawled from the service repository ProgrammableWeb. The results prove that LMA outperforms state-of-the-art approaches for long-tail service classification on multiple evaluation metrics.
    Keywords: service classification; service feature extraction; long tail; label embedding; attention.

  • MESRG: multi-entity summarisation in RDF graph   Order a copy of this article
    by Ze Zheng, Xiangfeng Luo, Hao Wang 
    Abstract: Entity summarisation has drawn a lot of attention in recent years, but some problems remain. Firstly, most previous works focus on summarising individual entities. Secondly, external resources such as WordNet are frequently used to calculate the similarity between Resource Description Framework (RDF) triples. However, neighbours with common properties may affect the summarisation of an individual entity, and external resources are not always available in practice. To solve these two problems, this paper focuses on multi-entity summarisation, which aims at selecting representative triples for entities in an RDF graph without external knowledge. A topic-model-based method, Multi-Entity Summarisation in RDF Graph (MESRG), is proposed. It is capable of extracting informative and diverse summaries and involves a two-phase process: 1) to select the more important RDF triples, we propose an improved topic model that ranks triples by probability values, taking into account the effects of neighbouring entities rather than the individual entity alone; 2) to select diverse RDF triples, we use a graph embedding method to calculate the similarity between triples and obtain the top k distinctive triples. Experiments on benchmark datasets demonstrate the effectiveness of our model.
    Keywords: multi-entity summarisation; RDF graph; data sharing; topic model; graph embedding.

  • Synthetic data augmentation rules for maritime object detection   Order a copy of this article
    by Zeyu Chen, Xiangfeng Luo, Yan Sun 
    Abstract: The performance of deep neural networks for object detection depends on the amount of data. In the field of maritime object detection, the diversity of weather, target scale, position and orientation makes real data acquisition hard and expensive. Recently, the generation of synthetic data has become a new trend for enriching the training set. However, synthetic data might not improve detection accuracy, and two problems remain unsolved: 1) what kind of data needs to be augmented? 2) how should synthetic data be augmented? In this paper, we use knowledge-based rules to constrain the process of data augmentation and to seek effective synthetic samples. We propose two synthetic data augmentation rules: 1) what to augment depends on the gap between the training data distribution and the real data distribution; 2) the robustness and effectiveness of synthetic data depend on a proper mixing proportion and domain randomisation. The experiments show that the average accuracy of boat classification increases by 3% with our synthetic data on the Pascal VOC test set.
    Keywords: data augmentation; synthetic data; object detection; synthetic data augmentation rules.

  • A new transmission strategy to achieve energy balance and efficiency in wireless sensor network   Order a copy of this article
    by Yanli Wang, Yanyan Feng 
    Abstract: Energy balancing and energy efficiency are very important in prolonging network life. In wireless sensor networks, cooperative MIMO technology has become a research hotspot in recent years, since appropriate cooperative nodes can transmit data efficiently. Meanwhile, energy harvesting also plays a role in the network's transmission process. This paper presents a selection method for cooperative nodes and cluster nodes that achieves energy balance. A node is not only a transmitter of information, but also a transmitter of energy. In addition, the paper focuses on how to obtain the optimal energy efficiency with the proposed resource allocation algorithm. Simulation results show that the energy of cluster nodes and cooperative nodes is more balanced, so the selection algorithm realises energy balance. The energy efficiency increases rapidly with the transmitted power and tends to become stable when the power reaches 3 dBm. Cooperative MIMO technology is adopted to obtain higher network utility.
    Keywords: cooperative node; wireless sensor network; energy harvesting; energy balance; energy efficiency.

  • Reversible data-hiding scheme based on the AMBTC compression technique and Huffman coding   Order a copy of this article
    by Ting-Ting Xia, Juan Lin, Chin-Chen Chang, Tzu-Chuen Lu 
    Abstract: This paper proposes a reversible data-hiding (RDH) method based on the absolute moment block truncation coding (AMBTC) compression technique and Huffman coding. First, AMBTC is used to compress the original grayscale image to obtain two quantisation levels and a bitmap for each block. Next, the bitmap of each block is converted into a decimal number so that the frequency of each decimal number can be calculated. A user-defined threshold is used to classify each block as embeddable or not: if the frequency of its decimal number is larger than or equal to the threshold, the bitmap is embeddable and is then compressed by Huffman coding. The scheme exploits the redundancy of each block by using the Huffman code instead of the bitmap to embed secret information. Experimental results show that our proposed scheme has a better hiding payload than other methods, as well as acceptable image visual quality.
    Keywords: reversible data hiding; AMBTC; Huffman coding; hiding capacity; image visual quality; PSNR; information security; compression domain.
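    The AMBTC step described in the abstract can be sketched in a few lines of Python. This is a generic illustration of AMBTC on an assumed 4×4 grayscale block, not the authors' implementation, and the Huffman stage is omitted:

    ```python
    def ambtc_block(block):
        """Compress one grayscale block with absolute moment BTC:
        return the low/high quantisation levels and the bitmap."""
        pixels = [p for row in block for p in row]
        mean = sum(pixels) / len(pixels)
        low_group = [p for p in pixels if p < mean]
        high_group = [p for p in pixels if p >= mean]
        low = round(sum(low_group) / len(low_group)) if low_group else 0
        high = round(sum(high_group) / len(high_group)) if high_group else 0
        bitmap = [[1 if p >= mean else 0 for p in row] for row in block]
        return low, high, bitmap

    def bitmap_to_decimal(bitmap):
        """Flatten the bitmap into the decimal index used for frequency counting."""
        bits = ''.join(str(b) for row in bitmap for b in row)
        return int(bits, 2)

    # hypothetical 4x4 block with a sharp left/right intensity edge
    block = [[10, 12, 200, 210],
             [11, 13, 205, 208],
             [10, 12, 199, 207],
             [12, 11, 201, 206]]
    low, high, bm = ambtc_block(block)
    ```

    Blocks whose decimal bitmap index recurs frequently across the image are the ones the scheme would classify as embeddable.
    
    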

  • Image of plant disease segmentation model based on improved pulse-coupled neural network   Order a copy of this article
    by Xiaoyan Guo, Ming Zhang 
    Abstract: Image segmentation is a key step in feature extraction and disease recognition for plant disease images. To avoid the subjectivity of configuring the parameters of a pulse-coupled neural network (PCNN) through manual exploration when segmenting plant disease images, an improved image segmentation model called SFLA-PCNN is proposed in this paper. The shuffled frog-leaping algorithm (SFLA) is used to optimise the parameters (β, α_θ and V_θ) of the PCNN to improve its performance. A series of plant disease images is used in segmentation experiments, and the results reveal that SFLA-PCNN is more accurate than the other methods mentioned in this paper and can extract lesion regions from the background effectively, providing a foundation for subsequent disease diagnosis.
    Keywords: shuffled frog-leaping algorithm; pulse-coupled neural network; plant disease.

  • General process of big data analysis and visualisation   Order a copy of this article
    by HongZhang Lv, Guang Sun, WangDong Jiang, FengHua Li 
    Abstract: Innumerable data are generated on the internet every day, and they can hardly be analysed effectively by traditional means because of their volume and complexity. Not only are these data huge, but they also exhibit complex relationships between different kinds of datasets. In addition, if people want to know how the data change over a certain period of time, the time factor must be taken into consideration, which leads to the problem of analysing dynamic data. This kind of data is called big data. In the big data era, a new process for dealing with such data should be conceived. This process contains five steps: collecting data, cleaning data, storing data, analysing data, and further analysis and visualisation of the data. The aim of this paper is to illustrate every step; the fourth and fifth steps are introduced in detail.
    Keywords: big data; visualisation; analysis; process; graph.

  • Dynamic multiple copies adaptive audit scheme based on DITS   Order a copy of this article
    by Xiaoxue Ma, Pengliang Shi 
    Abstract: This paper proposes a dynamic multi-copy adaptive audit scheme based on DITS. In order to achieve correctness and completeness detection of multiple copies, the data blocks and the corresponding location index information are concatenated to generate each replica file. The audit process is divided into two parts: third-party audits and client audits. When auditing, in order to prevent collusion attacks as well as to improve audit accuracy, the third-party auditor applies a challenge-response mode to detect the data block labels, and the client audit applies an audit algorithm to retrieve the index information of the data blocks. Theoretical analysis and experimental comparison show that the scheme is more secure in verifying the integrity of dynamic data and the correctness of multiple copies, and can effectively prevent existing data threats.
    Keywords: cloud storage; dynamic auditing; multiple copies; integrity.

  • Basins of attraction and critical curves for Newton-type methods in a phase equilibrium problem   Order a copy of this article
    by Gustavo Platt, Fran Lobato, Gustavo Libotte, Francisco Moura Neto 
    Abstract: Many engineering problems are described by systems of nonlinear equations that may exhibit multiple solutions, a challenging situation for root-finding algorithms. The existence of several solutions may give rise to complex basins of attraction for the solutions, with severe influence on the convergence behaviour of the algorithms. In this work, we explore the relationship between the basins of attraction and the critical curves (the locus of the singular points of the Jacobian of the system of equations) in a phase equilibrium problem in the plane with two solutions, namely the calculation of a double azeotrope in a binary mixture. The results indicate that the conjoint use of basins of attraction and critical curves can be a useful tool for selecting the most suitable algorithm for a specific problem.
    Keywords: Newton's methods; basins of attraction; nonlinear systems; phase equilibrium.
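    To illustrate how distinct basins of attraction arise, here is a minimal Newton iteration for a 2×2 nonlinear system in Python. The system shown is a toy example with two symmetric roots, not the azeotrope model studied in the paper; where the Jacobian determinant vanishes (i.e. on a critical curve) the iteration is stopped:

    ```python
    def newton2d(f, jac, x0, tol=1e-10, max_iter=50):
        """Plain Newton iteration for a 2x2 nonlinear system."""
        x, y = x0
        for _ in range(max_iter):
            f1, f2 = f(x, y)
            a, b, c, d = jac(x, y)          # Jacobian [[a, b], [c, d]]
            det = a * d - b * c
            if det == 0:                    # singular Jacobian: on a critical curve
                return None
            dx = (f1 * d - f2 * b) / det    # solve J @ step = f by Cramer's rule
            dy = (a * f2 - c * f1) / det
            x, y = x - dx, y - dy
            if abs(dx) < tol and abs(dy) < tol:
                return (x, y)
        return None

    # toy system with two real solutions: x^2 + y^2 = 4 intersected with y = x
    f = lambda x, y: (x * x + y * y - 4.0, y - x)
    jac = lambda x, y: (2 * x, 2 * y, -1.0, 1.0)

    root = newton2d(f, jac, (1.0, 0.5))   # which root is reached depends on the start
    ```

    Sampling many starting points and colouring each by the root it reaches is exactly how the basins of attraction studied in the paper are visualised.
    
    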

  • Design and implementation of food supply chain traceability system based on Hyperledger Fabric   Order a copy of this article
    by Kui Gao, Yang Liu, Heyang Xu, Tingting Han 
    Abstract: Food safety problems always cause widespread concern and panic when food-related incidents occur around the globe. Establishing a credible food traceability system is an effective solution to this issue. Most existing blockchain-based traceability systems are not convincing because the traceability information stored on the chain comes from a single organisation. Without the upstream and downstream trading information of the supply chain, even blockchain-based systems with the advantages of immutability and decentralised trustworthiness cannot guarantee accurate traceability for customers. In this paper, we establish a food supply chain traceability system called FSCTS, which aggregates all the enterprises and organisations along the food supply chain to make deals and transactions on the blockchain. By analysing the trading data associated with the whole food circulation from production to consumption, reliable transaction-based traceability can be achieved to provide trusted food tracing. We implement the system on top of Hyperledger Fabric and demonstrate the effectiveness and superiority of FSCTS by conducting extensive comparison experiments with similar traceability systems.
    Keywords: food safety; food traceability; food trading; food supply chain; blockchain; consortium blockchain; hyperledger fabric.

  • Automatic recommendation of user interface examples for mobile app development   Order a copy of this article
    by Xiaohong Shi, Xiangping Chen, Rongsheng Rao, Kaiyuan Li, Zhensheng Xu, Jingzhong Zhang 
    Abstract: Referring to existing examples is an efficient development practice for user interface (UI) developers. We propose an approach for the automatic recommendation of UI examples for mobile app development. We first introduce a search engine for UI components of mobile applications based on their descriptions, graphical views and source code. From the search results, an algorithm, density-based clustering with maximum intra-cluster distance (DBCMID), is proposed to automatically recommend examples. A comparison between the examples recommended by our approach and existing summarised examples shows that for 83.33% of the summarised examples, there are completely or partly matched examples in our recommended results. In addition, 39 new valuable examples are found based on the search results of six queries.
    Keywords: user interface search; user interface development; example recommendation.

  • A DDoS attack detection method based on SVM and K-nearest neighbour in SDN environment   Order a copy of this article
    by Zhaohui Ma, Bohong Li 
    Abstract: This paper presents a detection method for DDoS attacks in SDN based on the k-nearest neighbour algorithm (KNN) and the support vector machine (SVM). The method exploits the centralised control of SDN to collect flow characteristic information efficiently, classifies the flows, screens out attack flows, and determines whether the system is under attack. Experiments show that the method has high accuracy.
    Keywords: software defined network; controller; detection method; DDoS attack; KNN; SVM.
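    A minimal sketch of the KNN half of such a detector, in pure Python. The flow features below (packet rate, distinct destination count) are hypothetical illustrations; the abstract does not list the paper's actual SDN flow statistics:

    ```python
    from collections import Counter

    def knn_predict(train, query, k=3):
        """Classify a flow feature vector by majority vote of its k nearest
        training samples under Euclidean distance."""
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

    # toy flow samples: (packets per second, distinct destination IPs)
    train = [((20, 2), 'normal'), ((35, 3), 'normal'), ((25, 1), 'normal'),
             ((900, 80), 'attack'), ((1200, 95), 'attack'), ((800, 60), 'attack')]

    label = knn_predict(train, (1000, 70), k=3)
    ```

    In the paper's pipeline an SVM would be trained on the same flow features; the KNN vote shown here is only one of the two classifiers it combines.
    
    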

  • The sentiments of open financial information, public mood and stock returns: an empirical study on Chinese Growth Enterprise Market   Order a copy of this article
    by Qingqing Chang 
    Abstract: This study links public mood to stock performance and examines the moderating role of co-occurring sentiments expressed on open financial information platforms in this relationship. Drawing on the agenda-setting and source credibility theories, we developed hypotheses and tested them using 345 stocks listed on the Chinese Growth Enterprise Market, together with data on public mood and open financial information sentiments collected between 1 October 2012 and 30 September 2015. Our findings suggest that public mood has a significant, positive impact on stock returns; more interestingly, we found that public mood has a stronger positive impact on stock performance than open financial information sentiments. Furthermore, the study finds a positive interactive effect between public mood and open financial information sentiments, and determines that variation in public mood is a driving force behind market reaction, while co-occurring open financial information sentiments amplify the effect of public mood on stock returns.
    Keywords: sentiment analysis; public mood; open financial information sentiments.

  • Digital watermarking for health-care: a survey of ECG watermarking methods in telemedicine   Order a copy of this article
    by Maria Rizzi, Matteo D'Aloia, Annalisa Longo 
    Abstract: Innovations in healthcare have introduced a radical change in the medical environment, including facilities for handling and processing patient diagnostic data and biological signals. The adoption of telemedicine services usually leads to an increasing volume of sensitive electronic data transmitted over insecure infrastructures. Since integrity, authenticity and confidentiality are mandatory features in telemedicine, the need arises to guarantee these requirements with end-to-end control. Among the various techniques implemented for data security, digital watermarking has gained considerable popularity in healthcare-oriented applications. The challenge watermark insertion has to overcome is avoiding changes to a patient's health and medical history to a level where a decision maker could make a misdiagnosis. This paper presents a survey of different applications of electrocardiogram watermarking for telemedicine. The most recent and significant electrocardiogram watermarking schemes are reviewed, various issues related to each approach are discussed, and some aspects of the adopted techniques, including classification and performance measures, are analysed.
    Keywords: watermarking; electrocardiogram; telemedicine; data security; healthcare; integrity verification; authentication; patient record hiding; smart health.

  • Web services classification via combining Doc2vec and LINE model   Order a copy of this article
    by Hongfan Ye, Buqing Cao, Jinkun Geng, Yiping Wen 
    Abstract: With the rapid increase in the number of web services, web service discovery is becoming a challenging task. Classifying web services with similar functionality out of a tremendous number of web services can significantly improve the efficiency of service discovery. Current research on web services classification mainly focuses on independently mining either the hidden content semantic information or the network structure information in the web service characterisation documents, but few works integrate the two kinds of information to achieve better classification performance. To this end, we propose a web service classification method that combines content semantic information and network structure information.
    Keywords: web services classification; content semantic; network structure; LINE; Doc2Vec.

  • Discrete stationary wavelet transform and SVD-based digital image watermarking for improved security   Order a copy of this article
    by Rajakumar Chellappan, S. Satheeskumaran, C. Venkatesan, S. Saravanan 
    Abstract: Digital image watermarking plays an important role in digital content protection and security-related applications. Embedding a watermark helps to identify the copyright of an image or the ownership of digital multimedia content. Both grey images and colour images are used in digital image watermarking. In this work, the discrete stationary wavelet transform and singular value decomposition (SVD) are used to embed a watermark into an image. One colour image and one watermark image are considered for watermarking. Three-level wavelet decomposition and SVD are applied, and the watermarked image is tested under various attacks, such as noise attacks, filtering attacks and geometric transformations. The proposed work exhibits good robustness against these attacks, and the simulation results show that the proposed approach is better than existing methods in terms of bit error rate, normalised cross-correlation coefficient and peak signal to noise ratio.
    Keywords: digital image watermarking; discrete stationary wavelet transform; wavelet decomposition; singular value decomposition; peak signal to noise ratio.

  • Disaster management using D2D communication with ANFIS genetic algorithm based CH selection and efficient routing by seagull optimisation   Order a copy of this article
    by Lithungo K. Murry, R. Kumar, Themrichon Tuithung 
    Abstract: Next generation networks and public safety communication strategies are at a crossroads in delivering the best applications and solutions to tackle disaster management proficiently. Three major challenges are considered in this paper: (i) disproportionate disaster management scheduling between bottom-up and top-down strategies; (ii) excessive attention on the disaster emergency reaction phase and the absence of management across the complete disaster management cycle; and (iii) the lack of a long-term reclamation procedure, which results in low stakeholder and community resilience. In this paper, a new strategy is proposed for disaster management: a hybrid adaptive neuro-fuzzy inference network based genetic algorithm (D2D ANFIS-GA) is used for selecting cluster heads, and the seagull optimisation algorithm (SOA) is used for efficient routing. Implementation is done on the MATLAB platform. Performance metrics such as energy use, average battery lifetime, battery lifetime probability, average residual energy, delivery probability and overhead ratio were used to evaluate the performance. Experimental results are compared with two existing approaches, Epidemic and FINDER, and our proposed approach gives better results.
    Keywords: disaster management; adaptive neuro-fuzzy inference network; residual energy; device-to-device communication; seagull optimisation algorithm.

  • Design and implementation of chicken egg incubator for hatching using IoT   Order a copy of this article
    by Niranjan Lakshmappa, C. Venkatesan, Suhas A R, S. Satheeskumaran, Aaquib Nawaz S 
    Abstract: Egg fertilisation is one of the major factors to be considered in poultry farms. This paper describes a smart incubation system that combines IoT technology with a smartphone to make the system more convenient for the user in monitoring and operating the incubator. The incubator is designed with both the setter and the hatcher in one unit, incorporating both still-air and forced-air incubation, controlled and monitored by the controller with four factors in mind: temperature, humidity, ventilation and the egg turning system. Three different temperatures, 36.5°C, 37.5°C and 38°C, are set for experimental purposes. The environment is kept the same in all three cases and the best temperature for the incubation of chicken eggs is noted.
    Keywords: IoT; poultry farms; embryo; brooder; hatchery; Blynk App.

  • A sinkhole prevention mechanism for RPL in IoT   Order a copy of this article
    by Alekha Kumar Mishra, Maitreyee Sinha, Asis Kumar Tripathy 
    Abstract: A sinkhole node has the ability to redirect all the traffic routes from IoT nodes to the root (sink) node through itself via false rank advertisement. Unfortunately, the RPL protocol gives a node no way to verify the actual rank a claiming parent has received from its own parent. A number of sinkhole and rank spoofing detection mechanisms have been proposed in the literature. The works claiming a higher detection rate mostly use cryptography-based operations, which result in additional computational overhead. A majority of the mechanisms also consider only a single network metric for evaluating the trust level of a parent node and detecting a sinkhole. In practice, a single network metric may not be sufficient to detect anomalous behaviour and may lead to false positives. In this paper, a prevention mechanism is proposed that decides the legitimacy of a node in the neighbourhood by considering three network metrics: hop count, residual energy, and expected transmission count. The mechanism relies on the fact that all the nodes in a neighbourhood have similar network metrics with respect to the position of the root node in the network. Therefore, a node claiming metric values quite different from the neighbourhood mean values is identified as a sinkhole. The experimental results show that the proposed mechanism can reliably distinguish a sinkhole node from genuine ones at an arbitrary location in the network.
    Keywords: IoT; security; RPL; sinkhole; rank spoofing; degree of membership.
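    The neighbourhood-consistency idea can be sketched as follows. The metric names and the standard-deviation threshold are illustrative assumptions, not the paper's exact decision rule:

    ```python
    from statistics import mean, stdev

    def flag_sinkholes(neighbours, threshold=1.5):
        """Flag neighbours whose advertised value for any of the three metrics
        deviates from the neighbourhood mean by more than `threshold`
        standard deviations."""
        suspects = set()
        for metric in ('hop_count', 'residual_energy', 'etx'):
            values = [n[metric] for n in neighbours]
            mu, sigma = mean(values), stdev(values)
            if sigma == 0:
                continue                      # all neighbours agree on this metric
            for n in neighbours:
                if abs(n[metric] - mu) / sigma > threshold:
                    suspects.add(n['id'])
        return suspects

    # hypothetical neighbourhood; node D advertises an implausibly good rank
    neighbours = [
        {'id': 'A', 'hop_count': 4, 'residual_energy': 70, 'etx': 8},
        {'id': 'B', 'hop_count': 5, 'residual_energy': 65, 'etx': 9},
        {'id': 'C', 'hop_count': 4, 'residual_energy': 72, 'etx': 8},
        {'id': 'D', 'hop_count': 1, 'residual_energy': 68, 'etx': 2},
        {'id': 'E', 'hop_count': 5, 'residual_energy': 66, 'etx': 9},
    ]
    suspects = flag_sinkholes(neighbours)
    ```

    Combining several metrics, as in the paper, reduces the false positives that a single-metric check would produce.
    
    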

  • A comparative linguistic analysis of English news headlines in China, America, England, and ASEAN countries   Order a copy of this article
    by Yusha Zhang, Xiaoming Lu, Yingwen Fu, Shengyi Jiang 
    Abstract: This paper conducts a comparative study of English news headlines in China, America, England, and ASEAN countries, mainly investigating how the composition of news headlines is interrelated with linguistic factors such as part-of-speech, length and the frequency of the most common words, depending on the country in which the news is published. The linguistic comparison is performed on the headlines alone, without going through whole articles. For this purpose, 13 sets of data were collected from major online news sites in the above-mentioned countries. The comparison results reveal that headlines in different countries comply with news-writing rules in slightly different ways, as well as exhibiting distinctive features. These differences are attributed to consideration of the target audience's multi-faceted states, such as knowledge states, beliefs, or interests. To better exemplify the results, the headlines were examined with care. The proposed method begins with data collection and pre-processing: news headlines are fetched from different news sources using a crawler and processed with the Natural Language Toolkit (NLTK).
    Keywords: headline; part-of-speech; length of headlines; cluster.

  • A new unsupervised method for boundary perception and word-like segmentation of sequence   Order a copy of this article
    by Arko Banerjee, Arun K. Pujari, Chhabi Rani Panigrahi, Bibudhendu Pati 
    Abstract: In cognitive science research on natural language processing, motor learning and visual perception, perceiving boundary points and segmenting a continuous string or sequence is one of the fundamental problems. Boundary perception can also be viewed as a machine learning problem, either supervised or unsupervised. A supervised learning approach to determining boundary points for segmenting a sequence requires pre-segmented training examples. In an unsupervised mode, learning is accomplished without any training data, so the frequency of occurrence of symbols within the sequence is normally used as the cue. Most earlier algorithms use this cue while scanning the sequence in the forward direction. In this paper, we propose a novel approach that extracts possible boundary points by scanning the sequence bidirectionally. We show that such an extension from unidirectional to bidirectional is not trivial and requires judicious choices of data structure and algorithm. We propose a new algorithm that traverses the sequence unidirectionally but extracts information bidirectionally. Our method yields better segmentation, which is demonstrated by rigorous experimentation on several datasets.
    Keywords: boundary perception; sequence segmentation; trie data structure.
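    The bidirectional frequency cue can be illustrated with a simplified sketch. This uses plain dictionaries rather than the authors' trie, and simple branching counts as the cue, on an invented corpus of concatenated words:

    ```python
    from collections import defaultdict

    def boundary_scores(text, order=2):
        """Score every internal position by a forward cue (how many distinct
        symbols follow the preceding n-gram) plus a backward cue (how many
        distinct symbols precede the following n-gram)."""
        successors, predecessors = defaultdict(set), defaultdict(set)
        for i in range(len(text) - order):
            successors[text[i:i + order]].add(text[i + order])
        for i in range(1, len(text) - order + 1):
            predecessors[text[i:i + order]].add(text[i - 1])
        scores = {}
        for i in range(order, len(text) - order + 1):
            scores[i] = (len(successors[text[i - order:i]])
                         + len(predecessors[text[i:i + order]]))
        return scores

    # concatenation of the words "the", "dog", "cat" with no separators
    text = "thedogthecatthedogthecatthedog"
    scores = boundary_scores(text)
    peaks = {i for i, s in scores.items() if s == max(scores.values())}
    ```

    On this toy corpus the peak positions coincide exactly with the word boundaries; the paper's contribution is extracting such bidirectional information in a single unidirectional traversal.
    
    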

  • Application of light gradient boosting machine in mine water inrush source type online discriminant   Order a copy of this article
    by Yang Yong, Li Jing, Zhang Jing, Liu Yang, Zhao Li, Guo Ruxue 
    Abstract: Water inrush is a kind of mine geological disaster that threatens mining safety. Recognising the type of water inrush source is an effective auxiliary method for forecasting water inrush disasters. However, the current hydrochemistry methodology spends a large amount of time on sample collection. Considering this problem, it is urgent to propose a novel method that discriminates the water inrush source type online, gaining much more time for evacuation before a disaster. This paper proposes an in-situ mine water source discrimination model based on the Light Gradient Boosting Machine (LightGBM). The method combines light gradient boosting (LGB) with decision trees (DT) to improve the model's integrated learning ability and enhance generalisation. The data were collected from in-situ sensors measuring pH, conductivity, Ca, Na, Mg and CO3 components in different water bodies of the LiJiaZui Coal Mine in HuaiNan. The results illustrate that the proposed method achieves 99.63% accuracy in recognising water sources in the mine. Thus, the proposed discriminant model is a timely and effective online way to recognise the source types of water in mines.
    Keywords: water inrush source; light gradient boosting machine; online water sources discrimination.
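    Gradient boosting of decision trees, the principle behind LightGBM, can be illustrated with a toy pure-Python version using depth-1 stumps fitted to squared-loss residuals. The features and labels below are invented for illustration and are not the paper's dataset:

    ```python
    def fit_stump(xs, ys):
        """Best single-feature threshold stump minimising squared error."""
        best = None
        for j in range(len(xs[0])):
            for t in sorted(set(x[j] for x in xs)):
                left = [y for x, y in zip(xs, ys) if x[j] <= t]
                right = [y for x, y in zip(xs, ys) if x[j] > t]
                if not left or not right:
                    continue
                lv, rv = sum(left) / len(left), sum(right) / len(right)
                err = sum((y - (lv if x[j] <= t else rv)) ** 2
                          for x, y in zip(xs, ys))
                if best is None or err < best[0]:
                    best = (err, j, t, lv, rv)
        _, j, t, lv, rv = best
        return lambda x: lv if x[j] <= t else rv

    def boost(xs, ys, rounds=20, lr=0.5):
        """Gradient boosting on squared loss: each stump fits the residuals."""
        preds, stumps = [0.0] * len(xs), []
        for _ in range(rounds):
            residuals = [y - p for y, p in zip(ys, preds)]
            s = fit_stump(xs, residuals)
            stumps.append(s)
            preds = [p + lr * s(x) for p, x in zip(preds, xs)]
        return lambda x: sum(lr * s(x) for s in stumps)

    # toy water samples: (pH, conductivity), labelled +1 / -1 for two source types
    xs = [(7.8, 300), (7.6, 320), (7.9, 310), (6.1, 900), (6.3, 950), (6.0, 880)]
    ys = [1, 1, 1, -1, -1, -1]
    model = boost(xs, ys)
    ```

    LightGBM adds histogram-based splitting and leaf-wise tree growth on top of this basic additive scheme, which is what makes it fast enough for online discrimination.
    
    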

  • Unmanned surface vehicle adaptive decision model for changing weather   Order a copy of this article
    by Han Zhang, Xinzhi Wang, Xiangfeng Luo, Shaorong Xie, Shixiong Zhu 
    Abstract: The autonomous decision-making capability of an unmanned surface vehicle (USV) is the basis for many tasks, such as obstacle avoidance, tracking and navigation. Most works ignore the variability of the scene when making behavioural decisions. For example, traditional decision-making methods are not adaptable to dynamic environments, especially the changing weather that a USV is likely to encounter. To solve the low adaptability of a USV using a single decision model for autonomous decision-making in changing weather, we propose an adaptive model based on the human memory cognitive process. It uses deep learning algorithms to classify weather and reinforcement learning algorithms to make decisions. Simulated experiments are carried out on a USV obstacle avoidance task in a Unity3D ocean scene to test our model. Experiments show that our model's decision-making accuracy in changing weather is 27% higher than using only a single decision model.
    Keywords: brain memory cognitive process; reinforcement learning; weather classification; adaptive model.

  • Wireless energy consumption optimisation using node coverage controlling in linear-type network   Order a copy of this article
    by Gaifang Xin, Jun Zhu, Chengming Luo, Jing Tang, Wei Li 
    Abstract: With the mushrooming development of automatic, intelligent and unmanned technologies, the application of wireless sensor networks has become a hot topic. In narrow-band structures such as corridors and tunnels, wireless sensor networks are used to detect environmental parameters. To address the unbalanced energy consumption of wireless nodes along the length direction, this paper proposes a coverage controlling strategy and develops a linear-type network using several sensor nodes, with a base station node deployed in the narrow-band structure. Firstly, the wireless perception models, comprising routing path, coverage model, link load and data credible rate, are analysed for these special monitoring environments; secondly, the survival lifetime of the linear-type network is solved based on the energy consumption of every wireless node; thirdly, in consideration of the accidental death of deployed nodes, a time-sharing scheduling strategy is used to guarantee the stability of the monitoring network; finally, the experimental results show that the proposed coverage control strategy can optimise the survival time and achieve energy efficiency of the linear-type network, which can provide a reference for target positioning, operation safety, environmental monitoring and disaster assessment.
    Keywords: linear-type structure; wireless sensor network; energy efficiency; coverage controlling.

  • FACF: fuzzy areas-based collaborative filtering for point-of-interest recommendation   Order a copy of this article
    by Ive Tourinho, Tatiane Rios 
    Abstract: Several online social networks collect information from their users' interactions (co-tagging of photos, co-rating of products, etc.), producing a large amount of activity-based data. As a consequence, this kind of information is used by these social networks to provide their users with recommendations about new products or friends. Moreover, recommendation systems (RS) are able to predict a person's activity with no special infrastructure or hardware, such as RFID tags, video or audio. In that sense, we propose a technique to provide personalised point-of-interest (POI) recommendations for users of location-based social networks (LBSN). Our technique assumes users' preferences can be characterised by the locations they visit and share on LBSNs, collaboratively exposing important features such as areas-of-interest (AOI) and POI popularity. Therefore, our technique, named fuzzy areas-based collaborative filtering (FACF), uses users' activities to model their preferences and recommend their next visits. We performed experiments on two real LBSN datasets, and the results show that our technique outperforms location collaborative filtering in almost all of the experimental evaluations. By fuzzy clustering of AOI, FACF is thus able to exploit the popularity of POI to improve POI recommendations.
    Keywords: recommendation systems; fuzzy clustering; location; points-of-interest.

  • ELBA-NoC: ensemble learning-based accelerator for 2D and 3D network-on-chip architectures   Order a copy of this article
    by Anil Kumar, Basavaraj Talawar 
    Abstract: Networks-on-chip (NoCs) have emerged as a scalable alternative to traditional bus and point-to-point architectures. The overall performance of NoCs becomes highly sensitive as the number of cores increases, so NoC research and development will play a key role in the design of chips with hundreds to thousands of cores in the near future. Simulation is one of the main tools used for analysing and testing new NoC architectures. To achieve the best performance vs. cost tradeoff, simulations are important for both the interconnect designer and the system designer, but software simulators are too slow for evaluating medium and large scale NoCs. This paper presents a learning framework that can be used to analyse the performance, area and power parameters of 2D and 3D NoC architectures in a fast, accurate and reliable way. The framework, named Ensemble Learning-Based Accelerator (ELBA-NoC), is built using the random forest regression algorithm to predict NoC parameters under different synthetic traffic patterns. ELBA-NoC was tested on 2D and 3D mesh, torus and cmesh NoC architectures, and the results were compared with the extensively used cycle-accurate Booksim NoC simulator. Experiments with different virtual channels, traffic patterns and injection rates were performed while varying topology sizes. The framework showed an approximate prediction error of less than 5% and an overall speedup of up to 16K.
    Keywords: network-on-chip; 2D NoC; 3D NoC; performance modelling; machine learning; regression; ensemble learning; random forest; Booksim; router; traffic pattern.
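    The core idea of random forest regression as used by ELBA-NoC, averaging trees fitted on bootstrap resamples of simulator data, can be sketched in pure Python with single-split trees. The (injection rate, latency) pairs below are invented for illustration, not Booksim measurements:

    ```python
    import random

    def fit_stump(data):
        """Best threshold split on a single feature, minimising squared error."""
        best = None
        for t in sorted(set(x for x, _ in data)):
            left = [y for x, y in data if x <= t]
            right = [y for x, y in data if x > t]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            err = sum((y - (lv if x <= t else rv)) ** 2 for x, y in data)
            if best is None or err < best[0]:
                best = (err, t, lv, rv)
        if best is None:                      # degenerate resample: constant model
            avg = sum(y for _, y in data) / len(data)
            return lambda x: avg
        _, t, lv, rv = best
        return lambda x: lv if x <= t else rv

    def random_forest(data, n_trees=30, seed=0):
        """Average many stumps, each fitted on a bootstrap resample."""
        rng = random.Random(seed)
        trees = [fit_stump([rng.choice(data) for _ in data])
                 for _ in range(n_trees)]
        return lambda x: sum(t(x) for t in trees) / len(trees)

    # hypothetical (injection rate, average latency in cycles) for one topology
    data = [(0.01, 20), (0.05, 22), (0.10, 25), (0.20, 30),
            (0.30, 45), (0.40, 80), (0.50, 160)]
    predict = random_forest(data)
    ```

    A real regressor would use deep trees over many features (topology size, virtual channels, traffic pattern); the bagging-and-averaging structure is the same.
    
    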

  • A blockchain-based authority management framework in traceability systems   Order a copy of this article
    by Jiangfeng Li, Yifan Yu, Shili Hu, Yang Shi, Shengjie Zhao, Chenxi Zhang 
    Abstract: The frequent occurrence of product quality and food safety incidents in recent years has greatly eroded the trust of consumers. Traceability systems are developed to trace the status of products through production, transportation and sales. However, tracing data stored in a traceability system's centralised database can be tampered with. In this paper, a blockchain-based authority management framework for traceability systems is proposed. Tracing data are stored on Hyperledger Fabric and the InterPlanetary File System (IPFS) to reduce data storage space and improve data privacy protection on the blockchain. In the framework, using the role-based access control (RBAC) mechanism, a blockchain-based RBAC model is presented by defining entities, functions and rules. Additionally, components in four layers are designed in the framework, and strategies for operation flows are presented to achieve authority management in business applications. The framework not only guarantees the integrity of tracing data, but also prevents confidential information from being leaked. Experiments show that the framework performs better in time and storage than existing approaches.
    Keywords: blockchain; authority management; RBAC model; Hyperledger Fabric; IPFS; traceability system.
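    The RBAC idea at the core of the framework can be sketched in a few lines. The roles, users, and permissions below are hypothetical; in the actual framework the model is defined by entities, functions, and rules and enforced on Hyperledger Fabric:

```python
# Hypothetical RBAC tables: role -> permissions, user -> roles.
ROLE_PERMS = {
    "producer":    {"create_trace", "read_trace"},
    "transporter": {"update_location", "read_trace"},
    "consumer":    {"read_trace"},
}
USER_ROLES = {"alice": {"producer"}, "bob": {"consumer"}}

def is_authorised(user, permission):
    """Grant access iff any role assigned to the user carries the permission."""
    return any(permission in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```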

  • The analysis of stego image visual quality for a data-hiding scheme based on a two-layer turtle shell matrix   Order a copy of this article
    by Ji-Hwei Horng, Xiao-zhu Xie, Chin-Chen Chang 
    Abstract: In 2018, Xie et al. proposed a novel data-hiding scheme based on a two-layer turtle shell matrix, claiming that its stego image visual quality is superior to that of state-of-the-art methods regardless of the features of the cover images. In this research note, we present a theoretical analysis of the stego image quality of Xie et al.'s method based on the symmetry of the matrix. We find that their simulation outcomes do not support the claim that their embedding capacity is larger than those of previously proposed data-hiding methods under the same stego image visual quality. Additional simulations show that our experimental outcomes coincide with the results of our theoretical analysis. Finally, the experimental results reported by Xie et al. are also corrected.
    Keywords: theoretical analysis; two-layer turtle shell; data hiding.

  • An inquisitive analysis of ensemble learning models for diabetes disorder prediction in juxtaposition with a deep learning approach   Order a copy of this article
    by Jyoti Prakash Sahoo, Dipak Kumar Sahoo, Amlan Sourav Sahoo, Asis Kumar Tripathy, Ajit Kumar Nayak 
    Abstract: Machine learning (ML), as a procedure for finding intriguing patterns in large datasets, has been thriving for the past few years. In recent trends, machine learning has emerged as an intersection of artificial intelligence, statistics, and data analytics to fulfil complex business needs. The objective of this work is to discuss various classical ML algorithms, along with performance metrics, in comparison with ensemble learning and deep learning methods for predictive analysis of historical data. In the first step, a comparative study of different machine learning techniques (logistic regression, k-nearest neighbours, support vector machine, Gaussian naive Bayes and decision trees) is presented, along with an outcome analysis for the classification of a diabetes disease dataset. Furthermore, an inquisitive analysis of ensemble learning models (random forest, gradient boosting, and extreme gradient boosting) is implemented to compare and contrast their outcomes with those of a deep learning approach on the same dataset.
    Keywords: machine learning; ensemble learning; deep learning; disease; classification; prediction.
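    The kind of comparison the abstract describes can be sketched with scikit-learn. The synthetic dataset and the two chosen models below stand in for the diabetes dataset and the full model roster used in the study:

```python
# Hypothetical sketch: cross-validated comparison of a classical model
# against an ensemble model, mirroring the study's methodology.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the diabetes disease dataset
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

scores = {}
for name, clf in [("logistic_regression", LogisticRegression(max_iter=1000)),
                  ("gradient_boosting", GradientBoostingClassifier(random_state=0))]:
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()  # mean accuracy
```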

  • Coupling model based on grey relational analysis and stepwise discriminant analysis for subsidence discrimination of foundations in soft clay areas   Order a copy of this article
    by Bo Li, Nian Liu, Wei Wang 
    Abstract: We selected grey relational and stepwise discriminant analyses as basic models, proposed a coupling discrimination method, and established a coupling discriminant model of foundation subsidence in soft soil areas to address the challenges in discriminant analysis of the seismic subsidence grade of soft soil. In this model, the samples to be discriminated and the reference samples were first analysed by indicator relation analysis. Seismic subsidence grades were ranked according to the correlation, and discriminant grades were screened. Finally, the seismic subsidence grades of the samples that met the criteria were confirmed with stepwise discriminant analysis. Actual sample data were calculated, and the discriminant results were compared with those of the traditional model to verify the applicability and accuracy of the coupling model. We obtained good evaluation results, which provide a new method for discriminant analysis of soft soil seismic subsidence grades.
    Keywords: soft clay; grey relational analysis; stepwise discriminant analysis; coupling model.
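    The grey relational step described above can be sketched as follows. This is the textbook grey relational grade computation, not the authors' exact coupling model; the distinguishing coefficient rho = 0.5 is the conventional default:

```python
def grey_relational_grades(reference, samples, rho=0.5):
    """Grey relational grade of each sample sequence against a reference
    sequence: mean of the pointwise grey relational coefficients."""
    deltas = [[abs(r - s) for r, s in zip(reference, seq)] for seq in samples]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in deltas:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

    A sample identical to the reference attains the maximum grade of 1.0; ranking samples by grade is what orders the seismic subsidence grades before the stepwise discriminant stage.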

  • Efficient self-adaptive access control for personal medical data in emergency setting   Order a copy of this article
    by Yifan Wang, Jianfeng Wang 
    Abstract: The notion of access control allows data owners to outsource their data to cloud servers while sharing the data with legally authorised users. Traditional access control techniques, however, only allow authorised users to access the shared data; it is intractable to obtain the required data when the data owner encounters emergency circumstances, such as medical first aid. Recently, Yang et al. proposed a self-adaptive access control scheme that ensures secure data sharing in both normal and emergency medical scenarios. However, their construction involves an emergency contact person, and we argue that it suffers from two weaknesses: (i) it is vulnerable to a single point of failure when the emergency contact person is offline; (ii) the two-cloud model brings extra computation and communication overhead. To overcome these shortcomings, we present a new, efficient self-adaptive medical data access control scheme that integrates fuzzy identity-based encryption and convergent encryption. Specifically, the proposed construction allows access to patients' data via their fingerprints in emergency settings. Furthermore, the proposed scheme supports cross-user data deduplication and improves system performance through convergent encryption. Experimental results show that our scheme has an advantage in efficiency.
    Keywords: self-adaptive access control; privacy-preserving; medical data storage; secure deduplication.
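    The deduplication property of convergent encryption mentioned above can be illustrated in a few lines: the key is derived from the content itself, so identical plaintexts encrypt to identical ciphertexts and the cloud can store one copy. The SHA-256 XOR keystream below is a toy cipher for illustration only; practical convergent encryption uses a block cipher such as AES:

```python
import hashlib

def _keystream(key, length):
    """Deterministic SHA-256 counter-mode keystream (illustrative only)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return stream[:length]

def convergent_encrypt(plaintext):
    """Derive the key from the content itself: identical plaintexts yield
    identical ciphertexts, enabling cross-user deduplication."""
    key = hashlib.sha256(plaintext).digest()
    ct = bytes(p ^ s for p, s in zip(plaintext, _keystream(key, len(plaintext))))
    return key, ct

def convergent_decrypt(key, ciphertext):
    """XOR with the same keystream recovers the plaintext."""
    return bytes(c ^ s for c, s in zip(ciphertext, _keystream(key, len(ciphertext))))
```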

  • Edge servers placement in mobile edge computing using stochastic Petri nets   Order a copy of this article
    by Daniel Carvalho, Francisco Airton Silva 
    Abstract: Mobile edge computing (MEC) is a network architecture that takes advantage of cloud computing features (such as high availability and elasticity) and makes use of computational resources available at the edge of the network to enhance the mobile user experience by decreasing service latency. MEC solutions need to dynamically place requests as close as possible to their users. However, request placement depends not only on the geographical location of the servers but also on their requirements. Based on this fact, this paper proposes a stochastic Petri net (SPN) model to represent a MEC scenario and analyses its performance, focusing on the parameters that directly impact the service mean response time (MRT) and the level of resource usage. To demonstrate the applicability of our work, we present three case studies with numerical analysis using real-world values. The main objective is to provide a practical guide to help infrastructure administrators adapt their architectures, finding a trade-off between MRT and level of resource usage.
    Keywords: mobile edge computing; internet of things; stochastic models; server placement.

  • Application of particle swarm optimisation for coverage estimation in software testing   Order a copy of this article
    by Boopathi Muthusamy, Sujatha Ramalingam, C. Senthil Kumar 
    Abstract: A Markov approach for test case generation and code coverage estimation using particle swarm optimisation is proposed. Initially, the dd-graph is derived from the control flow graph of the software code by joining decision-to-decision nodes. In the dd-graph, the sequences of independent paths are identified using c-uses and p-uses based on a set-theoretic approach and compared with the cyclomatic complexity. Test cases with integer, float and Boolean variables are generated automatically. Using this initial test suite, a code coverage summary is generated with the gcov code coverage analysis tool, and the branch probability percentages are taken as the TPM values for each branch in the dd-graph. Path coverage, defined as the product of node coverage and TPM values, is used as the fitness function. The algorithm is iterated until it reaches 100% code coverage on each independent test path. The randomness of the proposed approach is compared with a genetic algorithm.
    Keywords: particle swarm optimisation; dd-graph; mixed data type variables; branch percentage; TPM-based fitness function; most critical paths.
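    The fitness function described above (the product of node coverage and the TPM values along a path) can be sketched as follows; the path encoding and the handling of edges absent from the TPM are assumptions for illustration:

```python
def path_fitness(path, tpm, total_nodes):
    """Fitness of a candidate test path: the fraction of dd-graph nodes it
    covers, multiplied by the branch transition probabilities (TPM values)
    along its edges. Edges absent from the TPM contribute probability 0."""
    node_coverage = len(set(path)) / total_nodes
    prob = 1.0
    for edge in zip(path, path[1:]):
        prob *= tpm.get(edge, 0.0)
    return node_coverage * prob
```

    Each particle in the swarm would encode a candidate path, and the swarm maximises this value until every independent path is fully covered.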

  • Enhancing user and transaction privacy in bitcoin with unlinkable coin mixing scheme   Order a copy of this article
    by Albert Kofi Kwansah Ansah, Daniel Adu-Gyamfi 
    Abstract: The concept of coin mixing is significant in blockchain, achieves anonymity, and has merited application in bitcoin. Although several coin mixing schemes have been proposed, we point out that they either hoard input transactions and address mappings or do not fully satisfy all requirements of practical anonymity. This paper proposes a coin mixing scheme (combining a mixing countersignature scheme, a ring signature, and a coin mixing approach) that allows users to transact untraceably and unlinkably without having to trust a third party to keep their coins safe. Simulation results for our novel countersignature scheme prove its correctness, with an average running time of 4 s using PBC Type A.80. The scheme's security and privacy rest on a standard ring signature, ECDSA unforgeability and our countersignature. We demonstrated the efficiency of the mixing scheme by setting up a private Bitcoin network with Bitcoin Core's regtest mode. The mix takes 80, 160, 320, 640 and 800 seconds to service 500, 1000, 2000, 4000 and 5000 users, respectively; the average running time thus scales linearly with the number of users.
    Keywords: bitcoin blockchain; user and transaction privacy; coin mixing; bilinear pairing and ECDSA; ring signature.

  • NO2 pollutant concentration forecasting for air quality monitoring by using an optimised deep learning bidirectional GRU model   Order a copy of this article
    by Shilpa Sonawani, Kailas Patil, Prawit Chumchu 
    Abstract: Air pollution is one of the most crucial environmental problems, as it has adverse effects on human health and agriculture and is also responsible for climate change and global warming. Several observations have warned about the rising level of the pollutant nitrogen dioxide (NO2) in the atmosphere in many regions. Studies have also shown that nitrogen dioxide is associated with diseases such as diabetes mellitus, hypertension, stroke, chronic obstructive pulmonary disease (COPD), asthma, bronchitis, and pneumonia, and that high levels can lead to death by asphyxiation from fluid in the lungs. It can also have negative effects on vegetation, leading to reduced growth and damage to leaves. Considering these devastating effects, an optimised bidirectional GRU model is proposed to estimate and monitor NO2 concentration. Its performance is evaluated against other models, including timeseries methods, sklearn machine-learning regression methods, AutoML frameworks, and advanced hybrid deep learning techniques. The model is further optimised for the number of features, number of neurons, number of lookbacks and number of epochs, and is implemented on a real-time dataset of Pune city in India. The model can help government and central authorities to prevent excessive pollution levels and their adverse effects, and can support smart homes in controlling pollution levels.
    Keywords: air pollution; air quality; AUTOML; bidirectional GRU; deep learning; nitrogen dioxide; NO2; timeseries forecasting.
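    Before any recurrent model such as the bidirectional GRU above can be trained on a pollutant series, the data must be reshaped into lookback windows, i.e. (past window, next value) pairs; a minimal sketch of that preprocessing step (the model itself is omitted):

```python
def make_lookback_windows(series, lookback):
    """Split a univariate series into (window, next-value) training pairs,
    where each window holds `lookback` consecutive past observations."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])   # past `lookback` readings
        y.append(series[i + lookback])     # value to forecast
    return X, y
```

    Tuning the `lookback` length is one of the hyperparameters the abstract mentions optimising.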

  • A knowledge elicitation framework in ranking healthcare providers using rough set with formal concept analysis   Order a copy of this article
    by Arati Mohapatro, S.K. Mahendran, Tapan Kumar Das 
    Abstract: Ranking healthcare institutions involves generating their relative scores based on infrastructure, process and other quality dynamics. Becoming a top-ranking institute depends on the overall score secured against the hospital quality parameters being assessed. However, the parameters are not equally important when it comes to ranking. Hence, the objective of this research is to explore the parameters that significantly influence the ranking score. In this paper, a hybrid model is presented for knowledge extraction, which employs rough set on intuitionistic fuzzy approximation space (RSIFAS) for classification, the Learning from Examples Module 2 (LEM2) algorithm for generating decision rules, and formal concept analysis (FCA) for attribute exploration. The model is demonstrated using AHA US News score data for cancer specialisation. The result signifies the connection between quality attributes and ranking. Finally, the leading attribute and its particular values are identified for different ranking levels.
    Keywords: rough set with intuitionistic fuzzy approximation space; formal concept analysis; hospital ranking; knowledge mining; attribute exploration.

  • Real-time segmentation of weeds in cornfields based on depthwise separable convolution residual network   Order a copy of this article
    by Hao Guo, Shengsheng Wang 
    Abstract: Traditional manual spraying of pesticides not only leads to greater pesticide use but also to environmental pollution. Intelligent weeding devices, by contrast, can identify weeds and crops through sensing devices for selective spraying, which effectively reduces pesticide use. Accurate and efficient identification of crops and weeds is therefore crucial to the development of mechanised weeding. To improve the segmentation accuracy and real-time performance on crops and weeds, we propose a lightweight network based on the encoder-decoder architecture, namely SResNet. A shuffle-split-separable-residual block is employed to compress the model while increasing the number of network layers, thereby extracting richer pixel category information. In addition, the model is optimised with a weighted cross-entropy loss function to address the imbalanced pixel ratios of background, crops, and weeds. Experimental results show that the proposed method greatly improves segmentation accuracy and real-time segmentation speed on the corn and weed dataset.
    Keywords: weed segmentation; convolutional network; residual network; machine vision; image recognition.
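    The parameter saving that motivates depthwise separable convolution (the building block of such lightweight networks) can be checked with a one-line count: a standard k x k convolution costs k*k*C_in*C_out weights, while the depthwise-plus-pointwise factorisation costs k*k*C_in + C_in*C_out:

```python
def conv_params(k, c_in, c_out):
    """Weight counts (biases ignored) for a standard convolution versus
    its depthwise separable factorisation."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out  # depthwise + 1x1 pointwise
    return standard, separable
```

    For a 3x3 layer with 64 input and 128 output channels this is 73,728 versus 8,768 weights, roughly an 8x reduction, which is what keeps networks built from such blocks lightweight enough for real-time segmentation.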

  • Array manifold matching algorithm based on fourth-order cumulant for 2D DOA estimation with two parallel nested arrays   Order a copy of this article
    by Sheng Liu, Jing Zhao, Yu Zhang 
    Abstract: In this paper, a two-dimensional (2D) direction-of-arrival (DOA) estimation algorithm with two parallel nested arrays is developed. Firstly, a construction method for fourth-order cumulant (FOC) matrices is given according to the distribution of sensors. Then an existing DOA estimation technique is used to estimate the elevation angles, and an improved unilateral array manifold matching (AMM) algorithm is used to estimate the azimuth angles. Compared with some classical 2D DOA estimation algorithms, the proposed algorithm has much better estimation performance, particularly in low-SNR environments. Compared with some traditional FOC-based algorithms, the proposed algorithm has higher estimation precision. Simulation results illustrate the validity of the proposed algorithm.
    Keywords: DOA estimation; fourth-order cumulant; array manifold matching; two parallel nested arrays.

Special Issue on: Cloud Computing and Networking for Intelligent Data Analytics in Smart City

  • Real time ECG signal preprocessing and neuro-fuzzy-based CHD risk prediction   Order a copy of this article
    by S. Satheeskumaran, C. Venkatesan, S. Saravanan 
    Abstract: Coronary heart disease (CHD) is a major chronic disease that is directly responsible for myocardial infarction. Heart rate variability (HRV) has been used for the prediction of CHD risk in human beings. In this work, neuro-fuzzy-based CHD risk prediction is performed after preprocessing and HRV feature extraction. The preprocessing removes high-frequency noise, which is modelled as white Gaussian noise. Real-time ECG signal acquisition, preprocessing and HRV feature extraction are performed using NI LabVIEW and a DAQ board. A 30-second recording of the ECG signal was selected for both smokers and non-smokers. Various statistical parameters are extracted from the HRV to predict CHD risk among the subjects. The subjects are then classified as normal or at CHD risk using a neuro-fuzzy classifier, whose classification performance is compared with that of ANN, KNN, and decision tree classifiers.
    Keywords: electrocardiogram; Gaussian noise; wavelet transform; heart rate variability; neuro-fuzzy technique; coronary heart disease.
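    Two standard time-domain HRV statistics that such a feature-extraction stage typically computes are SDNN and RMSSD over the RR intervals. The abstract does not list the exact parameters used, so these are representative examples rather than the authors' feature set:

```python
import math

def hrv_features(rr_ms):
    """SDNN (standard deviation of RR intervals) and RMSSD (root mean
    square of successive differences) from RR intervals in milliseconds."""
    mean = sum(rr_ms) / len(rr_ms)
    sdnn = math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd
```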

  • An intelligent block matching approach for localisation of copy-move forgery in digital images   Order a copy of this article
    by Gulivindala Suresh, Chanamallu Srinivasa Rao 
    Abstract: Block-based copy-move forgery detection (CMFD) methods work with features from overlapping blocks. As overlapping blocks are involved, thresholds on similarity and physical distance are defined to identify the duplicated regions. However, these thresholds are set manually when localising the forged regions. To overcome this, an intelligent block matching approach for localisation is proposed using colour and texture features (CTF) through the firefly algorithm. The proposed CTF method was evaluated on a standard database, achieving an average true detection rate of 0.98 and an average false detection rate of 0.07. The method is robust against brightness change, colour reduction, blurring, contrast adjustment attacks, and additive white Gaussian noise. Performance analysis validates its superiority over other existing methods.
    Keywords: digital forensics; copy-move forgery detection; intelligent block matching; firefly algorithm.

  • Optimised fuzzy clustering-based resource scheduling and dynamic load-balancing algorithm for fog computing environment   Order a copy of this article
    by Bikash Sarma, R. Kumar, Themrichon Tuithung 
    Abstract: Fog computing, an extension of cloud computing, is an influential and standard tool for running Internet of Things (IoT) applications. At the network edge, IoT applications can be implemented by fog computing, an emerging technology in the cloud computing infrastructure. A key process in fog computing is resource scheduling: resource allocation in fog-based computing minimises the load on the cloud. Maximising throughput, optimising available resources, reducing response time, and eliminating the overload of any single resource are the goals of a load-balancing algorithm. This paper proposes an Optimised Fuzzy Clustering Based Resource Scheduling and Dynamic Load Balancing (OFCRS-DLB) procedure for resource scheduling and load balancing in fog computing. For resource scheduling, it combines an enhanced form of fast fuzzy C-means (FFCM) with the crow search optimisation (CSO) algorithm. Finally, the loads or requests are balanced by applying a scalability decision technique in the load-balancing algorithm. The proposed method is evaluated on standard measures, including response time, processing time, latency ratio, reliability, resource use, and energy consumption, and its proficiency is demonstrated by comparison with other evolutionary methods.
    Keywords: fog computing; fast fuzzy C-means clustering; crow search optimisation algorithm; scalability decision for load balancing.
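    The fuzzy C-means step at the heart of such a scheduler assigns each request a graded membership in every resource cluster rather than a hard label. For one-dimensional data the textbook membership update (with fuzzifier m = 2) looks like this; it is a generic FCM sketch, not the authors' enhanced FFCM:

```python
def fcm_memberships(point, centres, m=2.0):
    """Fuzzy C-means membership of one point with respect to each cluster
    centre: u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)). Memberships sum to 1."""
    d = [abs(point - c) for c in centres]
    if 0.0 in d:                      # point coincides with a centre
        return [1.0 if di == 0.0 else 0.0 for di in d]
    return [1.0 / sum((di / dk) ** (2 / (m - 1)) for dk in d) for di in d]
```

    A point midway between two centres gets membership 0.5 in each; the scheduler can then weight resource assignments by these graded memberships.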