Forthcoming articles

International Journal of Computational Science and Engineering (IJCSE)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Computational Science and Engineering (155 papers in press)

Regular Issues

  •   A decision system based on active perception and intelligent analysis for key location security information
    (Open Access: Free Full-text Access, CC-BY-NC-ND)
    by Jingzhao Li, Zihua Chen, Guangming Cao, Mei Zhang 
    Abstract: In many enterprises, security problems (or latent dangers) at key locations cannot be processed in time, because the security data are entered manually by multiple security workers at different times, which leads to disordered data in the related security information systems, and because analysis and decision files must be processed manually. To solve this problem, this paper presents a decision system for key-location security information based on active perception and intelligent analysis that helps staff make proper decisions. First, the system is developed on a C/S framework, and its functions cover four aspects: intelligent semantic analysis and extraction, a standard keyword database, intelligent (analysis) retrieval decision, and an early-warning function. Then, a perception model based on deep learning and an intelligent decision analysis model are constructed to realise these functions. Experimental results show that the system can significantly reduce the heavy workload of security inspectors, carry out intelligent retrieval and decision analysis, prevent safety accidents and reduce their frequency. It has high social application value and innovation.
    Keywords: security risk information; semantic analysis; active perception; ant colony optimisation; intelligent decision making.
    DOI: 10.1504/IJCSE.2019.10019697
     
  • A personalised ontology ranking model based on analytic hierarchy process   Order a copy of this article
    by Jianghua Li, Chen Qiu 
    Abstract: Ontology ranking is one of the important functions of ontology search engines, which order the searched ontologies according to the ranking model applied. A good ranking method can help users efficiently acquire exactly the required ontology from a considerable number of search results. Existing approaches to ranking ontologies take only a single aspect into consideration and ignore users' personalised demands, and hence produce unsatisfactory results. We believe that both the factors influencing ontology importance and the users' demands need to be considered comprehensively in ontology ranking. A personalised ontology ranking model based on the hierarchical analysis approach is proposed in this paper. We build a hierarchically analytical model and apply an analytic hierarchy process to quantify ranking indexes and assign weights to them. The experimental results show that the proposed method can rank ontologies effectively and meet users' personalised demands.
    Keywords: hierarchical analysis approach; ontology ranking; personalised demands; weights assignment.

  • Collective intelligence value discovery based on citation of science article   Order a copy of this article
    by Yi Zhao, Zhao Li, Bitao Li, Keqing He, Junfei Guo 
    Abstract: Citation recommendation supports one of the core tasks of scientific paper writing. When the number of references grows without a clear classification, the similarity measure of a recommendation system performs poorly. In this work, we propose a novel recommendation approach that integrates classification, clustering and recommendation models into one system. In an evaluation on the ACL Anthology papers network data, we effectively use a complex network of knowledge-tree node degrees (referring to the number of papers) to enhance recommendation accuracy. The experimental results show that our model generates better recommended citations, achieving 10% higher accuracy and an 8% higher F-score than the keyword match method when the data is sufficiently large. We make full use of collective intelligence to serve the public.
    Keywords: citation recommendation; classification; clustering; similarity; citation network.

  • Differential evolution with k-nearest-neighbour-based mutation operator   Order a copy of this article
    by Gang Liu, Cong Wu 
    Abstract: Differential evolution (DE) is one of the most powerful global numerical optimisation algorithms in the evolutionary algorithm family, and it is popular for its simplicity and effectiveness in solving numerous real-world optimisation problems in real-valued spaces. The performance of DE depends on its mutation strategy. However, the traditional mutation operators have difficulty in balancing exploration and exploitation. To address this issue, a k-nearest-neighbour-based mutation operator is proposed in this paper for improving the search ability of DE (see the sketch below). This operator is used to search the areas in which the vector density distribution is sparse. The method enhances the exploitation of DE and accelerates the convergence of the algorithm. In order to evaluate the effectiveness of the proposed mutation operator, this paper compares the proposed algorithm with other state-of-the-art evolutionary algorithms. Experimental verifications are conducted on the CEC05 competition benchmarks and two real-world problems. The results indicate that the proposed mutation operator is able to enhance the performance of DE and can perform significantly better than, or at least comparably with, several state-of-the-art DE variants.
    Keywords: differential evolution; unilateral sort; k-nearest-neighbour-based mutation; global optimisation.
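
    The abstract leaves the operator's exact form unspecified; as a rough illustration only, the sketch below applies a DE/rand/1-style mutation whose difference vectors are drawn from the k nearest neighbours of the base vector. All names and parameter values are illustrative, not the paper's.

```python
import numpy as np

def knn_mutation(pop, i, k=5, F=0.5):
    """DE mutation restricted to the k nearest neighbours of individual i.

    Illustrative only: the paper's operator additionally targets regions
    where the vector density is sparse, which is not modelled here.
    """
    dist = np.linalg.norm(pop - pop[i], axis=1)
    nbrs = np.argsort(dist)[1:k + 1]          # nearest neighbours, excluding i
    r1, r2, r3 = np.random.choice(nbrs, 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

# usage: mutate the first individual of a random population
pop = np.random.uniform(-5, 5, size=(30, 10))
mutant = knn_mutation(pop, i=0)
```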

  • Using Gaussian mixture model to fix errors in SFS approach based on propagation   Order a copy of this article
    by Huang WenMin 
    Abstract: A new Gaussian mixture model is used in this paper to improve the quality of the propagation method for shape from shading (SFS). The improved algorithm can overcome most difficulties of that method, including slow convergence, interdependence of propagation nodes and error accumulation. To address slow convergence and the interdependence of propagation nodes, a stable propagation source and integration path are used to ensure that the reconstruction of each pixel in the image is independent. A Gaussian mixture model based on prior conditions is proposed to correct the integration error. Good results have been achieved in experiments on Lambertian composite images under front illumination.
    Keywords: shape from shading; propagation method; silhouette; Gaussian mixture model; surface reconstruction.

  • Estimation of distribution algorithms based on increment clustering for multiple optima in dynamic environments   Order a copy of this article
    by Bolin Yu 
    Abstract: Aiming to locate and track multiple optima in dynamic multimodal environments, an estimation of distribution algorithm based on incremental clustering is proposed. The main idea of the proposed algorithm is to construct several probability models based on incremental clustering (sketched below), which improves the performance of locating multiple local optima and helps to find the global optimal solution quickly for dynamic multimodal problems. Meanwhile, a diffusion search policy is introduced to enhance the diversity of the population in a guided fashion when the environment changes. The policy uses both the current population information and part of the history information of the optimal solutions available. Experimental studies on the moving peaks benchmark are carried out to evaluate the performance of the proposed algorithm in comparison with several state-of-the-art algorithms from the literature. The results show that the proposed algorithm is effective for functions with moving optima and can adapt to dynamic environments rapidly.
    Keywords: EDAs; dynamic multimodal problems; diffusion policy; incremental clustering.
    DOI: 10.1504/IJCSE.2017.10010004
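
    As a loose illustration of the core idea (one probability model per cluster of good solutions), the sketch below fits a Gaussian to each cluster and samples new candidates from it; KMeans stands in for the paper's incremental clustering, and the diffusion policy is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans   # stand-in for the paper's incremental clustering

def eda_step(pop, fitness, n_clusters=3, n_samples=30):
    """One EDA generation: fit a Gaussian per cluster of the better half of
    the population, then sample new candidates from each model."""
    elite = pop[np.argsort(fitness)[:len(pop) // 2]]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(elite)
    samples = []
    for c in range(n_clusters):
        members = elite[labels == c]
        mu, sigma = members.mean(axis=0), members.std(axis=0) + 1e-6
        samples.append(np.random.normal(mu, sigma,
                                        size=(n_samples // n_clusters, pop.shape[1])))
    return np.vstack(samples)

# usage on a toy 2D minimisation problem
pop = np.random.uniform(-5, 5, (40, 2))
new_pop = eda_step(pop, (pop ** 2).sum(axis=1))
```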
     
  • A blind image watermarking algorithm based on amalgamation domain method   Order a copy of this article
    by Qingtang Su 
    Abstract: Combining the spatial domain and the frequency domain, a novel blind digital image watermarking algorithm is proposed in this paper to address the copyright protection problem. To embed a watermark, the generation principle and distribution features of the direct current (DC) coefficient are used to modify the pixel values directly in the spatial domain (see the sketch below), and four different sub-watermarks are then embedded into different areas of the host image. When extracting the watermark, the sub-watermarks are extracted in a blind manner according to the DC coefficients of the watermarked image and the key-based quantisation step, and a statistical rule of 'first select, then combine' is applied to form the final watermark. Hence, the proposed algorithm not only has the simplicity and speed of the spatial domain but also the high robustness of the DCT domain. Extensive experimental results have shown that the proposed watermarking algorithm achieves good watermark invisibility and strong robustness against many attacks, e.g. JPEG compression, cropping and added noise. Comparison results also show the superiority of the proposed algorithm.
    Keywords: information security; digital watermarking; combine domain; direct current.
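
    Since the DC coefficient of an orthonormal 8x8 DCT block is proportional to the block mean, a watermark bit can be embedded by adjusting pixel values directly in the spatial domain, as the abstract describes. The sketch below shows one plausible quantisation-based reading of that idea; the step q is hypothetical (the paper derives it from a key).

```python
import numpy as np

def embed_bit(block, bit, q=12.0):
    """Embed one bit by quantising the block mean (and hence the DCT DC
    coefficient) directly in the spatial domain; q is a hypothetical step."""
    m = block.mean()
    base = np.floor(m / q) * q
    target = base + (0.25 * q if bit == 0 else 0.75 * q)
    return np.clip(block + (target - m), 0, 255)   # shift all pixels equally

def extract_bit(block, q=12.0):
    return 0 if (block.mean() % q) < q / 2 else 1

# usage on one 8x8 block
block = np.random.randint(0, 256, (8, 8)).astype(float)
print(extract_bit(embed_bit(block, 1)))   # -> 1
```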

  • A data cleaning method for heterogeneous attribute fusion and record linkage   Order a copy of this article
    by Huijuan Zhu, Tonghai Jiang, Yi Wang, Li Cheng, Bo Ma, Fan Zhao 
    Abstract: In the big data era, when massive heterogeneous data are generated from various data sources, the cleaning of dirty data is critical for reliable data analysis. Existing rule-based methods are generally developed in a single-data-source environment, so issues such as data standardisation and duplication detection for attributes of different data types are not fully studied. To address these challenges, we introduce a method based on dynamically configurable rules that integrates data detection, modification and transformation. Secondly, we propose type-based blocking and a varying-window-size selection mechanism based on the classic sorted-neighbourhood algorithm. We present a reference implementation of our method in a real-life data fusion system and validate its effectiveness and efficiency using recall and precision metrics. Experimental results indicate that our method is suitable for scenarios with multiple data sources and heterogeneous attribute properties.
    Keywords: big data; varying window; data cleaning; record linkage; record similarity; SNM; type-based blocking.

  • Chinese question speech recognition integrated with domain characteristics   Order a copy of this article
    by Shengxiang Gao, Dewei Kong, Zhengtao Yu, Jianyi Guo, Yantuan Xian 
    Abstract: Aiming at domain adaptation in speech recognition, we propose a speech recognition method for Chinese question sentences based on domain characteristics. Firstly, by virtue of the syllable association characteristics implied in domain terms, syllable feature sequences of domain terms are used to construct the domain acoustic model. Secondly, in the decoding process of domain-specific Chinese question speech recognition, we use domain knowledge relationships to optimise and prune the speech decoding network generated by the language model, to improve continuous speech recognition. Experiments on a tourism domain corpus show that the proposed method achieves accuracies of 80.50% on Chinese question speech recognition and 91.50% on domain term recognition.
    Keywords: Chinese question speech recognition; speech recognition; domain characteristic; acoustic model library; domain terms; language model; domain knowledge library.
    DOI: 10.1504/IJCSE.2017.10008632
     
  • IFOA: an improved forest algorithm for continuous nonlinear optimisation   Order a copy of this article
    by Borong Ma, Zhixin Ma, Dagan Nie, Xianbo Li 
    Abstract: The Forest Optimisation Algorithm (FOA) is a new evolutionary optimisation algorithm which is inspired by seed dispersal procedure in forests, and is suitable for continuous nonlinear optimisation problems. In this paper, an Improved Forest Optimisation Algorithm (IFOA) is introduced to improve convergence speed and the accuracy of the FOA, and four improvement strategies, including the greedy strategy, waveform step, preferential treatment of best tree and new-type global seeding, are proposed to solve continuous nonlinear optimisation problems better. The capability of IFOA has been investigated through the performance of several experiments on well-known test problems, and the results prove that IFOA is able to perform global optimisation effectively with high accuracy and convergence speed.
    Keywords: forest optimisation algorithm; evolutionary algorithm; continuous nonlinear optimisation; scientific decision-making.

  • A location-aware matrix factorisation approach for collaborative web service QoS prediction   Order a copy of this article
    by Zhen Chen, Limin Shen, Dianlong You, Chuan Ma, Feng Li 
    Abstract: Predicting unknown QoS values is often required because most users will have invoked only a small fraction of web services. Previous prediction methods benefit from mining neighbourhood interest from explicit user QoS ratings. However, the implicit but significant location information that could alleviate the data sparsity problem is overlooked. In this paper, we propose a unified matrix factorisation model (see the sketch below) that fully capitalises on the advantages of both the location-aware neighbourhood and latent factor approaches. We first develop a multiview-based neighbourhood selection method that clusters neighbours from the views of both geographical distance and rating similarity. Then a personalised prediction model is built by transferring the wisdom of the neighbourhoods. Experimental results demonstrate that our method achieves higher prediction accuracy than other competitive approaches and better alleviates the data sparsity issue.
    Keywords: service computing; web service; QoS prediction; matrix factorisation; location awareness.
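
    A minimal sketch of the general idea, assuming `nbrs[u]` holds the location-aware neighbours of user u (the paper selects them from both geographical distance and rating similarity): an SGD matrix factorisation whose extra regulariser pulls each user's latent vector toward its neighbours' mean. All hyperparameters are illustrative.

```python
import numpy as np

def train_mf(R, nbrs, k=10, lr=0.01, lam=0.05, alpha=0.1, epochs=50):
    """SGD matrix factorisation over observed QoS entries (R > 0), with an
    extra term pulling each user factor toward its neighbours' mean."""
    n_users, n_items = R.shape
    P = 0.1 * np.random.randn(n_users, k)
    Q = 0.1 * np.random.randn(n_items, k)
    for _ in range(epochs):
        for u, i in np.argwhere(R > 0):
            e = R[u, i] - P[u] @ Q[i]
            n_mean = P[nbrs[u]].mean(axis=0) if len(nbrs[u]) else P[u]
            P[u] += lr * (e * Q[i] - lam * P[u] - alpha * (P[u] - n_mean))
            Q[i] += lr * (e * P[u] - lam * Q[i])
    return P, Q

# usage: sparse toy QoS matrix and toy neighbour lists
R = np.random.rand(50, 40) * (np.random.rand(50, 40) > 0.9)
nbrs = [[(u + 1) % 50, (u + 2) % 50] for u in range(50)]
P, Q = train_mf(R, nbrs)
pred = P[0] @ Q[3]   # predicted QoS of user 0 on service 3
```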

  • Large universe multi-authority attribute-based PHR sharing with user revocation   Order a copy of this article
    by Enting Dong, Jianfeng Wang, Zhenhua Liu, Hua Ma 
    Abstract: In the patient-centric model of health information exchange, personal health records (PHRs) are often outsourced to third parties, such as cloud service providers (CSPs). Attribute-based encryption (ABE) can be used to realise flexible access control on PHRs in the cloud environment. Nevertheless, the issues of scalability in key management, user revocation and flexible attributes remain to be addressed. In this paper, we propose a large-universe multi-authority ciphertext-policy ABE system with user revocation. The proposed scheme achieves scalable and fine-grained access control on PHRs. In our scheme, there are a central authority (CA) and multiple attribute authorities (AAs). When a user is revoked, the system public key and the other users' secret keys need not be updated. Furthermore, because our scheme supports a large attribute universe, the number of attributes is not polynomially bounded and the public parameter size does not linearly grow with the number of attributes. Our system is constructed on prime order groups and proven selectively secure in the standard model.
    Keywords: attribute-based encryption; large universe; multi-authority; personal health record; user revocation.

  • A multi-objective optimisation multicast routing algorithm with diversity rate in cognitive wireless mesh networks   Order a copy of this article
    by Zhufang Kuang 
    Abstract: Cognitive Wireless Mesh Networks (CWMNs) were developed to improve the usage ratio of the licensed spectrum. Since the spectrum opportunities for users vary over time and location, enhancing spectrum effectiveness is both a goal and a challenge for CWMNs. Multimedia applications have recently generated much interest in CWMNs supporting Quality-of-Service (QoS) communications, where multicast routing and spectrum allocation is an important challenge. In this paper, we design an effective multicast routing algorithm based on diversity rate with respect to load balancing and the number of transmissions for CWMNs. A load-balancing wireless link weight computing function and algorithm based on Diversity Rate (LBDR) are proposed, together with a load-balancing Channel and Rate Allocating algorithm based on Diversity Rate (CRADR). On this basis, a Load-balancing joint Multicast Routing, channel and Rate allocation algorithm based on Diversity rate with QoS constraints for CWMNs (LMR2D) is proposed. Its objectives are to balance the load of nodes and channels and to minimise the number of transmissions of the multicast tree. Firstly, LMR2D computes the weights of wireless links using LBDR and the Dijkstra algorithm to construct the load-balancing multicast tree step by step. Secondly, LMR2D uses CRADR to allocate channels and rates to links, based on the Wireless Broadcast Advantage (WBA). Simulation results show that LMR2D achieves the expected goal: not only can it balance the load of nodes and channels, but it also needs fewer transmissions for the multicast tree.
    Keywords: cognitive wireless mesh networks; multicast routing; spectrum allocation; load balanced; diversity rate.

  • Context discriminative dictionary construction for topic representation   Order a copy of this article
    by Shufang Wu 
    Abstract: The construction of a discriminative topic dictionary is important for describing a topic and increasing the accuracy of topic detection and tracking. In the proposed method, we rank words by mutual information, and the top words with the maximum mutual information are selected to construct the discriminative topic dictionaries. Since context words can provide a more accurate expression of the topic, during word selection we consider both the differences between topics and the context words that appear in the stories. Because a news topic is dynamic over time, it is not reasonable to keep the topic dictionary unchanged, so a dictionary updating method is also proposed. Experiments were carried out on the TDT4 corpus, adopting miss probability and false alarm probability as evaluation criteria to compare the performance of incremental TF-IDF and the proposed method. Extensive experiments show that our method provides better results.
    Keywords: discriminative dictionary; context word; topic representation; word selection.
    DOI: 10.1504/IJCSE.2017.10011825
     
  • Demystifying echo state network with deterministic simple topologies   Order a copy of this article
    by Duaa Elsarraj, Maha Al Qisi, Ali Rodan, Nadim Obeid, Ahmad Sharieh, Hossam Faris 
    Abstract: Echo State Networks (ESN) are a special type of Recurrent Neural Networks (RNN) with distinct performance in the field of reservoir computing. The state space of the ESN is initially randomised and the reservoir weights are fixed, with training done only on the state readout. Besides the advantages of ESN, some opacity remains in the dynamic properties of the reservoir owing to the presence of randomisation. Our aims in this paper are to demystify the ESN model with a completely deterministic structure, using several proposed reservoir structures (topologies), one of which is sketched below, and to compare their performance with the random ESN on different benchmark datasets. All applied topologies maintain the simplicity of the random ESN's computational complexity. Most of the topologies showed comparable or even better performance.
    Keywords: echo state network; reservoir computing; reservoir structure topology; memory capacity; echo state network algorithm; complexity.
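
    One well-known deterministic simple topology is the simple cycle reservoir, in which all reservoir weights share one value r on a ring and all input weights share one magnitude v, removing randomness entirely. A minimal sketch (readout training omitted); all sizes are illustrative.

```python
import numpy as np

def cycle_reservoir(n=200, r=0.9, v=0.5, in_dim=1):
    """Deterministic 'simple cycle' reservoir: one ring of identical weights r
    and input weights of identical magnitude v (no randomness at all)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i + 1) % n] = r
    return W, v * np.ones((n, in_dim))

def run_esn(u, W, W_in):
    """Drive the reservoir and collect states; the linear readout would be
    trained on these states (e.g. by ridge regression), not shown here."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

states = run_esn(np.sin(np.linspace(0, 20, 500)), *cycle_reservoir())
```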

  • A state space distribution approach based on system behaviour   Order a copy of this article
    by Imene Bensetira, Djamel Eddine Saidouni, Mahfud Al-la Alamin 
    Abstract: In this paper, we propose a novel approach to deal with the state space explosion problem occurring in model checking. We propose an off-line algorithm for distributed state space construction, carried out by reviewing the behaviour of the constructed system and redistributing the state space according to the accumulated information about the optimal behaviour considered. The distribution is therefore guided by the system's behaviour. The proposed policy maintains a space-time balance. The simulation and implementation of our system are based on a multi-agent technique, which fits the development of distributed systems very well. Experimental measures performed on a cluster of machines have shown very promising results for both workload balance and communication overhead.
    Keywords: model checking; combinatorial state space explosion; distributed state space construction; graph distribution; system behaviour; distributed algorithms; reachability analysis.

  • Consensus RNA secondary structure prediction using information of neighbouring columns and principal component analysis   Order a copy of this article
    by Tianhang Liu, Jianping Yin, Long Gao, Wei Chen, Minghui Qiu 
    Abstract: RNA is a family of biological macromolecules that is important to all kinds of biological processes. RNA structures are closely related to their functions; hence, determining the structure is invaluable for understanding genetic diseases and creating drugs. RNA secondary structure prediction remains an open research field. In this paper, we present a novel method that uses an RNA sequence alignment to predict a consensus RNA secondary structure. In essence, the goal of the method is to predict whether any two columns of an alignment correspond to a base pair, using the information provided by the alignment. This information includes the covariation score, the fraction of complementary nucleotides and the consensus probability matrix of the column pair and those of its neighbours. Principal component analysis is then applied to overcome the problem of over-fitting. A comparison of our method with other consensus RNA secondary structure prediction methods, including NeCFold, ELMFold, KnetFold, PFold and RNAalifold, on 47 families from Rfam (version 11.0) is performed. Results show that our method surpasses the other methods in terms of Matthews correlation coefficient, sensitivity and selectivity.
    Keywords: RNA secondary structure prediction; comparative sequence analysis; principal component analysis; information of neighbouring columns.

  • Research on RSA and Hill hybrid encryption algorithm   Order a copy of this article
    by Hongyu Yang, Yuguang Ning, Yue Wang 
    Abstract: An RSA-Hill hybrid encryption algorithm model based on random division of plaintext is proposed. First, the key of the Hill cipher is replaced by a Pascal matrix (see the sketch below). Secondly, the session key of the model is replaced by the random numbers of the plaintext division and encrypted by the RSA cipher. Finally, the dummy problem in the Hill cipher is solved, and the model can achieve a one-time pad. Security analysis and experimental results show that our method has better encryption efficiency and stronger anti-attack capacity.
    Keywords: hybrid encryption; plaintext division; Pascal matrix; RSA cipher; Hill cipher.
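
    The appeal of a Pascal matrix as a Hill key is that the lower-triangular Pascal matrix has determinant 1, so it is invertible modulo 26 and its inverse is again an integer matrix of signed binomials. A minimal sketch of the Hill step alone (the RSA-encrypted random plaintext division is omitted):

```python
import numpy as np
from math import comb

def pascal_key(n):
    """Lower-triangular Pascal matrix: det = 1, so always invertible mod 26."""
    return np.array([[comb(i, j) for j in range(n)] for i in range(n)])

def pascal_key_inv(n):
    # closed-form integer inverse: alternating-sign binomial coefficients
    return np.array([[(-1) ** (i + j) * comb(i, j) for j in range(n)] for i in range(n)])

def hill(block, K):
    return (K @ block) % 26

K, K_inv = pascal_key(4), pascal_key_inv(4)
msg = np.array([7, 4, 11, 15])                   # 'help'
assert np.array_equal(hill(hill(msg, K), K_inv), msg)
```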

  • An auction mechanism for cloud resource allocation with time-discounting values   Order a copy of this article
    by Yonglong Zhang 
    Abstract: Group-buying has emerged as a new trading paradigm and has become increasingly attractive. Both sides of the transaction benefit from group-buying: buyers enjoy a lower price and sellers receive larger orders. In this paper, we investigate an auction mechanism for cloud resource allocation with time-discounting values via group-buying, called TDVG. TDVG consists of two steps: winning seller and buyer selection, and pricing. In the first step, we choose the winning sellers and buyers in a greedy manner according to a given criterion; in the second step, we calculate the payment for each winning seller and buyer. Rigorous proof demonstrates that TDVG satisfies the properties of truthfulness, budget balance and individual rationality. Our experimental results show that TDVG achieves better total utility, matching rate and commodity utilisation than existing works.
    Keywords: cloud resource allocation; auction; time discounting values; group-buying.
    DOI: 10.1504/IJCSE.2017.10012967
     
  • Study on data sparsity in social network-based recommender system   Order a copy of this article
    by Ru Jia, Ru Li, Meng Gao 
    Abstract: With the development of information technology and the expansion of information resources, it is increasingly difficult for people to get the information they are really interested in, a problem known as information overload. Recommender systems are regarded as an important approach to dealing with information overload because they can predict users' preferences according to their records. Matrix factorisation is very successful in recommender systems, but it faces the problem of data sparsity. This paper addresses the sparsity problem from the perspective of adding more kinds of information from social networks, such as friendships and tags, into the recommendation model. The paper also validates, through experiments on a real-life dataset, the impact of users' friendships, tags and neighbours of items on reducing the sparseness of the data and improving recommendation accuracy.
    Keywords: social network-based recommender systems; matrix factorisation; data sparsity.
    DOI: 10.1504/IJCSE.2017.10012119
     
  • A novel virtual disk bandwidth allocation framework for data-intensive applications in cloud environments   Order a copy of this article
    by Peng Xiao, Changsong Liu 
    Abstract: Recently, cloud computing has become a promising distributed processing paradigm for deploying various kinds of non-trivial applications. Most of these applications are considered data-intensive and therefore require the cloud system to provide massive storage space as well as desirable I/O performance. As a result, the virtual disk technique has been widely applied in many real-world platforms to meet the requirements of such applications, and how to allocate virtual disk bandwidth efficiently becomes an important issue that needs to be addressed. In this paper, we present a novel virtual disk bandwidth allocation framework, in which a set of virtual bandwidth brokers make allocation decisions by playing two game models. Theoretical analysis and solutions are presented to prove the effectiveness of the proposed game models. Extensive experiments are conducted on a real-world cloud platform, and the results indicate that the proposed framework can significantly improve the utilisation of virtual disk bandwidth compared with other existing approaches.
    Keywords: cloud computing; bandwidth reservation; quality of service; queue model; gaming theory.

  • Academic research trend analysis based on big data technology   Order a copy of this article
    by Weiwei Lin, Zilong Zhang, Shaoliang Peng 
    Abstract: Big data technology can well support the analysis of academic research trends, which requires the ability to process an enormous amount of metadata efficiently. On this point, we propose an academic trend analysis method that exploits a popular topic model for paper feature extraction and an influence propagation model for field influence evaluation. We also propose a parallel association rule mining algorithm based on Spark (see the sketch below) to accelerate the trend analysis process. Experimentally, a vast amount of paper metadata was collected from four popular digital libraries: ACM, IEEE, Science Direct and Springer, serving as the raw data for our final feature dataset. Focusing on the hotspot of cloud computing, our results demonstrate that the topics most relevant to cloud computing have been shifting in recent years from basic research to applied research and, from a microscopic point of view, that the development of cloud-computing-related fields presents a certain periodicity.
    Keywords: big data; associate rule mining; Spark; Apriori; technology convergence.
    DOI: 10.1504/IJCSE.2017.10016151
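
    As a loose illustration of the parallel counting at the heart of Apriori-style mining on Spark, the sketch below distributes one candidate-counting pass over an RDD; the transactions and support threshold are toy values, not the paper's feature dataset.

```python
from itertools import combinations
from operator import add
from pyspark import SparkContext

sc = SparkContext(appName="apriori-counting-sketch")

tx = sc.parallelize([                     # toy topic transactions
    {"cloud", "scheduling", "energy"},
    {"cloud", "virtualisation"},
    {"cloud", "scheduling"},
])

# one parallel counting pass: emit every 2-item candidate per transaction,
# sum the counts by key, and keep candidates meeting the support threshold
frequent = (tx.flatMap(lambda t: [(tuple(sorted(p)), 1) for p in combinations(t, 2)])
              .reduceByKey(add)
              .filter(lambda kv: kv[1] >= 2)
              .collect())
print(frequent)                           # [(('cloud', 'scheduling'), 2)]
sc.stop()
```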
     
  • The discovery in uncertain-social-relationship communities of opportunistic network   Order a copy of this article
    by Xu Gang, Wang Jia-Yi, Jin Hai-He, Mu Peng-Fei 
    Abstract: Current studies of community division in opportunistic networks usually take social relations as certain inputs. In practical application scenarios, however, because communications are often disturbed and the movements of nodes are random, social relations are in fact uncertain, so community division based on certain social relations is impractical. To address the problem that accurate communities cannot be obtained under uncertain social relations, we propose an uncertain-social-relation model of the opportunistic network in this paper. We analyse the probability distribution of uncertain social relations and propose a community division algorithm based on social cohesion, and then divide communities according to the uncertain social relations of the opportunistic network. The experimental results show that the Clique_detection_Based_SoH community division algorithm, which is based on social cohesion, matches practical communities better than the traditional k-clique community division algorithm.
    Keywords: opportunistic network; uncertain social relations; k-clique algorithm; social cohesion; key node.
    DOI: 10.1504/IJCSE.2017.10016989
     
  • Tag recommendation based on topic hierarchy of folksonomy   Order a copy of this article
    by Han Xue, Bing Qin, Ting Liu, Shen Liu 
    Abstract: As a recommendation problem, tag recommendation has been receiving increasing attention from both the business and academic communities. Traditional recommendation methods are inappropriate for folksonomy because the basis of such mechanisms is not updated in time, owing to the bottleneck of knowledge acquisition. Therefore, we propose a novel method of tag recommendation based on the topic hierarchy of folksonomy. The method applies a topic tag hierarchy, constructed automatically from the folksonomy, to tag recommendation using the proposed strategy. The method can improve the quality of the folksonomy and can evaluate the topic tag hierarchy through tag recommendation. The precision of tag recommendation reaches 0.892. The experimental results show that the proposed method significantly outperforms state-of-the-art methods (t-test, p-value < 0.0001) and demonstrates effectiveness with respect to data sources on tag recommendation.
    Keywords: tag recommendation; topic hierarchy; folksonomy.

  • Incremental processing for string similarity join   Order a copy of this article
    by Cairong Yan, Bin Zhu 
    Abstract: String similarity join is an essential operation in data quality management and a key step in finding the value of data. In the era of big data, existing methods cannot meet the demands of incremental processing. Using a string partition technique, an incremental processing framework for string similarity join is proposed in this paper. The framework treats the inverted index of strings as a state that is updated after each string similarity match (see the sketch below). Compared with the batch processing model, such a framework avoids the heavy time and space costs brought by duplicate similarity computation among historical strings and is suitable for processing data streams. We implement two algorithms, Inc-join and Inp-join: Inc-join runs on a stand-alone machine, while Inp-join runs on a cluster in a Spark environment. The experimental results show that the incremental processing framework can reduce the number of string matches without affecting join accuracy, and improves the response time for streaming data joins compared with the batch computation model. When the data quantity becomes large, Inp-join can take full advantage of parallel processing and obtain better performance than Inc-join.
    Keywords: string similarity join; incremental processing; parallel processing; string matching.
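
    A minimal sketch of the framework's core idea: the inverted index is the persistent state, and each arriving string is matched against it before being folded in. The paper's string partitioning and filters are omitted; token-level Jaccard similarity is used here purely for illustration.

```python
from collections import defaultdict

class IncJoin:
    """Inverted index as persistent state: match each arriving string against
    previously seen strings, then add it to the index."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.index = defaultdict(set)    # token -> ids of strings containing it
        self.strings = []

    def insert(self, s):
        tokens = set(s.split())
        # only strings sharing a token can reach the Jaccard threshold
        cands = set().union(*(self.index[t] for t in tokens)) if tokens else set()
        matches = []
        for cid in cands:
            other = set(self.strings[cid].split())
            jac = len(tokens & other) / len(tokens | other)
            if jac >= self.threshold:
                matches.append((cid, jac))
        sid = len(self.strings)
        self.strings.append(s)
        for t in tokens:
            self.index[t].add(sid)
        return matches

j = IncJoin()
j.insert("data cleaning with spark")
print(j.insert("data cleaning on spark"))   # [(0, 0.6)]
```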

  • A hybrid filtering-based network document recommendation system in cloud storage   Order a copy of this article
    by Wu Yuezhong, Liu Qin, Li Changyun, Wang Guojun 
    Abstract: Since the key requirement of users is to obtain personalised services efficiently from massive network document resources, a hybrid filtering-based network document recommendation system is designed that combines content-based recommendation with collaborative filtering, building on the powerful and extensible storage and computing capacity of cloud storage. The proposed system realises the main service modules on the Hadoop and Mahout platforms, and processes documents containing user-interest information by applying an AHP-based attribute-weighted fusion method. Based on network interaction, the proposed system not only offers extensible storage space and high recommendation precision but also plays an essential role in realising network resource sharing and personalised recommendation.
    Keywords: user interest model; collaborative filtering; recommendation system; cloud storage.
    DOI: 10.1504/IJCSE.2017.10008648
     
  • Multiobjective evolutionary algorithm on simplified biobjective minimum weight minimum label spanning tree problems   Order a copy of this article
    by Xinsheng Lai, Xiaoyun Xia 
    Abstract: As general-purpose optimisation methods, evolutionary algorithms have been used efficiently to solve multiobjective combinatorial optimisation problems. However, few theoretical investigations have been conducted to understand the efficiency of evolutionary algorithms on such problems, and even fewer on multiobjective combinatorial optimisation problems coming from the real world. In this paper, we analyse the performance of a simple multiobjective evolutionary algorithm on two simplified instances of the biobjective minimum weight minimum label spanning tree problem, which comes from the real world. This problem is to find spanning trees that simultaneously minimise the total weight and the total number of distinct labels in a connected graph where each edge has a label and a weight. Though these two instances are similar, the analysis shows that the simple multiobjective evolutionary algorithm is efficient for one instance but may be inefficient for the other. Based on the analysis of the second instance, we suggest that a restart strategy may make the multiobjective evolutionary algorithm more efficient for the biobjective problem.
    Keywords: multiobjective evolutionary algorithm; biobjective; spanning tree problem; minimum weight; minimum label.

  • High dimensional Arnold inverse transformation for multiple images scrambling   Order a copy of this article
    by Weigang Zou, Wei Li, Zhaoquan Cai 
    Abstract: Traditional scrambling technology based on the low-dimensional Arnold transformation (AT) cannot assure the security of images during transmission, since the key space of the low-dimensional AT is small and the scrambling period is short. The Arnold inverse transformation (AIT) is also a good image scrambling technique (the two-dimensional case is sketched below). Used in image scrambling, the high-dimensional AIT overcomes the shortcomings of low-dimensional geometric transformations, achieves a good scrambling effect and serves the purpose of image encryption, which enriches the theory and application of image scrambling. Taking into account that an image has a location space and a colour space, the high-dimensional AIT improves the anti-attack ability of image scrambling, since the combination of location-space coordinates and colour-space components is very flexible. We investigate the properties and application of the AIT with five or six dimensions in digital image scrambling, and propose the theory of the n-dimensional AIT. Our investigations show that the technique, with its larger key space, has a good scrambling effect and a certain application value.
    Keywords: information hiding; image scrambling; high dimensional transformation; Arnold transformation; Arnold inverse transformation; periodicity.
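
    For reference, the classical two-dimensional case: the Arnold map sends (x, y) to (x + y, x + 2y) mod N, and because its matrix has determinant 1 the inverse map is again integer-valued. The paper's five- and six-dimensional generalisations follow the same mechanics with larger matrices that also mix colour components.

```python
import numpy as np

def arnold(img, rounds=1, inverse=False):
    """Arnold (inverse) transform of a square N x N image; the forward matrix
    [[1, 1], [1, 2]] has determinant 1, giving integer inverse [[2, -1], [-1, 1]]."""
    N = img.shape[0]
    out = img
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                if inverse:
                    nx, ny = (2 * x - y) % N, (y - x) % N
                else:
                    nx, ny = (x + y) % N, (x + 2 * y) % N
                nxt[nx, ny] = out[x, y]
        out = nxt
    return out

img = np.arange(64).reshape(8, 8)
assert np.array_equal(arnold(arnold(img, 3), 3, inverse=True), img)
```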

  • CAT: a context-aware teller for supporting tourist experiences   Order a copy of this article
    by Francesco Colace, Massimo De Santo, Saverio Lemma, Marco Lombardi, Mario Casillo 
    Abstract: This paper introduces a methodology for the dynamic creation of an adaptive generator of stories related to a tourist context. The proposed approach selects the most suitable contents for the user and builds a context-aware teller that can support tourists during the exploration of the context, making it more appealing and immersive. The tourist can use the system through a hybrid app. The dynamic context-aware telling engine draws its contents from a knowledge base populated with curated data and with data gathered from the web. The user profile is updated with information obtained during the visit and from social networks. A case study and some experimental results are presented and discussed.
    Keywords: context-aware; storyteller; social content; pervasive systems.
    DOI: 10.1504/IJCSE.2017.10013620
     
  • Saving energy consumption for mixed workloads in cloud platforms   Order a copy of this article
    by Dongbo Liu, Peng Xiao, Yongjian Li 
    Abstract: Virtualisation technology has been widely applied in cloud systems; however, it also introduces energy-efficiency losses, especially where the I/O virtualisation mechanism is concerned. In this paper, we present an energy-efficiency-enhanced virtual machine (VM) scheduling policy, namely Share-Reclaiming with Collective I/O (SRC-I/O), with the aim of reducing the energy-efficiency losses caused by I/O virtualisation. The SRC-I/O scheduler allows running VMs to reclaim extra CPU shares under certain conditions so as to increase CPU utilisation. Meanwhile, the SRC-I/O policy separates I/O-intensive VMs from CPU-intensive ones and schedules them in a batch manner, so as to reduce the context-switching costs of scheduling mixed workloads. Extensive experiments are conducted on various platforms using different benchmarks to investigate the performance of the proposed policy. The results indicate that, when the virtualisation platform runs mixed workloads, the SRC-I/O scheduler outperforms existing VM schedulers in terms of energy efficiency and I/O responsiveness.
    Keywords: cloud computing; virtual machine; energy efficiency; mixed workload; task scheduling.

  • The extraction of security situation in heterogeneous log based on Str-FSFDP density peak cluster   Order a copy of this article
    by Chundong Wang, Tong Zhao, Xiuliang Mo 
    Abstract: Log analysis has been widely developed for identifying intrusions at the host or network. In order to reduce the false alarm rate in the extraction of security events and to discover a wide range of anomalies by scrutinising various logs, an improvement of the Str-FSFDP (fast search and find of density peaks for data streams) clustering algorithm for heterogeneous log analysis is presented. Owing to its advantages in analysing attribute relationships for mixed-attribute data, the algorithm classifies log data into two types, for which corresponding distance metrics are designed. To apply Str-FSFDP to various logs, 12 attributes are defined in a unified XML format for clustering; these attributes are chosen according to the characteristics of each type of log and their importance in expressing a security event. To match the new micro-cluster characteristic vector in the Str-FSFDP algorithm, this paper uses a time gap, designed as a threshold value based on the micro-cluster strategy, to improve the UHAD (unsupervised anomaly detection) framework. Experimental results reveal that the framework using the Str-FSFDP clustering algorithm with a time threshold improves the aggregation rate of log events and reduces the false alarm rate. As the algorithm analyses attribute correlation, connections between different IP addresses were tested in the experiment, which helps to find the same attacker's exploitation traces even when the IP addresses are faked, and increases the degree of aggregation within the same event. According to our analysis of each cluster, some serious attacks in the experiment were summarised along the timeline.
    Keywords: heterogeneous log; micro cluster; mixed attributes; unsupervised anomaly detection.

  • An improved KNN text classification method   Order a copy of this article
    by Fengfei Wang, Zhen Liu, Chundong Wang 
    Abstract: A text classification method based on an improved SOM combined with KNN is introduced in this paper. To overcome the shortcomings of KNN in the text vector space model, the SOM neural network is used to optimise text classification, and an improved SOM-plus-KNN algorithm model is presented (see the sketch below). The SOM weights for each dimension of the vector space model are calculated, and the SOM network performs self-organisation and self-learning on the samples in an unsupervised manner, without prior knowledge, to achieve evaluation and classification of the samples. Combining the SOM neural network with the KNN algorithm effectively reduces the dimension of the vectors, improves clustering accuracy and speed, and can effectively improve the efficiency of text classification.
    Keywords: text classification; KNN; SOM; neural network.
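
    A rough sketch of the pipeline the abstract describes, with a hand-rolled one-dimensional SOM standing in for the paper's network: the SOM learns prototypes over the document vector space without supervision, documents are re-expressed relative to those prototypes, and KNN classifies in the reduced space. All sizes and data are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_som(X, units=16, epochs=20, lr0=0.5):
    """Tiny 1D SOM: unsupervised prototypes over the document vectors."""
    rng = np.random.default_rng(0)
    W = X[rng.choice(len(X), units)].astype(float)        # init from data
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))
            for j in range(units):                        # neighbourhood pull
                W[j] += lr * np.exp(-((j - bmu) ** 2) / 2.0) * (x - W[j])
    return W

def som_features(X, W):
    # re-express each document by its distances to the SOM prototypes
    return np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)

X_train = np.random.rand(100, 300)                 # toy tf-idf-like vectors
y_train = np.random.randint(0, 4, 100)
W = train_som(X_train)
knn = KNeighborsClassifier(n_neighbors=5).fit(som_features(X_train, W), y_train)
```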

  • Privacy-preserving location-based service protocols with flexible access   Order a copy of this article
    by Shuyang Tang, Shengli Liu, Xinyi Huang, Zhiqiang Liu 
    Abstract: We propose an efficient privacy-preserving, content-protecting Location-Based Service (LBS) scheme. Our proposal gives a refined data classification and uses generalised ElGamal to support flexible access to different data classes. We also make use of a Pseudo-Random Function (PRF) to protect users' position queries. Since the PRF is a lightweight primitive, our proposal enables the cloud server to locate positions efficiently while preserving the privacy of the queried position.
    Keywords: location-based services; outsourced cloud; security; privacy preserving.

  • On providing on-the-fly resizing of the elasticity grain when executing HPC applications in the cloud   Order a copy of this article
    by Rodrigo Righi, Cristiano Costa, Vinicius Facco, Luis Cunha 
    Abstract: Today, cloud infrastructures are gaining more and more ground for executing HPC (High Performance Computing) applications. Unlike clusters and grids, the cloud offers elasticity, which refers to the ability to enlarge or reduce the number of resources (and consequently, processes) to match as closely as possible the needs of a particular moment of the execution. To the best of our knowledge, current initiatives explore the elasticity-and-HPC duet by always handling the same number of resources at each scaling-in or scaling-out operation. This fixed elasticity grain commonly produces a stair-shaped behaviour, where successive elasticity operations take place to address the load curve. In this context, this article presents GrainElastic: an elasticity model for executing HPC applications with the capacity to adapt the elasticity grain to the requirements of each elasticity operation. Its contribution is a mathematical formalism that uses historical execution traces and the ARIMA time series model to predict the number of resources (in our case, VMs) required at a reconfiguration point. Based on the proposed model, we developed a prototype that was compared with two other scenarios: (i) a non-elastic application and (ii) an elastic middleware with a fixed grain. The results present gains of up to 30% in favour of GrainElastic, showing the relevance of adapting the elasticity grain to enhance system reactivity and performance.
    Keywords: elasticity; resource management; HPC; cloud computing; elasticity grain; adaptivity.
    DOI: 10.1504/IJCSE.2017.10013365
     
  • Can the hybrid colouring algorithm take advantage of multi-core architectures?   Order a copy of this article
    by João Fabrício Filho, Luis Gustavo Araujo Rodriguez, Anderson Faustino Da Silva 
    Abstract: Graph colouring is a complex computational problem that consists of colouring all vertices of a given graph using a minimum number of colours, with adjacent vertices restricted from receiving the same colour. Over recent decades, various algorithms have been proposed and implemented to solve this problem. An interesting one is the Hybrid Coloring Algorithm (HCA), developed in 1999 by Philippe Galinier and Jin-Kao Hao, which was widely regarded at the time as one of the best-performing algorithms for graph colouring. Nowadays, high-performance out-of-order multi-cores have emerged that execute applications faster and more efficiently. The objective of this paper is therefore to analyse whether the HCA can take advantage of multi-core architectures in terms of performance. For this purpose, we propose and implement a parallel version of the HCA that takes advantage of all hardware resources. Several experiments were performed on a machine with two Intel(R) Xeon(R) CPU E5-2630 processors, giving a total of 24 cores. The experiments showed that the parallel HCA on multi-core architectures is a significant improvement over the original, achieving enhancements of up to 40% in terms of the distance to the best chromatic number found in the literature. The expected contribution of this paper is to encourage developers to take advantage of high-performance out-of-order multi-cores to solve complex computational problems.
    Keywords: metaheuristics; hybrid colouring algorithm; graph colouring problem; architecture of modern computers.

  • Learning pattern of hurricane damage levels using semantic web resources   Order a copy of this article
    by Quang-Khai Tran, Sa-kwang Song 
    Abstract: This paper proposes an approach to hurricane damage level prediction using semantic web resources and matrix completion algorithms. Based on the statistical unit node set framework, streaming data from five hurricanes and damage levels from 48 counties in the USA were collected from the SRBench dataset and other web resources, and then trans-coded into matrices. At a time t, the pattern of the possible highest damage levels 6 hours into the future was estimated using a multivariate regression procedure based on singular value decomposition. We also applied the Soft-Impute algorithm (sketched below) and the k-nearest-neighbours concept to improve the statistical unit node set framework in this research domain. Results showed that the model could deal with inaccurate, inconsistent and incomplete streaming data that were highly sparse, learning future damage patterns and forecasting in near real time. It was able to estimate the damage levels in several scenarios even when two-thirds of the relevant weather information was unavailable. The contributions of this work can promote the applicability of the semantic web in the context of climate change.
    Keywords: hurricane damage; statistical unit node set; matrix completion; SRBench dataset; streaming data.
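
    The Soft-Impute algorithm mentioned in the abstract alternates between refilling missing entries from the current estimate and shrinking the singular values of the result. A minimal dense sketch on toy data (not the SRBench variables):

```python
import numpy as np

def soft_impute(X, mask, lam=0.5, iters=100):
    """Soft-Impute: refill missing entries from the current estimate, take an
    SVD, soft-threshold the singular values, repeat. mask marks observed cells."""
    Z = np.where(mask, X, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt    # nuclear-norm shrinkage
    return Z

# usage: toy matrix with about two-thirds of its entries missing
rng = np.random.default_rng(1)
M = rng.random((48, 20))
mask = rng.random(M.shape) > 2 / 3
completed = soft_impute(M, mask)
```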

  • CUDA GPU libraries and novel sparse matrix-vector multiplication implementation and performance enhancement in unstructured finite element computations   Order a copy of this article
    by Richard Haney, Ram V. Mohan 
    Abstract: The efficient solution of systems of linear and non-linear equations arising from sparse matrix operations is a ubiquitous challenge for computing applications, and it can be exacerbated by the employment of heterogeneous architectures such as CPU-GPU computing systems. Many unstructured finite-element computations of physics-based modelling problems share the need for an efficient, well-performing solution of sparse systems of linear equations. This paper presents our implementation of a novel sparse matrix-vector multiplication (a significant compute-load operation in iterative solution via preconditioned conjugate gradient methods) employing LightSpMV with the Compressed Sparse Row (CSR) format (the CSR layout is sketched below), and the resulting performance characteristics. The results discussed in this paper employ an unstructured finite-element computational simulation involving multiple calls to an iterative preconditioned conjugate gradient solver on a single CPU-GPU computing system using NVIDIA Compute Unified Device Architecture (CUDA) libraries. The matrix-vector product implementation is examined within the context of a resin transfer moulding simulation code. Results from the present work can be applied, without loss of generality, to many other unstructured finite-element computational modelling applications in science and engineering that solve sparse linear and non-linear systems of equations on a CPU-GPU architecture. The computational performance analysed indicates that LightSpMV can boost performance for these computational modelling applications. This work also investigates potential improvements to the LightSpMV algorithm using CUDA compute-capability 3.5 intrinsics, which yield an additional performance boost of 1%. While this may not be significant, it supports the idea that LightSpMV can potentially be used in other full-solution finite-element computational implementations.
    Keywords: general purpose GPU computing; sparse matrix-vector; finite element method; CUDA; performance analysis.
    DOI: 10.1504/IJCSE.2017.10011618
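
    For reference, the CSR layout that the discussed SpMV kernel traverses, shown here as a serial sketch; LightSpMV's contribution is the dynamic assignment of rows to GPU warps and vectors, which this plain loop does not capture.

```python
import numpy as np

def spmv_csr(vals, col_idx, row_ptr, x):
    """y = A @ x for a CSR matrix: row r's nonzeros occupy
    vals[row_ptr[r]:row_ptr[r + 1]], with their columns in col_idx."""
    y = np.zeros(len(row_ptr) - 1)
    for r in range(len(row_ptr) - 1):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            y[r] += vals[k] * x[col_idx[k]]
    return y

# 3x3 example: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
vals    = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(spmv_csr(vals, col_idx, row_ptr, np.ones(3)))   # [5. 2. 8.]
```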
     
  • Rational e-voting based on network evolution in the cloud   Order a copy of this article
    by Tao Li, Shaojing Li 
    Abstract: Physically distributed voters can vote online through an electronic voting (e-voting) system, which can outsource the counting work to the cloud when the system is overloaded. However, this kind of outsourcing may lead to security problems concerning anonymity, privacy, fairness, etc. If servers in the cloud have no incentive to deviate from the e-voting system, these security problems can be effectively solved. In this paper, we assume that servers in the cloud are rational and try to maximise their utilities, and we look for incentives for rational servers not to deviate from the e-voting system; here, no deviation means that rational servers prefer to cooperate. Simulation results of our evolution model show that the cooperation level is high after a certain number of rounds. Finally, we put forward a rational e-voting protocol based on these results and prove that the system is secure under proper assumptions.
    Keywords: electronic voting; utility; cloud computing; rational secret sharing.

  • Water contamination monitoring system based on big data: a case study   Order a copy of this article
    by Gaofeng Zhang, Yingnan Yan, Yunsheng Tian, Yang Liu, Yan Li, Qingguo Zhou, Rui Zhou, Kuan-Ching Li 
    Abstract: Water plays a vital role in people's lives; individuals cannot survive without it. However, water contamination has become a serious issue with the development of industry and agriculture, and a threat to people's daily lives. Moreover, the amount of data people need to process has become excessively complex and huge in the big data era, making data management an increasingly difficult task. There is an urgent need for a system that identifies major changes in water quality by monitoring and managing water quality variables. In this paper, we develop a data monitoring system named the Monitoring and Managing Data Center (MMDC) for monitoring, downloading, sharing and time-series analysis based on big data technology. To reflect a real hydrological ecosystem, water quality data collected from Taihu Lake in China are used to verify the effectiveness of MMDC. Results show that MMDC is effective for the monitoring and management of massive data. Although this investigation focuses on Taihu Lake, the system is applicable as a general monitoring system for other similar natural resources.
    Keywords: water contamination; big data; MMDC; monitoring; data analysis.
    DOI: 10.1504/IJCSE.2017.10011736
     
  • Passive image autofocus by using direct fuzzy transform   Order a copy of this article
    by Ferdinando Di Martino, Salvatore Sessa 
    Abstract: We present a new passive autofocusing algorithm based on fuzzy transforms. In a previous work, a localised variation of the variance operator was proposed based on the concept of fuzzy subspaces of the image: fuzzy C-means and conditional fuzzy C-means algorithms are applied to detect the fuzzy subspaces. The direct fuzzy transform is used to extract the mean values of the image intensity in each fuzzy subspace, and a weighted sum of the local variance operators obtained in each subspace is then calculated. We propose a new approach based on the generalised fuzzy C-means algorithm, in which the number of fuzzy subspaces is obtained by using the partition coefficient and exponential separation validity indexes. Comparisons show that our method is more robust than the localised variation of the variance operator.
    Keywords: image autofocusing; image contrast; variance; FCM; fuzzy transform.
    DOI: 10.1504/IJCSE.2017.10011885
     
  • Arrhythmia recognition and classification through deep learning based approach   Order a copy of this article
    by Rui Zhou, Xue Li, Binbin Yong, Zebang Shen, Chen Wang, Qingguo Zhou, Yunshan Cao, Kuan-Ching Li 
    Abstract: Arrhythmia is a cardiac condition caused by abnormal electrical activity of the heart, and it can be life-threatening. The electrocardiogram (ECG) is the principal diagnostic tool used to detect arrhythmias or heart abnormalities, and it contains information about the different types of arrhythmia. However, owing to the complexity and non-linearity of ECG signals (such as the presence of noise, the time dependence of the signals and the irregularity of the heartbeat), it is troublesome to analyse ECG signals manually. Moreover, the interpretation of ECG signals is subjective and may vary between experts. An automatic, high-precision ECG recognition method is therefore important for arrhythmia detection. This paper proposes a method for arrhythmia classification based on a deep learning approach called Long Short-Term Memory (LSTM), in which the five classes of arrhythmia recommended by the Association for the Advancement of Medical Instrumentation (AAMI) are analysed (see the sketch below). The method has been tested on the MIT-BIH Arrhythmia Database with a number of useful performance evaluation measures, showing promising and better performance than other artificial intelligence methods used.
    Keywords: electrocardiogram signal; long short-term memory; arrhythmia classification; artificial intelligence; deep learning.
    DOI: 10.1504/IJCSE.2017.10011740
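
    A minimal sketch of the kind of LSTM classifier described, written with Keras, is shown below; the segment length, layer sizes and the five AAMI class labels are illustrative assumptions, since the abstract does not give the exact architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 5   # AAMI heartbeat classes (N, S, V, F, Q)
BEAT_LEN = 180    # assumed samples per segmented heartbeat

model = keras.Sequential([
    layers.Input(shape=(BEAT_LEN, 1)),       # univariate ECG segment
    layers.LSTM(64, return_sequences=True),  # captures the time dependence
    layers.LSTM(32),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder segmented beats and labels (e.g. from MIT-BIH preprocessing).
x = np.random.randn(256, BEAT_LEN, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```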
     
  • Publicly verifiable function secret sharing   Order a copy of this article
    by Qiang Wang, Fucai Zhou, Su Peng, Jian Xu 
    Abstract: Function Secret Sharing (FSS) allows a dealer to split a secret function into n sub-functions, described by n evaluation keys, such that only a combination of all of these keys can reconstruct the secret function. However, the secret cannot be recovered correctly if some sharers deviate from the intended behaviour. To address this problem, we propose a new primitive called Publicly Verifiable Function Secret Sharing (PVFSS), in which any client can verify the validity of the secret in constant time. Furthermore, we define three important properties that are essential to our scheme: public delegation, public verification and high efficiency. Finally, we construct a PVFSS scheme for point functions, prove its security, and analyse its performance along two major dimensions: key length and algorithm efficiency. The analysis shows that our proposed scheme is asymptotically as efficient as FSS, making it applicable to cloud computing.
    Keywords: PVFSS; cloud computing; high efficiency; public delegation; public verification.
    DOI: 10.1504/IJCSE.2019.10018801
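
    For intuition about FSS itself: the simplest (and deliberately inefficient) construction for a point function f_{a,b}, which returns b at input a and 0 elsewhere, additively shares the whole truth table, so each key alone looks random while summing all n evaluations recovers f(x). The naive baseline below only shows the sharing principle; it is not the compact PVFSS scheme of the paper and has no public verifiability.

```python
import secrets

P = 2**61 - 1  # prime modulus for the additive shares

def share_point_function(a: int, b: int, domain: int, n: int):
    """Naive FSS: additively share the truth table of f_{a,b} among n parties."""
    table = [b if x == a else 0 for x in range(domain)]
    keys = [[secrets.randbelow(P) for _ in range(domain)] for _ in range(n - 1)]
    last = [(table[x] - sum(k[x] for k in keys)) % P for x in range(domain)]
    return keys + [last]

def evaluate(key, x):
    return key[x]  # each party outputs its share of f(x)

keys = share_point_function(a=5, b=42, domain=16, n=3)
print(sum(evaluate(k, 5) for k in keys) % P)  # 42 = f(5)
print(sum(evaluate(k, 7) for k in keys) % P)  # 0  = f(7)
```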
     
  • Parallel context-aware multi-agent tourism recommender system   Order a copy of this article
    by Richa Singh, Punam Bedi 
    Abstract: The presence of millions of users and items makes real-time filtering a time-consuming process in recommender systems. In context-aware recommender systems, users' choices depend on contextual information as well as the available items. This helps to reduce the user-item data to some extent, but the rapid change in a user's interests under different contexts puts an extra load on recommender systems. To address this problem, we present a parallel approach for context-aware recommender systems using a multi-agent system, which greatly accelerates processing. A General-Purpose Graphics Processing Unit (GPGPU) is used to exploit the parallel behaviour of the system, along with CUDA (Compute Unified Device Architecture) and JCuda. The proposed algorithm works in both offline and online phases. Contextual filtering and the multi-agent environment keep the system updated with the user's context. A prototype of the system is developed using JCuda, JADE and Java technologies for the tourism domain. The performance of the presented system is compared with a context-aware recommender system without parallel processing with respect to processing time and scalability, as well as precision, recall and F-measure. The results show a significant speedup for the presented system over the non-parallel context-aware recommender system.
    Keywords: multi-agent system; recommender system; context aware; parallel processing; tourism.
    DOI: 10.1504/IJCSE.2017.10010189
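
    The data-reduction idea mentioned above, contextual pre-filtering before collaborative filtering, can be illustrated independently of the GPU and multi-agent machinery; the ratings, contexts and cosine-similarity choice below are placeholders rather than the authors' exact pipeline.

```python
import numpy as np

# Hypothetical (user, item, rating, context) tuples for a tourism domain.
ratings = [
    (0, 0, 5.0, "summer"), (0, 1, 3.0, "summer"), (1, 0, 4.0, "summer"),
    (1, 2, 5.0, "winter"), (2, 1, 4.0, "summer"), (2, 2, 2.0, "winter"),
]

def prefilter(data, context):
    """Keep only the ratings given under the target context."""
    return [(u, i, r) for (u, i, r, c) in data if c == context]

def user_item_matrix(triples, n_users, n_items):
    m = np.zeros((n_users, n_items))
    for u, i, r in triples:
        m[u, i] = r
    return m

summer = user_item_matrix(prefilter(ratings, "summer"), 3, 3)
# Cosine similarity of user 0 to all users, within this context only.
norms = np.linalg.norm(summer, axis=1)
denom = np.where(norms * norms[0] > 0, norms * norms[0], 1.0)
print(summer @ summer[0] / denom)  # basis for neighbour selection and prediction
```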
     
  • Graph databases for openEHR clinical repositories   Order a copy of this article
    by Samar El Helou, Shinji Kobayashi, Goshiro Yamamoto, Naoto Kume, Eiji Kondoh, Shusuke Hiragi, Kazuya Okamoto, Hiroshi Tamura, Tomohiro Kuroda 
    Abstract: The archetype-based approach has now been adopted by major EHR interoperability standards. Soon, owing to an increase in EHR adoption, more health data will be created and frequently accessed. Previous research shows that conventional persistence mechanisms such as relational and XML databases have scalability issues when storing and querying archetype-based datasets. Accordingly, we need to explore and evaluate new persistence strategies for archetype-based EHR repositories. To address the performance issues expected to occur with the increase of data, we proposed an approach using labelled property graph databases for implementing openEHR clinical repositories. We implemented the proposed approach using Neo4j and compared it with an Object Relational Mapping (ORM) approach using Microsoft SQL Server. We evaluated both approaches over a simulation of a pregnancy home-monitoring application in terms of required storage space and query response time. The results show that the proposed approach provides a better overall performance for clinical querying.
    Keywords: openEHR; graph database; EHR; database; performance; archetypes; reference model; EHR repository; archetype-based storage; query response time.
    DOI: 10.1504/IJCSE.2017.10017366
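
    A flavour of how archetype-based data maps onto a labelled property graph: compositions and entries become nodes, containment becomes relationships, and queries traverse them. The sketch below uses the official Neo4j Python driver; the labels, properties and query are illustrative assumptions, not the paper's actual openEHR schema.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Store one blood-pressure observation inside a composition.
    session.run(
        "MERGE (p:Patient {id: $pid}) "
        "CREATE (p)-[:HAS_COMPOSITION]->(c:Composition {archetype: $arch}) "
        "CREATE (c)-[:CONTAINS]->(:Observation {systolic: $sys, diastolic: $dia})",
        pid="p1", arch="openEHR-EHR-OBSERVATION.blood_pressure.v1",
        sys=118, dia=76,
    )
    # Query: all systolic values recorded for the patient.
    result = session.run(
        "MATCH (:Patient {id: $pid})-[:HAS_COMPOSITION]->()-[:CONTAINS]->(o) "
        "RETURN o.systolic AS systolic", pid="p1",
    )
    print([record["systolic"] for record in result])

driver.close()
```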
     
  • Kernel-based tensor discriminant analysis with fuzzy fusion for face recognition   Order a copy of this article
    by Xiaozhang Liu, Hangyu Ruan 
    Abstract: This paper proposes a novel kernel-based image subspace learning method for face recognition, by encoding a face image as a tensor of second order (matrix). First, we propose a kernel-based discriminant tensor criterion, called kernel bilinear fisher criterion (KBFC), which is designed to simultaneously pursue two projection vectors to maximise the interclass scatter and at the same time minimise the intraclass scatter in its corresponding subspace. Then, a score level fusion method is presented to combine two separate projection results to achieve classification tasks. Experimental results on the ORL and UMIST face databases show the effectiveness of the proposed approach.
    Keywords: kernel; tensor discriminant; bilinear discriminant; matrix representation; face recognition.
    DOI: 10.1504/IJCSE.2017.10012617
     
  • Modelling of advanced persistent threat attack monitoring based on the artificial fish swarm algorithm   Order a copy of this article
    by Biaohan Zhang 
    Abstract: In recent years, the Advanced Persistent Threat (APT) has become one of the important factors threatening network security. To address the APT defence problem, this paper proposes an APT attack monitoring method based on the artificial fish swarm algorithm. The attack monitoring model is established by imitating the behaviour of an artificial fish swarm: the model dynamically monitors the environment, the APT attack index is modelled as the food concentration, and the swarm locates the position with the highest APT attack index. The experimental results show that the monitoring model designed by this method can effectively monitor and forecast the attack target, and also has good extensibility and practicability.
    Keywords: artificial fish swarm algorithm; advanced persistent threat attack; monitoring model.

  • Multilayer ensemble of ELMs for image steganalysis with multiple feature sets   Order a copy of this article
    by Punam Bedi, Veenu Bhasin 
    Abstract: A multilayer ensemble of Extreme Learning Machines (ELMs) for multi-class image steganalysis is proposed in this paper. The proposed ensemble consists of three levels and uses multiple feature sets extracted from images. The first two layers form sub-ensembles, one for each feature set. Each feature set is partitioned and used with multiple ELMs at level 1. These feature sets, along with the outputs of the level-1 ELMs, are used by different ELMs at level 2 to classify images into multiple classes. A stacking technique combines the results from the sub-ensembles: the outputs of the level-2 ELMs are used as input to the last-level ELM. The fast learning process of ELMs aids the speedy execution of the proposed method. The performance of the proposed method is compared with existing steganalysis methods based on individual feature sets and on a two-level ensemble. The experimental study demonstrates that the proposed method classifies images into multiple classes with higher accuracy, confirmed by a t-test at the 99% confidence level.
    Keywords: steganalysis; extreme learning machine; Markov random process; ensemble of ELMs.
    DOI: 10.1504/IJCSE.2017.10010576
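
    The building block of the ensemble, a basic ELM, is short enough to sketch in full: the hidden layer is random and fixed, and only the output weights are learned, by least squares, which is what makes training fast. This is the standard single ELM, not the authors' three-level stacking.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine for multi-class classification."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random, fixed
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        T = np.eye(n_classes)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ T     # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Toy usage on a random "feature set".
X = np.random.randn(200, 30)
y = (X[:, 0] > 0).astype(int)
print((ELM().fit(X, y).predict(X) == y).mean())  # training accuracy
```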
     
  • An anchor node selection mechanism-based node localisation for mines using wireless sensor networks   Order a copy of this article
    by Kangshun Li, Hui Wang, Ying Huang 
    Abstract: To tackle the low localisation accuracy of wireless sensor network (WSN) nodes in mines, a localisation algorithm based on Received Signal Strength Indication (RSSI) with an anchor node selection mechanism is proposed. The localisation includes three phases. First, the anchor node RSSI values received from an unknown node are sorted from high to low. Second, the four anchor nodes with the highest RSSI values are selected by a Gaussian elimination method; these nodes are not in the same plane and form a prismatic shape, and the distance from any one node to the plane formed by the other three is not less than a certain threshold. Finally, the least squares method is used to estimate the coordinates of the unknown nodes and realise their precise localisation. The simulation results show that the proposed algorithm greatly improves the localisation accuracy compared with traditional localisation algorithms.
    Keywords: underground tunnel; received signal strength indication; anchor node selection; least squares method; Gauss elimination method.
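
    The final phase, least-squares estimation from the four selected anchors, admits a compact form: subtracting one range equation from the others linearises the system. A sketch under ideal noise-free ranges follows (the anchor coordinates are illustrative).

```python
import numpy as np

def multilaterate(anchors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Least-squares position from anchor coordinates and RSSI-derived ranges.

    Subtracting the first equation ||x - p_0||^2 = d_0^2 from the others
    turns ||x - p_i||^2 = d_i^2 into the linear system A x = c.
    """
    p0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - p0)
    c = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, c, rcond=None)
    return x

# Four non-coplanar anchors, as the selection mechanism requires.
anchors = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [15, 15, 5]], float)
true = np.array([10.0, 12.0, 1.5])
dists = np.linalg.norm(anchors - true, axis=1)  # ideal noise-free ranges
print(multilaterate(anchors, dists))  # ~ [10. 12. 1.5]
```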

  • A malware variants detection methodology with an opcode-based feature learning method and a fast density-based clustering algorithm   Order a copy of this article
    by Hui Yin, Jixin Zhang, Zheng Qin 
    Abstract: Malware, definable as any type of malicious code intended to harm a computer or network, is one of the most serious security threats facing the internet today. As malware variants may be equipped with sophisticated mechanisms to bypass traditional detection systems, in this paper we propose an approach that can automatically, quickly and accurately detect malware variants. Our approach uses an asynchronous architecture for automated training and detection. Under this architecture, to improve detection speed while retaining accuracy, we propose an information entropy-based feature extraction method that extracts a few but very useful features, and a distance-based weight learning method to weight these features. To further improve detection speed, we propose a fast density-based clustering algorithm. We evaluate our approach with a number of Windows-based malware instances belonging to six large families, and our experiments demonstrate that our automated malware variant detection method achieves high accuracy with a significant speedup in comparison with other state-of-the-art approaches.
    Keywords: distance-based weight learning; fast density-based clustering; information entropy; malware variants.

  • Optimised tags with time attenuation recommendation algorithm based on tripartite graphs network   Order a copy of this article
    by Ming Zhang, Wei Chen 
    Abstract: Social recommendation has attracted increasing attention in recent years owing to the potential value of social relations in recommender systems. Social tags play an important role in improving recommendation accuracy. However, garbage tags may lead to data matrix sparseness and affect the accuracy and performance of the recommendation system. To optimise the social tags in the recommendation system, the tags are sorted by a popularity ranking method with a time attenuation model in order to remove the garbage tags; the time attenuation model captures the variation of tags over time. A novel recommendation algorithm with the optimised social tags is then proposed, based on the complete tripartite graph network. This method considers the preference information of users and items, and generates recommendations for users on the basis of collaborative filtering. Experimental results show that the proposed algorithm predicts recommendation items more accurately than other existing approaches.
    Keywords: tags optimisation; tripartite graphs network; time attenuation model; social recommendation.

  • Probabilistic rough-set-based band selection method for hyperspectral data classification   Order a copy of this article
    by Li Min, Wang Lei, Deng Shaobo 
    Abstract: This paper proposes an innovative band selection algorithm called the probabilistic rough-set-based band selection (PRSBS) algorithm. The proposed algorithm is an efficient supervised band selection algorithm because it needs to calculate only the first-order significance measure. The main novelty of PRSBS lies in its criterion function, which measures the effectiveness of the considered band. The algorithm uses a probabilistic distribution dependency as the relevance measure between the bands and class labels, which can effectively measure the uncertainty of both the positive and the boundary samples in a dataset. We compared the proposed PRSBS with the most relevant band selection algorithm, RSBS, on three different hyperspectral datasets; the experimental results show that PRSBS produces better results than RSBS. Moreover, the PRSBS algorithm runs significantly faster than the RSBS algorithm, which makes it a good choice for band selection in hyperspectral image datasets.
    Keywords: band selection; probabilistic rough set; hyperspectral image; classification.
    DOI: 10.1504/IJCSE.2019.10019529
     
  • A universal designated multi verifiers content extraction signature scheme   Order a copy of this article
    by Min Wang, Yuexin Zhang, Jinhua Ma, Wei Wu 
    Abstract: A notion combining the content extraction signature and the universal designated verifier signature was put forth by Lin in 2012. Specifically, it allows an extracted signature holder to designate the signature to a prospective verifier. However, existing designs become inefficient when multiple verifiers are involved. To improve the efficiency, in this paper we extend the notion to the Universal Designated Multi Verifiers Content Extraction Signature (UDMVCES). Implementing our new scheme, the extracted signature holder can efficiently designate the signature to multiple verifiers. Additionally, we provide the security notions and prove the security of the proposed scheme in the random oracle model. To illustrate the efficiency of our UDMVCES scheme, we analyse its performance. The analysis shows that the computation costs and signature lengths of the new scheme are independent of the number of verifiers.
    Keywords: content extraction signature; universal designated multi verifiers signature; extracted signature; random oracle model.

  • Dynamic input domain reduction for test data generation with iterative partitioning   Order a copy of this article
    by Esmaeel Nikravan, Saeed Parsa 
    Abstract: A major difficulty in test data generation for white box testing is detecting the domain of input variables that covers a certain path. To this end, a new concept, domain coverage, is introduced in this article. In search of appropriate input variable subdomains covering a desired path, the domains are randomly partitioned until subdomains whose boundaries satisfy the path constraints are found. When partitioning, priority is given to those subdomains whose boundary variables do not satisfy the path constraints. By representing the relation between the subdomains and their parents as a directed acyclic graph, an Euler/Venn reasoning system can be applied to select the most appropriate subdomains. To evaluate the proposed path-oriented test data generation method, the results of applying it to six known benchmark programs, Triangle, GCD, Calday, Shellsort, Quicksort and Heapsort, are presented.
    Keywords: random testing; test data generation; Euler/Venn diagram; directed acyclic graph.

  • Multi-class instance-incremental framework for classification in fully dynamic graphs   Order a copy of this article
    by Hardeo Kumar Thakur, Anand Gupta, Ritvik Shrivastava, Sreyashi Nag 
    Abstract: Existing work in the area of graph classification is mostly restricted to static graphs. These static classification models prove ineffective in several real-life scenarios that require an approach capable of handling data of a dynamic nature. Further, the limited work in the domain of dynamic graphs has mainly focused on graphs that grow only incrementally, which fail to accommodate Fully Dynamic Graphs (FDGs). Hence, in this paper, we propose a comprehensive framework targeting multi-class classification in fully dynamic graphs by using the efficient Weisfeiler-Lehman (W-L) graph kernel with a multi-class Support Vector Machine (SVM). The framework iterates through each update using the instance-incremental method while retaining all historical data in order to ensure higher accuracy. Reliable validation metrics are used for model parameter selection and output verification. Experimental results over four case studies on real-world data demonstrate the efficacy of our approach.
    Keywords: fully dynamic graph; dynamic graph; graph classification; multi-class classification.
    DOI: 10.1504/IJCSE.2017.10016991
     
  • Assessment of nested-parallel task model under real-time scheduling on multi-core processors   Order a copy of this article
    by Mahesh Lokhande, Mohammad Atique 
    Abstract: Real-time applications contain numerous time-bound parallel tasks with enormous computations. Parallel models, unlike sequential models, can handle intra-task parallelism and accomplish such tasks on time or earlier. Previous researchers presented task models for parallel tasks, but not for nested-parallel tasks. This paper deals with the real-time scheduling of periodic nested-parallel tasks with implicit deadlines on multi-core processors. Initially, the focus is on a nested-parallel task model. Next, a novel task disintegration technique is studied, where the MAMs ratio is defined to categorise the segments. It is theoretically proved that the disintegration technique achieves speedup factors of 4.30 and 3.40 when the disintegrated tasks are scheduled under partitioned DM (Deadline Monotonic) and global EDF (Earliest Deadline First) scheduling, respectively. Further, considering the overhead factor (β) for non-preemptive global EDF scheduling, the disintegration technique achieves a speedup factor of 3.73 (for β=1). The proposed disintegration technique is assessed through simulations, indicating the adequacy of the derived speedup factors.
    Keywords: nested-parallel tasks; real-time scheduling; partitioned DM scheduling; EDF scheduling; multi-core processors; task disintegration; speedup factor.
    DOI: 10.1504/IJCSE.2017.10011790
     
  • Recognition of landslide disasters with extreme learning machine   Order a copy of this article
    by Guanyu Chen, Xiang Li, Wenyin Gong 
    Abstract: The geological disasters of landslides induced by the Wenchuan earthquake are great in number, so landslide recognition and investigation must be conducted in the early stage of large construction planning in the disaster area. In recent years, studies on image recognition have focused on the extreme learning machine (ELM) algorithm. Based on the preprocessing of remote sensing images, this paper performs landslide recognition on remote sensing images through ELM classification combined with the colour and texture features of ground objects. Comparison experiments against the support vector machine (SVM) algorithm show that the recognition accuracy of ELM is comparable to that of SVM, while ELM trains in considerably less time.
    Keywords: geological disaster; remote sensing image; extreme learning machine; landslide recognition.

  • Graffiti-writing recognition with fine-grained information   Order a copy of this article
    by Jiashuang Xu, Zhangjie Fu 
    Abstract: Contactless HCI (Human-Computer Interaction) has become a new trend owing to the emergence of novel intelligent terminals. Existing interaction systems usually adopt depth cameras, motion controllers and radio-frequency devices. A common drawback of these approaches is that all participants are required to obey a unistroke writing standard for data acquisition. The uniformity of the writing rule simplifies the data acquisition stage, but it breaks the integrity of the handwriting system. In practice, writing habits vary among people; it is observed that eight capitalised letters of the alphabet possess more than one writing pattern. We are thus motivated to propose a more adaptive, contactless graffiti-writing recognition system with CSI (Channel State Information) derived from Wi-Fi signals. The discrete wavelet transform is used for denoising. We use a sliding window to calculate the MAD (Mean Absolute Deviation) to detect the start and end points, and extract the unique CSI waveform caused by the writing action to represent each letter. To cater for more users' writing customs and improve the universality of the system, we train separate HMMs (Hidden Markov Models) for the eight letters and conduct cross-validation for testing. The average detection accuracy reaches 94.5%. The average recognition accuracy of the 26-letter model is 85.96% with 100 training samples from five subjects. The real-time recognition efficiency is 11.97 characters per minute (31 characters in 155.24 s).
    Keywords: air-write recognition; wireless sensing; channel state information.
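
    The start/end detection step generalises beyond CSI: slide a window over the denoised signal, compute the mean absolute deviation, and mark where it rises above and falls back below a threshold. The window size, threshold and synthetic signal below are illustrative assumptions.

```python
import numpy as np

def detect_activity(signal: np.ndarray, win: int = 50, thresh: float = 0.5):
    """Return (start, end) sample indices where the windowed MAD exceeds thresh."""
    mads = np.array([
        np.mean(np.abs(signal[i:i + win] - np.mean(signal[i:i + win])))
        for i in range(len(signal) - win)
    ])
    active = np.flatnonzero(mads > thresh)
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1] + win)

# Hypothetical CSI amplitude: quiet, a writing burst, quiet again.
rng = np.random.default_rng(1)
signal = np.concatenate([rng.normal(0, 0.05, 400),   # idle
                         rng.normal(0, 2.00, 300),   # writing action
                         rng.normal(0, 0.05, 400)])  # idle
print(detect_activity(signal))  # roughly brackets the burst (samples ~400-740)
```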

  • A new neural architecture for feature extraction of remote sensing data   Order a copy of this article
    by Mustapha Si Tayeb, Hadria Fizazi 
    Abstract: This paper presents a novel method for the classification of remote sensing data. The proposed approach comprises two main steps: 1) an Extractor Multi-Layer Perceptron (EMLP) is used for feature extraction from the remote sensing data; 2) the data produced by the EMLP are classified using a Support Vector Machine (SVM). The contribution of this work lies mainly in the EMLP method, based on the Multi-Layer Perceptron (MLP), whose role is to create a dataset more representative of the classes than the original dataset. To situate and evaluate the proposed approach, we applied it to three datasets, namely Statlog Landsat Satellite, Urban Land Cover and Landsat TM Oran, using several measures such as classification rate, classification error, precision, recall and F-measure. The experimental results show that the proposed approach (EMLP-SVM) is more efficient and powerful than the basic methods (MLP and SVM) and existing state-of-the-art classification methods.
    Keywords: classification methods; feature extraction; remote sensing data; extractor multi-layer perceptron; support vector machine; supervised learning.
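
    The EMLP-SVM idea, train an MLP and then hand its hidden representation to an SVM, can be sketched with Keras and scikit-learn; the layer sizes, feature count and random data below are placeholders, as the abstract does not specify the EMLP internals.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.svm import SVC

# Placeholder remote-sensing samples: 36 input features, 6 land-cover classes.
X = np.random.randn(600, 36).astype("float32")
y = np.random.randint(0, 6, size=600)

# 1) Train an MLP whose hidden layer will serve as the feature extractor.
inputs = keras.Input(shape=(36,))
hidden = layers.Dense(16, activation="relu", name="features")(inputs)
outputs = layers.Dense(6, activation="softmax")(hidden)
mlp = keras.Model(inputs, outputs)
mlp.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
mlp.fit(X, y, epochs=3, verbose=0)

# 2) Re-represent the data with the learned hidden layer, then classify with SVM.
extractor = keras.Model(inputs, hidden)
features = extractor.predict(X, verbose=0)
svm = SVC(kernel="rbf").fit(features, y)
print(svm.score(features, y))
```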

  • Parallelisation of practical shared sampling alpha matting with OpenMP   Order a copy of this article
    by Tien-Hsiung Weng, Chi-Ching Chiu, Huimin Lu 
    Abstract: In the modern filmmaking industry, image matting is a common task in visual effects and a necessary intermediate step in computer vision. It pulls the foreground object from the background of an image by estimating the alpha values. However, matting high-resolution images can be significantly slow owing to its complexity and to computation that is proportional to the size of the unknown region. To improve performance, we implement a parallel alpha matting code with OpenMP from existing sequential code, for running on multicore servers. We present and discuss the algorithm and experimental results from the parallel application developer's perspective. The development takes little effort, yet the results show significant performance improvement for the entire program.
    Keywords: OpenMP; image matting; multicores; parallel programming.

  • A novel coverless text information hiding method based on double-tags and twice-send   Order a copy of this article
    by Xiang Zhou, Xianyi Chen, Fasheng Zhang, Ningning Zheng 
    Abstract: Recently, coverless text information hiding (CTIH) has attracted the attention of an increasing number of researchers because of its high security. However, many problems remain to be solved, such as retrieval efficiency and hiding capacity. In existing CTIH methods, the secret information is embedded into one carrier with one label to ensure the success rate of hiding. In this paper, we propose a novel CTIH method based on double tags and twice-send: the double tags in a text are obtained by an odd-even adjudgement, an inverted index is first created to improve retrieval efficiency, and characters are then transformed into binary numbers, which are employed as location tags to determine the secret information in the received texts. Finally, the success rate of hiding is improved by sending the document twice. The experimental results show that the proposed method improves the hiding capacity and efficiency compared with existing CTIH algorithms.
    Keywords: coverless information hiding; double tags; twice-send.

  • Fast CU size decision based on texture-depth relationship for depth map encoding in 3D-HEVC   Order a copy of this article
    by Liming Chen, Zhaoqing Pan, Xiaokai Yi, Yajuan Zhang 
    Abstract: Because many advanced encoding techniques are introduced into 3D-HEVC, it achieves higher encoding efficiency than HEVC. However, the encoding complexity of 3D-HEVC increases significantly as these high-complexity coding techniques are used. In this paper, a new fast CU size decision algorithm based on the texture-depth relationship is proposed to reduce depth map encoding complexity, exploiting the strong correlation between the texture and depth maps in motion characteristics and background regions: both kinds of map tend to choose the same CU depth as their best depth level. By building a one-to-one match for collocated largest coding units (LCUs), the texture encoding information can be used to predict the depth level of the depth map. Experimental results show that the proposed method achieves a 41.89% time saving on average, with a negligible drop of 0.04 dB in BDPSNR and a small increase of 2.29% in BDBR.
    Keywords: 3D-HEVC; early termination; CU split; PU mode; depth map.

  • Aligning molecular sequences using hybrid bioinspired algorithm in GPU   Order a copy of this article
    by Jayapriya Jayakumar, Michael Arock 
    Abstract: Explicating the functionality of the basic cell requires the study of bioinformatics, and sequence analysis is the root domain for understanding the structural and functional information of molecules. Its first step, aligning the sequences, is an NP-complete problem. Owing to the growth of molecular data in biology, there is a demand for efficient approaches to this sequence alignment problem, and existing studies show a trade-off between accuracy and computational time. Focusing on the latter, this paper proposes a new parallel hybridised bio-inspired approach (PGWOGO) that does not sacrifice accuracy. A Grey Wolf Optimizer (GWO) is hybridised with genetic operators, and the parallel phases are implemented on a Quadro 4000 graphics processing unit. New crossover and mutation operators, namely a horizontal crossover and a local gap-shuffle mutation operator between aligned blocks, are employed. The performance of the proposed algorithm is evaluated using CUPS (cell updates per second) and compared with state-of-the-art techniques. The results show that the proposed algorithm yields better alignments than other techniques.
    Keywords: GPU; alignment; hybrid bioinspired; GWO; genetic operators; crossover; mutation.

  • Deep learning for collective anomaly detection   Order a copy of this article
    by Mohiuddin Ahmed, Al-Sakib Khan Pathan 
    Abstract: Deep learning has been performing well in a number of application domains. Inspired by its popularity in domains such as image processing and speech recognition, in this paper we explore the effectiveness of deep learning and other supervised learning algorithms for collective anomaly detection. Recently, collective anomaly detection has become popular for DoS (Denial of Service) attack detection; however, existing approaches are unsupervised in nature and consequently often have high false alarm rates. To reduce the false alarm rates, we experiment with a deep learning method that is supervised in nature. Our experimental results on the UNSW-NB15 and KDD Cup 1999 datasets show that deep learning implemented using H2O achieves ≈97% recall for collective anomaly detection, outperforming a wide range of unsupervised techniques. The key contribution of this paper is to report the efficiency of deep learning for collective anomaly detection; to the best of our knowledge, it is the first to address the collective anomaly detection problem using deep learning.
    Keywords: deep learning; collective anomaly; DoS attack; traffic analysis.

  • Experimental investigation and CFD simulation of power consumption for mixing in the gyro shaker   Order a copy of this article
    by P.A. Abdul Samad, P.R. Shalij 
    Abstract: The thorough mixing of ingredients is key to improving process quality in the manufacturing of several products. The gyro shaker is a dual-rotation mixer commonly used for mixing highly viscous fluids. In this work, CFD simulation of multiphase mixing in the gyro shaker is carried out to obtain numerical solutions. Simulations of three different mixing models, namely the Eulerian granular model, the mixture model and the volume of fluid (VOF) model, are compared. The Reynolds number and power number based on a characteristic velocity are derived for the gyro shaker. Experiments were conducted to validate the simulated mixing power using the torque method and the viscous dissipation method. The viscous dissipation method shows a smaller deviation from the experimental data than the torque method, and among the three simulation models, the multiphase mixture model shows the minimum deviation from the experimental data. A comparison of the flow fields of the different mixing models is also carried out.
    Keywords: computational fluid dynamics; characteristic velocity; Eulerian granular; gyro shaker; mixture model; multiphase; power consumption; power number; viscous dissipation; VOF; volume of fluid.
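
    For orientation, the dimensionless groups involved take their standard stirred-vessel forms; the paper derives characteristic-velocity analogues for the dual-rotation shaker, so the conventional definitions below are shown only as an assumption-level reference:

$$ \mathrm{Re} = \frac{\rho N d^{2}}{\mu}, \qquad N_p = \frac{P}{\rho N^{3} d^{5}}, \qquad P_{\text{torque}} = 2\pi N T $$

    where ρ is the fluid density, μ the dynamic viscosity, N the rotational speed, d the characteristic diameter, P the mixing power and T the measured torque; the viscous dissipation method instead integrates the local dissipation rate over the fluid volume.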

  • Optimisation model of price changes after knowledge transfer in the big data environment   Order a copy of this article
    by Chuanrong Wu, Evgeniya Zapevalova, Deming Zeng 
    Abstract: Big data knowledge and private knowledge are the two dominant types of knowledge that an enterprise needs for new product innovation in a big data environment. Big data knowledge can help enterprises make decisions, trim costs and lift sales; private knowledge is usually core patent knowledge, which can help to enhance product quality. The decline in product cost and the increase in product quality after knowledge transfer raise pricing decisions for an enterprise's new product: should the enterprise cut prices, keep prices unchanged, or raise prices after knowledge transfer? It is important to analyse the impact of a new product's price change on knowledge transfer activities in the big data environment. By considering the changes in product costs, market share and profits after knowledge transfer caused by price changes to a new product, an optimisation model of price changes after knowledge transfer in the big data environment is presented. The model can assess the impact that a price change will have on the expected profits of an enterprise that has engaged in knowledge transfer. The experimental results are consistent with previous studies and actual economic situations, so the model is deemed valid; it can support pricing decisions about new products for enterprises in the big data environment.
    Keywords: big data; knowledge transfer; optimisation model; price change; price decision.

  • Predicting new composition relations between web services via link analysis   Order a copy of this article
    by Mingdong Tang, Fenfang Xie, Wei Liang, Yanmin Xia 
    Abstract: With the wide application of Service-Oriented Architecture (SOA) and Service-Oriented Computing (SOC), the past decade has witnessed rapid growth in the number of web services on the internet. Against this background, combining different web services to create new applications has attracted great interest from developers. However, according to the latest statistics, only a small number of popular services are frequently used, and the usage rates of most web services are rather low. Helping service users discover appropriate web services and promoting service composition has thus become a significant need. In this paper, we propose a link-based approach to predict new composition relations between web services, based on exploring known composition relations and similarity relations among web services. To measure composition or similarity degrees, several link-based methods are exploited, and two reasonable heuristic rules that integrate the existing composition and similarity relations for service composition prediction are developed. Case studies and experiments based on real web service datasets validate the proposed approach.
    Keywords: service composition; composition relations; similarity relations; link prediction; web services; API; Mashup.

  • Research on product design knowledge organisation model based on granularity principle   Order a copy of this article
    by Youyuan Wang, Weiwei Qian, Lu Zhao 
    Abstract: To address the weak discernibility of relations between knowledge demands in the product design process, a knowledge organisation model based on the granularity principle is put forward. The paper applies knowledge units and knowledge points to describe product design knowledge, adopts the granularity principle to granulate and organise that knowledge, monitors the classification, association and inference of knowledge points according to task requirements and structures, formalises the related knowledge, and ultimately provides knowledge services in the form of knowledge units. A case analysis shows the method to be effective in improving the relevance of knowledge and the efficiency of knowledge services.
    Keywords: knowledge organisation; granularity principle; product design.

  • A novel clustering algorithm based on the deviation factor model   Order a copy of this article
    by Jungan Chen, Chen Yinyin, Yang Dongyong 
    Abstract: For classical clustering algorithms, it is difficult to find clusters that have non-spherical shapes or varied size and density. Many methods have been proposed in recent years to overcome this problem, such as introducing more representative points per cluster, considering both interconnectivity and closeness, and adopting density-based methods. However, the density defined in DBSCAN is determined by minPts and Eps, which is not the best way to describe the data distribution of a cluster. In this paper, a deviation factor model is proposed to describe the data distribution, and a novel clustering algorithm based on an artificial immune system is presented. The experimental results show that the proposed algorithm is more effective than DBSCAN, k-means and related algorithms.
    Keywords: clustering algorithm; DBSCAN; artificial immune system.

  • Multi-keywords carrier-free text steganography method based on Chinese Pinyin   Order a copy of this article
    by Yuling Liu, Jiao Wu, Guojiang Xin 
    Abstract: By combining big data with the characteristics of steganography, carrier-free steganography was proposed to resist all steganalysis attacks. A novel method named the multi-keywords carrier-free text steganography method, based on Chinese Pinyin, is introduced in this paper. In the proposed method, the hidden tags are selected from the Pinyin combinations of two words. In the hiding process, POS (Part of Speech) tagging is used to hide the number of keywords, and the redundancy of hidden tags in the extraction process is eliminated by ensuring the uniqueness of each hidden tag in each stego-text. Meanwhile, joint retrieval is used to hide multiple keywords. Experimental results show that the proposed method performs well in hiding capacity, success rate of hiding, extraction accuracy and time efficiency, given appropriate hidden tags and a large-scale text corpus.
    Keywords: carrier-free steganography; big text data; multi-keywords; Chinese Pinyin; POS tagging.

  • Collaborative filtering-based recommendation system for big data   Order a copy of this article
    by Jian Shen, Tianqi Zhou, Lina Chen 
    Abstract: Collaborative filtering is widely used in the recommendation systems of e-commerce websites (Wong et al., 2016), which analyse large amounts of users' historical behaviour data to explore users' interests and recommend appropriate products. In this paper, we focus on designing a reliable and highly accurate algorithm for movie recommendation; notably, the algorithm is not limited to film recommendation and can be applied in many other areas of e-commerce. We use Java to implement a movie recommendation system on Ubuntu. Benefiting from the MapReduce framework and an item-based recommendation algorithm, the system can handle large datasets. The experimental results show that the system achieves high efficiency and reliability on large datasets.
    Keywords: big data; collaborative filtering; e-commerce; movie recommendation; MapReduce framework.
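
    The core of the item-based step can be shown compactly, here in Python rather than the paper's Java/MapReduce implementation: compute item-item cosine similarities from the user-item matrix, then score a user's unrated items by similarity-weighted sums of their ratings. The matrix below is a toy placeholder.

```python
import numpy as np

# Hypothetical user-item rating matrix (0 = unrated), e.g. movies.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

# Item-item cosine similarity.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)

def recommend(user: int, top_k: int = 2):
    """Score unrated items by similarity-weighted sums of the user's ratings."""
    scores = S @ R[user]
    scores[R[user] > 0] = -np.inf  # never re-recommend already-rated items
    return np.argsort(scores)[::-1][:top_k]

print(recommend(1))  # item indices predicted most relevant for user 1
```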

  • Communication optimisation for intermediate data of MapReduce computing model   Order a copy of this article
    by Yunpeng Cao, Haifeng Wang 
    Abstract: MapReduce is a typical computing model for the processing and analysis of big data. A MapReduce job produces a large amount of intermediate data after the Map phase, which results in heavy intermediate data communication across rack switches during the Shuffle phase and degrades the performance of heterogeneous cluster computing. To optimise the intermediate data communication of Map-intensive jobs, the characteristics of pre-running scheduling information of MapReduce jobs are extracted and job classification is realised by machine learning. Jobs with active intermediate data communication are mapped into one rack to preserve the communication locality of intermediate data, while jobs with inactive communication are deployed to nodes sorted by computing performance. The experimental results show that the proposed communication optimisation scheme works well for Shuffle-intensive jobs, achieving gains of 4-5%; with larger amounts of input data, the scheme is robust and adapts to heterogeneous clusters, and in a multi-user application scene, intermediate data communication can be reduced by 4.1%.
    Keywords: MapReduce computing model; big data processing; communication optimisation; intermediate data; machine learning.

  • Evaluating the trustworthiness of BPEL processes based on data dependency and XBFG   Order a copy of this article
    by Chunling Hu, Cuicui Liu, Bixin Li 
    Abstract: Composite services implement value-added functionality by composing service components of various granularity. Trust is an important criterion for judging whether a composite service will behave as expected, so there is a great need for a flexible trust evaluation method for composite services, which can guide service selection and the trust-based optimisation and evolution of composite services. In this paper, a data-dependency-based trust evaluation approach for composite services in the Business Process Execution Language (BPEL) is proposed. Firstly, we derive define-use pairs of variables to identify data dependency between service components in BPEL processes modelled by the eXtensible BPEL Flow Graph (XBFG). Dependency links, covering both direct and indirect data dependencies, are used to evaluate the trust of these service components, and on the basis of the BPEL structure and XBFG, reduction rules are proposed to evaluate the global trust of BPEL processes. Experimental results demonstrate that the proposed approach is effective for the trust evaluation of BPEL composite services and remains stable as the number of service components in BPEL grows.
    Keywords: trust evaluation; data dependency; dependency link; reduction rules; XBFG.

  • Assessing classification complexity of datasets using fractals   Order a copy of this article
    by André Luiz Marasca, Marcelo Teixeira, Dalcimar Casanova 
    Abstract: Supervised classification is a mechanism used in machine learning to associate classes with objects from datasets. Depending on the dimension and the internal structure of the data, classification may become complex. In this paper, we claim that the complexity level of a given dataset can be estimated using fractal analysis. A novel fractal measure, called the transition border, is proposed to estimate the chaos behind the distribution of labelled points. Its correlation with classification success rate is tested by comparison against results obtained from other supervised classification methods. The results suggest that this approach can measure the complexity behind a classification task in real-valued datasets with three dimensions. The proposed method can also be useful in other science domains where fractal analysis is applicable.
    Keywords: supervised classification; fractal analysis; chaotic datasets.
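
    Fractal analysis of a point set typically starts from a box-counting estimate; the paper's transition border measure is its own construction, but the underlying machinery can be illustrated as below. The grid scales are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(points: np.ndarray, scales=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of points lying in [0, 1]^d."""
    counts = []
    for s in scales:
        occupied = np.unique(np.floor(points * s).astype(int), axis=0)
        counts.append(len(occupied))
    # Slope of log(count) versus log(scale) estimates the dimension.
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
plane = rng.random((5000, 2))          # fills the unit square
t = rng.random(5000)
line = np.column_stack([t, t])         # points on a diagonal line
print(box_counting_dimension(plane))   # ~ 2
print(box_counting_dimension(line))    # ~ 1
```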

  • A routing strategy with energy optimisation based on community in mobile social networks   Order a copy of this article
    by Gaocai Wang, Nao Wang, Ying Peng, Shuqiang Huang 
    Abstract: In current mobile networks, usage has drastically shifted from end-to-end communication between mobile users and base stations to message/content retrieval among mobile users, forming a so-called mobile social network. Usually, the movement of mobile users has social aggregation characteristics, and the same mobile user visiting different communities forms a community-based connected network. This paper studies the energy consumption optimisation problem of message delivery based on the social characteristics of mobile users, and proposes an optimal energy efficiency routing strategy based on community, which minimises network energy consumption under a given delay. Firstly, the expected energy consumption and delay of message delivery in a connected network are obtained through a Markov chain. Then a comprehensive cost function for message delivery from a source node to a destination node is designed, combining energy consumption and delay; this yields the optimisation function for a relay's comprehensive cost of delivering a message, and the reward function of the relay is given. Finally, the optimal expected reward of the optimal relay is obtained using optimal stopping theory, realising the optimal energy efficiency routing strategy. In simulations, the average energy consumption, average delay and average delivery ratio of the proposed strategy are compared with those of other routing strategies in the related literature. The results show that the proposed strategy achieves lower average energy consumption, lower average delay and a higher average delivery ratio, and thus better energy consumption optimisation.
    Keywords: mobile social networks; optimal energy efficiency routing; community; optimal stopping; optimal relay.

  • A holistic IT infrastructure management framework   Order a copy of this article
    by Sergio Varga, Gilmar Barreto, Paulo David Battaglin 
    Abstract: Information Systems (IS) are becoming increasingly complex, and they present issues to be solved. New technologies, products and deployment models make an IS difficult to manage and maintain. Organisations need to deploy tools, processes and governance in their Information Technology (IT) environment to support their IS. This further increases the complexity of the IT environment and drives organisations to manage it by silos or components. This type of management prevents organisations from ensuring that the entire environment is properly managed according to what was agreed in the outsourcing contract, despite the IT frameworks available. This paper analyses and identifies these issues and proposes an IT management framework that helps organisations provide an efficient service, based on the agreed scope, ensuring that all contracted services are deployed with accuracy, completeness, management and awareness.
    Keywords: information systems; information technology; IT Management; ITIL; ITSM; cloud.
    DOI: 10.1504/IJCSE.2019.10018360
     
  • Super-sampling by learning-based super-resolution   Order a copy of this article
    by Ping Du, Jinhuan Zhang, Jun Long 
    Abstract: In this paper, we present a novel problem of intelligent image processing: how to infer a finer image, in terms of intensity levels, from a given image. We explain the motivation for this effort and present a simple technique that makes it possible to apply existing learning-based super-resolution methods to this new problem. As a result of adopting intelligent methods, the proposed algorithm needs notably little human assistance. We also verify our algorithm experimentally.
    Keywords: texture synthesis; super-resolution; image manifold.
    DOI: 10.1504/IJCSE.2019.10020177
     
  • An evolutionary algorithm for finding optimisation sequences: proposal and experiments   Order a copy of this article
    by João Fabrício Filho, Luis Gustavo Araujo Rodriguez, Anderson Faustino Da Silva 
    Abstract: Evolutionary algorithms are metaheuristics for solving combinatorial and optimisation problems. One combinatorial problem, important in the context of software development, consists of selecting the code transformations that the compiler must apply while generating the target code. The objective of this paper is to propose and evaluate an evolutionary algorithm capable of finding an efficient sequence of optimising transformations to be used while generating the target code. The results indicate that the algorithm efficiently finds good transformation sequences and is a good option for generating databases for machine learning systems.
    Keywords: evolutionary algorithms; code optimisation; iterative compilation; machine learning.

  • Energy replenishment optimisation via density-based clustering   Order a copy of this article
    by Xin Gu, Jun Peng, Yijun Cheng, Xiaoyong Zhang, Kaiyang Liu 
    Abstract: This paper investigates a density-based clustering approach to achieve efficient energy replenishment in wireless rechargeable sensor networks (WRSNs). Sensor nodes with charging requests are divided into several clusters: some are selected as head nodes, which a mobile charger visits, and the rest are assigned to the closest head nodes. The mobile charger then serves all nodes in the same cluster simultaneously. Unlike other clustering algorithms, our proposed approach selects head nodes with high local density and also takes the distance between high-density nodes into consideration, effectively reducing the charging delay. Simulation results show that our proposed clustering approach achieves optimal cluster results. Moreover, compared with two other cluster-based charging methods, our approach reduces the charging delay and travel distance in both dense and sparse deployment scenarios.
    Keywords: wireless rechargeable sensor networks; clustering; mobile charger; wireless energy transfer; charging delay.
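
    Selecting heads that combine high local density with separation from other dense nodes echoes the density-peaks idea; a compact sketch of that selection step follows, with the cut-off distance and node layout as illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def select_heads(nodes: np.ndarray, d_cut: float, n_heads: int):
    """Pick heads that are locally dense AND far from any denser node."""
    D = np.linalg.norm(nodes[:, None] - nodes[None, :], axis=2)
    density = (D < d_cut).sum(axis=1) - 1   # neighbours within d_cut
    delta = np.empty(len(nodes))            # distance to nearest denser node
    for i in range(len(nodes)):
        denser = density > density[i]
        delta[i] = D[i, denser].min() if denser.any() else D[i].max()
    heads = np.argsort(density * delta)[::-1][:n_heads]
    members = np.argmin(D[:, heads], axis=1)  # others join the closest head
    return heads, members

# Two spatially separated groups of rechargeable sensor nodes.
rng = np.random.default_rng(2)
nodes = np.concatenate([rng.normal(0, 1, (20, 2)), rng.normal(8, 1, (20, 2))])
heads, members = select_heads(nodes, d_cut=2.0, n_heads=2)
print(heads, np.bincount(members))  # one head per group, ~20 members each
```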

  • Evolutionary ant colony algorithm using firefly-based transition for solving vehicle routing problems   Order a copy of this article
    by Rajeev Goel, Raman Maini 
    Abstract: In this paper, we propose an evolutionary optimisation algorithm that combines the advantages of ant colony optimisation and firefly optimisation to solve the vehicle routing problem and its variants. A firefly optimisation (FA) based transition rule, together with a pheromone-shaking rule, is used to escape local optima: the multi-modal nature of FA helps in exploring the search space, while pheromone shaking avoids the stagnation of pheromone deposits on exploited paths. This is expected to improve the working of the ant colony system (ACS). The performance of the proposed algorithm has been compared on benchmark problems with other meta-heuristic approaches currently used for solving vehicle routing problems. Results show the consistency of the proposed approach; moreover, its convergence is faster and its solutions are closer to optimal than those obtained with certain other existing meta-heuristic approaches, including other FA-based algorithms for vehicle routing problems.
    Keywords: ant colony optimisation; evolutionary algorithms; firefly optimisation; vehicle routing problems.

  • Performance evaluation of main-memory hash joins on KNL   Order a copy of this article
    by Deyou Tang, Yazhuo Zhang, Qingmiao Zeng, Hu Chen 
    Abstract: New hardware features have propelled the design and analysis of main-memory hash joins. In previous studies, memory access has always been the primary bottleneck for hash join algorithms, yet relatively few studies have analysed bottlenecks on the Knights Landing (KNL) processor. In this paper, we examine state-of-the-art hash join algorithms on KNL and analyse their bottlenecks under different workloads. The analysis and comparisons show that both memory latency and bandwidth are key to improving hash joins, and that multi-channel dynamic random access memory (MCDRAM) plays a vital role in enhancing performance. Notably, we find that hardware-oblivious hash join algorithms perform better than hardware-conscious approaches; to the best of our knowledge, a typical hardware-oblivious join achieves better performance than previously reported. Through this analysis, we shed light on how the new features of KNL affect the performance of hash joins.
    Keywords: performance evaluation; main-memory; hash join; algorithm; KNL; memory latency; bandwidth; cache alignment; cache miss; prefetching; MCDRAM.
    DOI: 10.1504/IJCSE.2018.10016618
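
    For readers unfamiliar with the operation being benchmarked, a canonical non-partitioned ("hardware-oblivious") hash join builds a hash table on the smaller relation and probes it with the larger one; the random-access probe loop is exactly where memory latency, bandwidth and MCDRAM placement matter on KNL. A toy functional sketch:

```python
from collections import defaultdict

def hash_join(build_rel, probe_rel):
    """Join two relations of (key, payload) tuples on their keys."""
    table = defaultdict(list)
    for key, payload in build_rel:   # build phase: hash the smaller relation
        table[key].append(payload)
    return [                          # probe phase: stream the larger relation
        (key, b_payload, p_payload)
        for key, p_payload in probe_rel
        for b_payload in table.get(key, ())
    ]

R = [(1, "r1"), (2, "r2"), (3, "r3")]
S = [(2, "s1"), (3, "s2"), (3, "s3"), (4, "s4")]
print(hash_join(R, S))  # [(2, 'r2', 's1'), (3, 'r3', 's2'), (3, 'r3', 's3')]
```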
     
  • Feature selection with improved binary artificial bee colony algorithm for microarray data   Order a copy of this article
    by Sheng-Sheng Wang, Ru-Yi Dong 
    Abstract: In clinical and medical diagnosis, gene expression profiles are known to have latent value, as they denote the state of cells at the molecular level. However, in cancer classification research, the sample sizes of training datasets are relatively small compared with the number of genes concerned; this scarcity of training data makes classification a daunting problem. Hence, an efficient gene selection algorithm for sample classification is needed to enhance predictive accuracy and to avoid the problems arising from the extensive number of genes involved. In this article, we propose to improve the Binary Artificial Bee Colony (BABC) algorithm for feature selection based on the chaotic catfish effect. A chaotic effect is added to the initialisation procedure of BABC, and chaotic catfish-bees are further introduced for new nectar exploration, which improves the BABC algorithm by preventing bees from getting trapped in a local optimum and allows the possible solution space to be searched in a short time. The results of our experiment show that this new method yields an effective feature simplification, achieving precise and significant classification accuracy on 9 of the 11 datasets in comparison with other feature selection methods.
    Keywords: feature selection; binary artificial bee colony; support vector machines; chaotic catfish effect.

  • An integrated ambient intelligence system for a smart lab environment   Order a copy of this article
    by Dat Do, Scott King, Alaa Sheta, Thanh Pham 
    Abstract: The goals of the ambient intelligence system are not only to enhance the way people communicate with the surrounding environment but also to advance safety measures and enrich human lives. In this paper, we introduce an Integrated Ambient Intelligence System (IAmIS) to perceive the presence of people, identify them, determine their locations, and provide suitable interaction with them. The proposed framework can be applied in various application domains such as a smart house, authorisation, surveillance, crime prevention, and many others. The proposed system has five components: body detection and tracking, face recognition, controller, monitor system, and interaction modules. The system deploys RGB cameras and Kinect depth sensors to monitor human activity. The developed system is designed to be fast and reliable for indoor environments. The proposed IAmIS can interact directly with the environment or communicate with humans acting on the environment. Thus, the system behaves as an intelligent agent. The system has been deployed in our research lab and can recognise lab members and guests to the lab as well as track their movements and have interactions with them depending on their identity and location within the lab.
    Keywords: ambient intelligence system; awareness system; object tracking; face recognition; body tracking; Kinect.

  • An internet-of-things based security scheme for healthcare environment for robust location privacy   Order a copy of this article
    by Aakanksha Tewari, Brij Gupta 
    Abstract: Recently, various applications of the internet of things (IoT) have been developed for the healthcare sector. Our contribution is to provide a secure and low-cost environment for IoT devices in healthcare. The main goal is to make patients' lives easier and more comfortable by providing them with more effective treatments, but we also address the issues of location privacy and security raised by the deployment of IoT devices. We propose a very simple mutual authentication protocol that provides strong location privacy by using a one-way hash, pseudo-random number generators and bitwise operations. Strong location privacy is a key factor in healthcare security; we enforce this property by making sure that tags in the network are indistinguishable and that the protocol ensures forward secrecy. The security strength of our protocol is verified through a formal proof model for location privacy.
    Keywords: internet of things; location privacy; RFID; mutual authentication; forward secrecy; indistinguishability.
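
    The general flavour of a hash-based mutual authentication round, one-way functions plus fresh nonces so that transcripts look unlinkable, can be conveyed with a toy sketch. This illustrates the pattern only; it is not the paper's protocol, and a deployable design needs formal analysis.

```python
import hashlib
import hmac
import secrets

def h(key: bytes, *parts: bytes) -> bytes:
    """Keyed one-way function (HMAC-SHA256) over the concatenated parts."""
    return hmac.new(key, b"".join(parts), hashlib.sha256).digest()

shared_key = secrets.token_bytes(32)  # provisioned in both tag and reader

# Reader -> tag: a fresh challenge nonce.
r_reader = secrets.token_bytes(16)
# Tag -> reader: its own nonce plus a response bound to both nonces.
r_tag = secrets.token_bytes(16)
tag_resp = h(shared_key, r_reader, r_tag)

# Reader verifies the tag, then proves its own key knowledge back (mutual auth).
assert hmac.compare_digest(tag_resp, h(shared_key, r_reader, r_tag))
reader_resp = h(shared_key, r_tag, r_reader)
assert hmac.compare_digest(reader_resp, h(shared_key, r_tag, r_reader))
print("mutual authentication round completed")
```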

  • A fuzzy controller for an adaptive VNFs placement in 5G network architecture   Order a copy of this article
    by Sara Retal, Abdellah Idrissi 
    Abstract: In cloud computing, computation and memory resources are becoming a relevant, growing business. On the other hand, mobile network architecture faces many hurdles, including a lack of flexibility in providing enhanced services and distributed architectures, and the expensive cost of providing a network topology that meets user equipment (UE) needs. To cope with these problems, cloud computing is used in the mobile telecommunications market thanks to network functions virtualisation. In this paper, we develop a fuzzy controller to support virtual network function placement and provide an adaptive solution to manage and organise the network. Our approach enables the solution to adapt to UE mobility and needs in terms of quality of experience. Furthermore, it minimises the serving gateway relocation cost and the path length between UEs and packet data network gateways, taking resource capacities into account. The experimental results show that our approach produces good results compared with methods from the literature.
    Keywords: cloud computing; virtual network functions placement; adaptive placement; fuzzy controller; multi-objective optimisation.
    DOI: 10.1504/IJCSE.2019.10018399
     
  • The key user discovery model based on user importance calculation   Order a copy of this article
    by Lei Zhang, Dandan Jiang, Ruirong Xue, Yawen Yi, Xiangfeng Luo 
    Abstract: Recently, more and more users publish their views on events in social media. Identifying influential users in social media and calculating user importance can help to analyse the impact of hot events or enterprise products in the real world. Methods based on attribute analysis select relatively simple characteristics without digging into event-specific properties, while network-based methods use only user behaviour relations or content association relations to build a network, ignoring user attributes, and therefore cannot calculate user importance effectively. This paper proposes a multi-angle user importance calculation method with event-specificity. The overall importance of a user is measured by taking four layers into account: the user layer, the fan layer, the micro-blog layer, and the event layer. Experimental results show that our method can effectively calculate the importance of users.
    Keywords: key user discovery; multi-layer; social media.

  • Event-triggered fault estimation for stochastic state-delay systems against adversaries with application to a DC motor system   Order a copy of this article
    by Yunji Li, Yi Gao, Quansheng Liu, Li Peng 
    Abstract: This paper is concerned with the problem of fault estimation for stochastic state-delay systems subject to adversaries under an event-triggered data-transmission framework. An adversarial fault estimator is designed for simultaneous remote state and fault estimation. Furthermore, a sufficient condition is provided for the mean-square exponential stability of the proposed event-triggered fault estimator. The corresponding event-triggered sensor data transmission scheme is designed to reduce the overall communication burden. Finally, an application example on a DC motor system is presented, and the benefits of the obtained theoretical results are demonstrated by comparative experiments.
    Keywords: fault estimation; event-triggered data-transmission scheme; time delays.
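
    The abstract does not state the trigger condition itself; a common send-on-delta rule, sketched below in Python, conveys how an event-triggered transmission scheme reduces the communication burden by sending a measurement only when it deviates sufficiently from the last transmitted one. The threshold and the norm are assumptions for illustration.

      import numpy as np

      def event_triggered_stream(measurements, delta=0.5):
          """Yield only the (time, value) pairs that pass the trigger test."""
          last_sent = None
          for t, y in enumerate(measurements):
              if last_sent is None or np.linalg.norm(y - last_sent) > delta:
                  last_sent = y
                  yield t, y  # transmitted to the remote fault estimator

      ys = np.cumsum(np.random.default_rng(0).normal(size=(100, 2)), axis=0)
      sent = list(event_triggered_stream(ys))
      print(f"transmitted {len(sent)} of {len(ys)} samples")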

  • Dependence structure between bitcoin price and its influence factors   Order a copy of this article
    by Weili Chen, Zibin Zheng, Mingjie Ma, Jiajing Wu, Jiaquan Yao, Yuren Zhou 
    Abstract: Bitcoin is a decentralised digital currency that has attracted growing interest over recent years. Because bitcoin is a multidisciplinary product, research on it has emerged from many different fields. Among these studies, interpreting the drastic fluctuation of the bitcoin price has attracted great attention, and many influence factors of the bitcoin price have been identified. However, research seldom reveals the dependence structure between the price and its influence factors. By selecting 10 interpretable influence factors from the bitcoin network and using copula theory, we find that the bitcoin price has different correlation structures with its influence factors. These findings provide new insights into the behaviour of miners, users, and coins in the bitcoin system, thus leading to meaningful implications for policymakers, investors and risk managers dealing with bitcoin and other cryptocurrencies.
    Keywords: bitcoin; price fluctuation; influence factor; dependency structure; copula function.
    DOI: 10.1504/IJCSE.2019.10018973
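
    As a hedged illustration of copula-based dependence measurement (not the paper's estimation procedure), the Python sketch below computes Kendall's tau between a synthetic influence factor and a synthetic price series, and converts it to the implied parameter of a Gaussian copula via the standard relation rho = sin(pi * tau / 2).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      factor = rng.lognormal(size=500)              # e.g. daily transaction count
      price = 0.7 * factor + rng.normal(size=500)   # synthetic, for illustration

      tau, _ = stats.kendalltau(factor, price)
      rho = np.sin(np.pi * tau / 2)   # Gaussian/elliptical copula parameter
      print(f"Kendall tau = {tau:.3f}, implied Gaussian-copula rho = {rho:.3f}")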
     
  • Software design of monitoring and flight simulation for UAV swarms based on OSGEarth   Order a copy of this article
    by Meimin Wu, Yuxiang Xiao, Qian Bi 
    Abstract: Real-time monitoring of Unmanned Aerial Vehicle (UAV) swarms is critical for flight safety. In order to monitor the position and working condition of UAVs intuitively, we propose three-dimensional (3D) monitoring software for UAV swarms based on OpenSceneGraph Earth. The software is built on a platform-plus-plug-ins architecture. The flight scene is constructed via 3D visualisation, and UAV nodes are updated and moved in the flight scene as data are received in real time. Meanwhile, to decrease the cost and improve work efficiency in the development and performance verification of UAV swarms, a simulation platform for UAV swarms is designed. The swarm behaviour algorithm is pre-designed in a Python file, which is read to parse the position data and display the flight scene. The software has been successfully applied to monitor the flight of UAV swarms, demonstrating excellent accuracy and reliability.
    Keywords: OSGEarth; UAV swarms; real-time monitoring; 3D visualisation; swarm simulation.

  • Improved quantum secret sharing scheme based on GHZ states   Order a copy of this article
    by Ming-Ming Wang, Zhi-Guo Qu, Lin-Ming Gong 
    Abstract: With the rapid progress of quantum cryptography, secret sharing has been developed in the quantum setting to achieve a high level of security, and is known as quantum secret sharing (QSS). The first QSS scheme was proposed by Hillery et al. in 1999 [Phys. Rev. A 59, 1829 (1999)] based on entangled Greenberger-Horne-Zeilinger (GHZ) states. However, only 50% of the entangled quantum states are effective for eavesdropping detection and secret splitting in the original scheme. In this paper, we introduce a possible method, called the measurement-delay strategy, to improve the qubit efficiency of the GHZ-based QSS scheme. Using this method, the qubit efficiency of the improved QSS scheme reaches 100% for both security detection and secret distribution. The improved QSS scheme can be implemented experimentally with current technologies.
    Keywords: quantum secret sharing; efficiency; security; GHZ state.
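
    The efficiency figures above follow from the usual definition of qubit efficiency; with the assumed notation b_s for the number of shared secret bits and q_t for the number of qubits consumed,

      \[ \eta = \frac{b_s}{q_t} \]

    In the original scheme roughly half of the GHZ triplets are consumed by eavesdropping detection and discarded, so \eta is about 50%; under the measurement-delay strategy every triplet serves both detection and secret splitting, so \eta approaches 100%.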

  • Analysing research collaboration through co-authorship networks in a big data environment: an efficient parallel approach   Order a copy of this article
    by Carlos Roberto Valêncio, José Carlos De Freitas, Rogéria Cristiane Gratão De Souza, Leandro Alves Neves, Geraldo Francisco Donegá Zafalon, Angelo Cesar Colombini, William Tenório 
    Abstract: Bibliometry is the quantitative study of scientific productions and enables the characterisation of scientific collaboration networks. However, with the development of science and the increase of scientific production, large collaborative networks are formed, which makes it difficult to extract bibliometrics. In this context, this work presents an efficient parallel optimisation of three bibliometrics for co-authorship network analysis using multithread programming: transitivity, average distance, and diameter. Our experiments found that the time required to calculate the transitivity value with the sequential approach grows 4.08 times faster than with the proposed parallel approach as the size of the co-authorship network grows. Similarly, the time required to calculate the average distance and diameter values with the sequential approach grows 5.27 times faster than with the proposed parallel approach. In addition, we report relevant speed-up and efficiency values for the developed algorithms.
    Keywords: bibliometrics; graphs; knowledge extraction; co-authorship network; NoSQL; parallel computing.
    DOI: 10.1504/IJCSE.2019.10020390
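
    A minimal Python sketch of a parallelised transitivity computation, assuming a toy adjacency-set representation and a process pool; the paper's NoSQL-backed implementation is not reproduced. Per-node triangle counts sum to three times the number of triangles (one count per corner), so dividing the two per-node totals directly yields the transitivity 3T / (number of connected triples).

      from itertools import combinations
      from multiprocessing import Pool

      adj = {  # toy co-authorship graph as adjacency sets
          0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1},
      }

      def node_counts(v):
          """Return (triangles incident to v, connected triples centred at v)."""
          nbrs = adj[v]
          tri = sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])
          deg = len(nbrs)
          return tri, deg * (deg - 1) // 2

      if __name__ == "__main__":
          with Pool() as pool:
              tri, triples = map(sum, zip(*pool.map(node_counts, adj)))
          print("transitivity =", tri / triples)   # 0.6 for the toy graph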
     
  • Design of fault-tolerant majority voter for error-resilient TMR targeting micro- to nano-scale logic   Order a copy of this article
    by Mrinal Goswami, Subrata Chattopadhyay, Shiv Bhushan Tripathi, Bibhash Sen 
    Abstract: The shrinking size of transistors to satisfy the increasing demand for higher density and low power has made VLSI circuits more vulnerable to faults. Therefore, new circuits in advanced VLSI technology have forced designers to use fault-tolerant techniques in safety-critical applications. Moreover, transient (non-permanent) faults arising from the complexity of nanocircuits or their interaction with software result in circuit malfunction. The fault-tolerant scheme in which a majority voter plays the core role, triple modular redundancy (TMR), is being implemented increasingly in digital systems. This work implements a different fault-tolerant majority voter scheme for TMR using quantum-dot cellular automata (QCA), a viable alternative nanotechnology to CMOS VLSI. The fault-masking ability of various voter designs has been analysed in detail, and the fault-masking ratio of the proposed voter (FMV) is 66% considering single/multiple faults. Simulation results validate the proposed logic in QCA, which targets nano-scale devices. The proposed logic is also suitable for conventional CMOS technology, which is verified with the Cadence tool.
    Keywords: quantum dot cellular automata; triple modular redundancy; fault-tolerant majority voter; QCA defects; reliability; nanoelectronics.
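
    A behavioural sketch of the classic TMR majority voter that the proposed design realises in QCA; the QCA cell layout and defect model are not captured here, only the masking logic out = ab + bc + ac.

      def majority(a: int, b: int, c: int) -> int:
          """Bitwise majority vote over three redundant module outputs."""
          return (a & b) | (b & c) | (a & c)

      golden = 0b1011
      faulty = golden ^ 0b0100                            # one replica hit by a bit flip
      assert majority(golden, golden, faulty) == golden   # the fault is masked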

  • Time series clustering using stochastic and deterministic influences   Order a copy of this article
    by Mirlei Silva, Rodrigo Mello, Ricardo Rios 
    Abstract: As part of the unsupervised machine learning area, time series clustering aims at designing methods to extract patterns from temporal data in order to organise different series according to their similarities. According to the literature, most studies either perform a preprocessing step to convert time series into an attribute-value matrix to be later analysed by traditional clustering methods, or apply measures specifically designed to compute the similarity among time series. Based on such studies, we have noticed two main issues: i) clustering methods do not take into account the stochastic and deterministic influences inherent in time series from real-world scenarios; and ii) similarity measures tend to look for recurrent patterns, which may not be available in stochastic time series. In order to overcome such drawbacks, we present a new clustering approach that considers both influences, together with a new similarity measure to deal with purely stochastic time series. Experiments provided outstanding results, emphasising that time series are better clustered when their stochastic and deterministic influences are properly analysed.
    Keywords: time series; clustering; similarity measure.

  • Laius: an energy-efficient FPGA CNN accelerator with the support of a fixed-point training framework   Order a copy of this article
    by Zikai Nie, Zhisheng Li, Lei Wang, Shasha Guo, Yu Deng, Rangyu Deng, Qiang Dou 
    Abstract: With the development of Convolutional Neural Networks (CNNs), their high computational complexity and energy consumption have become significant problems. Many CNN inference accelerators have been proposed to reduce energy consumption. Most of them are based on 32-bit floating-point matrix multiplication, where the data precision is over-provisioned for inference. This paper presents Laius, an 8-bit fixed-point LeNet inference engine implemented on FPGA. To economise FPGA resources, we propose a methodology to find the optimal bit-length for weight and bias in LeNet. We use pipelining, PE tiling, and theoretical analysis to improve performance, and we optimise the convolutional sequence and data layout for further improvement. Experimental results show that Laius achieves 44.9 Gops throughput. Moreover, with only 1% accuracy loss, Laius reduces delay by 31.43%, LUT consumption by 87.01%, BRAM consumption by 66.50%, DSP consumption by 65.11% and power by 47.95% compared with a 32-bit version of the same structure.
    Keywords: CNN accelerator; FPGA; inference engine; fixed-point training; data layout.
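
    A hedged Python sketch of the 8-bit fixed-point quantisation idea: choose the fractional bit-width that minimises reconstruction error for a tensor, then round and saturate. The paper's bit-length search over LeNet's weights and biases is richer than this per-tensor heuristic.

      import numpy as np

      def to_fixed(x, frac_bits, total_bits=8):
          scale = 2 ** frac_bits
          lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
          return np.clip(np.round(x * scale), lo, hi).astype(np.int8)

      def best_frac_bits(x, total_bits=8):
          """Fractional width with the smallest mean reconstruction error."""
          err = {f: np.abs(to_fixed(x, f).astype(float) / 2 ** f - x).mean()
                 for f in range(total_bits)}
          return min(err, key=err.get)

      w = np.random.default_rng(0).normal(scale=0.2, size=1000)
      print("optimal fractional bits =", best_frac_bits(w))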

  • Convergence and numerical stability of action-dependent heuristic dynamic programming algorithms based on RLS learning for online DLQR optimal control   Order a copy of this article
    by Guilherme Bonfim De Sousa, Patrícia Helena Moraes Rêgo 
    Abstract: The main issues of this paper are the development and numerical stability analysis of a novel approximate dynamic programming (ADP) algorithm, based on RLS learning, for approximating the optimal control solution online in real time. Approximate dynamic programming makes it possible to use dynamic programming techniques in real time, but it carries considerable mathematical complexity owing to the size of the algorithm's internal matrices and the need to invert some of them. Thus, focusing on improving the numerical stability and computational cost of ADP algorithms, more specifically in the context of action-dependent heuristic dynamic programming and optimal control, UDUᵀ-type unitary transformations are integrated into actor-critic architectures, producing algorithms with better specifications for implementation in real-world optimal control systems. The control and stabilisation of an inverted pendulum on a motor-driven cart is established as a study platform to evaluate the convergence and numerical stability of the estimated parameters of the proposed algorithm.
    Keywords: action-dependent heuristic dynamic programming; discrete linear quadratic regulator; recursive least-squares; numerical stability.
    DOI: 10.1504/IJCSE.2019.10020394
     
  • A graph representation for search-based approaches to graph layout problems   Order a copy of this article
    by Behrooz Koohestani 
    Abstract: A graph consists of a finite set of vertices and edges. Graphs are used to represent a significant number of real-life applications. For example, in computer science, graphs are employed for the representation of networks of communication, organisation of data, flow of computation, computational devices, etc. Several data structures have been proposed for representing graphs, among which the adjacency matrix, adjacency list and edge list are the most important and widely used. The choice of a graph representation is mainly situation-specific and depends on the type of operations required to be performed on a given graph as well as the ease of use. In this research, a specialised graph representation is proposed, designed for use when coping with graph-based optimisation problems (e.g., graph layout problems) through heuristic search methods, with the aim of speeding up the search. The results of numerical experiments show that, for the purpose of this study, the proposed approach performs extremely well compared with well-known graph representation approaches.
    Keywords: graph representation; combinatorial optimisation; graph layout problems; search methods.
    DOI: 10.1504/IJCSE.2019.10020775
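
    The motivation above can be made concrete with a small Python sketch: in search-based layout heuristics the hot loop is evaluating the cost of a candidate arrangement, and a flat edge array makes that one vectorised pass. The minimum-linear-arrangement cost used below is one representative layout objective, not necessarily the paper's.

      import numpy as np

      edges = np.array([(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)])  # toy graph

      def mla_cost(layout):
          """Sum of |pos(u) - pos(v)| over edges; layout[i] = vertex at slot i."""
          pos = np.empty(len(layout), dtype=int)
          pos[layout] = np.arange(len(layout))
          return int(np.abs(pos[edges[:, 0]] - pos[edges[:, 1]]).sum())

      print(mla_cost(np.array([0, 1, 2, 3])))   # evaluate one candidate layout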
     
  • Energy-efficiency-aware flow-based access control in HetNets with renewable energy supply   Order a copy of this article
    by Li Li, Yifei Wei, Mei Song, Xiaojun Wang 
    Abstract: Software defined networking (SDN) is revolutionising the telecommunication networking industry by providing flexible and efficient management. This paper proposes an energy-efficiency-aware flow-based management framework for relay-assisted heterogeneous networks (HetNets), where the relay nodes are powered by renewable energy. Owing to the dynamic nature of user behaviour and renewable energy availability, the flow-based management layer should enhance not only the instantaneous energy efficiency but also the long-term energy efficiency, while satisfying the transmission rate demand of each user. We first formulate the energy efficiency problem in HetNets as an optimisation problem for instantaneous energy efficiency and renewable energy allocation, and propose a heuristic algorithm to solve it. Based on the proposed algorithm, we then design a dynamic flow-table configuration policy (DFTCP), which can be integrated as an application on top of an SDN controller to enhance long-term energy efficiency. Simulation results show that the proposed policy achieves higher energy efficiency than the current distributed relay strategy, which chooses the nearest or strongest-signal node to access, and obtains better performance for the overall relay network when user density and demand change.
    Keywords: software defined networking; energy efficiency; renewable energy.

  • MOEA for discovering Pareto-optimal process models: an experimental comparison   Order a copy of this article
    by Sonia Kundu, Manoj Agarwal, Shikha Gupta, Naveen Kumar 
    Abstract: Process mining aims at discovering the workflow of a process from the event logs that provide insights into organisational processes for improving these processes and their support systems. Process mining abstracts the complex real-life datasets into a well-structured form known as a process model. In an ideal scenario, a process mining algorithm should produce a model that is simple, precise, and general, and fits the available logs. A conventional process mining algorithm typically generates a single process model that may not describe the recorded behaviour effectively. Multi-objective evolutionary algorithms (MOEA) for process mining optimise two or more objectives to generate several competing process models from the event logs. Subsequently, a user can choose a model based on his/her preference. In this paper, we have experimentally compared the popular second-generation MOEA algorithms for process mining.
    Keywords: process discovery; evolutionary algorithms; Pareto front; multi-objective optimisation; process model quality dimensions; PAES; SPEA-II; NSGA-II.
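
    At the heart of every second-generation MOEA compared above is the Pareto dominance test; a minimal Python sketch, assuming four model-quality dimensions (e.g. fitness, simplicity, precision, generalisation) all treated as maximisation objectives:

      def dominates(a, b):
          """True if quality vector a Pareto-dominates quality vector b."""
          return (all(x >= y for x, y in zip(a, b))
                  and any(x > y for x, y in zip(a, b)))

      models = [(0.9, 0.4, 0.7, 0.6), (0.8, 0.4, 0.6, 0.5), (0.7, 0.9, 0.5, 0.6)]
      front = [m for m in models if not any(dominates(o, m) for o in models)]
      print(front)   # the non-dominated process models offered to the user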

  • ElBench: a microbenchmark to evaluate virtual machine and container strategies on executing elastic applications in the cloud   Order a copy of this article
    by Rodrigo Da Rosa Righi, Cristiano Costa, Adenauer Yamin, Vinicius Facco, Douglas Brauner 
    Abstract: One of the main features of cloud computing that differentiates it from clusters and grids is the elasticity of resources, being mainly implemented through virtual machines (VMs) that deliver an easy mechanism for replication and isolation. In particular, in the high performance computing (HPC) panorama, the use of VMs to run parallel applications can impose prohibitive overheads, either in terms of time penalties related to scaling out operations or in terms of large delays on accessing hypervisor-based virtualised hardware. In addition to VMs, today we perceive the emergence of the container technology to implement the aforementioned facilities; however, our investigation did not discover an initiative that describes a comparison formalism to evaluate VM and container techniques to run HPC elastic applications in the cloud. This article explores this gap, presenting a microbenchmark named ElBench, which focuses on providing a framework to compare VM and container on executing elastic parallel applications in the cloud. Using a starting infrastructure and a predefined number of maximum and minimum resources, ElBench provides runtime traces along the execution, in addition to the conclusion time, resource use and cost (time
    Keywords: benchmark; cloud elasticity; HPC; container; virtual machine; virtualisation.
    DOI: 10.1504/IJCSE.2019.10021443
     
  • Efficient web service selection with uncertain QoS   Order a copy of this article
    by Fethallah Hadjila, Amine Belabed, Mohammed Merzoug 
    Abstract: QoS-based service selection in a highly dynamic environment is becoming a challenging issue. In practice, the QoS fluctuations of a service composition entail major difficulties in measuring the degree to which the user requirements are satisfied. In addition, the search space of feasible compositions (i.e., the solutions that preserve the requirements) is generally large and cannot be explored in a limited time; therefore, we need an approach that not only copes with the presence of uncertainty but also ensures a pertinent search with a reduced computational cost. To tackle this problem, we propose a constraint programming framework and a set of ranking heuristics that both reduce the search space and ensure a set of reliable compositions. The conducted experiments show that the ranking heuristics, termed 'fuzzy dominance' and 'probabilistic skyline', outperform almost all existing state-of-the-art methods.
    Keywords: web service selection; QoS uncertainty; global QoS conformance; constraint programming.

  • PCR: caching replacement algorithm in proxy server   Order a copy of this article
    by Tong Liu, Xiaoyu Peng, Jiahao Liang, Jianhua Lu, Baili Zhang 
    Abstract: The efficiency of caching is a key factor affecting the performance of a Content Delivery Network (CDN). Current CDN caching aims mainly at a higher hit ratio, while the validation and freshness of outdated pages have not received due consideration in replacement models. In this paper, a new, improved cache profit model is clearly defined, in which the freshness factors of web pages are taken into adequate account. Based on the profit model, a new replacement algorithm, PCR (Proxy Cache Replacement), is proposed, and it can be proved optimal under the rational hypothesis. Finally, a series of comparative experiments verified the efficiency of PCR in web caching replacement.
    Keywords: cache profit; replacement mechanism; proxy server.
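
    The paper's exact profit formula is not given in the abstract; the Python sketch below shows one plausible freshness-aware profit score in the same spirit, where the page with the lowest profit is evicted first. The weighting of hits, fetch cost, size and freshness is an assumption for illustration.

      import time

      def profit(page, now=None):
          now = now or time.time()
          age = now - page["fetched_at"]
          freshness = max(0.0, 1.0 - age / page["ttl"])  # 0 once the page expires
          # Classic cost/size terms weighted by hit frequency and freshness.
          return page["hits"] * page["fetch_cost"] * freshness / page["size"]

      cache = [
          {"url": "/a", "hits": 40, "fetch_cost": 2.0, "size": 10,
           "ttl": 600, "fetched_at": time.time() - 30},
          {"url": "/b", "hits": 90, "fetch_cost": 1.5, "size": 8,
           "ttl": 60, "fetched_at": time.time() - 59},   # almost stale
      ]
      victim = min(cache, key=profit)
      print("evict:", victim["url"])  # stale pages lose profit despite many hits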

  • Prediction of gold-bearing localised occurrences from limited exploration data   Order a copy of this article
    by Igor Grigoryev, Adil Bagirov, Michael Tuck 
    Abstract: Inaccurate drill-core assay interpretation in the exploration stage presents challenges to the long-term profit of gold mining operations. Predicting the gold distribution within a deposit as precisely as possible is one of the most important aspects of the methodologies employed to avoid problems associated with financial expectations. Predicting the variability of gold using a very limited number of drill-core samples is a very challenging problem, often intractable with traditional statistical tools where, with less than complete spatial information, certain assumptions are made about gold distribution and mineralisation. The decision-support predictive modelling methodology presented in this paper, based on unsupervised machine learning, avoids some of the restrictive limitations of traditional methods. It identifies promising exploration targets missed during exploration and recovers hidden spatial and physical characteristics of the explored deposit using information taken directly from the drill-hole database.
    Keywords: unsupervised machine learning; mathematical programming; resource definition; prediction; clusterwise linear regression.

  • Mutual-inclusive learning-based multi-swarm PSO algorithm for image segmentation using an innovative objective function   Order a copy of this article
    by Rupak Chakraborty, Rama Sushil, Madan Garg 
    Abstract: This paper presents a novel image segmentation algorithm formed by the Normalised Index Value (Niv) and Probability (Pr) of pixel intensities. To reduce the computational complexity, a mutual-inclusive learning-based optimisation strategy, named Mutual-Inclusive Multi-swarm Particle Swarm Optimisation (MIMPSO), is also proposed. In mutual learning, a high-dimensional Particle Swarm Optimisation (PSO) problem is divided into several one-dimensional problems to get rid of the high dimensionality, whereas premature convergence is removed by the inclusive-learning approach. The proposed Niv- and Pr-based technique with the MIMPSO algorithm is applied to the Berkeley Dataset (BSDS300) images and produces better optimal thresholds at a faster convergence rate with higher functional values than the considered optimisation techniques, namely PSO, the Genetic Algorithm (GA) and Artificial Bee Colony (ABC). The overall performance of the proposed algorithm in terms of fidelity parameters is compared with the other stated global optimisers.
    Keywords: multilevel thresholding; normalised index value; probability; multi-swarm PSO.

  • VFS_CS: a light-weight and extensible virtual file system middleware for cloud storage system   Order a copy of this article
    by Zeng Saifeng 
    Abstract: In cloud environments, data-intensive applications have been widely deployed to solve non-trivial tasks, while cloud-based storage systems usually fail to provide desirable performance and efficiency when running those data-intensive applications. To address the problems of storage capacity and performance when executing data-intensive applications, we design and implement a light-weight distributed file system middleware, namely Virtual File System for Cloud Storage, which allows other storage-level services to be easily incorporated into an integrated framework in a plug-in manner. In the proposed middleware, we implement three effective mechanisms: disk caching, file striping, and metadata management. The implementation of the proposed middleware has been deployed on a realistic cloud platform, and its performance has been thoroughly investigated under various workloads. Experimental results show that it can significantly improve I/O performance compared with existing approaches. In addition, it exhibits better robustness when the cloud system faces intensive workloads.
    Keywords: cloud computing; distributed storage; file system; data layout; disk cache.

  • Performance analysis of non-linear activation function in convolution neural network for image classification   Order a copy of this article
    by Edna Too, Li Yujian, Pius Kwao Gadosey, Sam Njuki, Firdaous Essaf 
    Abstract: Deep learning architectures that are exceptionally deep have proved to be incredibly powerful models for image processing, classification and general computer vision. As architectures become deeper, they introduce challenges and difficulties in the training process, such as overfitting, computational cost, exploding/vanishing gradients and degradation. A state-of-the-art densely connected architecture, DenseNets, has achieved outstanding results for image classification, yet it is still computationally costly to train. Several approaches have been recommended to deal with deeper network issues, including nonlinear activation functions. The choice of activation function is an important aspect of training deep learning networks because it has a considerable impact on the training and performance of a network model. Therefore, an empirical analysis of some of the nonlinear activation functions used in deep learning is performed for image classification and identification. The activation functions evaluated include ReLU, Leaky ReLU, ELU, SELU and an ensemble of SELU and ELU. The publicly available datasets Cifar-10, SVHN, and PlantVillage are used for evaluation. The experimental results show that SELU tends to be more accurate and parameter-efficient, and is also fairly fast compared with ReLU and Leaky ReLU. It achieves testing accuracies of 99.5%, 93.7% and 83.05% on PlantVillage, SVHN, and Cifar-10, respectively. Fast, accurate and parameter-efficient training is desirable for DenseNet models.
    Keywords: deep learning; convolutional neural network; activation functions; nonlinear activation functions; image classification.
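
    For reference, the evaluated activations have the following standard definitions (the SELU constants are the self-normalising values from Klambauer et al., 2017); a compact numpy rendering:

      import numpy as np

      def relu(x):
          return np.maximum(0.0, x)

      def leaky_relu(x, a=0.01):
          return np.where(x > 0, x, a * x)

      def elu(x, a=1.0):
          return np.where(x > 0, x, a * (np.exp(x) - 1))

      SELU_ALPHA, SELU_LAMBDA = 1.6732632423543772, 1.0507009873554805

      def selu(x):
          return SELU_LAMBDA * elu(x, SELU_ALPHA)

      print(selu(np.linspace(-3.0, 3.0, 7)))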

  • User-content categorisation model, a generic model that combines text mining and semantic models   Order a copy of this article
    by Randa Benkhelifa, Fatima Zohra Laallam 
    Abstract: Social networking websites such as Facebook and Twitter are growing not only in number of users but also in terms of user-generated content. These data represent a valuable source of information for several applications that require the meaning of the content together with the personal data of its owner. However, the current structure of social networks does not allow the hidden information sought by these applications to be extracted in a fast and straightforward way. Major efforts have emerged from the semantic web community to address this problem by trying to represent the user as accurately as possible; however, these semantic models are unable to give meaning to the user-generated content. For this, more mining and sense-making need to be done on the content to enrich the user profile. In this paper, we introduce a generic ontology called the UCC model (User Content Categorisation model), which combines two disciplines: text mining and semantic models. The proposed model incorporates a text mining approach into a semantic model to enrich the user profile with information on the classification of the user's posts, in an attempt to: (1) group online content into topics under a top-down approach, (2) infer and model the temporal dynamic interests of millions of users, (3) group users with similar interests and preferences, and (4) provide a mechanism for querying the system data. We evaluate the UCC model by building a social application that uses a large Facebook dataset. The UCC model can be used as a foundation structure in several applications, such as recommender systems.
    Keywords: semantic models; ontology; text mining; machine learning; user interests; users categorisation; text categorisation; profiling; ontology learning.

  • Fine-tuning of pre-trained convolutional neural networks for diabetic retinopathy screening: a clinical study   Order a copy of this article
    by Saboora Mohammadian Roshan, Ali Karsaz, Amir Hossein Vejdani, Yaser Mohammadian Roshan 
    Abstract: Diabetic retinopathy is a serious complication of diabetes, and if not controlled, may cause blindness. Automated screening of diabetic retinopathy helps physicians to diagnose and control the disease in early stages. In this paper, two case studies are proposed, each on a different dataset. Firstly, automatic screening of diabetic retinopathy using pre-trained convolutional neural networks was employed on the Kaggle dataset. The reason for using pre-trained networks is to save time and resources during training compared with fully training a convolutional neural network. The proposed networks were fine-tuned for the pre-processed dataset, and the selectable parameters of the fine-tuning approach were optimised. At the end, the performance of the fine-tuned network was evaluated using a clinical dataset comprising 101 images. The clinical dataset is completely independent of the fine-tuning dataset and is taken by a different device with different image quality and size.
    Keywords: deep learning; convolutional neural network; diabetic retinopathy; inception model; clinical study.
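
    A hedged PyTorch sketch of the fine-tuning recipe described above: load an ImageNet-pretrained backbone, freeze its feature extractor and retrain a small classification head. ResNet-18 and the five severity grades stand in for the paper's actual networks and label scheme.

      import torch
      import torch.nn as nn
      import torchvision

      model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
      for p in model.parameters():
          p.requires_grad = False          # keep the pre-trained features
      # Replace the head; 5 classes as in common DR severity gradings.
      model.fc = nn.Linear(model.fc.in_features, 5)
      # Only the new head's parameters are updated during fine-tuning.
      optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)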

  • A deep neural architecture for sentence semantic matching   Order a copy of this article
    by Xu Zhang, Wenpeng Lu, Fangfang Li, Ruoyu Zhang, Jinyong Cheng 
    Abstract: Sentence semantic matching (SSM) is a fundamental research task in natural language processing. Most existing SSM methods take advantage of sentence representation learning to generate a single- or multi-granularity semantic representation for sentence matching. However, sentence interactions and the loss function, which are two key factors for SSM, have still not been fully considered. Accordingly, we propose a deep neural network architecture for the SSM task with a sentence interactive matching layer and an optimised loss function. Given two input sentences, our model first encodes them to embeddings with an ordinary long short-term memory (LSTM) encoder. Then, the encoded embeddings are handled by an attention layer to find the key and important words in each sentence. Next, sentence interactions are captured with a matching layer to output a matching vector. Finally, based on the matching vector, a fully connected multi-layer perceptron outputs the similarity score. The model also distinguishes equivocal training instances with an improved, optimised loss function. We systematically evaluate our model on a public Chinese semantic matching corpus, the BQ corpus. The experimental results demonstrate that our model outperforms the state-of-the-art methods, i.e., BiMPM and DIIN.
    Keywords: sentence matching; sentence interaction; loss function.

  • Reversibly hiding data using dual images scheme based on EMD data hiding method   Order a copy of this article
    by Yu Chen, Jiangyi Lin, Chin-Chen Chang, Yu-Chen Hu 
    Abstract: This paper presents a novel grayscale image reversible data hiding scheme based on the exploiting modification direction (EMD) method. In this scheme, two 5-ary secret numbers are embedded into each pixel pair in the cover image according to the EMD method to generate two pairs of stego pixels. Two meaningful shadow images are obtained by shifting the generated corresponding pixel pairs, and the original image and the secret data can be accurately recovered when the two shadow images are processed together. Experimental results show that the proposed scheme performs well in terms of shadow image quality and embedding ratio.
    Keywords: reversible data hiding; secret image sharing; exploiting modification direction.
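
    The underlying EMD step (with n = 2 pixels, hence base 2n + 1 = 5) can be sketched directly from Zhang and Wang's extraction function: one 5-ary digit is carried by a pixel pair, and embedding changes at most one pixel by one grey level. The dual-image shifting of the proposed scheme is not reproduced here.

      def emd_extract(g1, g2):
          """Extraction function f(g1, g2) = (g1 + 2 * g2) mod 5."""
          return (g1 + 2 * g2) % 5

      def emd_embed(g1, g2, digit):
          """Embed one base-5 digit by changing at most one pixel by +/-1."""
          s = (digit - emd_extract(g1, g2)) % 5
          if s == 1:
              g1 += 1
          elif s == 2:
              g2 += 1
          elif s == 3:
              g2 -= 1    # shifts f by -2, i.e. +3 mod 5
          elif s == 4:
              g1 -= 1    # shifts f by -1, i.e. +4 mod 5
          return g1, g2

      pair = emd_embed(100, 120, 4)
      assert emd_extract(*pair) == 4
      assert abs(pair[0] - 100) + abs(pair[1] - 120) <= 1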

  • The internet of things for healthcare: optimising e-health system availability in fog and cloud computing   Order a copy of this article
    by Guto Leoni Santos, Demis Gomes, Judith Kelner, Djamel Sadok, Francisco Airton Silva, Patricia Takako Endo, Theo Lynn 
    Abstract: The integration of fog and cloud computing has enabled a multitude of Internet of Things (IoT) applications through greater scalability, availability and connectivity. E-health systems can be used to monitor and assist people in real time, offering a range of multimedia-based health services, at the same time reducing the system cost since cheaper sensors and devices can be used to compose it. However, any downtime, mainly in the case of critical health services, can result in patient health problems and in the worst case, loss of life. In this paper, we use an interdisciplinary approach combining stochastic models with optimisation algorithms to analyse how failures impact e-health monitoring system availability. We propose stochastic-based surrogate models to estimate the availability of e-health monitoring systems that rely on edge, fog, and cloud infrastructures. Then, based on these surrogate models, we apply a multi-objective optimisation algorithm, NSGA-II, to improve system availability considering component costs as a constraint. Results suggest that replacing components with more reliable ones is more effective in improving the availability of an e-health monitoring system than adding more redundant components.
    Keywords: availability; cloud computing; edge computing; e-health systems; fog computing; internet of things; optimisation algorithms; stochastic models; surrogate models.

  • Association data release with the randomised response based on Bayesian networks   Order a copy of this article
    by Gaoming Yang, Tao Dong, Xinjin Fang, Shuzhi Su 
    Abstract: Local differential privacy is one of the most effective methods for privacy-preserving data publishing, and its theoretical basis is the randomised response. However, existing models assume that data attributes are independent of each other, which can result in excessive information loss. To solve this issue, we present a local differentially private method for releasing association data. First, to find the relationships between attributes, we construct a Bayesian network for the given dataset with a greedy algorithm based on mutual information. Second, to ensure local differential privacy, we perturb each dependent attribute pair according to its weak or robust association attribute set. Third, to achieve local differential privacy with the noisy marginals, we construct an approximate distribution for the given dataset. Finally, we experimentally evaluate our method on real data, and the extensive results show that it better balances data utility and privacy disclosure.
    Keywords: association data; local differential privacy; Bayesian network; privacy preserving.
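
    The randomised-response primitive behind local differential privacy can be sketched for a single binary attribute: report the true value with probability e^eps / (e^eps + 1), which satisfies eps-LDP, and debias the aggregate on the server. The paper's perturbation of dependent attribute pairs is more involved than this single-attribute sketch.

      import math
      import random

      def randomised_response(value: bool, eps: float) -> bool:
          p_true = math.exp(eps) / (math.exp(eps) + 1)
          return value if random.random() < p_true else not value

      def unbiased_count(reports, eps):
          """Server-side debiasing of the aggregated noisy reports."""
          p = math.exp(eps) / (math.exp(eps) + 1)
          n, ones = len(reports), sum(reports)
          return (ones - n * (1 - p)) / (2 * p - 1)

      random.seed(0)
      truth = [random.random() < 0.3 for _ in range(10000)]
      noisy = [randomised_response(v, eps=1.0) for v in truth]
      print(round(unbiased_count(noisy, eps=1.0)), "vs true", sum(truth))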

  • A method of automatic text summarisation based on long short-term memory   Order a copy of this article
    by Wei Fang, Tianxiao Jiang, Ke Jiang, Yewen Ding, Feihong Zhang, Sheng Jack 
    Abstract: Deep learning is currently developing very fast in the NLP field and has achieved many amazing results in the past few years. Automatic text summarisation means that the abstract of a document is generated automatically by a computer program without changing the document's original intent. There are many application scenarios for automatic summarisation, such as news headline generation, scientific document abstract generation, search result snippet generation, and product review summarisation. In the era of internet big data and information explosion, expressing the main connotation of information in short text would undoubtedly help to alleviate the problem of information overload. In this paper, a model based on a long short-term memory network is presented to automatically analyse and summarise Chinese articles using the seq2seq-with-attention architecture. Finally, the experimental results are presented and evaluated.
    Keywords: deep learning; text summarisation; NLP; RNN; LSTM; seq2seq; attention; jieba; word segmentation; language model.

Special Issue on: High-Performance Information Technologies for Engineering Applications

  • Parallel data processing approaches for effective intensive care units with the internet of things   Order a copy of this article
    by N. Manikandan, S Subha 
    Abstract: Computerisation in health care is increasingly common, and monitoring Intensive Care Units (ICUs) is significant and life-critical. Accurate monitoring in an ICU is essential: failing to take the right decisions at the right time may prove fatal, while a timely decision can save people's lives in various critical situations. In order to increase the accuracy and timeliness of ICU monitoring, two major technologies can be used, namely parallel processing through vectorisation of ICU data and data communication through the Internet of Things (IoT). With our approach, we can improve the efficiency and accuracy of data processing. This paper proposes a parallel decision tree algorithm for ICU data to take faster and more accurate decisions on data selection. The use of parallelised algorithms optimises the process of collecting large sets of patient information. A decision tree algorithm is used for examining and extracting knowledge-based data from large databases. The resulting information is transferred to the concerned medical experts in cases of medical emergency using the IoT. The parallel decision tree algorithm is implemented with threads, and output data are stored in local IoT tables for further processing.
    Keywords: medical data processing; internet of things; ICU data; vectorisation; multicore architecture; parallel data processing.

  • Execution of scientific workflows on IaaS cloud by PBRR algorithm   Order a copy of this article
    by S.A. Sundararaman, T. SubbuLakshmi 
    Abstract: Job scheduling of scientific workflow applications in IaaS clouds is a challenging task. Optimal resource mapping of jobs to virtual machines is calculated considering schedule constraints such as timeline and cost. Determining the required number of virtual machines to execute the jobs is key to finding the optimal schedule makespan with minimal cost. In this paper, the VMPROV algorithm is proposed to find the required virtual machines, and the Priority-Based Round Robin (PBRR) algorithm is proposed for finding a job-to-resource mapping with minimal makespan and cost. The execution times of four real-world scientific application workloads under PBRR are compared with those of the MINMIN, MAXMIN, MCT, and round robin algorithms. The results show that the proposed PBRR algorithm can predict the mapping of tasks to virtual machines better than the other classic algorithms.
    Keywords: cloud job scheduling; virtual machine provisioning; IaaS.
    DOI: 10.1504/IJCSE.2016.10017130
     
  • Development and evaluation of the cloudlet technology within the Raspberry Pi   Order a copy of this article
    by Nawel Kortas, Anis Ben Arbia 
    Abstract: Nowadays, communication devices such as laptops, computers, smartphones and personal media players have extensively increased in popularity thanks to the rich set of cloud services that they allow users to access. This paper focuses on addressing network latency for communication devices through the use of cloudlets. This work also proposes the design of a local datacentre that allows users to connect to their data from any point and through any device by means of the Raspberry Pi. We present performance results for resource utilisation rate, average execution time, latency, throughput and lost packets that demonstrate the advantage of the cloudless application for local and distant connections. Furthermore, we evaluate cloudless by comparing it with similar services and by obtaining simulation results through the CloudSim simulator.
    Keywords: cloudlets; cloud computing; cloudless; Raspberry Pi; datacentre; device communication; file-sharing services.
    DOI: 10.1504/IJCSE.2016.10008320
     
  • Study of runtime performance for Java-multithread PSO on multiCore machines   Order a copy of this article
    by Imed Bennour, Monia Ettouil, Rim Zarrouk, Abderrazak Jemai 
    Abstract: Optimisation meta-heuristics, such as Particle Swarm Optimisation (PSO), require high-performance computing (HPC), and the use of software parallelism and hardware parallelism is mandatory to achieve HPC. Thread-level parallelism is a common software solution for programming on multicore systems. Important aspects of the Java language, such as its portability and architecture neutrality, its multithreading facilities and its distributed nature, make it an interesting language for parallel PSO. However, many factors may impact runtime performance: the coding style, the thread-synchronisation levels, the harmony between the software parallelism injected into the code and the available hardware parallelism, the Java networking APIs, etc. This paper analyses Java runtime performance for multithread PSO on general-purpose multicore machines and networked machines. Synchronous, asynchronous, single-swarm and multi-swarm PSO variants are considered.
    Keywords: high-performance computing; particle swarm optimisation; multicore; multithread; performance; simulation.
    DOI: 10.1504/IJCSE.2016.10015696
     

Special Issue on: Computational Imaging and Multimedia Processing

  • Underwater image segmentation based on fast level set method   Order a copy of this article
    by Yujie Li, Huiliang Xu, Yun Li, Huimin Lu, Seiichi Serikawa 
    Abstract: Image segmentation is a fundamental process in image processing that has found application in many fields, such as neural image analysis and underwater image analysis. In this paper, we propose a novel fast level set method (FLSM)-based underwater image segmentation method that improves on traditional level set methods by avoiding the calculation of the signed distance function (SDF). The proposed method reduces computational cost by removing the need for re-initialisation. We also provide a fast semi-implicit additive operator splitting (AOS) algorithm to further reduce the computational complexity. The experiments show that the proposed FLSM performs well in selecting local or global segmentation regions.
    Keywords: underwater imaging; level set; image segmentation.

  • Pseudo Zernike moments based approach for text detection and localisation from lecture videos   Order a copy of this article
    by Soundes Belkacem, Larbi Guezouli, Samir Zidat 
    Abstract: Text information embedded in videos is an important clue for the retrieval and indexing of images and videos. Scene text presents challenging characteristics mainly related to acquisition circumstances and environmental changes, resulting in low-quality videos. In this paper, we present a scene text detection algorithm based on Pseudo Zernike Moments (PZMs) and stroke features for low-resolution lecture videos. The algorithm mainly consists of three steps: slide detection, text detection and segmentation, and non-text filtering. In lecture videos, the slide region is a key object carrying almost all the important information; hence the slide region has to be extracted and segmented from other scene objects, which are considered background for later processing. Slide region detection and segmentation is done by applying PZMs to RGB frames. Text detection and extraction is performed using PZM segmentation over the V channel of the HSV colour space, and then a stroke feature is used to filter out non-text regions and remove false positives. PZMs are powerful shape descriptors; they present several strong advantages, such as robustness to noise, rotation invariance, and multilevel feature representation. The PZM-based segmentation process consists of two steps: feature extraction and clustering. First, a video frame is partitioned into equal-size windows and the coordinates of each window are normalised to a polar system; PZMs are then computed over the normalised coordinates as region descriptors. Finally, a clustering step using K-means is performed, in which each window is labelled as a text/non-text region. The algorithm is shown to be robust to illumination, low resolution and uneven luminance in compressed videos. The effectiveness of the PZM description leads to very few false positives compared with other approaches. Moreover, the resulting images can be used directly by OCR engines with no further processing.
    Keywords: text localisation; text detection; pseudo Zernike moments; slide region detection.
    DOI: 10.1504/IJCSE.2016.10011674
     

Special Issue on: ICNC-FSKD'15 Machine Learning, Data Mining and Knowledge Management

  • Genetic or non-genetic prognostic factors for colon cancer classification   Order a copy of this article
    by Meng Pan, Jie Zhang 
    Abstract: Many studies have addressed patient classification using prognostic factors or gene expression profiles (GEPs). This study tried to identify whether a prognostic factor was genetic by using GEPs. If a significant GEP difference is observed between the two statuses of a factor, the factor might be genetic; if the GEP difference is largely less significant than the survival difference, the survival difference might not be due to the genes, and the factor might be non-genetic or partly non-genetic. A case study was carried out using the public dataset GSE40967, which contains GEP data for 566 colon cancer patients, tumour-node-metastasis (TNM) staging information, etc. The prognostic factors T, N, M, and TNM were observed to be non-genetic or partly non-genetic, and should therefore complement future gene expression classifiers.
    Keywords: gene expression profiles; prognostic factor; colon cancer; classification; survival.

  • A medical training system for the operation of heart-lung machine   Order a copy of this article
    by Ren Kanehira 
    Abstract: There has been a strong tendency to use Information Communication Technology (ICT) to construct various education/training systems that help students and other learners master necessary skills more easily. Among such systems, those offering operational practice are particularly welcome, in addition to conventional e-learning systems aimed mainly at textbook-like knowledge. In this study, we propose a medical training system for the operation of a heart-lung machine. Two training modules, one for basic operation and another for troubleshooting, are constructed in the system, and their effects are tested.
    Keywords: computer-aided training; skill science; medical training; heart-lung machine; operation supporting; e-learning; clinic engineer.

Special Issue on: ICICS 2016 Next Generation Information and Communication Systems

  • Is a picture worth a thousand words? A computational investigation of the modality effect   Order a copy of this article
    by Naser Al Madi, Javed Khan 
    Abstract: The modality effect is a term that refers to differences in learning performance in relation to the mode of presentation. It is an interesting phenomenon that impacts education, online learning, and marketing, among many other areas of life. In this study, we use Electroencephalography (EEG Alpha, Beta, and Theta) and computational modelling of comprehension to study the modality effect in text and multimedia. First, we provide a framework for evaluating learning performance, working memory, and emotions during learning. Second, we apply these tools to investigate the modality effect computationally, focusing on text in contrast to multimedia. This study is based on a dataset that we collected through a human experiment involving 16 participants. Our results are important for future learning systems that incorporate learning performance, working memory, and emotions in a continuous feedback system that measures and optimises learning during, and not after, learning.
    Keywords: modality effect; comprehension; electroencephalography; learning; education; text; multimedia; semantic networks; recall; emotions.

  • Automated labelling and severity prediction of software bug reports   Order a copy of this article
    by Ahmed Otoom, Doaa Al-Shdaifat, Maen Hammad, Emad Abdallah, Ashraf Aljammal 
    Abstract: We target two research problems that are related to bug tracking systems: bug severity prediction and automated bug labelling. Our main aim is to develop an intelligent classifier that is capable of predicting the severity and label (type) of a newly submitted bug report through a bug tracking system. For this purpose, we build two datasets that are based on 350 bug reports from the open-source community (Eclipse, Mozilla, and Gnome). These datasets are characterised by various textual features that are extracted from the summary and description of bug reports of the aforementioned projects. Based on this information, we train a variety of discriminative models that can be used for automated labelling and severity prediction of a newly submitted bug report. A boosting algorithm is also implemented for an enhanced performance. The classification performance is measured using accuracy and a set of other measures including: precision, recall, F-measure and the area under the Receiver Operating Characteristic (ROC) curve. For automated labelling, the accuracy reaches around 91% with the AdaBoost algorithm and cross-validation test. On the other hand, for severity prediction, our results show that the proposed feature set has proved successful with a classification performance accuracy of around 67% with the AdaBoost algorithm and cross-validation test. Experimental results with the variation of training set size are also presented. Overall, the results are encouraging and show the effectiveness of the proposed feature sets.
    Keywords: severity prediction; software bugs; machine learning; bug labelling.
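
    A hedged scikit-learn sketch of the pipeline the abstract evaluates: textual features from report summaries/descriptions feed a boosted classifier. Plain TF-IDF stands in for the paper's hand-crafted feature set, and the toy data are illustrative.

      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      reports = ["crash on startup null pointer", "typo in settings dialog",
                 "data loss when saving document", "button misaligned on resize"]
      severity = ["severe", "minor", "severe", "minor"]

      clf = make_pipeline(TfidfVectorizer(), AdaBoostClassifier(n_estimators=50))
      print(cross_val_score(clf, reports, severity, cv=2).mean())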

Special Issue on: IEEE ISPA-16 Parallel and Distributed Computing and Applications

  • Method of key node identification in command and control networks based on level flow betweenness   Order a copy of this article
    by Wang Yunming, Pan Cheng-Sheng, Chen Bo, Zhang Duo-Ping 
    Abstract: Key node identification in command and control (C2) networks is an appealing problem that has attracted increasing attention. Owing to the particular nature of C2 networks, traditional algorithms for key node identification suffer from high complexity and unsatisfactory adaptability. A new method of key node identification based on level flow betweenness (LFB) is proposed, which is suitable for C2 networks. The method first establishes the definition of LFB by analysing the characteristics of a C2 network. An algorithm for key node identification based on LFB is then designed, and its complexity is derived theoretically. Finally, a number of numerical simulation experiments are carried out, and the results demonstrate that this method reduces algorithmic complexity, improves identification accuracy and enhances adaptability for C2 networks.
    Keywords: command and control network; complex network; key node identification; level flow betweenness.

  • CODM: an outlier detection method for medical insurance claims fraud   Order a copy of this article
    by Yongchang Gao, Haowen Guan, Bin Gong 
    Abstract: Data is high-dimensional in medical insurance claims management, and these datasets contain both dense and sparse regions, so traditional outlier detection methods are not suitable. In this paper, we propose a novel method to detect outliers corresponding to abnormal medical insurance claims. Our method consists of three core steps: feature bagging to reduce the dimensionality of the data; calculating the core of each object's k-nearest neighbours; and computing the outlier score of each object by measuring how much the core moves as k is sequentially increased. Experimental results demonstrate that our method is promising for tackling this problem.
    Keywords: data mining; outlier detection; medical insurance claims fraud.
    DOI: 10.1504/IJCSE.2017.10008174
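
    The core-movement idea can be sketched in a few lines of Python: for each object, track the centroid ("core") of its k nearest neighbours as k grows, and accumulate how far the core moves; outliers accumulate large movement. Feature bagging and the paper's exact scoring are omitted, and the data are synthetic.

      import numpy as np

      def core_movement_score(X, ks=range(2, 8)):
          d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          order = np.argsort(d, axis=1)[:, 1:]       # neighbours, self excluded
          scores = np.zeros(len(X))
          for i in range(len(X)):
              cores = [X[order[i, :k]].mean(axis=0) for k in ks]
              scores[i] = sum(np.linalg.norm(a - b)
                              for a, b in zip(cores, cores[1:]))
          return scores

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(size=(30, 2)), [[6.0, 6.0]]])   # one outlier
      print(np.argmax(core_movement_score(X)))   # expected: index 30, the outlier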
     

Special Issue on: Advanced Computer Science and Information Technology

  • MigrateSDN: efficient approach to integrate OpenFlow networks with STP-enabled networks   Order a copy of this article
    by Po-Wen Chi, Ming-Hung Wang, Jing-Wei Guo, Chin-Laung Lei 
    Abstract: Software defined networking (SDN) is a paradigm-shifting technology in networking. However, in current network infrastructures, removing existing networks to build pure SDN networks or replacing all operating network devices with SDN-enabled devices is impractical because of the time and cost involved. Therefore, SDN migration, which implies the use of co-existing techniques and a gradual move to SDN, is an important issue. In this paper, we focus on how SDN networks can be integrated with legacy networks that use the spanning tree protocol (STP). Our approach demonstrates three advantages. First, it does not require an SDN controller to apply the STP exchange on all switches, but only on boundary switches. Second, it enables legacy networks to concurrently use multiple links, all but one of which would otherwise be blocked to avoid loops. Third, it decreases the number of bridge protocol data unit (BPDU) frames used in STP construction and topology change.
    Keywords: software defined networking; spanning tree protocol; network migration.

Special Issue on: PDCAT 2016 Parallel and Distributed Algorithms and Applications

  • Data grouping scheme for multi-request retrieval in MIMO wireless communication   Order a copy of this article
    by Ping He, Zheng Huo 
    Abstract: The multi-antenna data retrieval problem refers to finding an access pattern (to retrieve multiple requests using multiple antennae, where each request has multiple data items) such that the access latency of the requests retrieved by each antenna is minimised and the total access latency across all antennae remains balanced. It is therefore very important that these requests are divided into multiple groups, one per antenna, in MIMO wireless communication; this is called the data grouping problem. Few studies have focused on data grouping schemes applied to the data retrieval problem when clients equipped with multiple antennae send multiple requests. Therefore, this paper proposes two data grouping algorithms (HOG and HEG), applied in data retrieval, such that the requests can be reasonably classified into multiple groups. Experiments show that the proposed schemes are more efficient than existing schemes.
    Keywords: mobile computing; data broadcast; indexing; data scheduling; data retrieval; data grouping.

  • Improved user-based collaborative filtering algorithm with topic model and time tag   Order a copy of this article
    by Liu Na, Lu Ying, Tang Xiao-jun, Li Ming-xia, Chunli Wang 
    Abstract: Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users is mostly calculated based on ratings, without considering explicit properties of the users involved. Since users' tags directly reflect their preferences to some extent, we propose a collaborative filtering algorithm using a topic model, called UITLDA, in this paper. UITLDA consists of two parts: the first is the active user with its items, and the second is the active user with its tags. We form a topic model from each of these two parts, and the two topics constrain and integrate into a new topic distribution. This model not only increases user similarity, but also reduces the density of the matrix. In the prediction computation, we also introduce a time-decay function to increase precision. The experiments show that the proposed algorithm achieves better performance than the baseline on MovieLens datasets.
    Keywords: collaborative filtering; LDA; topic model; time tag.
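
    A minimal sketch of the time-decay idea in the prediction step (the paper's exact decay function is not given here; an exponential decay with a configurable half-life is assumed), applied to a user-based prediction:

        import math

        # Older ratings count less; half_life controls how fast (in days).
        def time_decay(t_now, t_rating, half_life=30.0):
            return 0.5 ** ((t_now - t_rating) / half_life)

        # Weighted user-based prediction for user u on item i; each
        # neighbour v contributes (similarity * decay) * mean-centred rating.
        def predict(u_mean, neighbours, t_now):
            num = den = 0.0
            for sim, r_vi, v_mean, t_r in neighbours:
                w = sim * time_decay(t_now, t_r)
                num += w * (r_vi - v_mean)
                den += abs(w)
            return u_mean if den == 0 else u_mean + num / den

        print(predict(3.5, [(0.8, 4.0, 3.0, 10), (0.4, 2.0, 2.5, 90)], 100))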

  • Improving runtime performance and energy consumption through balanced data locality with NUMA-BTLP and NUMA-BTDM static algorithms for thread classification and thread type-aware mapping   Order a copy of this article
    by Iulia Știrb 
    Abstract: Extending compilers such as LLVM with NUMA-aware optimisations significantly improves runtime performance and energy consumption on NUMA systems. The paper presents the NUMA-BTDM algorithm, a compile-time, thread-type-dependent mapping algorithm that performs the mapping uniformly based on the type of each thread, as given by the NUMA-BTLP algorithm following a static analysis of the code. First, the compiler inserts architecture-dependent code into the program that detects at runtime the characteristics of the underlying architecture for Intel processors; the mapping is then performed at runtime (using specific function calls from the PThreads library) according to these characteristics, following a compile-time mapping analysis that gives the CPU affinity of each thread (see the sketch below). NUMA-BTDM allows the application to customise, control and optimise the thread mapping, and achieves balanced data locality on NUMA systems for parallel C code that combines PThreads-based task parallelism with OpenMP-based loop parallelism.
    Keywords: thread mapping; task parallelism; loop parallelism; compiler optimizations; NUMA systems; performance improvements; energy consumption improvements.
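
    A minimal sketch of type-aware CPU affinity (the paper sets affinity via PThreads calls in C; this Linux-only Python sketch uses os.sched_setaffinity instead, and the thread-type names and CPU sets are hypothetical):

        import os, threading

        # Hypothetical thread-type -> CPU-set table (one set per NUMA node).
        AFFINITY = {"compute": {0, 1, 2, 3}, "communicate": {4, 5, 6, 7}}

        def worker(thread_type):
            # Pin the calling thread to the CPU set chosen for its type;
            # pid 0 means "the calling thread" on Linux.
            os.sched_setaffinity(0, AFFINITY[thread_type])
            ...  # the thread's actual work

        t = threading.Thread(target=worker, args=("compute",))
        t.start(); t.join()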

  • Accumulative energy-based seam carving for image resizing   Order a copy of this article
    by Yuqing Lin, Jiawen Lin, Yuzhen Niu, Haifeng Zhang 
    Abstract: With the diversified development of digital devices such as computers, mobile phones and televisions, how to resize an image or video to fit different display screens has become a much-discussed topic. Seam carving usually does well in image resizing; however, it sometimes produces discontinuities in the image content or impairs salient objects. Therefore, we propose an accumulative energy-based seam carving method for image resizing. We distribute the energy of each pixel on a removed seam to its adjacent eight-connected pixels to avoid the extreme concentration of seams (see the sketch below). In addition, we add image saliency and edge information to the energy function to reduce distortion, and we use a parallel computing environment for greater efficiency. Experimental results show that, compared with existing methods, our method avoids discontinuities in the image content and distortions, while better maintaining the shape of salient objects.
    Keywords: image resizing; seam carving; optimal seam; accumulative energy; saliency detection; edge detection.
    DOI: 10.1504/IJCSE.2018.10014036
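
    A minimal sketch of the accumulative-energy step (not the authors' full pipeline), assuming a per-pixel energy map and one vertical seam given as a column index per row: the removed seam's energy is spread over its eight-connected neighbours so later seams avoid the same region.

        import numpy as np

        def distribute_seam_energy(energy, seam_cols):
            h, w = energy.shape
            out = energy.copy()
            for r, c in enumerate(seam_cols):
                share = energy[r, c] / 8.0
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        if dr == dc == 0:
                            continue
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w:
                            out[rr, cc] += share
            return out

        e = np.random.rand(5, 6)
        print(distribute_seam_energy(e, [2, 2, 3, 3, 4]).shape)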
     

Special Issue on: Smart X 2016 Smart Everything

  • An adaptive feature combination method based on ranking order for 3D model retrieval   Order a copy of this article
    by Qiang Chen, Bin Fang, Yinong Chen, Yan Tang 
    Abstract: Directly combining several complementary features can increase retrieval precision for 3D models. In most cases, however, the weights must be set manually and empirically. In this paper, we propose a new scheme for automatically choosing proper weights for different features on each database. The proposed scheme uses the ranking order of the retrieval results, and is therefore invariant to magnitude scaling. We choose the best feature as the standard one, and the relevance values between the standard feature and each other feature serve as the weights for feature combination (see the sketch below). Furthermore, we propose an improved re-ranking algorithm to further improve retrieval performance. Experiments show that the proposed method can automatically choose proper weights for different features, and the results on existing features exceed the benchmark.
    Keywords: 3D retrieval; re-ranking; ranking order; feature combination.
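
    A minimal sketch of rank-order-based weighting (one plausible instantiation; the paper's exact relevance measure is not given here), assuming scipy and small toy rankings: each feature's weight is its rank correlation with the standard feature's ranking, which depends only on order, never on score magnitudes.

        from scipy.stats import kendalltau

        # Retrieval order returned by each feature for the same query.
        standard = ["m1", "m2", "m3", "m4", "m5"]      # best single feature
        features = {"f_a": ["m1", "m3", "m2", "m4", "m5"],
                    "f_b": ["m5", "m4", "m1", "m2", "m3"]}

        pos = {m: i for i, m in enumerate(standard)}
        weights = {}
        for name, ranking in features.items():
            tau, _ = kendalltau([pos[m] for m in ranking],
                                list(range(len(ranking))))
            weights[name] = max(tau, 0.0)    # clamp negative relevance
        print(weights)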

Special Issue on: Cyberspace Security Protection, Risk Assessment and Management

  • CSCAC: one constant-size CPABE access control scheme in trusted execution environment   Order a copy of this article
    by Yongkai Fan, Shengle Liu, Gang Tan, Xiaodong Lin 
    Abstract: With the popularity of versatile mobile devices, there have been increasing concerns about their security, and how to protect sensitive data is an urgent issue. CPABE (ciphertext-policy attribute-based encryption) is a practical method for encrypting data that can use a user's attributes to encrypt sensitive data. In this paper, we propose a CSCAC (constant-size CPABE access control) model that uses the trusted execution environment to manage the dynamic key generated from attributes. The original data is encrypted with a symmetric storage key, and the storage key is then encrypted under an AND-gate access policy; only a user who possesses a set of attributes satisfying the access policy can recover the storage key. The security analysis shows that the design of this access control scheme reduces the burden and risk in the case of a single authority. Our experimental results indicate that the proposed scheme is more secure and effective than the traditional access scheme.
    Keywords: constant-size ciphertext; access control; trusted execution environment; attribute-based encryption; security.

  • Recognizing continuous emotions in dialogues based on DISfluencies and non-verbal vocalisations features for a safer network environment   Order a copy of this article
    by Huan Zhao, Xiaoxiao Zhou, Yufeng Xiao 
    Abstract: With the development of networks and social media, audio and video have become more popular ways to communicate. Audio and video can spread information with negative effects, e.g., negative sentiment with suicidal tendencies, or threatening messages that cause panic. To keep the network environment safe, it is therefore necessary to recognise emotion in dialogue. To improve the recognition of continuous emotion in dialogue, we propose to combine DISfluencies and non-verbal vocalisations (DIS-NV) features with a bidirectional long short-term memory (BLSTM) model to predict continuous emotion (see the sketch below). DIS-NV features are effective emotion features, comprising filled-pause, filler, stutter, laughter and breath features. A BLSTM can learn from past information and also use future information. State-of-the-art recognition attains 62% accuracy; our method increases accuracy to 76%.
    Keywords: continuous emotion; BLSTM; dialogue; knowledge-inspired features; safe network environment; DIS-NV; AVEC2012; discretisation; speech emotion recognition; LLD.
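
    A minimal model sketch (the layer sizes are illustrative, not the paper's configuration), assuming TensorFlow/Keras and frame-level 5-dimensional DIS-NV feature vectors, predicting one continuous emotion value per frame:

        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(None, 5)),   # variable-length dialogue
            tf.keras.layers.Bidirectional(
                tf.keras.layers.LSTM(64, return_sequences=True)),
            tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
        ])
        model.compile(optimizer="adam", loss="mse")
        model.summary()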

  • PM-LPDR: a prediction model for lost packets based on data reconstruction on lossy links in sensor networks   Order a copy of this article
    by Zeyu Sun, Guozeng Zhao 
    Abstract: During the data-gathering process in sensor networks, many transmitted data packets can be lost owing to limited node energy and the influence of data redundancy, which seriously undermines transmission reliability. To solve this problem, a prediction model is proposed for lost packets based on data reconstruction on lossy links in sensor networks. In this model, a packet lost on a lossy link is modelled as a random loss during transmission, and the matching type of the lost packet can be further predicted. Retransmission is adopted for data recovery when a random packet loss is predicted, while prediction algorithms based on time sequences are employed for data recovery when it cannot be predicted. Simulation results show that this model can effectively alleviate the influence of lost packets on lossy links. The operation of the whole system can still be guaranteed when the packet loss probability of the network is lower than 15%, while the relative error of data reconstruction remains between 0.17 and 0.22% when the packet loss probability is higher than 22%. The prediction model thus exhibits strong stability and flexibility.
    Keywords: sensor networks; lossy links; matching for lost packets; data redundancy; data reconstruction.

  • An information network security policy learning algorithm based on Sarsa with optimistic initial values   Order a copy of this article
    by Fang Wang, Renjun Feng, Haiyan Chen, Fei Zhu 
    Abstract: With the widespread application of artificial intelligence and automation, more and more devices are monitored by computer systems, and in many cases multiple management control information systems compose a comprehensive information system network. As networks grow larger and their topologies become more sophisticated, fixed-mode control policies, which were designed for small and simple networks, often lack the ability to deal with dynamic environments and to handle security policy tasks. We therefore propose an online network security policy learning algorithm based on Sarsa with optimistic initial values (see the sketch below). The algorithm consists of two parts, one acting as the defence agent and the other as the attacking agent. The defence agent learns and improves the system protection policy by fighting against simulated attacks from the attacking agent, using the Sarsa method to improve its defence policy online from historical experience. The use of optimistic initial values shortens training time.
    Keywords: information network; optimistic initial values; Sarsa; network defence; risk control.
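
    A minimal, generic Sarsa sketch with optimistic initial Q-values (the environment interface is an assumption, and the defence/attack simulation is not shown): unvisited state-action pairs start with a high value, which drives early exploration.

        import random

        def sarsa_optimistic(env, episodes, actions, alpha=0.1,
                             gamma=0.9, eps=0.1, optimistic_q=10.0):
            Q = {}
            q = lambda s, a: Q.get((s, a), optimistic_q)
            def policy(s):
                if random.random() < eps:
                    return random.choice(actions)
                return max(actions, key=lambda a: q(s, a))
            for _ in range(episodes):
                s = env.reset()          # assumed interface
                a = policy(s)
                done = False
                while not done:
                    s2, r, done = env.step(a)
                    a2 = policy(s2)
                    # On-policy TD target: r + gamma * Q(s', a')
                    target = r + (0.0 if done else gamma * q(s2, a2))
                    Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
                    s, a = s2, a2
            return Q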

  • Evaluation of borrower's credit of P2P loan based on adaptive particle swarm optimisation BP neural network   Order a copy of this article
    by Sen Zhang, Yuping Hu, Chunmei Wang 
    Abstract: Personal credit assessment is the main method of reducing the credit risk of P2P online loans. In this paper, an adaptive mutation operator reinitialises particles with a certain probability, and the global search capability of the particle swarm optimisation algorithm is used to optimise the weights and thresholds of a BP neural network; on this basis, an adaptive-mutation particle swarm combined with a BP neural network (AMPSO-BP) is proposed to evaluate borrowers' credit for P2P network loans (see the sketch below). Simulation experiments show that the AMPSO-BP neural network model has higher prediction accuracy, a smaller error variation range, better fitting ability and greater robustness than the plain BP neural network model in P2P loan borrower credit evaluation.
    Keywords: P2P network loan; personal credit; BP neural network; particle swarm algorithm; adaptive mutation.
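
    A minimal sketch of PSO training a tiny feed-forward network (a generic PSO-BP illustration, not the paper's configuration; the 2-5-1 layout, coefficients and mutation rate are assumptions), including a simplified version of the adaptive mutation step:

        import numpy as np

        def forward(w, X, n_hidden=5):
            # Unpack a flat particle into a 2-5-1 network.
            n_in = X.shape[1]; i = 0
            W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
            i += n_in * n_hidden
            b1 = w[i:i + n_hidden]; i += n_hidden
            W2 = w[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
            b2 = w[i]
            return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

        def pso_train(X, y, dim, n_particles=30, iters=200):
            pos = np.random.randn(n_particles, dim)
            vel = np.zeros_like(pos)
            pbest, pbest_err = pos.copy(), np.full(n_particles, np.inf)
            gbest, gbest_err = pos[0].copy(), np.inf
            for _ in range(iters):
                for k in range(n_particles):
                    err = np.mean((forward(pos[k], X) - y) ** 2)
                    if err < pbest_err[k]:
                        pbest[k], pbest_err[k] = pos[k].copy(), err
                    if err < gbest_err:
                        gbest, gbest_err = pos[k].copy(), err
                r1, r2 = np.random.rand(2)
                vel = (0.7 * vel + 1.5 * r1 * (pbest - pos)
                       + 1.5 * r2 * (gbest - pos))
                pos = pos + vel
                # Simplified adaptive mutation: occasionally reinitialise
                # one particle to escape local optima.
                if np.random.rand() < 0.05:
                    pos[np.random.randint(n_particles)] = np.random.randn(dim)
            return gbest, gbest_err

        X = np.random.rand(50, 2); y = X.sum(axis=1)
        print(pso_train(X, y, dim=2 * 5 + 5 + 5 + 1)[1])   # final MSE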

  • An approach for public cloud trustworthiness assessment based on users evaluation and performance indicators   Order a copy of this article
    by Guosheng Zhou, Wei Du, Hanchao Lin, Xiaowei Yan 
    Abstract: Aiming at the problem of how to quantify the trustworthiness of public cloud services, this paper proposes a method to evaluate the trustworthiness of public clouds based on users' subjective evaluations and objective monitoring values of performance indicators. Using the objective values obtained from monitoring organisations, objective trustworthiness is calculated with a multi-attribute decision-making method based on the ideal point (TOPSIS; see the sketch below). Meanwhile, qualitative evaluations by users are quantified by exploiting the cloud model. Finally, a comprehensive assessment of the trustworthiness of a cloud service is formed from the above results. To meet personalised requirements, users can configure the weights of the parameters included in the proposed algorithms, ranking cloud services for their specific needs. Unlike other assessment algorithms, the proposed approach takes both subjective and objective sources into account and provides a customised weighting scheme for users; the model and algorithms are designed and a prototype is developed.
    Keywords: public cloud services; trustworthiness model; comprehensive trustworthiness; TOPSIS; cloud model.
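
    A minimal, standard TOPSIS sketch (the weights and the two criteria here are illustrative, not the paper's indicator set), assuming numpy: alternatives are ranked by relative closeness to the ideal solution.

        import numpy as np

        def topsis(matrix, weights, benefit):
            # matrix: alternatives x criteria; benefit[j] True if larger is better.
            M = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalise
            V = M * weights                               # weighted matrix
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            return d_neg / (d_pos + d_neg)                # closeness in [0, 1]

        # Three services scored on availability (benefit) and latency (cost).
        scores = topsis(np.array([[99.9, 120.0], [99.5, 80.0], [98.0, 60.0]]),
                        weights=np.array([0.6, 0.4]),
                        benefit=np.array([True, False]))
        print(scores.argsort()[::-1])                     # ranking, best first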

  • A novel computational model for SRAM PUF min-entropy estimation   Order a copy of this article
    by Dongfang Li, Qiuyan Zhu, Hong Wang, Zhihua Feng, Jianwei Zhang 
    Abstract: Min-entropy is the standard measure for quantifying the uncertainty of a security key source in the worst case; it gives the upper bound on the length of security key that can be extracted from that source. Openly published min-entropy estimation methods are all based on experimental data or statistical tests that recover the underlying probability distribution of the key source; they require huge numbers of samples and are therefore infeasible from an engineering perspective (the sketch below illustrates this conventional estimate). Aiming at reduced computational complexity, this paper proposes a novel model for SRAM PUF min-entropy estimation based on the generic coupling relationship between entropy and the average energy consumption of an SRAM cell. The model derives the min-entropy from the average energy consumption of the memory cell during the power-up stage, via simulation. We apply the model to estimate the min-entropy of the IS62WV51216BLL SRAM chip. The experimental results demonstrate that the accuracy of the proposed model is comparable to that of conventional methods, while its computational efficiency surpasses them by a large margin.
    Keywords: min-entropy; SRAM; PUF; estimation model; entropy-energy coupling.
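
    For contrast, a minimal sketch of the conventional data-driven estimate that the paper seeks to avoid (assuming numpy and repeated power-up measurements): per-cell min-entropy computed from the empirical probability of powering up as 1.

        import numpy as np

        # H_min = -log2(max(p, 1 - p)), with p the empirical probability
        # that a cell powers up as 1 across many trials.
        def min_entropy_per_cell(powerups):     # shape: (n_trials, n_cells)
            p = powerups.mean(axis=0)
            return -np.log2(np.maximum(p, 1 - p))

        trials = (np.random.rand(1000, 4096) < 0.5).astype(int)
        print(min_entropy_per_cell(trials).mean())   # ~1 bit for ideal cells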

  • LWE-based multi-authority attribute-based encryption scheme with hidden policies   Order a copy of this article
    by Qiuting Tian, Dezhi Han, Xingao Liu, Xueshan Yu 
    Abstract: Most attribute-based encryption (ABE) schemes are based on bilinear maps, which leads to relatively high computation and storage costs and cannot resist quantum attacks. In view of these problems, an LWE-based multi-authority attribute-based encryption scheme with hidden policies is proposed. Firstly, the scheme uses lattice theory to replace the usual bilinear maps, supports multiple authorities managing different attribute sets, and uses the SampleLeft algorithm to extract keys for authenticated users in the system. Secondly, the Shamir secret sharing mechanism is introduced to construct an access tree structure that supports AND, OR and THRESHOLD operations over attributes, which improves the flexibility of the system; at the same time, the access policies can be completely hidden so as to protect users' privacy. Finally, the scheme is proved secure against chosen-plaintext attack under the standard model. Compared with existing related schemes, the sizes of the public parameters, master secret key, user's private key and ciphertext are all optimised to some degree. It is therefore more suitable for data encryption in cloud environments.
    Keywords: attribute-based encryption; learning with errors; hidden policies; lattices; standard model.

  • A secure hierarchical community detection algorithm   Order a copy of this article
    by Wei Zhu, Osama Hosam 
    Abstract: In complex networks, the security of real communities is low. Moreover, community structure is hierarchical and overlapping, so existing approaches cannot partition the secure structure accurately. To address this issue, this work presents a hierarchical community detection algorithm. First, a secure community clustering model is built; on the basis of the hierarchical structure, the bridge nodes between communities can be found, after which secure clustering is performed. Finally, communities are detected based on their hierarchical and overlapping features. Experiments show that the proposed algorithm improves computation speed, that the detected complex network communities have clear structure, and that the security performance in terms of probability of coincidence is encouraging.
    Keywords: complex network; community; hierarchical; overlap.

  • Efficient security credential management for named data networking   Order a copy of this article
    by Bo Deng 
    Abstract: As a promising future internet architecture, named data networking (NDN) shifts the network focus from where data is to what content it carries. In NDN, communication is driven by consumers requesting data by name or name prefix, and is secured by producers signing the data and optionally encrypting the content. During NDN communication, consumers may therefore need to fetch digital certificates, which are themselves named data, in order to verify data signatures, and a chain of certificates may be involved in verifying a signature. Maintaining these certificates efficiently, especially when their number is large, is challenging. This paper proposes a novel mechanism to store certificates efficiently, based on the NDN certificate naming convention (see the sketch below). According to our experimental results, the proposed approach reduces memory consumption by almost 80% while providing faster lookups.
    Keywords: NDN; security; certificate; management; hash.
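
    A minimal sketch of name-keyed certificate storage (the bucketing rule and names are hypothetical; the paper's exact structure follows the NDN naming convention): certificates are bucketed by a hash of their name prefix, so a lookup touches only one bucket.

        import hashlib

        class CertStore:
            def __init__(self):
                self.buckets = {}
            def _key(self, name):
                prefix = "/".join(name.split("/")[:3])   # e.g. '/example/alice'
                return hashlib.sha256(prefix.encode()).hexdigest()
            def add(self, name, cert):
                self.buckets.setdefault(self._key(name), {})[name] = cert
            def lookup(self, name):
                return self.buckets.get(self._key(name), {}).get(name)

        store = CertStore()
        store.add("/example/alice/KEY/1", b"certificate bytes")
        print(store.lookup("/example/alice/KEY/1") is not None)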

  • Intrusion detection approach for cloud computing based on improved fuzzy c-means clustering algorithm   Order a copy of this article
    by Xuchong Liu, Jiuchuan Lin, Xin Su, Yi Zheng 
    Abstract: Recently, cloud computing has become more and more important on the internet. Meanwhile, network attackers target this platform and launch various attacks that threaten the security of cloud computing. Some researchers have proposed the fuzzy c-means clustering algorithm (FCM) to detect such attacks; however, FCM has limitations such as low detection accuracy, low precision and slow convergence when detecting intrusions in the cloud computing scenario. In this paper, we propose an intrusion detection approach based on an objective-function-optimised FCM algorithm. The approach uses a kernel function to improve the optimisation ability of the FCM algorithm, and then uses the Lagrange multiplier approach to calculate the cluster centres and membership matrix, which optimises the objective function of the FCM algorithm and reduces its complexity (see the sketch below). Simulation experiments show that our approach achieves higher detection accuracy and precision in detecting intrusions into a cloud computing network, and converges considerably faster.
    Keywords: cloud computing; intrusion detection; network attack; objective function optimization; Lagrange multiplier approach.
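
    A minimal kernel fuzzy c-means sketch (a generic Gaussian-kernel variant, not necessarily the paper's exact objective), assuming numpy: the membership and centre updates are the closed forms obtained by setting the Lagrangian's gradient to zero under the memberships-sum-to-one constraint.

        import numpy as np

        def kfcm(X, c, m=2.0, sigma=1.0, iters=50):
            n = len(X)
            V = X[np.random.choice(n, c, replace=False)]   # initial centres
            K = lambda A, B: np.exp(-np.linalg.norm(
                A[:, None] - B[None], axis=2) ** 2 / (2 * sigma ** 2))
            for _ in range(iters):
                d = np.clip(1.0 - K(X, V), 1e-12, None)    # kernel-space distance
                U = d ** (-1.0 / (m - 1))
                U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
                W = (U ** m) * K(X, V)
                V = (W.T @ X) / W.sum(axis=0)[:, None]     # centre update
            return U, V

        U, V = kfcm(np.random.rand(200, 4), c=3)
        print(U.shape, V.shape)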

  • A network traffic-aware mobile application recommendation system based on network traffic cost consideration   Order a copy of this article
    by Xin Su, Yi Zheng, Jiuchuan Lin, Xuchong Liu 
    Abstract: Large numbers of mobile applications (apps) of different types are offered to end users via app markets. Existing mobile app recommender systems generally recommend the most popular apps to facilitate selection. However, these apps normally generate network traffic, which consumes users' mobile data plans and may even cause potential security issues; as a result, more and more mobile users are hesitant or even reluctant to use the apps recommended by app markets. To fill this crucial gap, this paper proposes a mobile app recommendation approach that considers both app popularity and traffic cost. To this end, the paper first estimates an app network traffic score based on a bipartite graph, and then proposes a flexible approach based on benefit-cost analysis that recommends apps by balancing popularity against traffic cost (see the sketch below). Finally, the approach is evaluated with extensive experiments on a large-scale dataset collected from Google Play. The experimental results clearly validate the effectiveness and efficiency of our approach.
    Keywords: mobile apps; network traffic cost; recommendation approach.
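
    A minimal sketch of a popularity-versus-traffic trade-off (the paper's exact benefit-cost formulation is not given here; the linear trade-off and field names are assumptions), where lam weighs popularity against traffic concern:

        def rank_apps(apps, lam=0.5):
            # Higher popularity raises the score; higher traffic lowers it.
            def score(a):
                return lam * a["popularity"] - (1 - lam) * a["traffic_score"]
            return sorted(apps, key=score, reverse=True)

        apps = [{"name": "A", "popularity": 0.9, "traffic_score": 0.8},
                {"name": "B", "popularity": 0.7, "traffic_score": 0.1}]
        print([a["name"] for a in rank_apps(apps)])   # ['B', 'A'] here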

  • Design of DDoS attack detection system based on intelligent bee colony algorithm   Order a copy of this article
    by Xueshan Yu, Dezhi Han, Gongjun Yin, Zhenxin Du, Qiuting Tian 
    Abstract: With big data applications gaining popularity, distributed denial of service (DDoS) attacks have become an increasingly serious network security issue. In response to the problem of DDoS attack detection in big data environments, a DDoS attack detection system based on traffic reduction and EABC_elite (an intelligent artificial bee colony algorithm) is designed. The system combines the traffic reduction algorithm with the intelligent bee colony algorithm to reduce the data traffic according to the idea of abnormal extraction, and uses the traffic feature distribution entropy together with a generalised likelihood comparison discrimination factor to jointly detect the characteristics of DDoS attack data streams, so as to detect DDoS attack flows quickly and accurately (see the sketch below). Experimental results show that the system's traffic detection workload is greatly reduced, and that both its running time and its DDoS detection accuracy are clearly better than those of the plain traffic reduction algorithm and of traffic reduction combined with a common artificial bee colony algorithm.
    Keywords: DDoS attack; intelligent bee colony algorithm; traffic feature distribution entropy; traffic segmentation algorithm; generalized likelihood comparison.
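
    A minimal sketch of the traffic feature distribution entropy signal (generic Shannon entropy over per-window feature counts; the feature choice here is illustrative): floods of spoofed sources disperse the source-address distribution and raise its entropy sharply.

        import numpy as np

        def feature_entropy(counts):
            p = np.asarray(counts, dtype=float)
            p = p[p > 0] / p.sum()
            return -(p * np.log2(p)).sum()

        normal = feature_entropy([40, 35, 30, 25])   # few, balanced sources
        attack = feature_entropy([1] * 1000)         # many spoofed sources
        print(normal, attack)   # a sharp jump flags a suspicious window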

  • Hybrid design for cloud data security using a combination of AES, ECC and LSB steganography   Order a copy of this article
    by Osama Hosam, Muhammad Hammad Ahmed 
    Abstract: The ever-growing popularity of cloud systems is unleashing a revolutionary change in information technology. Parallel and flexible services offered by cloud technology are making it the ultimate solution for individuals as well as for organisations of all sizes. The grave security concerns present in the cloud must be addressed to protect the data and privacy of the huge number of cloud users. We present a hybrid solution to the key management problem. The data in the cloud is encrypted with AES under a private key; the 256-bit AES key is then encrypted with ECC, and the ECC-encrypted key is embedded in the user's image with LSB steganography (see the sketch below). If the user decides to share cloud data with a second user, he only needs to embed the AES key in the second user's image. Using steganography, ECC and AES, we can achieve a strong security posture and efficient key management and distribution for multiple users.
    Keywords: cloud security; encryption; ECC; AES; steganography; public key; private key.
    DOI: 10.1504/IJCSE.2018.10016054
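
    A minimal LSB embedding/extraction sketch (the steganographic step only; AES and ECC are omitted, and the payload here is a placeholder standing in for the ECC-encrypted key), assuming numpy and an 8-bit greyscale cover image:

        import numpy as np

        def lsb_embed(img, payload):
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = img.reshape(-1).copy()
            # Overwrite the least significant bit of the first len(bits) pixels.
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
            return flat.reshape(img.shape)

        def lsb_extract(img, n_bytes):
            bits = img.reshape(-1)[:n_bytes * 8] & 1
            return np.packbits(bits).tobytes()

        cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        stego = lsb_embed(cover, b"ECC-wrapped AES key")
        print(lsb_extract(stego, len(b"ECC-wrapped AES key")))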
     

Special Issue on: IJCSE PDCAT'17 Parallel Computations and Applications

  • User preferences-oriented cloud service selection in multi-cloud environment   Order a copy of this article
    by Li Liu, Letian Yang, Qi Fan 
    Abstract: Service selection based on user preference is challenging owing to the diversity of user demands and preferences in the multi-cloud environment, and few works have clearly reviewed existing approaches to user-preference-oriented service selection in this setting. In this paper, we first develop a taxonomy for user-preference-oriented service selection according to architecture and algorithms. Then, considering the reality of uncertain user demands and fuzzy preferences, a cloud service selection method is proposed based on user preference and credibility evaluation, with user preference expressed by combining semantic terms and attribute comparison. Experiments show that our method performs better in terms of user preference satisfaction and credibility.
    Keywords: multi-cloud; service selection; credibility evaluation; user preference; intuitionistic fuzzy sets.

  • The loading-aware energy saving scheme for EPON networks   Order a copy of this article
    by Chien-Ping Liu, Ho-Ting Wu, Kai-Wei Ke 
    Abstract: This paper proposes a loading-aware energy saving mechanism for Ethernet passive optical networks (EPONs), aiming to jointly provide satisfactory energy saving, delay performance and transmission efficiency for the optical network unit (ONU) in EPON networks. The scheme continuously measures upstream and downstream traffic to identify the load of each ONU, and classifies that load as either low or high. An ONU transmits packets only when the system is under high load; otherwise it switches to one of the power saving modes. The scheme thus allows an ONU to accumulate queued packets in low-load scenarios and stay in power saving mode for longer. To avoid long queueing delays for high-priority packets, however, the ONU will not switch to power saving mode if the high-priority queue is non-empty on either the upstream or the downstream channel. Compared with a previously proposed tri-mode energy saving scheme, which imposed strict restrictions on ONUs staying in energy saving modes, this design improves energy saving without noticeable delay performance degradation. Simulation results show that the proposed scheme achieves a good balance between energy saving and delay performance with proper parameter settings.
    Keywords: EPON; energy saving; delay performance; loading aware.

  • Using RFID technology to develop an intelligent equipment lock management system   Order a copy of this article
    by Yeh-cheng Chen, Hun-Ming Sun, Ruey-shun Chen, S. Felix Wu 
    Abstract: The equipment lock has served as an important tool for power companies to protect electricity metering equipment. However, the conventional equipment lock has two potential problems: vandalism and counterfeiting. Controlling and tracking potential illegal behaviour requires manual labour and paperwork, which consumes a large amount of human resources and maintenance cost. This research focuses on applying RFID technology to the traditional equipment lock; with mobile and electronic technology, we strengthen the management and operating convenience of the lock and provide solutions for anti-counterfeiting and tamper detection, so that national energy can be properly protected and fairly distributed. The integration of an RFID data interface and mobile sensing devices will enhance the power company's control of electricity meters and other metering devices, and will also provide accurate, mobile, interactive support with real-time display and real-time information services. It will serve as the last-mile management tool for electrical equipment and advance the development of intelligent electric grids.
    Keywords: radiofrequency identification; equipment lock management; near field communication; power company.

  • Managing changes to a packet-processing virtual machine's instruction set architecture over time   Order a copy of this article
    by Ralph Duncan 
    Abstract: We describe an approach to deploying only those bytecodes that can be executed by the current operating system and hardware resources, in an environment that combines parallelism, processor heterogeneity and software-defined networking (SDN) capabilities. Packet processing's escalating speed requirements necessitate parallel processing and heterogeneous, specialized processors acting as accelerators. We use bytecodes for a virtual machine to drive the dissimilar processors, with interpreters running in parallel. Since processors and SDN are evolving, bytecodes must evolve as well; and because packet-processing programs must run reliably, we need to deploy only bytecodes that the interpreters and system resources can support. Our solution combines: (a) correlating supported features, interpreter versions and hardware variants in a manifest file, (b) instrumenting a compiler to recognize key feature use, (c) carrying detected feature data in an object module section and (d) running a checking tool at various stages to prevent compiling or deploying a bytecode that cannot be correctly executed (see the sketch below). The scheme has handled deprecating features and adding a broad variety of new features, and has been stress-tested by significant changes in hardware variants.
    Keywords: compatibility; parallel processing; network processing; bytecodes; instruction set; reliability.
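
    A minimal sketch of the deployment gate in step (d) (the manifest schema and feature names are hypothetical): deployment is refused when the object module's recorded features are not a subset of what the target interpreter/hardware pair supports.

        import json

        MANIFEST = json.loads("""{
            "interp-2.4": {"hw-varA": {"features": ["vlan_push"]}}
        }""")

        def can_deploy(object_features, manifest, interp, hw):
            supported = set(manifest[interp][hw]["features"])
            missing = set(object_features) - supported
            if missing:
                print("refusing deployment; unsupported features:", missing)
            return not missing

        # Feature names recorded by the compiler in an object-module section.
        print(can_deploy({"vlan_push", "tunnel_decap"},
                         MANIFEST, "interp-2.4", "hw-varA"))   # False here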

  • Spectro-temporal features for environmental sound classification   Order a copy of this article
    by KhineZar Thwe, Mie Mie Thaw 
    Abstract: This paper proposes a 2N_BILBP feature extraction method based on spectro-temporal features for sound event classification. Spectro-temporal features exhibit patterns similar to texture features in image processing, so the concept of texture features is applied in this digital signal processing field. The paper uses a two-neighbour bidirectional local binary pattern (2N_BILBP) for feature extraction, and compares it with the previous bidirectional local binary pattern method. First, the input audio is converted into a spectrogram using the short-time Fourier transform, and a gammatone filter bank is then applied; features are extracted from the resulting gammatone-like spectrogram and form the feature vector used to label the input audio (see the sketch below). Evaluation is performed on three benchmark datasets: ESC-10, ESC-50 and UrbanSound8K.
    Keywords: local binary pattern; sound event classification; audio event classification; texture features; spectro-temporal features; ESC-10 dataset; ESC-50 dataset; UrbanSound8K dataset.
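
    A minimal spectrogram-plus-LBP sketch (a plain STFT and a basic 8-neighbour LBP, not the paper's gammatone front end or bidirectional 2N_BILBP variant), assuming scipy and numpy: the histogram of LBP codes over the spectrogram is the texture-style feature vector.

        import numpy as np
        from scipy.signal import stft

        def lbp_histogram(signal, fs):
            _, _, Z = stft(signal, fs=fs, nperseg=256)
            S = np.log1p(np.abs(Z))                      # log-magnitude
            centre = S[1:-1, 1:-1]
            codes = np.zeros(centre.shape, dtype=np.uint8)
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            for bit, (dr, dc) in enumerate(offsets):
                nb = S[1 + dr:S.shape[0] - 1 + dr, 1 + dc:S.shape[1] - 1 + dc]
                codes |= (nb >= centre).astype(np.uint8) << bit
            return np.bincount(codes.ravel(), minlength=256)

        sig = np.random.randn(16000)
        print(lbp_histogram(sig, fs=16000).shape)        # (256,) feature vector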

  • A privacy-preserving cloud-based data management system with efficient revocation scheme   Order a copy of this article
    by Shih-Chien Chang, Ja-Ling Wu 
    Abstract: For various reasons, many data management systems delegate their heavy computational workload and storage requirements to public cloud service providers. It is well known that once we entrust our tasks to a cloud server, we may face several threats, such as privacy infringement of users' attribute information; an appropriate privacy-preserving mechanism is therefore a must for constructing a secure cloud-based data management system (SCBDMS). Designing a reliable SCBDMS with server-enforced revocation is very challenging, even when the server follows the honest-but-curious threat model, and existing data management systems seldom provide privacy-preserving revocation services, especially when tasks are outsourced to a third party. In this work, with the aid of oblivious transfer and the newly proposed stateless lazy re-encryption (SLREN) mechanism, an SCBDMS with secure, reliable and efficient server-enforced attribute revocation is built. Experimental results show that, compared with related works, the new SCBDMS largely reduces the storage requirements of the cloud server and the communication overhead between the cloud server and system users, owing to the late involvement of SLREN.
    Keywords: privacy-preserving; lazy re-encryption; revocation.

  • Out-of-core streamline visualisation based on adaptive partitioning and data prefetching   Order a copy of this article
    by Guo Yumeng, Wang Wenke, Li Sikun 
    Abstract: As huge amounts of flow data are generated every day, it is challenging for most flow field visualisation applications on a single PC to handle such large-scale data because of memory size restrictions. To address this problem, an out-of-core strategy is often used to divide the large data into blocks, each loaded on demand. Data prefetching, a technique that overlaps block loading with integral curve computation, is frequently applied in out-of-core strategies to bridge the speed gap between I/O and computation. In this paper, we focus on improving the efficiency of data-prefetching large-scale streamline visualisation by raising the hit rate of data block prediction. Our key idea is to first extract feature information of the field, and then adopt a partitioning strategy that slices important regions into smaller blocks. At run time, various prefetching structures can be generated from our partitioning strategy, and the effectiveness of our streamline visualisation system is validated by comparison with structures generated from a uniform partitioning strategy. Experiments show that our partitioning strategy clearly outperforms conventional uniformly partitioned methods for data prefetching, with an increase of about 10% in both prefetch hit rate and effective rate; as a result, the total execution time of the visualisation system decreases by about 10% on average.
    Keywords: streamline visualisation; out-of-core technique; data prefetching; block partition.

  • Accelerating the discontinuous Galerkin cell-vertex scheme solver on GPU-powered systems   Order a copy of this article
    by Xiaoqi Hu, Mengshen Zhao, Shuangzhang Tu, Byunghyun Jang 
    Abstract: The discontinuous Galerkin cell-vertex scheme (DG-CVS) is a high-order, space-time, Riemann-solver-free numerical solver for general hyperbolic conservation laws. It fuses the discontinuous Galerkin (DG) method and the conservation element/solution element (CE/SE) method to take advantage of the best features of both. In DG-CVS, the time derivatives of the solution are treated as independent unknowns together with the spatial derivatives, which is amenable to a GPU's parallel execution style. In a GPU environment, this type of scientific application poses challenges such as high thread divergence, low kernel occupancy and hardware-unfriendly memory access patterns. This paper presents various optimisations that address these issues, including thread remapping, register pressure reduction and software-managed cache utilisation. DG-CVS is accelerated by up to 54% on an AMD HD7970 GPU compared with CPU-only execution.
    Keywords: numerical solver; discontinuous Galerkin method; high-performance computing; GPGPU; OpenCL.

  • Q-learning and ACO hybridisation for real-time scheduling on heterogeneous distributed architectures   Order a copy of this article
    by Younes Hajoui, Omar Bouattane, Mohamed Youssfi, Elhocein Illoussamen 
    Abstract: In the field of intensive computation, greedy applications demand extensive computing power and considerable storage capacity. To reach the required processing power, multiple processing units must be linked to handle the distributed jobs, and the heterogeneity of the associated resources/workers must be considered during task scheduling. Our approach combines Q-learning with ant colony optimisation (ACO) to solve job-scheduling problems on heterogeneous architectures (see the sketch below), and is implemented using mobile agent systems. Simulation results demonstrate the effectiveness of the proposed hybridisation, owing to the considerable reduction of the overall execution time (makespan) and the fast convergence observed after a small number of learning steps.
    Keywords: task scheduling; hybridisation; mobile agent system; Q-learning; ant colony; makespan.
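
    A minimal sketch of the ACO half of such a hybrid (generic ACO for makespan minimisation; the paper's Q-learning component, which could for instance seed or bias the pheromone values, is not shown, and all parameters are illustrative):

        import numpy as np

        def aco_schedule(times, n_ants=20, iters=100, rho=0.1,
                         alpha=1.0, beta=2.0):
            n_jobs, n_workers = times.shape
            tau = np.ones((n_jobs, n_workers))     # pheromone per (job, worker)
            eta = 1.0 / times                      # heuristic: prefer fast workers
            best, best_mk = None, np.inf
            for _ in range(iters):
                for _ in range(n_ants):
                    assign = np.empty(n_jobs, dtype=int)
                    loads = np.zeros(n_workers)
                    for j in range(n_jobs):
                        p = (tau[j] ** alpha) * (eta[j] ** beta)
                        assign[j] = np.random.choice(n_workers, p=p / p.sum())
                        loads[assign[j]] += times[j, assign[j]]
                    mk = loads.max()               # makespan of this ant's plan
                    if mk < best_mk:
                        best, best_mk = assign.copy(), mk
                tau *= (1 - rho)                   # evaporation
                tau[np.arange(n_jobs), best] += 1.0 / best_mk   # reinforce best
            return best, best_mk

        times = np.random.uniform(1, 10, (12, 4))  # job x worker execution time
        print(aco_schedule(times))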

Special Issue on: Recent Advances in the Security and Privacy of Multimedia Big Data in the Social Internet of Things

  • Smart embarked electrical network based on embedded system and monitoring camera   Order a copy of this article
    by Mohammed Amine Benmahdjoub, Abdelkader Mezouar, Larbi Boumediene, Youcef SAIDI 
    Abstract: To improve quality of life and comfort with more security, the world of transport is moving towards all-electric operation. This imposes an embarked (on-board) electrical network based on connecting alternators in parallel, which requires more energy and needs synchronisation with identical phases between alternators; in addition, certain conditions must be respected to avoid energy crises and increase the efficiency of the system. To ensure the stability and protection of this type of system, control must be performed by a reliable controller with remote control and real-time monitoring of all data. In this paper, we build a prototype for the protection and monitoring of electrical equipment using a Raspberry Pi as an intermediate embedded system and an RPi camera. Communication between the electrical system and the web application is done via JSON files or data stored in the database. For any change in the desired values, the electrical protection system sends a message and an audible warning to the website in real time. Monitoring uses FIFO memory for image processing and a servomotor to control the direction of the RPi camera.
    Keywords: embarked electrical network; phase shift detector; remote control; embedded system; ethernet network; semantic web.

Special Issue on: ICCIDS 2018 Computational Intelligence and Data Science

  • Statistical tree-based feature vector for content-based image retrieval   Order a copy of this article
    by Sushila Aghav-Palwe, Dhirendra Mishra 
    Abstract: The efficiency of any content-based image retrieval system depends on the extracted feature vectors of the individual images stored in the database, and generating compact feature vectors with good discriminative power is a real challenge. This paper presents experiments on generating compact feature vectors for a colour image retrieval system based on image content. The method has two stages: in the first, the energy compaction property of image transforms is used (see the sketch below); in the second, a statistical tree approach is used for feature vector generation. Retrieval performance is tested on an image feature database using standard evaluation parameters, such as the precision-recall crossover point, along with the newly proposed conflicting string of images measure. Across different colour spaces, image transforms and statistical measures, the proposed approach reduces the feature vector size while offering better discriminative power.
    Keywords: statistical tree; image retrieval; image transform; feature extraction; low level features.
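
    A minimal sketch of the energy-compaction stage (a generic 2D DCT illustration; the paper's choice of transform and block size may differ), assuming scipy: most image energy concentrates in the low-frequency corner, so a small top-left coefficient block already yields a compact descriptor.

        import numpy as np
        from scipy.fft import dctn

        def compact_feature(channel, k=8):
            coeffs = dctn(channel.astype(float), norm="ortho")   # 2D DCT
            return coeffs[:k, :k].ravel()      # k*k low-frequency coefficients

        img_channel = np.random.rand(128, 128)
        print(compact_feature(img_channel).shape)   # (64,)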

  • A benchmarking framework using nonlinear manifold detection techniques for software defect prediction   Order a copy of this article
    by Soumi Ghosh, Ajay Rana, Vineet Kansal 
    Abstract: Timely prediction of software defects improves quality and helps locate defect-prone areas accurately. Although considerable methods have been applied previously, none has proved fool-proof and accurate. Hence, a new framework is proposed that includes a nonlinear manifold detection model and an algorithm for defect prediction, applying different nonlinear manifold detection techniques along with 14 machine learning techniques to eight defective software datasets. A critical and exhaustive comparative analysis reveals that the nonlinear manifold detection model has a more accurate and effective impact on defect prediction than feature selection techniques. The experimental outcomes are statistically tested with the Friedman test and post hoc analysis using the Nemenyi test (see the sketch below), which validates that the hidden Markov model combined with the nonlinear manifold detection model outperforms, and differs significantly from, the other machine learning techniques.
    Keywords: dimensionality reduction; feature selection; Friedman test; machine learning; Nemenyi test; nonlinear manifold detection; software defect prediction; post hoc analysis.
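
    A minimal sketch of the statistical validation step (generic usage with made-up accuracy values, not the paper's results), assuming scipy: the Friedman test compares techniques across datasets, and a Nemenyi post hoc test (e.g. scikit-posthocs' posthoc_nemenyi_friedman) then locates which pairs differ.

        from scipy.stats import friedmanchisquare

        # Hypothetical accuracies of three techniques on eight datasets.
        model_a = [0.81, 0.79, 0.84, 0.77, 0.82, 0.80, 0.78, 0.83]
        model_b = [0.74, 0.72, 0.79, 0.70, 0.75, 0.73, 0.71, 0.76]
        model_c = [0.69, 0.70, 0.73, 0.66, 0.71, 0.68, 0.67, 0.72]

        stat, p = friedmanchisquare(model_a, model_b, model_c)
        print(f"Friedman chi2={stat:.2f}, p={p:.4f}")
        # If p < 0.05, run the Nemenyi post hoc test to identify the
        # significantly different pairs.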