Forthcoming articles


International Journal of Computational Science and Engineering


These articles have been peer-reviewed and accepted for publication in IJCSE, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.


Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.


Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.


Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.




International Journal of Computational Science and Engineering (182 papers in press)


Regular Issues


  • A decision system based on active perception and intelligent analysis for key location security information   Free full-text access (Open Access, CC-BY-NC-ND)
    by Jingzhao Li, Zihua Chen, Guangming Cao, Mei Zhang 
    Abstract: In many enterprises, security problems (latent dangers) at key locations cannot be handled in time, because the security data are entered manually by multiple security workers at different times, leaving the data in related security information systems disordered, and because analysis and decision files must be processed by hand. To solve this problem, this paper presents a decision system for key location security information based on active perception and intelligent analysis that helps staff make proper decisions. First, the system is developed on a C/S framework, and its functions cover four aspects: intelligent semantic analysis and extraction, a standard keyword database, intelligent retrieval and decision analysis, and early warning. Then, a perception model based on deep learning and an intelligent decision analysis model are constructed to realise these functions. Experimental results show that the system significantly reduces the heavy workload of security inspectors, carries out intelligent retrieval and decision analysis, helps prevent safety accidents and reduces their frequency. It has high social application value and is innovative.
    Keywords: security risk information; semantic analysis; active perception; ant colony optimisation; intelligent decision making.

  • Hough transform-based cubic spline recognition for natural shapes   Order a copy of this article
    by Cheng-Huang Tung, Wei-Jyun Syu, Wei-Cheng Huang 
    Abstract: A two-stage GHT-based cubic spline recognition method is proposed for recognising flexible natural shapes. First, the method uses cubic splines to interpolate a flexible natural shape, and a sequence of connected boundary points is generated from the cubic splines; each such point has accurate tangent and curvature features. At the first recognition stage, the method uses the modified GHT to adjust the scale and orientation factors of the input shape with respect to each reference model. At the second stage, the proposed point-based matching technique calculates the difference between each reference model and its corresponding adjusted input shape at the point level. In experiments on recognising 15 categories of natural shapes, including fruits and vegetables, the recognition rate of the proposed two-stage method is 97.3%, much higher than the 79.3% measured for the standard GHT.
    Keywords: Hough transform; GHT; cubic spline; natural shape; curvature; tangent; point-based matching; recognition method; template database; boundary point.

  • Personalised service recommendation process based on service clustering   Order a copy of this article
    by Xiaona Xia 
    Abstract: Personalised service recommendation is the key technology for service platforms, and the demand preferences of users are important factors for personalised recommendation. First, to improve the accuracy and adaptability of service recommendation, services need to be initialised before being recommended and selected; they are then classified and clustered according to demand preferences, and service clusters are defined and demonstrated. To address the sparseness of the service function matrix, historical and potential preferences are expressed as two matrices. Second, a service cluster is viewed as the basic business unit; we optimise the graph summarisation algorithm and construct the service recommendation algorithm SCRP. Experiments with various parameters show that SCRP has advantages over other algorithms. Third, we select fuzzy degree and difference as the two key indicators, and use several service clusters to run simulations and analyse the algorithm's performance. The results show that our service selection and recommendation method outperforms others and can effectively improve the quality of service recommendation.
    Keywords: service clustering; service recommendation; graph summarisation algorithm; personalisation; preference matrix.

  • Optimising data access latencies of virtual machine placement based on greedy algorithm in datacentre   Order a copy of this article
    by Xinyan Zhang, Keqiu Li, Yong Zhang 
    Abstract: The total completion time of a task is a major bottleneck in big data processing applications based on parallel computation, since computation and data are distributed over ever more nodes; it is therefore an important index for evaluating cloud performance. The access latency between nodes is one of the key factors affecting task completion time in cloud datacentre applications, and minimising total access time also reduces the overall bandwidth cost of running a job. This paper proposes an optimisation model for placing virtual machines (VMs) so as to minimise total data access latency given where the datasets are located. Under the proposed model, the VM placement problem is a linear program. We obtain the optimum solution of the model with a branch-and-bound algorithm whose time complexity is O(2^{NM}), and we also present a greedy algorithm with O(NM) time complexity to solve the model. Finally, simulation results show that the solutions of our model are superior to those of existing models and close to the optimal value.
    Keywords: datacentre; cloud environment; access latency; virtual machine placement; greedy algorithm.
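The paper's exact greedy procedure is not given in the abstract, but a greedy VM-placement heuristic of this kind can be sketched as follows. All names, the latency model and the capacity constraint below are illustrative assumptions, not the authors' formulation: each VM is placed on the host that minimises the total latency to the datasets it accesses.

```python
# Hypothetical sketch of a greedy VM-placement heuristic: each VM goes to the
# host that minimises its total data-access latency, subject to per-host
# capacity. The latency model and data structures are illustrative only.

def greedy_place(vms, hosts, latency, capacity):
    """vms: {vm: [host holding each dataset the VM reads]}.
    hosts: list of host ids. latency: {(h1, h2): cost}.
    capacity: {host: max VMs}. Returns {vm: chosen host}."""
    placement = {}
    load = {h: 0 for h in hosts}
    # Place the VMs with the most data dependencies first.
    for vm in sorted(vms, key=lambda v: -len(vms[v])):
        best, best_cost = None, float("inf")
        for h in hosts:
            if load[h] >= capacity[h]:
                continue  # host is full
            cost = sum(latency[(h, d)] for d in vms[vm])
            if cost < best_cost:
                best, best_cost = h, cost
        placement[vm] = best
        load[best] += 1
    return placement
```

Scanning every (VM, host) pair once is what gives the O(NM) behaviour the abstract mentions for the greedy alternative to branch-and-bound.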

  • An empirical study of disclosure effects in listed biotechnology and medicine industry using MLR model   Order a copy of this article
    by Chiung-Lin Chiu, You-Shyang Chen 
    Abstract: This research employs a multiple linear regression model to investigate the relationship between voluntary disclosure and firm performance in the biotechnology and medicine industry in Taiwan. Using 44 firm-year observations collected from the Information Transparency and Disclosure Ranking System and the Taiwan Economic Journal financial database for companies listed on the Taiwan Stock Exchange and Taipei Exchange, the regression results reveal a positive and significant relationship between voluntary disclosure and firm performance: firms with better voluntary disclosure perform better than firms without it. The results suggest that companies should pay more attention to voluntary disclosure as additional information, which investors also regard as valuable when making investment decisions.
    Keywords: voluntary disclosure; firm performance; investment decision; MLR; multiple linear regression model; biotechnology and medicine industry; TSE; Taiwan Stock Exchange; ITDRS; information transparency and disclosure ranking system.

  • A static analytical performance model for GPU kernel   Order a copy of this article
    by Jinjing Li 
    Abstract: Graphics processing units (GPUs) have grown in popularity and play an important role as coprocessors in heterogeneous co-processing environments. Heavily data-parallel problems can be solved efficiently by tens of thousands of threads working collaboratively in parallel on the GPU architecture. The achieved performance therefore depends on the capability of multiple threads to collaborate in parallel, the effectiveness of latency hiding, and the use of the multiprocessors. In this paper, a static analytical kernel performance model (SAKP) based on this performance principle is proposed to estimate the execution time of a GPU kernel. Specifically, a set of kernel and device features for the target GPU is generated in the proposed model; we determine the performance-limiting factors and estimate the kernel execution time from these features. Matrix multiplication (MM) and histogram generation (HG) were run on an NVIDIA GTX680 GPU card to verify the proposed model, showing an absolute prediction error of less than 6.8%.
    Keywords: GPU; co-processing; static analytical kernel performance model; kernel and device features; absolute error.

  • A universal compression strategy using sorting transformation   Order a copy of this article
    by Bo Liu, Xi Huang, Xiaoguang Liu, Gang Wang, Ming Xu 
    Abstract: Although traditional universal compression algorithms can effectively exploit repetition within a sliding window, they cannot take advantage of message sources in which similar messages are distributed uniformly. In this paper, we propose a universal segmenting-sorting compression algorithm to solve this problem. The key idea is to reorder the message source before compressing it with the LZ77 algorithm. We design transformation methods for two common data types: corpora of webpages and access logs. The experimental results show that the segmenting-sorting transformation genuinely benefits the compression ratio: the new algorithm achieves a compression ratio 20% to 50% lower than the naive LZ77 algorithm with almost the same decompression time. For some read-heavy sources, segmenting-sorting compression can reduce space cost while guaranteeing throughput.
    Keywords: segmenting; sorting; LZ77; compression; universal compression method.
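The core idea of the transform can be illustrated with an LZ77-family codec from the standard library. This is a minimal sketch under assumed record boundaries, not the paper's algorithm: sorting the records makes similar ones adjacent so the sliding window can exploit them, at the cost of storing the original order separately if it matters.

```python
import zlib  # DEFLATE, an LZ77-family compressor

def compress_sorted(records):
    """Hypothetical segmenting-sorting transform: sort the records so that
    similar ones become adjacent, then compress with zlib. The original
    record order is lost and would need to be stored separately."""
    blob = b"\n".join(sorted(records))
    return zlib.compress(blob, 9)

# Log-like records where similar lines recur throughout the stream.
logs = [b"GET /a 200", b"POST /x 500", b"GET /a 200", b"POST /x 500"] * 50
plain = zlib.compress(b"\n".join(logs), 9)
transformed = compress_sorted(logs)
```

On real corpora the benefit appears when similar records are spread further apart than the compressor's window; this toy input merely shows the mechanics.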

  • Executing time and cost-aware task scheduling in hybrid cloud using a modified DE algorithm   Order a copy of this article
    by Yuanyuan Fan, Qingzhong Liang, Yunsong Chen 
    Abstract: Task scheduling is one of the basic problems in cloud computing, and in a hybrid cloud it faces new challenges. In this paper, we propose the GaDE algorithm, based on differential evolution, to improve single-objective scheduling performance in a hybrid cloud. To better handle multi-objective task scheduling optimisation in hybrid clouds, we build on GaDE and Pareto-optimal quick sorting to present a multi-objective algorithm named NSjDE, which also reduces the number of evaluations. Experiments comparing the Min-Min, GaDE and NSjDE algorithms show that, for single-objective task scheduling, the GaDE and NSjDE algorithms are better at finding an approximately optimal solution. The multi-objective NSjDE algorithm converges faster than the single-objective jDE algorithm, and NSjDE can produce multiple non-dominated solutions that meet the requirements, giving the user more options.
    Keywords: hybrid cloud; task scheduling; executing time-aware; cost-aware.
    DOI: 10.1504/IJCSE.2016.10012716
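Differential evolution, the framework underlying GaDE, can be sketched in its classic DE/rand/1/bin form. The function below is an illustrative stand-in, not the paper's implementation: the objective, parameter values (F, CR, population size) and names are assumptions, and the paper's scheduling objective would replace the generic cost function.

```python
import random

def de_minimise(cost, dim, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin sketch. `cost` is the objective to minimise
    (a task-scheduling makespan in the paper's setting; any real-valued
    function here). Returns (best vector, best cost)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct individuals, all different from the target i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            trial = [min(hi, max(lo, v)) for v in trial]  # clamp to bounds
            f = cost(trial)
            if f <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```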
  • Modelling and simulation research of vehicle engines based on computational intelligence methods   Order a copy of this article
    by Ling-ge Sui, Lan Huang 
    Abstract: We assess the feasibility of two widely used artificial neural network (ANN) technologies in the field of transient emission simulation. In this work, the back-propagation feedforward neural network (BPNN) is shown to be more suitable than the radial basis function neural network (RBFNN). Considering the transient change rule of a transient operation, the composite transient rate, composed of the torque transient rate and the air-fuel ratio (AFR) transient rate, is innovatively adopted as an input variable to the BPNN transient emission model. On this basis, a whole-process transient simulation platform for a test diesel engine is established using multi-soft coupling technology. Through a transient emission simulation, the veracity and generalisation ability of the simulation platform are confirmed: the platform correctly predicts the change trends, with a peak value difference within 8%. Our findings suggest that the simulation platform can be applied to the study of control strategies for typical transient operations.
    Keywords: transient emission; simulation; back-propagation feedforward neural network; radial basis function neural network; diesel engine.
    DOI: 10.1504/IJCSE.2018.10006094
  • Institution-based UML activity diagram transformation with semantic preservation   Order a copy of this article
    by Amine Achouri, Yousra Bendaly Hlaoui, Leila Jemni Ben Ayed 
    Abstract: This paper presents a tool, called MAV-UML-AD, for specifying and verifying workflow models using UML activity diagrams (UML AD) and Event-B, based on the theory of institutions. The tool translates an activity diagram model into an equivalent Event-B specification according to a mathematical semantics. The transformation approach for UML AD models rests on institution theory: each of the UML AD and Event-B specifications is defined as an instance of its corresponding institution, and the transformation is represented by an institution co-morphism defined between the two institutions. Institution theory is adopted as the theoretical framework of the tool for two main reasons: first, it gives a local mathematical semantics for UML AD and Event-B; second, it allows a semantics-preserving mapping to be defined between a UML AD specification and an Event-B machine. Thanks to the B theorem prover, functional properties such as liveness and fairness can be formally checked. This paper highlights the core of the model transformation approach and how institution concepts such as category, co-morphism and signature appear in the two formalisms, and illustrates the use of MAV-UML-AD through an example of specification and verification.
    Keywords: formal semantics; model-driven engineering; institution theory; Event-B; UML activity diagram; formal verification.

  • The analysis of evolutionary optimisation on the TSP(1,2) problem   Order a copy of this article
    by Xiaoyun Xia, Xinsheng Lai, Chenfu Yi 
    Abstract: The TSP(1,2) problem is a special case of the travelling salesperson problem, and it is NP-hard. Many heuristics, including evolutionary algorithms (EAs), have been proposed to solve the TSP(1,2) problem, but little is known about the performance of EAs on it. This paper presents an approximation analysis of the (1+1) EA on this problem. It is shown that the (1+1) EA and the (μ+λ) EA can obtain a 3/2 approximation ratio for this problem in expected polynomial runtime O(n^3) and O((μ/λ)n^3 + n), respectively. Furthermore, we prove that the (1+1) EA can provide a much tighter upper bound than a simple ACO on the TSP(1,2) problem.
    Keywords: evolutionary algorithms; TSP(1,2); approximation performance; analysis of algorithm; computational complexity.
    DOI: 10.1504/IJCSE.2016.10007955
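A (1+1) EA of the kind analysed here keeps a single tour and repeatedly applies a mutation, accepting the offspring when it is no worse. The sketch below is an illustrative baseline, not the paper's exact operator: it uses a random segment inversion (a 2-opt move) as the mutation, and all names and parameters are assumptions.

```python
import random

def one_plus_one_ea(dist, steps=3000, seed=0):
    """(1+1) EA sketch for TSP(1,2): mutate the tour by inverting a random
    segment (a 2-opt move); keep the offspring if it is no worse.
    dist[i][j] is 1 or 2; returns (tour, tour length)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    length = lambda t: sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    best = length(tour)
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # invert segment
        c = length(cand)
        if c <= best:  # accept equal or better offspring
            tour, best = cand, c
    return tour, best
```

On a TSP(1,2) instance whose optimum is n (every optimal edge has weight 1), a 2-opt local optimum is within the 3/2 bound the abstract discusses, i.e. at most 3n/2.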
  • A novel rural microcredit decision model and solving via binary differential evolution algorithm   Order a copy of this article
    by Dazhi Jiang, Jiali Lin, Kangshun Li 
    Abstract: As an economic means of lifting people out of poverty, microcredit has been widely accepted as an effective way of empowering both individuals and communities. However, risk control remains a core part of implementing agriculture-related loan business for microcredit companies. In this paper, a rural microcredit decision model is presented that maximises profit while minimising risk, and a binary differential evolution algorithm is applied to solve the decision model. The results show that the proposed method and model are sound and easy to operate, and can provide a reference solution for decision management in microcredit companies.
    Keywords: risk control; microcredit; decision model; binary differential evolution.

  • Q-grams-imp: an improved q-grams algorithm aimed at edit similarity join   Order a copy of this article
    by Zhaobin Liu, Yunxia Liu 
    Abstract: Similarity join is increasingly important and has attracted widespread attention from scholars and communities. It is used in many applications, such as spell checking, copy detection, entity linking and pattern recognition. In many web and enterprise scenarios, where typos and misspellings often occur, an efficient algorithm is needed to handle such data. In this paper, we propose an improved q-grams algorithm, called q-grams-imp, aimed at edit similarity join. The algorithm reduces the number of tokens and thus the space cost, and is best suited to strings of the same size; strings of different sizes must first be preprocessed to fit the algorithm. Experimental results show that the proposed algorithm outperforms the traditional method.
    Keywords: similarity join; q-grams algorithm; edit distance.
    DOI: 10.1504/IJCSE.2016.10008631
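The q-grams-imp algorithm itself is not spelled out in the abstract, but the standard q-gram count filter that edit similarity joins build on can be sketched as follows. It relies on a known property: if two strings are within edit distance k, their q-gram multisets must share at least max(|s|,|t|) - q + 1 - k*q q-grams, so pairs below that threshold can be pruned without computing the edit distance.

```python
from collections import Counter

def qgrams(s, q=2):
    """Multiset of overlapping q-grams of s."""
    return Counter(s[i:i + q] for i in range(len(s) - q + 1))

def qgram_filter(s, t, k, q=2):
    """Count filter for an edit-distance join: returns False only when
    ed(s, t) <= k is impossible, so the pair can be safely pruned."""
    shared = sum((qgrams(s, q) & qgrams(t, q)).values())  # multiset overlap
    return shared >= max(len(s), len(t)) - q + 1 - k * q
```

Passing the filter does not prove similarity; surviving pairs still need an exact edit-distance verification step.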
  • An algorithm based on differential evolution for satellite data Transmission Scheduling   Order a copy of this article
    by Qingzhong Liang, Yuanyuan Fan, Xuesong Yan, Ye Yan 
    Abstract: Data transmission task scheduling is one of the important problems in satellite communication. It can be considered a combinatorial optimisation problem over satellite data transmission demands, visible time windows and ground station resources, and is NP-complete. In this paper, we propose a satellite data transmission task scheduling algorithm that searches for an optimised solution within a differential evolution framework. During evolution, the individual evaluation procedure is improved by a modified 0/1-knapsack-based method. Extensive experiments examine the effectiveness and performance of the proposed algorithm; the results show that its schedules satisfy the scheduling constraints and are consistent with expectations.
    Keywords: data transmission; task scheduling; differential evolution; knapsack problem.
    DOI: 10.1504/IJCSE.2016.10012717
  • Dynamic load balance strategy for parallel rendering based on deferred shading   Order a copy of this article
    by Mingqiang Yin, Dan Sun, Hui Sun 
    Abstract: To address the low efficiency of rendering large scenes with a complex illumination model, a new deferred shading method is proposed and applied to a parallel rendering system. To make the rendering times of the slave nodes equal to each other, an algorithm for rendering task assignment is designed. In the deferred shading method, rendering each frame is divided into two phases. The first, the geometry phase, is responsible for visibility detection: the primitives are distributed evenly to each rendering node and rendered without illumination, and the pixels to be shaded, together with their corresponding primitives, are found. The second, the pixel shading phase, colours the pixels found in the first phase; the pixels are assigned evenly to the rendering nodes according to the image of the previous frame. As the rendering tasks in both phases are assigned evenly, the rendering times of the nodes in the cluster are roughly equal. Experiments show that this method improves the rendering efficiency of the parallel rendering system.
    Keywords: parallel rendering; deferred shading; load balance.

  • Big data automatic analysis system and its applications in rockburst experiment   Order a copy of this article
    by Yu Zhang 
    Abstract: In 2006, the State Key Laboratory for GeoMechanics and Deep Underground Engineering (GDLab) successfully reproduced the rockburst procedure indoors. Since then, a series of valuable research results has been obtained on the rockburst mechanism. At the same time, several dilemmas have emerged concerning data storage, data analysis and prediction accuracy: GDLab has accumulated more than 500 TB of rockburst experiment data, but so far less than 5% of it has been analysed. The primary cause of these dilemmas is the large amount of experimental data produced in the study of rockburst. In this paper, a novel big data automatic analysis system for rockburst experiments is proposed, and its modules and algorithms are designed and realised. Theoretical analysis and experimental research show that the system can improve the existing research mechanism of rockburst and make previously infeasible analyses possible. This work lays a theoretical foundation for rockburst mechanism research.
    Keywords: rock burst; experiment data; big data; automatic analysis.

  • Training auto-encoders effectively via eliminating task-irrelevant input variables   Order a copy of this article
    by Hui Shen, Dehua Li, Zhaoxiang Zang, Hong Wu 
    Abstract: Auto-encoders are often used as building blocks of deep network classifiers to learn feature extractors, but task-irrelevant information in the input data may lead to poor extractors and poor generalisation performance of the network. In this paper, the performance of auto-encoders is markedly improved by dropping the task-irrelevant input variables. Specifically, an importance-based variable selection method is proposed to find and drop the task-irrelevant input variables: the method first estimates the importance of each variable, then drops the variables whose importance falls below a threshold. For better performance, the method can be applied to each layer of stacked auto-encoders. Experimental results show that, combined with our method, stacked denoising auto-encoders achieve significantly improved performance on three challenging datasets.
    Keywords: feature learning; deep learning; neural network; auto-encoder; stacked auto-encoders; variable selection; feature selection; unsupervised training.
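The selection step described above (estimate importance, then drop below-threshold variables) can be sketched as follows. The importance estimator is the paper's contribution and is not given in the abstract, so this sketch substitutes per-variable variance as a crude, purely illustrative proxy; all names and the threshold are assumptions.

```python
import numpy as np

def drop_low_importance(X, importance, threshold):
    """Keep only the input variables whose importance exceeds a threshold.
    X: (samples, variables) array; importance: per-variable scores.
    Returns the reduced matrix and the kept column indices."""
    keep = np.flatnonzero(importance > threshold)
    return X[:, keep], keep

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0, 1.0, 200),    # informative variable
    rng.normal(0, 0.01, 200),   # near-constant, task-irrelevant variable
    rng.normal(0, 1.5, 200),    # informative variable
])
imp = X.var(axis=0)             # illustrative stand-in for real importance
X_red, kept = drop_low_importance(X, imp, threshold=0.1)
```

In the stacked setting, the same filtering would be repeated on each layer's input before training that layer's auto-encoder.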

  • Model-checking software product lines based on feature slicing   Order a copy of this article
    by Mingyu Huang, Yumei Liu 
    Abstract: The feature model is a popular formalism for describing the commonality and variability of a software product line in terms of features. Feature models represent the space of possible application configurations and can be customised to specific domain requirements and stakeholder goals. As feature models become increasingly complex, automatic support is desirable for customised analysis and verification based on the specific goals and requirements of stakeholders. This paper first presents feature model slicing based on the requirements of the users. We then introduce a three-valued abstraction of behaviour models based on the slicing unit. Finally, using a multi-valued model checker, a case study is conducted to illustrate the effectiveness of our approach.
    Keywords: feature model; slicing; three-valued model; model checking.

  • Decomposition-based multi-objective comprehensive learning particle swarm optimisation   Order a copy of this article
    by Xiang Yu, Hui Wang, Hui Sun 
    Abstract: This paper proposes decomposition-based comprehensive learning particle swarm optimisation (DCLPSO) for multi-objective optimisation. DCLPSO uses multiple swarms, with each swarm optimising a separate objective. Two sequential phases are conducted: independent search and then cooperative search. Important information related to extreme points of the Pareto front can often be found in the independent search phase. In the cooperative search phase, a particle randomly learns from its personal best position or an elitist on each dimension. Elitists are non-dominated solutions and are stored in an external repository shared by all the swarms. Mutation is applied to each elitist in this phase to help escape local Pareto fronts. Experiments conducted on various benchmark problems demonstrate that DCLPSO is competitive in terms of convergence and diversity of the resulting non-dominated solutions.
    Keywords: particle swarm optimisation; comprehensive learning; decomposition; multi-objective optimisation.

  • Applicability evaluation of different algorithms for daily reference evapotranspiration model in KBE system   Order a copy of this article
    by Yubin Zhang, Zhengying Wei, Lei Zhang, Jun Du 
    Abstract: An irrigation decision-making system based on knowledge-based engineering (KBE) is reported in this paper. It can accurately predict water and fertiliser requirements and provide intelligent irrigation diagnosis and decision support. The basis of the KBE system is knowledge of the reference crop evapotranspiration (ET0), so this research examined the accuracy of support vector machines (SVMs) in modelling ET0. The main obstacles to computing ET0 with the Penman-Monteith model are the complicated nonlinear process and the many climate variables required; moreover, these are calculated from the original meteorological data, and there is no single calculation standard. SVM models can therefore be applied with original or limited data, especially in developing countries. The flexibility of SVMs in ET0 modelling was assessed using the original meteorological data (Tmax, Tm, Tmin, n, Uh, RHm, φ, Z) of the years 1990-2014 at five stations in Shaanxi, China; these eight parameters were used as the inputs and the reference evapotranspiration values as the output. In the first part of the study, the SVMs were compared with the FAO-24, Hargreaves, McCloud, Priestley-Taylor and Makkink models, and performed better than all of them. In the second part, the total ET0 estimate of the SVMs was compared with the other models in validation, and the SVM models were superior in terms of relative error. A further assessment confirmed that SVM models can provide a powerful tool for KBE irrigation when meteorological data are lacking. This research could serve as a reference for accurate ET0 estimation in KBE irrigation decision-making systems that collect data from humidity sensors and weather stations in the field.
    Keywords: reference evapotranspiration; support vector machines; knowledge-based engineering; original meteorological data.
    DOI: 10.1504/IJCSE.2019.10017950
  • Multi hidden layer extreme learning machine optimised with batch intrinsic plasticity   Order a copy of this article
    by Shan Pang, Xinyi Yang 
    Abstract: Extreme learning machine (ELM) is a novel learning algorithm in which training is restricted to the output weights to achieve a fast learning speed. However, ELM tends to require more neurons in the hidden layer and sometimes suffers from ill-conditioning owing to the random selection of input weights and hidden biases. To address these problems, we propose a multi-hidden-layer ELM optimised with a batch intrinsic plasticity (BIP) scheme. The proposed algorithm has a deep structure and thus learns features more efficiently, and the combination with the BIP scheme yields better generalisation ability. Comparisons with state-of-the-art ELM algorithms on both regression and classification problems verify the performance and effectiveness of the proposed algorithm.
    Keywords: neural network; extreme learning machine; batch intrinsic plasticity; multi hidden layers.
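The fast training the abstract attributes to ELM comes from fixing the hidden layer at random and solving only the output weights by least squares. The sketch below shows that basic single-hidden-layer form, not the paper's multi-hidden-layer BIP variant; all names and parameters are illustrative.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Basic single-hidden-layer ELM sketch: hidden weights and biases are
    random and never trained; only the output weights are fitted, by a
    single least-squares solve (the source of ELM's speed)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The ill-conditioning the abstract mentions arises in this lstsq step when H is nearly rank-deficient, which is what BIP-style tuning of the hidden activations aims to mitigate.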

  • Chaotic artificial bee colony with elite opposition-based learning strategy   Order a copy of this article
    by Zhaolu Guo, Jinxiao Shi, Xiaofeng Xiong, Xiaoyun Xia, Xiaosheng Liu 
    Abstract: Artificial bee colony (ABC) algorithm is a promising evolutionary algorithm inspired by the foraging behaviour of honey bee swarms, which has obtained satisfactory solutions in diverse applications. However, the basic ABC demonstrates insufficient exploitation capability in some cases. To address this issue, a chaotic artificial bee colony with elite opposition-based learning strategy (CEOABC) is proposed in this paper. During the search process, CEOABC employs the chaotic local search to promote the exploitation ability. Moreover, the elite opposition-based learning strategy is used to exploit the potential information of the exhausted solution. Experimental results compared with several ABC variants show that CEOABC is a competitive approach for global optimisation.
    Keywords: artificial bee colony; chaotic local search; opposition-based learning; elite strategy.
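Opposition-based learning, the mechanism CEOABC applies to exhausted solutions, reflects a candidate across the centre of its search bounds and keeps the better of the pair. This is a minimal sketch of the generic operator, with an assumed cost function; the paper's elite variant reflects within the bounds of the current elite population instead.

```python
def opposite(x, lo, hi):
    """Opposition-based learning: the opposite of x in [lo, hi] is lo+hi-x,
    computed component-wise."""
    return [lo + hi - v for v in x]

def obl_step(x, cost, lo, hi):
    """Evaluate a solution and its opposite; keep the better one. ABC
    variants use this to re-seed exhausted (abandoned) food sources."""
    xo = opposite(x, lo, hi)
    return x if cost(x) <= cost(xo) else xo
```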

  • Numerical simulations of electromagnetic wave logging instrument response based on self-adaptive hp finite element method   Order a copy of this article
    by Li Hui, Zhu Xifang, Liu Changbo 
    Abstract: Numerical simulation of instrument response is an important method for calibrating instrument parameters, evaluating detection performance and verifying complex system theory. Measurement results of electrical well logging are important for interpreting measurement data and characterising oil reservoirs, especially in horizontal directional drilling and shale gas and oil development. In this paper, a self-adaptive hp finite element method is used to investigate the responses of electrical well logging instruments, such as the electromagnetic wave resistivity logging-while-drilling (LWD) tool and the through-casing resistivity logging (TCRL) tool. The results illustrate the efficiency of the method and provide a physical interpretation of the resistivity measurements obtained with the LWD and TCRL tools. Numerical simulation examples show the validity, accuracy and efficiency of the self-adaptive hp finite element method. The high-accuracy simulation results are of great importance for calibrating electrical well logging tools and interpreting logging data.
    Keywords: numerical simulation; parameters calibration; electromagnetic wave resistivity logging-while-drilling; through-casing resistivity logging; self-adaptive hp finite element method.
    DOI: 10.1504/IJCSE.2016.10011966
  • Upgrading event and pattern detection to big data   Order a copy of this article
    by Soumaya Cherichi, Rim Faiz 
    Abstract: One of the marvels of our time is the unprecedented development and use of technologies that support social interaction. Social mediating technologies have engendered radically new ways of communicating and sharing information, particularly during events such as natural disasters (e.g., earthquakes and tsunamis) and the American presidential election. This paper is based on data obtained from Twitter because of its popularity and sheer data volume. This content can be combined and processed to detect events, entities and popular moods to feed various new large-scale data-analysis applications. On the downside, these content items are very noisy and highly informal, making it difficult to extract sense from the stream. Taking these difficulties into account, we propose a new event detection approach combining linguistic features and Twitter features. Finally, we present our event detection system for microblogs, which aims (1) to detect new events, (2) to recognise the temporal marker patterns of an event, and (3) to classify important events according to thematic pertinence, author pertinence and tweet volume.
    Keywords: microblogs; event detection; temporal markers; patterns; social network analysis.

  • A security ensemble framework for securing a file in cloud computing environments   Order a copy of this article
    by Sharon Moses J, Nirmala M 
    Abstract: The scalability and on-demand features of cloud computing have revolutionised the IT industry. Cloud computing provides flexibility to the user in several aspects, including pay-as-you-use pricing. The entire burden of computing, managing resources and storing files is moved to the cloud service provider's end. File storage in clouds is an important issue for both service providers and end users, and securing stored files against internal and external attacks has become a primary concern for cloud storage providers. Because enormous amounts of personal and confidential information accumulate in cloud storage, it draws hackers and data pirates who seek to steal that information at any cost. Once a file is stored in cloud storage, the user has neither authority over the file nor knowledge of its physical location. In this paper, the threats involved in file storage and a secure way of protecting stored files using a novel ensemble of security strategies are presented. An encryption ensemble module is incorporated over an OpenStack cloud infrastructure to protect files. Five symmetric block ciphers are used in the encryption module to encrypt and decrypt a file without disturbing the existing security measures applied to it. The proposed strategy helps service providers as well as users to secure files in cloud storage more efficiently.
    Keywords: cloud storage; file privacy; file security; Swift storage; OpenStack security; security ensemble.

  • Virtual guitar: using real-time finger tracking for musical instruments   Order a copy of this article
    by Noorkholis Luthfil Hakim, Shih-Wei Sun, Mu-Hsen Hsu, Timothy K. Shih, Shih-Jung Wu 
    Abstract: Kinect, a 3D sensing device from Microsoft, has spurred an evolution in Human Computer Interaction (HCI) research. Kinect has been applied in many areas, including music. One application is the Virtual Musical Instrument (VMI), a system that uses natural gestures to produce synthetic sounds similar to a real musical instrument. From related work, we found that the use of large joints, such as the hand, arm or leg, is inconvenient and limits the way a VMI can be played. This study therefore proposes a fast and reliable finger tracking algorithm suitable for VMI playing. In addition, a virtual guitar application was developed as an implementation of the proposed algorithm. Experimental results show that the proposed method can be used to play a variety of tunes with acceptable quality. Furthermore, the proposed application can be used by beginners who have no experience in music or in playing a real musical instrument.
    Keywords: virtual guitar; finger tracking; musical instrument; human-computer interaction; HCI; hand detection; hand tracking; hand gesture recognition; virtual musical instrument; VMI; depth camera.
    DOI: 10.1504/IJCSE.2016.10008449
  • A cloud computing price model based on virtual machine performance degradation   Order a copy of this article
    by Dionisio Machado Leite, Maycon Peixoto, Carlos Ferreira, Bruno Batista, Danilo Costa, Marcos Santana, Regina Santana 
    Abstract: This paper examines the interference effects on the performance of virtual machines running heavy workloads, with the aim of improving resource pricing in cloud computing. The objective is to produce an acceptable pay-as-you-go model for use by cloud computing providers. At present, pay-as-you-go pricing is based on the virtual machine used per unit of time. However, this scheme does not consider the interference caused by virtual machines running concurrently, which may cause performance degradation. In order to obtain a fair charging model, this paper proposes an approach that applies a rebate to the initial price based on virtual machine performance interference. Results show the benefits of a fair pay-as-you-go model that meets effective user requirements. This novel model contributes to cloud computing through a fair and transparent price composition.
    Keywords: cloud computing; pay-as-you-go; virtualisation; quality of service.
    DOI: 10.1504/IJCSE.2016.10011343
  • Designing scrubbing strategy for memories suffering MCUs through the selection of optimal interleaving distance   Order a copy of this article
    by Wei Zhou, Hong Zhang, Hui Wang, Yun Wang 
    Abstract: As technology scales, multiple cell upsets (MCUs) have become prominent, greatly affecting the reliability of memory. In order to mitigate MCU errors, interleaving schemes together with single error correction (SEC) codes can be used to provide the greatest protection; ideally, the interleaving distance (ID) should be chosen as the maximum expected MCU size. In this paper, we propose the use of scrubbing sequences to improve memory reliability. The key idea is to exploit the locality of the errors caused by an MCU to make scrubbing more efficient. Single error correction, double error detection, and double adjacent error correction (SEC-DED-DAEC) codes are also used. A procedure is presented to determine a scrubbing sequence that maximises reliability. A scrubbing strategy algorithm, which keeps the area overhead and complexity as low as possible without compromising memory reliability, is proposed for the optimal interleaving distance, which should be maximised under certain conditions. The approach is further applied to a case study, and results show a significant increase in the mean time to failure (MTTF) compared with traditional scrubbing.
    Keywords: interleaving distance; memory; multiple cell upsets (MCUs); soft error; reliability; scrubbing; radiation.
    DOI: 10.1504/IJCSE.2016.10004753
  • A model of mining approximate frequent itemsets using rough set theory   Order a copy of this article
    by Yu Xiaomei, Wang Hong, Zheng Xiangwei 
    Abstract: Datasets can be described by decision tables. In real-life applications, data are usually incomplete and uncertain, which poses big challenges for mining frequent itemsets in imprecise databases. This paper presents a novel model for mining approximate frequent itemsets using the theory of rough sets. With a transactional information system constructed on the dataset under consideration, a transactional decision table is put forward; lower and upper approximations of support are then available that can be easily computed from the indiscernibility relations. Finally, in a divide-and-conquer manner, the approximate frequent itemsets are discovered, taking into account the defined support-based accuracy and coverage. The evaluation of the novel model is conducted on both synthetic datasets and real-life applications. The experimental results demonstrate its usability and validity.
    Keywords: rough set theory; data mining; decision table; approximate frequent itemsets; indiscernibility relation.

  • Improved predicting algorithm of RNA pseudoknotted structure   Order a copy of this article
    by Zhendong Liu, Daming Zhu, Qionghai Dai 
    Abstract: The prediction of RNA structure with pseudoknots is an NP-hard problem. Based on minimum free energy models and computational methods, we investigate the RNA pseudoknotted structure. The paper presents an efficient algorithm for predicting RNA structure with pseudoknots; the algorithm takes O(n³) time and O(n²) space. Experimental tests on Rfam 10.1 and PseudoBase indicate that the algorithm is more effective and precise than previous methods, and it can predict arbitrary pseudoknots. Moreover, there exists a (1+ε) (ε>0) polynomial-time approximation scheme for searching for the maximum number of stackings, and we give the proof of the approximation scheme for RNA pseudoknotted structure.
    Keywords: RNA pseudoknotted structure; predicting algorithm; PTAS; pseudoknots; minimum free energy.
    DOI: 10.1504/IJCSE.2016.10010413
  • Reversible image watermarking based on texture analysis of grey level co-occurrence matrix   Order a copy of this article
    by Shu-zhi Li, Qin Hu, Xiao-hong Deng, Zhaoquan Cai 
    Abstract: Embedding a watermark in a complex area of an image can effectively improve concealment. However, most methods simply use the mean squared error (MSE) or other simple measures to judge texture complexity. In this paper, we propose a new texture analysis method based on the grey level co-occurrence matrix (GLCM) and provide an in-depth discussion of how to accurately choose a complex region. The new method is applied to reversible image watermarking. Firstly, the original host image is divided into 128 × 128 sub-blocks. Then, the mean squared error is used to assign weights to the four texture feature parameters, establishing the relationship between the characteristic parameters and the complexity of an image sub-block. Applying this formula, we calculate the complexity of each sub-block and select the sub-block with the maximum texture complexity. If the embedding positions are insufficient, we select the sub-block with the next highest complexity to embed the watermark, until a satisfactory embedding capacity is reached. Pairwise prediction error expansion (PPEE) is used to hide the data.
    Keywords: grey level co-occurrence matrix; image sub block; texture complexity; reversible image watermarking.
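The GLCM-based texture analysis described above can be illustrated with a small sketch: build the normalised co-occurrence matrix of a grey-level block, then compute texture statistics from it. The abstract mentions four texture feature parameters without naming them, so the four classic GLCM statistics used here (contrast, energy, homogeneity, entropy) are an assumption, as are the quantisation level and the horizontal offset.

```python
import math

def glcm(block, levels=8):
    """Normalised grey level co-occurrence matrix of a 2D grey image
    (list of rows, pixel values 0-255) for the horizontal offset (1, 0)."""
    q = [[min(v * levels // 256, levels - 1) for v in row] for row in block]
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in q:
        for a, b in zip(row, row[1:]):       # horizontally adjacent pairs
            m[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in m] if total else m

def texture_features(p):
    """Four classic GLCM statistics; flat blocks give low contrast/entropy,
    busy blocks give high values, which is the basis for picking complex
    sub-blocks to carry the watermark."""
    n = len(p)
    contrast = sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))
    energy = sum(c * c for row in p for c in row)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    entropy = -sum(c * math.log2(c) for row in p for c in row if c > 0)
    return contrast, energy, homogeneity, entropy
```

A watermarking pipeline would rank the 128 × 128 sub-blocks by a weighted combination of such features and embed in the highest-scoring blocks first.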

  • A semantic recommender algorithm for 3D model retrieval based on deep belief networks   Order a copy of this article
    by Li Chen, Hong Liu, Philip Moore 
    Abstract: Interest in 3D modelling is growing; however, the retrieval results achieved by semantic-based 3D model retrieval systems have been disappointing. In this paper we propose a novel semantic recommendation algorithm based on a Deep Belief Network (DBN-SRA) to implement semantic retrieval, with potential semantic correlations between models learned from known model samples using deep learning. The algorithm uses the feature correlation between models as the condition for semantic matching of 3D models to obtain the final recommended retrieval result. Our proposed approach has been shown to improve the effectiveness of 3D model retrieval, in terms of both retrieval time and, importantly, accuracy. Additionally, our study and reported results suggest that the posited approach will generalise to recommender systems in other domains that are characterised by multiple feature relationships.
    Keywords: deep belief network; 3D model retrieval; recommender algorithm; cluster analysis.

  • Differential evolution with spatially neighbourhood best search in dynamic environment   Order a copy of this article
    by Dingcai Shen, Longyin Zhu 
    Abstract: In recent years, there has been growing interest in applying differential evolution (DE) to optimisation problems in dynamic environments. Dynamic optimisation problems (DOPs) are concerned with the ability to track a changing optimum over time. In this study, an improved niching-based scheme for DOPs, named spatially neighbourhood best search DE (SnDE), is proposed. SnDE adopts the DE/best/1/bin scheme, but the best individual is searched for within a predefined neighbourhood around the individual under consideration, thus keeping a balance between exploitation and exploration. A comparative study with several algorithms with different characteristics on a common platform, using the moving peaks benchmark (MPB) and various problem settings, is presented in this paper. The results indicate that the proposed algorithm can effectively track the changing optimum in each circumstance on the selected benchmark function.
    Keywords: differential evolution; dynamic optimisation problem; neighbourhood search; niching.
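The neighbourhood-best mutation described above can be sketched as a variant of DE/best/1 in which "best" is taken from the k nearest neighbours of the target vector rather than the whole population. The neighbourhood size k and the Euclidean distance metric are assumptions here; the abstract does not fix them.

```python
import random

def sn_de_mutation(pop, fitness, i, k=5, F=0.5):
    """DE/best/1 mutation where the base vector is the best individual
    among the k spatially nearest neighbours of target i (a sketch of
    spatially neighbourhood best search; minimisation assumed)."""
    d2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    # indices of the k nearest neighbours of individual i (excluding i)
    neigh = sorted((j for j in range(len(pop)) if j != i),
                   key=lambda j: d2(pop[i], pop[j]))[:k]
    nbest = min(neigh, key=lambda j: fitness[j])   # neighbourhood best
    r1, r2 = random.sample([j for j in neigh if j != nbest], 2)
    return [pop[nbest][t] + F * (pop[r1][t] - pop[r2][t])
            for t in range(len(pop[i]))]
```

Binomial crossover and selection then proceed as in standard DE; restricting the base vector to a local neighbourhood is what preserves multiple niches in a dynamic landscape.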

  • Optimal anti-interception orbit design based on genetic algorithm   Order a copy of this article
    by Yifang Liu 
    Abstract: The space defence three-player problem with impulsive thrust is studied in this work. The interceptor spacecraft and anti-interceptor spacecraft each have only one chance to manoeuvre, while the target spacecraft simply keeps running in the target orbit without the ability to manoeuvre. Based on the Lambert theorem, the space defence three-player problem is modelled and divided into two layers. The internal layer is an interception problem in which the interceptor spacecraft tries to intercept the target spacecraft. The external layer is an anti-interception problem in which the anti-interceptor spacecraft tries to defend against the interceptor spacecraft. Because it can find the global solution and does not need the gradient information required by traditional optimisation methods, the genetic algorithm is employed to solve the resulting parameter optimisation problem in the interception/anti-interception problem. A numerical simulation is provided to verify the validity of the obtained solution, and the results show that this work is useful for practical applications.
    Keywords: space three-player problem; anti-interception orbit design; impulsive thrust; parameter optimisation problem; genetic algorithm.
    DOI: 10.1504/IJCSE.2016.10009742
  • Detecting sparse rating spammer for accurate ranking of online recommendation   Order a copy of this article
    by Hong Wang, Xiaomei Yu, Yuanjie Zheng 
    Abstract: Ranking for online recommendation systems is challenging owing to rating sparsity and spam rating attacks. The former causes the well-known cold start problem, while the latter complicates the recommendation task by requiring the detection of unreasonable or biased ratings. In this paper, we treat spam ratings as 'corruptions' that are spatially distributed in a sparse pattern, and model them with an L1 norm and an L2,1 norm. We show that these models can characterise the properties of the original ratings by removing spam ratings and help to resolve the cold start problem. Furthermore, we propose a group reputation-based method to re-weight the rating matrix and an iterative programming-based technique for optimising the ranking for online recommendation. We show that our optimisation methods outperform other recommendation approaches. Experimental results on four well-known datasets show the superior performance of our methods.
    Keywords: ranking; group-based reputation; sparsity; spam rating; collaborative recommendation.

  • Differential evolution with dynamic neighborhood learning strategy based mutation operators   Order a copy of this article
    by Guo Sun, Yiqiao Cai 
    Abstract: As the core operator of differential evolution (DE), mutation is crucial for guiding the search. However, in most DE algorithms, the parents in the mutation operator are randomly selected from the current population, which may cause DE to be slow to exploit solutions when faced with complex problems. In this study, a dynamic neighborhood learning (DNL) strategy is proposed for DE to alleviate this drawback. The new DE framework is named DE with DNL-based mutation operators (DNL-DE). Unlike the original DE algorithms, DNL-DE uses DNL to dynamically construct a neighborhood for each individual during the evolutionary process and intelligently selects parents for mutation from the defined neighborhood. In this way, neighborhood information can be effectively used to improve the performance of DE. Furthermore, two instantiations of DNL-DE with different parent selection methods are presented. To evaluate its effectiveness, the proposed strategy is applied to the original DE algorithms as well as several advanced DE variants. The experimental results demonstrate the high performance of DNL-DE when compared with other DE algorithms.
    Keywords: differential evolution; dynamic neighborhood; learning strategy; mutation operator; numerical optimisation.
    DOI: 10.1504/IJCSE.2016.10005940
  • A word-frequency-preserving steganographic method based on synonym substitution   Order a copy of this article
    by Lingyun Xiang, Xiao Yang, Jiahe Zhang, Weizheng Wang 
    Abstract: Text steganography is a widely used technique to protect communication privacy, but it still faces a variety of challenges. One of these challenges is that a synonym substitution based method may change the statistical characteristics of the content, which can be easily detected by steganalysis. To overcome this disadvantage, this paper proposes a synonym substitution based steganographic method that takes word frequency into account. The method dynamically divides the synonyms appearing in the text into groups, and substitutes some synonyms to alter the positions of the relatively low-frequency synonyms in each group so as to encode the secret information. By maintaining the number of relatively low-frequency synonyms across the substitutions, it preserves the characteristics of synonyms of various frequencies between the stego text and the original cover text. The experimental results illustrate that the proposed method can effectively resist detection attacks based on relative frequency analysis of synonyms.
    Keywords: synonym substitution; steganography; word-frequency-preserving; multiple-base coding; steganalysis.
    DOI: 10.1504/IJCSE.2016.10012293
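The core idea of synonym-substitution steganography can be shown with a toy sketch: each occurrence of a word from a synonym group carries one bit, encoded here as the parity of the chosen word's index within its group. This is only an illustration of the general mechanism; the paper's frequency-preserving grouping and multiple-base coding are richer than this, and the group/parity scheme below is our assumption.

```python
def embed_bits(words, syn_groups, bits):
    """Toy synonym-substitution embedding: within each synonym group the
    chosen word's index parity encodes one bit (even index = 0, odd = 1).
    Words not in any group, and synonyms after the bits run out, pass
    through unchanged."""
    lookup = {w: g for g in syn_groups for w in g}
    out, it = [], iter(bits)
    for w in words:
        g = lookup.get(w)
        b = next(it, None) if g else None
        if b is None:
            out.append(w)
        else:
            out.append(next(s for j, s in enumerate(g) if j % 2 == b))
    return out

def extract_bits(words, syn_groups):
    """Recover the embedded bits from the index parity of each synonym."""
    index = {w: j for g in syn_groups for j, w in enumerate(g)}
    return [index[w] % 2 for w in words if w in index]
```

A frequency-preserving variant would additionally constrain which substitutions are allowed so that the count of low-frequency synonyms in the stego text matches the cover text.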
  • A personalised ontology ranking model based on analytic hierarchy process   Order a copy of this article
    by Jianghua Li, Chen Qiu 
    Abstract: Ontology ranking is one of the important functions of ontology search engines, which order searched ontologies according to the ranking model applied. A good ranking method can help users efficiently acquire exactly the required ontology from a considerable number of search results. Existing approaches to ranking ontologies take only a single aspect into consideration and ignore users' personalised demands, hence producing unsatisfactory results. We believe that both the factors that influence ontology importance and users' demands need to be considered comprehensively in ontology ranking. A personalised ontology ranking model based on the hierarchical analysis approach is proposed in this paper. We build a hierarchically analytical model and apply the analytic hierarchy process to quantify ranking indexes and assign weights to them. The experimental results show that the proposed method can rank ontologies effectively and meet users' personalised demands.
    Keywords: hierarchical analysis approach; ontology ranking; personalised demands; weights assignment.
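The weight-assignment step of the analytic hierarchy process mentioned above can be sketched as follows: given a pairwise comparison matrix over the ranking indexes, the index weights are the (normalised) principal eigenvector, obtainable by power iteration. The specific ranking indexes are not listed in the abstract, so the example matrix is hypothetical.

```python
def ahp_weights(M, iters=100):
    """Derive criteria weights from an AHP pairwise comparison matrix M
    (M[i][j] = how much more important index i is than index j) by power
    iteration on the principal eigenvector, one standard AHP method."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]               # normalise so weights sum to 1
    return w
```

In the ranking model, each ontology's score would then be the weighted sum of its values on the quantified indexes, with user-supplied comparisons personalising the matrix M.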

  • Collective intelligence value discovery based on citation of science article   Order a copy of this article
    by Yi Zhao, Zhao Li, Bitao Li, Keqing He, Junfei Guo 
    Abstract: Citation recommendation is one of the tasks of scientific paper writing. As the number of references increases, without clear classification the similarity measures of recommendation systems show poor performance. In this work, we propose a novel recommendation research approach that integrates classification, clustering and recommendation models into one system. In an evaluation on the ACL Anthology paper network data, we effectively use the node degrees of a complex network of knowledge trees (referring to the number of papers) to enhance the accuracy of recommendation. The experimental results show that our model generates better recommended citations, achieving 10% higher accuracy and an 8% higher F-score than the keyword match method when the data is large enough. We make full use of collective intelligence to serve the public.
    Keywords: citation recommendation; classification; clustering; similarity; citation network.

  • Differential evolution with k-nearest-neighbour-based mutation operator   Order a copy of this article
    by Gang Liu, Cong Wu 
    Abstract: Differential evolution (DE) is one of the most powerful global numerical optimisation algorithms in the evolutionary algorithm family, and it is popular for its simplicity and effectiveness in solving numerous real-world optimisation problems in real-valued spaces. The performance of DE depends on its mutation strategy. However, traditional mutation operators have difficulty balancing exploration and exploitation. To address this issue, in this paper a k-nearest-neighbour-based mutation operator is proposed to improve the search ability of DE. The operator searches in areas where the vector density distribution is sparse. This method enhances the exploitation of DE and accelerates the convergence of the algorithm. To evaluate the effectiveness of the proposed mutation operator, this paper compares the proposed algorithm with other state-of-the-art evolutionary algorithms. Experimental verifications are conducted on the CEC05 competition benchmarks and two real-world problems. Experimental results indicate that the proposed mutation operator is able to enhance the performance of DE and can perform significantly better than, or at least comparably with, several state-of-the-art DE variants.
    Keywords: differential evolution; unilateral sort; k-nearest-neighbour-based mutation; global optimisation.

  • Simultaneous multiple low-dimensional subspace dimensionality reduction and classification   Order a copy of this article
    by Lijun Dou, Rui Yan, Qiaolin Ye 
    Abstract: Fisher linear discriminant (FLD) for supervised learning has recently emerged as a computationally powerful tool for extracting features for a variety of pattern classification problems. However, it works poorly with multimodal data. Local Fisher linear discriminant (LFLD) has been proposed to reduce the dimensionality of multimodal data. Through experiments on multimodal but binary datasets created from several multi-class datasets, it has been shown to outperform FLD. However, LFLD has a serious limitation: it is restricted to small-scale datasets. To address these disadvantages, in this paper we develop a multiple low-dimensional subspace dimensionality reduction technique (MSDR) for performing dimensionality reduction (DR) of input data. In contrast to FLD and LFLD, which find a single optimal low-dimensional subspace, the new algorithm attempts to seek multiple optimal low-dimensional subspaces that best make the data sharing the same labels more compact. Inheriting the advantages of NC, MSDR reduces the dimensionality of data and directly performs classification tasks without the need to train a model. Experiments comparing MSDR with existing traditional approaches on UCI datasets show the effectiveness and efficiency of MSDR.
    Keywords: Fisher linear discriminant; local FLD; dimensionality reduction; multiple low-dimensional subspaces.
    DOI: 10.1504/IJCSE.2017.10016015
  • Using Gaussian mixture model to fix errors in SFS approach based on propagation   Order a copy of this article
    by Huang WenMin 
    Abstract: A new Gaussian mixture model is used to improve the quality of the propagation method for shape from shading (SFS) in this paper. The improved algorithm can overcome most difficulties of the method, including slow convergence, interdependence of propagation nodes and error accumulation. To address slow convergence and the interdependence of propagation nodes, a stable propagation source and integration path are used to ensure that the reconstruction of each pixel in the image is independent. A Gaussian mixture model based on prior conditions is proposed to correct the integration error. Good results have been achieved in experiments on Lambertian composite images under frontal illumination.
    Keywords: shape from shading; propagation method; silhouette; Gaussian mixture model; surface reconstruction.

  • Sign fusion of multiple QPNs based on qualitative mutual information   Order a copy of this article
    by Yali Lv, Jiye Liang, Yuhua Qian 
    Abstract: In the era of big data, the fusion of uncertain information from different data sources is a crucial issue in various applications. In this paper, a sign fusion method for multiple Qualitative Probabilistic Networks (QPNs) with the same structure but built from different data sources is proposed. Specifically, the definition of a parallel path in multiple QPNs is first given and the problem of fusion ambiguity is described. Secondly, the fusion operator theorem is introduced in detail, including its proof and algebraic properties. Further, an efficient sign fusion algorithm is proposed. Finally, experimental results demonstrate that our fusion algorithm is feasible and efficient.
    Keywords: qualitative probabilistic reasoning; QPNs; Bayesian networks; sign fusion; qualitative mutual information.
    DOI: 10.1504/IJCSE.2017.10012592
  • Estimation of distribution algorithms based on increment clustering for multiple optima in dynamic environments   Order a copy of this article
    by Bolin Yu 
    Abstract: Aiming to locate and track multiple optima in dynamic multimodal environments, an estimation of distribution algorithm based on incremental clustering is proposed. The main idea of the proposed algorithm is to construct several probability models based on incremental clustering, which improves the ability to locate multiple local optima and helps to find the global optimal solution quickly for dynamic multimodal problems. Meanwhile, a diffusion search policy is introduced to enhance the diversity of the population in a guided fashion when the environment changes. The policy uses both the current population information and part of the historical information of the optimal solutions available. Experimental studies on the moving peaks benchmark are carried out to evaluate the performance of the proposed algorithm in comparison with several state-of-the-art algorithms from the literature. The results show that the proposed algorithm is effective for functions with moving optima and can adapt to dynamic environments rapidly.
    Keywords: EDAs; dynamic multimodal problems; diffusion policy; incremental clustering.
    DOI: 10.1504/IJCSE.2017.10010004
  • A blind image watermarking algorithm based on amalgamation domain method   Order a copy of this article
    by Qingtang Su 
    Abstract: Combining the spatial domain and the frequency domain, a novel blind digital image watermarking algorithm is proposed in this paper to address the problem of copyright protection. To embed the watermark, the generation principle and distribution features of the direct current (DC) coefficient are used to directly modify the pixel values in the spatial domain; four different sub-watermarks are then embedded four times into different areas of the host image. When extracting the watermark, the sub-watermarks are extracted in a blind manner according to the DC coefficients of the watermarked image and the key-based quantisation step, and then a statistical rule of 'first to select, second to combine' is proposed to form the final watermark. Hence, the proposed algorithm not only has the simplicity and speed of the spatial domain but also the high robustness of the DCT domain. Experimental results have shown that the proposed watermarking algorithm has good watermark invisibility and strong robustness against many attacks, e.g., JPEG compression, cropping and added noise. Comparison results also show the superiority of the proposed algorithm.
    Keywords: information security; digital watermarking; combine domain; direct current.

  • A data cleaning method for heterogeneous attribute fusion and record linkage   Order a copy of this article
    by Huijuan Zhu, Tonghai Jiang, Yi Wang, Li Cheng, Bo Ma, Fan Zhao 
    Abstract: In the big data era, when massive heterogeneous data are generated from various data sources, the cleaning of dirty data is critical for reliable data analysis. Existing rule-based methods are generally developed in single-data-source environments, so issues such as data standardisation and duplicate detection for attributes of different data types are not fully studied. To address these challenges, we introduce a method based on dynamically configurable rules that integrates data detection, modification and transformation. Secondly, we propose a type-based blocking and varying-window-size selection mechanism based on the classic sorted-neighbourhood algorithm. We present a reference implementation of our method in a real-life data fusion system and validate its effectiveness and efficiency using recall and precision metrics. Experimental results indicate that our method is suitable in scenarios with multiple data sources and heterogeneous attribute properties.
    Keywords: big data; varying window; data cleaning; record linkage; record similarity; SNM; type-based blocking.
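The classic sorted-neighbourhood method (SNM) that the abstract builds on can be sketched as: sort records by a blocking key, then compare only records that fall inside a sliding window. The fixed window below is the textbook form; the paper's type-based blocking and varying window size extend it. The key and similarity functions here are illustrative assumptions.

```python
def sorted_neighbourhood(records, key, similar, window=3):
    """Classic SNM for duplicate detection: sort record indices by a
    blocking key, slide a fixed-size window over the sorted order, and
    compare each record only with the others in its window. Returns the
    matched index pairs."""
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    pairs = []
    for pos, i in enumerate(order):
        # compare with the previous window-1 records in sorted order
        for j in order[max(0, pos - window + 1):pos]:
            if similar(records[i], records[j]):
                pairs.append(tuple(sorted((i, j))))
    return pairs
```

A varying-window variant would grow or shrink `window` at each position based on how similar adjacent keys are, trading comparisons for recall.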

  • Chinese question speech recognition integrated with domain characteristics   Order a copy of this article
    by Shengxiang Gao, Dewei Kong, Zhengtao Yu, Jianyi Guo, Yantuan Xian 
    Abstract: Aiming at domain adaptation in speech recognition, we propose a speech recognition method for Chinese question sentences based on domain characteristics. Firstly, by virtue of the syllable association characteristics implied in domain terms, the syllable feature sequences of domain terms are used to construct the domain acoustic model. Secondly, in the decoding process of domain-specific Chinese question speech recognition, we use domain knowledge relationships to optimise and prune the speech decoding network generated by the language model, so as to improve continuous speech recognition. Experiments on a tourism domain corpus show that the proposed method achieves an accuracy of 80.50% on Chinese question speech recognition and 91.50% on domain term recognition.
    Keywords: Chinese question speech recognition; speech recognition; domain characteristic; acoustic model library; domain terms; language model; domain knowledge library.
    DOI: 10.1504/IJCSE.2017.10008632
  • Original image tracing with image relational graph for near-duplicate image elimination   Order a copy of this article
    by Fang Huang, Zhili Zhou, Ching-Nung Yang, Xiya Liu 
    Abstract: This paper proposes a novel method for near-duplicate image elimination that traces the original image of each near-duplicate image cluster. For this purpose, image clustering based on the combination of global and local features is first performed in a coarse-to-fine manner. To accurately eliminate redundant images in each cluster, an image relational graph is constructed to reflect the contextual relationships between images, and the PageRank algorithm is adopted to analyse these relationships. The original image is then correctly traced as the image with the highest rank, while the other redundant near-duplicate images in the cluster are eliminated. Experiments show that our method achieves better performance in both image clustering and redundancy elimination, compared with state-of-the-art methods.
    Keywords: near-duplicate image clustering; near-duplicate image elimination; image retrieval; image search; near-duplicate image retrieval; partial-duplicate image retrieval; image copy detection; local feature; contextual relationship.
    DOI: 10.1504/IJCSE.2017.10011852
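The ranking step described above can be sketched with a generic power-iteration PageRank over a small cluster graph. This is an illustrative sketch only: the graph, edge weights and image names below are invented, not the authors' data or implementation.

```python
# Illustrative PageRank over a near-duplicate image cluster graph.
# Nodes are image IDs; edge weights encode contextual similarity.

def pagerank(graph, damping=0.85, iters=50):
    """graph: {node: {neighbour: weight}} adjacency with outgoing weights."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = 0.0
            for m in nodes:
                w = graph[m].get(n, 0.0)
                total = sum(graph[m].values())
                if total > 0:
                    incoming += rank[m] * w / total
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new
    return rank

# The image with the highest rank is kept as the "original";
# the rest of the cluster is eliminated as redundant.
cluster = {
    'img_a': {'img_b': 1.0, 'img_c': 1.0},
    'img_b': {'img_a': 1.0},
    'img_c': {'img_a': 1.0},
}
ranks = pagerank(cluster)
original = max(ranks, key=ranks.get)
```

Here `img_a` accumulates the full rank of both neighbours, so it is traced as the cluster original.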
  • IFOA: an improved forest algorithm for continuous nonlinear optimisation   Order a copy of this article
    by Borong Ma, Zhixin Ma, Dagan Nie, Xianbo Li 
    Abstract: The Forest Optimisation Algorithm (FOA) is a new evolutionary optimisation algorithm inspired by the seed dispersal procedure in forests, and is suitable for continuous nonlinear optimisation problems. In this paper, an Improved Forest Optimisation Algorithm (IFOA) is introduced to improve the convergence speed and accuracy of the FOA, and four improvement strategies, including the greedy strategy, waveform step, preferential treatment of the best tree and new-type global seeding, are proposed to solve continuous nonlinear optimisation problems better. The capability of IFOA has been investigated through several experiments on well-known test problems, and the results show that IFOA is able to perform global optimisation effectively, with high accuracy and convergence speed.
    Keywords: forest optimisation algorithm; evolutionary algorithm; continuous nonlinear optimisation; scientific decision-making.

  • A location-aware matrix factorisation approach for collaborative web service QoS prediction   Order a copy of this article
    by Zhen Chen, Limin Shen, Dianlong You, Chuan Ma, Feng Li 
    Abstract: Predicting unknown QoS values is often required because most users will have invoked only a small fraction of the available web services. Previous prediction methods benefit from mining neighbourhood interest from explicit user QoS ratings. However, the implicit but significant location information that could potentially tackle the data sparsity problem is overlooked. In this paper, we propose a unified matrix factorisation model that fully capitalises on the advantages of both the location-aware neighbourhood and the latent factor approach. We first develop a multiview-based neighbourhood selection method that clusters neighbours from the views of both geographical distance and rating similarity. Then a personalised prediction model is built by incorporating the wisdom of the neighbourhoods. Experimental results demonstrate that our method achieves higher prediction accuracy than other competitive approaches and also better alleviates the data sparsity issue.
    Keywords: service computing; web service; QoS prediction; matrix factorisation; location awareness.
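The latent factor side of such a model can be illustrated with a minimal matrix factorisation trained by stochastic gradient descent. This sketch omits the location-aware neighbourhood component entirely, and all data, dimensions and hyperparameters below are invented for illustration:

```python
# Minimal latent-factor sketch for QoS prediction: factorise a sparse
# user x service QoS matrix by SGD (not the authors' full model).
import random

def train_mf(observed, n_users, n_services, k=2, lr=0.05, reg=0.02, epochs=500):
    """observed: list of (user, service, qos) triples."""
    random.seed(0)
    U = [[random.random() * 0.1 for _ in range(k)] for _ in range(n_users)]
    S = [[random.random() * 0.1 for _ in range(k)] for _ in range(n_services)]
    for _ in range(epochs):
        for u, s, q in observed:
            pred = sum(U[u][f] * S[s][f] for f in range(k))
            err = q - pred
            for f in range(k):
                uf, sf = U[u][f], S[s][f]
                U[u][f] += lr * (err * sf - reg * uf)
                S[s][f] += lr * (err * uf - reg * sf)
    return U, S

def predict(U, S, u, s):
    return sum(U[u][f] * S[s][f] for f in range(len(U[u])))

# Tiny example: two users with similar QoS patterns over two services.
data = [(0, 0, 1.0), (0, 1, 0.5), (1, 0, 1.0)]
U, S = train_mf(data, n_users=2, n_services=2)
# The missing entry (user 1, service 1) is predicted from learned factors.
qos_hat = predict(U, S, 1, 1)
```

The location-aware neighbourhood in the paper would additionally constrain which users' factors inform each other; here the factors are learned from ratings alone.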

  • Pairing-free certificateless signature with revocation   Order a copy of this article
    by Sun Yinxia, Shen Limin 
    Abstract: How to revoke a user is an important problem in public key cryptosystems. Free of costly certificate management and key escrow, certificateless public key cryptography (CLPKC) is advantageous over the traditional public key system and the identity-based public key system. However, there are few solutions to the revocation problem in CLPKC. In this paper, we present an efficient revocable certificateless signature scheme. The new scheme can revoke a user with high efficiency. We also give a method to make the scheme resilient to signing-key exposure. Based on the discrete logarithm problem, our scheme is provably secure.
    Keywords: revocation; certificateless signature; without pairing; discrete logarithm problem.
    DOI: 10.1504/IJCSE.2017.10009399
  • Large universe multi-authority attribute-based PHR sharing with user revocation   Order a copy of this article
    by Enting Dong, Jianfeng Wang, Zhenhua Liu, Hua Ma 
    Abstract: In the patient-centric model of health information exchange, personal health records (PHRs) are often outsourced to third parties, such as cloud service providers (CSPs). Attribute-based encryption (ABE) can be used to realise flexible access control on PHRs in the cloud environment. Nevertheless, the issues of scalability in key management, user revocation and flexible attributes remain to be addressed. In this paper, we propose a large-universe multi-authority ciphertext-policy ABE system with user revocation. The proposed scheme achieves scalable and fine-grained access control on PHRs. In our scheme, there are a central authority (CA) and multiple attribute authorities (AAs). When a user is revoked, the system public key and the other users' secret keys need not be updated. Furthermore, because our scheme supports a large attribute universe, the number of attributes is not polynomially bounded and the public parameter size does not linearly grow with the number of attributes. Our system is constructed on prime order groups and proven selectively secure in the standard model.
    Keywords: attribute-based encryption; large universe; multi-authority; personal health record; user revocation.

  • A multi-objective optimisation multicast routing algorithm with diversity rate in cognitive wireless mesh networks   Order a copy of this article
    by Zhufang Kuang 
    Abstract: Cognitive Wireless Mesh Networks (CWMNs) were developed to improve the usage ratio of the licensed spectrum. Since the spectrum opportunities for users vary over time and location, enhancing spectrum effectiveness is both a goal and a challenge for CWMNs. Multimedia applications have recently generated much interest in CWMNs supporting quality-of-service (QoS) communications. Multicast routing and spectrum allocation are an important challenge in CWMNs. In this paper, we design an effective multicast routing algorithm based on diversity rate with respect to load balancing and the number of transmissions for CWMNs. A load-balancing wireless link weight computing function and algorithm based on Diversity Rate (LBDR) are proposed, as well as a load-balancing Channel and Rate Allocating algorithm based on Diversity Rate (CRADR). On this basis, a Load-balancing joint Multicast Routing, channel and Rate allocation algorithm based on Diversity rate with QoS constraints for CWMNs (LMR2D) is proposed. Balancing the load of nodes and channels and minimising the number of transmissions of the multicast tree are the objectives of LMR2D. Firstly, LMR2D computes the weights of wireless links using LBDR and the Dijkstra algorithm to construct the load-balancing multicast tree step by step. Secondly, LMR2D uses CRADR to allocate channels and rates to the links of the tree, based on the Wireless Broadcast Advantage (WBA). Simulation results show that LMR2D achieves the expected goals: not only can it balance the load of nodes and channels, but it also needs fewer transmissions for the multicast tree.
    Keywords: cognitive wireless mesh networks; multicast routing; spectrum allocation; load balanced; diversity rate.
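The shortest-path step that LMR2D builds on is a standard Dijkstra run over the weighted links produced by the weight function. A minimal sketch (the mesh topology and link weights below are invented, and the load-balancing weight computation itself is not shown):

```python
# Dijkstra over weighted wireless links, as used when growing a multicast
# tree from shortest paths. graph: {node: [(neighbour, weight), ...]}.
import heapq

def dijkstra(graph, src):
    """Return the map of shortest distances from src."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

mesh = {'s': [('a', 1.0), ('b', 4.0)],
        'a': [('b', 1.0), ('t', 5.0)],
        'b': [('t', 1.0)]}
dist = dijkstra(mesh, 's')   # s->a->b->t has total weight 3.0
```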

  • Context discriminative dictionary construction for topic representation   Order a copy of this article
    by Shufang Wu 
    Abstract: The construction of a discriminative topic dictionary is important for describing a topic and increasing the accuracy of topic detection and tracking. In the proposed method, we rank words by mutual information, and the top few words with the maximum mutual information are selected to construct the discriminative topic dictionaries. Since context words can provide a more accurate expression of the topic, during word selection we consider both the differences between topics and the context words that appear in the stories. Because a news topic is dynamic over time, it is not reasonable to keep the topic dictionary unchanged, so a dictionary updating method is also proposed. Experiments were carried out on the TDT4 corpus, and we adopt miss probability and false alarm probability as evaluation criteria to compare the performance of incremental TF-IDF and the proposed method. Extensive experiments show that our method provides better results.
    Keywords: discriminative dictionary; context word; topic representation; word selection.
    DOI: 10.1504/IJCSE.2017.10011825
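The word-selection idea can be sketched as scoring each vocabulary word by its mutual information with the topic label and keeping the top k. This is a generic MI estimate over binary word presence, with invented toy documents; the paper's context-word weighting is not reproduced:

```python
# Select topic dictionary words by mutual information with the topic label.
import math

def mutual_information(docs, labels, word):
    """I(word; topic) estimated from document counts (binary word presence)."""
    n = len(docs)
    mi = 0.0
    for w_val in (True, False):
        for t in set(labels):
            n_wt = sum(1 for d, l in zip(docs, labels)
                       if (word in d) == w_val and l == t)
            n_w = sum(1 for d in docs if (word in d) == w_val)
            n_t = sum(1 for l in labels if l == t)
            if n_wt:
                mi += (n_wt / n) * math.log((n_wt * n) / (n_w * n_t))
    return mi

def build_dictionary(docs, labels, k):
    vocab = set().union(*docs)
    scored = sorted(vocab, key=lambda w: mutual_information(docs, labels, w),
                    reverse=True)
    return scored[:k]

docs = [{'earthquake', 'rescue'}, {'earthquake', 'aid'},
        {'election', 'vote'}, {'election', 'poll'}]
labels = ['quake', 'quake', 'politics', 'politics']
topic_dict = build_dictionary(docs, labels, k=2)
```

Words that occur in exactly one topic's documents ('earthquake', 'election') score highest and form the discriminative dictionary.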
  • Demystifying echo state network with deterministic simple topologies   Order a copy of this article
    by Duaa Elsarraj, Maha Al Qisi, Ali Rodan, Nadim Obeid, Ahmad Sharieh, Hossam Faris 
    Abstract: Echo State Networks (ESN) are a special type of Recurrent Neural Network (RNN) with distinct performance in the field of reservoir computing. The state space of the ESN is initially randomised and the reservoir weights are fixed, with training done only on the state readout. Besides the advantages of ESN, some opacity remains in the dynamic properties of the reservoir owing to the presence of randomisation. Our aims in this paper are to demystify the ESN model with a completely deterministic structure, using different proposed reservoir structures (topologies), and to compare their performance with the random ESN on different benchmark datasets. All applied topologies maintain the computational simplicity of the random ESN. Most of the topologies showed comparable or even better performance.
    Keywords: echo state network; reservoir computing; reservoir structure topology; memory capacity; echo state network algorithm; complexity.

  • A state space distribution approach based on system behaviour   Order a copy of this article
    by Imene Bensetira, Djamel Eddine Saidouni, Mahfud Al-la Alamin 
    Abstract: In this paper, we propose a novel approach to deal with the state space explosion problem occurring in model checking. We propose an off-line algorithm for distributed state space construction, carried out by reviewing the behaviour of the constructed system and redistributing the state space according to the accumulated information about the optimal considered behaviour. The distribution is therefore guided by the system's behaviour. The proposed policy maintains the spatial-time balance. The simulation and implementation of our system are based on a multi-agent technique, which fits the development of distributed systems very well. Experimental measures performed on a cluster of machines have shown very promising results for both workload balance and communication overhead.
    Keywords: model checking; combinatorial state space explosion; distributed state space construction; graph distribution; system behaviour; distributed algorithms; reachability analysis.

  • Consensus RNA secondary structure prediction using information of neighbouring columns and principal component analysis   Order a copy of this article
    by Tianhang Liu, Jianping Yin, Long Gao, Wei Chen, Minghui Qiu 
    Abstract: RNA is a family of biological macromolecules important to all kinds of biological processes. RNA structures are closely related to their functions; hence, determining the structure is invaluable in understanding genetic diseases and creating drugs. RNA secondary structure prediction remains an active research field. In this paper, we present a novel method that uses an RNA sequence alignment to predict a consensus RNA secondary structure. In essence, the goal of the method is to predict whether any two columns of an alignment correspond to a base pair, using the information provided by the alignment. This information includes the covariation score, the fraction of complementary nucleotides and the consensus probability matrix of the column pair and those of its neighbours. Principal component analysis is then applied to overcome the problem of over-fitting. A comparison of our method with other consensus RNA secondary structure prediction methods, including NeCFold, ELMFold, KnetFold, PFold and RNAalifold, on 47 families from Rfam (version 11.0) is performed. Results show that our method surpasses the other methods in terms of Matthews correlation coefficient, sensitivity and selectivity.
    Keywords: RNA secondary structure prediction; comparative sequence analysis; principal component analysis; information of neighbouring columns.

  • Research on RSA and Hill hybrid encryption algorithm   Order a copy of this article
    by Hongyu Yang, Yuguang Ning, Yue Wang 
    Abstract: An RSA-Hill hybrid encryption algorithm model based on random division of plaintext is proposed. First, the key of the Hill cipher is replaced by a Pascal matrix. Secondly, the session key of the model is replaced by random numbers of plaintext division, and encrypted by the RSA cipher. Finally, the dummy problem in the Hill cipher can be solved, and the model can achieve the one-time pad. Security analysis and experimental results show that our method has better encryption efficiency and stronger anti-attack capacity.
    Keywords: hybrid encryption; plaintext division; Pascal matrix; RSA cipher; Hill cipher.
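The Hill-cipher layer with a Pascal-matrix key can be sketched concretely: a lower-triangular Pascal matrix has determinant 1, so it is invertible modulo 26 and its inverse has a closed form. The sketch below shows only this layer with invented plaintext codes; the RSA encryption of the random plaintext-division session key is omitted:

```python
# Hill-cipher step with a Pascal-matrix key (determinant 1, hence always
# invertible mod 26). The RSA layer of the hybrid scheme is not shown.
from math import comb

def pascal_matrix(n):
    """Lower-triangular Pascal matrix: P[i][j] = C(i, j)."""
    return [[comb(i, j) for j in range(n)] for i in range(n)]

def pascal_inverse(n):
    """Exact inverse: entries (-1)**(i+j) * C(i, j)."""
    return [[(-1) ** (i + j) * comb(i, j) for j in range(n)] for i in range(n)]

def matvec_mod(M, v, mod=26):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % mod
            for i in range(len(v))]

n = 3
P, P_inv = pascal_matrix(n), pascal_inverse(n)
plain = [7, 8, 11]                    # letters encoded as 0-25
cipher = matvec_mod(P, plain)         # encrypt one block
recovered = matvec_mod(P_inv, cipher) # decrypt with the exact inverse
```

Because det(P) = 1, decryption needs no modular-inverse-of-determinant computation, which is what makes the Pascal matrix a convenient Hill key.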

  • An auction mechanism for cloud resource allocation with time-discounting values   Order a copy of this article
    by Yonglong Zhang 
    Abstract: Group-buying has emerged as a new trading paradigm and has become increasingly attractive. Both sides of the transaction benefit from group-buying: buyers enjoy a lower price and sellers receive larger orders. In this paper, we investigate an auction mechanism for cloud resource allocation with time-discounting values via group-buying, called TDVG. TDVG consists of two steps: winning seller and buyer selection, and pricing. In the first step, we choose the winning sellers and buyers in a greedy manner according to a given criterion; in the second step, we calculate the payment for each winning seller and buyer. Rigorous proof demonstrates that TDVG satisfies the properties of truthfulness, budget balance and individual rationality. Our experimental results show that TDVG achieves better total utility, matching rate and commodity utilisation than existing works.
    Keywords: cloud resource allocation; auction; time discounting values; group-buying.
    DOI: 10.1504/IJCSE.2017.10012967
  • Study on data sparsity in social network-based recommender system   Order a copy of this article
    by Ru Jia, Ru Li, Meng Gao 
    Abstract: With the development of information technology and the expansion of information resources, it is increasingly difficult for people to find the information that they are really interested in, the so-called information overload problem. Recommender systems are regarded as an important approach to dealing with information overload, because they can predict users' preferences according to users' records. Matrix factorisation is very successful in recommender systems, but it faces the problem of data sparsity. This paper deals with the sparsity problem from the perspective of adding more kinds of information from social networks, such as friendships and tags, into the recommendation model in order to alleviate the sparsity problem. The paper also validates the impact of users' friendships, tags and neighbours of items on reducing the sparseness of the data and improving recommendation accuracy, through experiments on a real-life dataset.
    Keywords: social network-based recommender systems; matrix factorisation; data sparsity.
    DOI: 10.1504/IJCSE.2017.10012119
  • A novel virtual disk bandwidth allocation framework for data-intensive applications in cloud environments   Order a copy of this article
    by Peng Xiao, Changsong Liu 
    Abstract: Recently, cloud computing has become a promising distributed processing paradigm for deploying various kinds of non-trivial applications. Most of these applications are data-intensive and therefore require the cloud system to provide massive storage space as well as desirable I/O performance. As a result, virtual disk techniques have been widely applied in many real-world platforms to meet the requirements of these applications. Therefore, how to efficiently allocate virtual disk bandwidth becomes an important issue that needs to be addressed. In this paper, we present a novel virtual disk bandwidth allocation framework, in which a set of virtual bandwidth brokers make allocation decisions by playing two game models. Theoretical analysis and solutions are presented to prove the effectiveness of the proposed game models. Extensive experiments are conducted on a real-world cloud platform, and the results indicate that the proposed framework can significantly improve the utilisation of virtual disk bandwidth compared with other existing approaches.
    Keywords: cloud computing; bandwidth reservation; quality of service; queue model; gaming theory.

  • Academic research trend analysis based on big data technology   Order a copy of this article
    by Weiwei Lin, Zilong Zhang, Shaoliang Peng 
    Abstract: Big data technology can well support the analysis of academic research trends, which requires the ability to process an enormous amount of metadata efficiently. On this point, we propose an academic trend analysis method that exploits a popular topic model for paper feature extraction and an influence propagation model for field influence evaluation. We also propose a parallel association rule mining algorithm based on Spark to accelerate trend analysis process. Experimentally, a vast amount of paper metadata was collected from four popular digital libraries: ACM, IEEE, Science Direct and Springer, serving as the raw data for our final feature dataset. Focusing on the hotspot of cloud computing, our result demonstrates that the most relevant topics to cloud computing have been changing these years from basic research to applied research, and from a microscopic point of view, the development of cloud computing related fields presents a certain periodicity.
    Keywords: big data; association rule mining; Spark; Apriori; technology convergence.
    DOI: 10.1504/IJCSE.2017.10016151
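The association-rule step rests on frequent itemset mining; a compact sequential Apriori sketch conveys the idea (the paper parallelises this on Spark, which is not shown, and the transactions below are invented):

```python
# Apriori frequent itemset mining: keep itemsets whose support (fraction of
# transactions containing them) meets min_support, level by level.
from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    frequent = {}
    k_sets = [frozenset([i]) for i in items]
    while k_sets:
        current = {s: support(s) for s in k_sets if support(s) >= min_support}
        frequent.update(current)
        # Candidate generation: unions of frequent k-itemsets of size k + 1.
        prev = list(current)
        k_sets = {a | b for a, b in combinations(prev, 2)
                  if len(a | b) == len(a) + 1}
    return frequent

txns = [{'cloud', 'spark'}, {'cloud', 'spark', 'gpu'},
        {'cloud', 'gpu'}, {'spark'}]
freq = apriori(txns, min_support=0.5)
```

In the paper's setting the "items" would be paper topics, and frequent co-occurring topic sets feed the trend analysis.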
  • The discovery in uncertain-social-relationship communities of opportunistic network   Order a copy of this article
    by Xu Gang, Wang Jia-Yi, Jin Hai-He, Mu Peng-Fei 
    Abstract: Current studies on community division in opportunistic networks usually take deterministic social relations as input. In practical application scenarios, however, because communications are frequently disturbed and the movements of nodes are random, social relations are in uncertain states; therefore, community division based on certain social relations is impractical. To address the problem of obtaining accurate communities under uncertain social relations, we propose an uncertain-social-relation model of the opportunistic network in this paper. We analyse the probability distribution of the uncertain social relations and propose a community division algorithm based on social cohesion, and then divide communities by the uncertain social relations of the opportunistic network. The experimental results show that the Clique_detection_Based_SoH community division algorithm, which is based on social cohesion, matches practical communities better than the traditional K-clique community division algorithm.
    Keywords: opportunistic network; uncertain social relations; k-clique algorithm; social cohesion; key node.
    DOI: 10.1504/IJCSE.2017.10016989
  • Tag recommendation based on topic hierarchy of folksonomy   Order a copy of this article
    by Han Xue, Bing Qin, Ting Liu, Shen Liu 
    Abstract: As a recommendation problem, tag recommendation has been receiving increasing attention from both the business and academic communities. Traditional recommendation methods are inappropriate for folksonomy because the basis of such mechanisms is not updated in time, owing to the knowledge-acquisition bottleneck. Therefore, we propose a novel method of tag recommendation based on the topic hierarchy of folksonomy. The method applies a topic tag hierarchy, constructed automatically from the folksonomy, to tag recommendation using the proposed strategy. The method can improve the quality of the folksonomy and can evaluate the topic tag hierarchy through tag recommendation. The precision of tag recommendation reaches 0.892. The experimental results show that the proposed method significantly outperforms state-of-the-art methods (t-test, p-value < 0.0001) and demonstrates effectiveness with respect to data sources on tag recommendation.
    Keywords: tag recommendation; topic hierarchy; folksonomy.

  • Incremental processing for string similarity join   Order a copy of this article
    by Cairong Yan, Bin Zhu 
    Abstract: String similarity join is an essential operation in data quality management and a key step in extracting value from data. In the era of big data, existing methods cannot meet the demands of incremental processing. Using a string partition technique, an incremental processing framework for string similarity join is proposed in this paper. The framework treats the inverted index of strings as a state that is updated after each string similarity match. Compared with the batch processing model, this framework avoids the heavy time and space costs of duplicate similarity computation among historical strings and is suitable for processing data streams. We implement two algorithms, Inc-join and Inp-join: Inc-join runs on a stand-alone machine, while Inp-join runs on a cluster in a Spark environment. The experimental results show that the incremental processing framework reduces the number of string match operations without affecting the join accuracy, and improves the response time for streaming data joins compared with the batch computation model. When the data volume becomes large, Inp-join can take full advantage of parallel processing and achieves better performance than Inc-join.
    Keywords: string similarity join; incremental processing; parallel processing; string matching.
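The incremental idea can be sketched as follows: keep an inverted index of character n-grams as the state, and match each arriving string only against candidates sharing at least one gram, instead of re-joining the whole history. This is a hedged sketch with an invented class and Jaccard similarity, not the authors' Inc-join:

```python
# Incremental string similarity join: the inverted index is the state.
from collections import defaultdict

def grams(s, n=2):
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

class IncJoin:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.index = defaultdict(set)   # gram -> ids of seen strings
        self.strings = []

    def add(self, s):
        """Return ids of previously seen similar strings, then index s."""
        g = grams(s)
        candidates = set().union(*(self.index[x] for x in g)) if g else set()
        matches = [i for i in candidates
                   if jaccard(g, grams(self.strings[i])) >= self.threshold]
        sid = len(self.strings)
        self.strings.append(s)
        for x in g:                      # update the state for future joins
            self.index[x].add(sid)
        return matches

join = IncJoin(threshold=0.5)
join.add("apple pie")
join.add("maple pie")
hits = join.add("apple pies")   # matched only against indexed candidates
```

Each arrival touches only strings sharing a gram, so the historical pairs are never recomputed.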

  • A hybrid filtering-based network document recommendation system in cloud storage   Order a copy of this article
    by Wu Yuezhong, Liu Qin, Li Changyun, Wang Guojun 
    Abstract: Since the key requirement of users is to efficiently obtain personalised services from massive network document resources, a hybrid filtering-based network document recommendation system is designed by incorporating content-based recommendation and collaborative filtering recommendation, building on the powerful and extensible storage and computing power of cloud storage. The proposed system realises the main service modules on the Hadoop and Mahout platforms, and processes documents containing user-interest information by applying an AHP-based attribute weighted fusion method. Based on network interaction, the proposed system not only offers extensible storage space and high recommendation precision but also plays an essential role in realising network resource sharing and personalised recommendation.
    Keywords: user interest model; collaborative filtering; recommendation system; cloud storage.
    DOI: 10.1504/IJCSE.2017.10008648
  • Multiobjective evolutionary algorithm on simplified biobjective minimum weight minimum label spanning tree problems   Order a copy of this article
    by Xinsheng Lai, Xiaoyun Xia 
    Abstract: As general purpose optimisation methods, evolutionary algorithms have been used efficiently to solve multiobjective combinatorial optimisation problems. However, few theoretical investigations have been conducted to understand the efficiency of evolutionary algorithms on such problems, and even fewer on multiobjective combinatorial optimisation problems coming from the real world. In this paper, we analyse the performance of a simple multiobjective evolutionary algorithm on two simplified instances of the biobjective minimum weight minimum label spanning tree problem, which comes from the real world. This problem is to find spanning trees that simultaneously minimise the total weight and the total number of distinct labels in a connected graph where each edge has a label and a weight. Though these two instances are similar, the analysis results show that the simple multiobjective evolutionary algorithm is efficient for one instance but may be inefficient for the other. According to the analysis of the second instance, we believe that a restart strategy may be useful in making the multiobjective evolutionary algorithm more efficient for the biobjective problem.
    Keywords: multiobjective evolutionary algorithm; biobjective; spanning tree problem; minimum weight; minimum label.

  • High dimensional Arnold inverse transformation for multiple images scrambling   Order a copy of this article
    by Weigang Zou, Wei Li, Zhaoquan Cai 
    Abstract: The traditional scrambling technology based on the low dimensional Arnold transformation (AT) cannot assure the security of images during transmission, since the key space of the low dimensional AT is small and the scrambling period is short. The Arnold inverse transformation (AIT) is also a good image scrambling technique. Used in image scrambling, the high dimensional AIT can overcome the shortcomings of low dimensional geometric transformations, achieves a good scrambling effect and serves the purpose of image encryption, which enriches the theory and application of image scrambling. Taking into account that an image has a location space and a colour space, the high dimensional AIT improves the anti-attack ability of image scrambling, since the combination of the location space coordinates and the colour space components is very flexible. We investigated the properties and applications of the five- and six-dimensional AIT in digital image scrambling. Specifically, we propose the theory of the n-dimensional AIT. Our investigations show that the technique, with its larger key space, has a good scrambling effect and a certain application value.
    Keywords: information hiding; image scrambling; high dimensional transformation; Arnold transformation; Arnold inverse transformation; periodicity.
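For context, the classical 2D Arnold cat map and its inverse on an N x N coordinate grid look as follows; the paper generalises the inverse transform to five and six dimensions, which this low-dimensional sketch does not reproduce:

```python
# 2D Arnold transformation [[1,1],[1,2]] and its inverse [[2,-1],[-1,1]] mod n.
def arnold(x, y, n):
    """Forward 2D Arnold transformation of a pixel position."""
    return (x + y) % n, (x + 2 * y) % n

def arnold_inverse(x, y, n):
    """Inverse map; the matrix has determinant 1, so the inverse is exact."""
    return (2 * x - y) % n, (y - x) % n

def scramble(image):
    """Permute pixel positions of a square image (list of lists)."""
    n = len(image)
    out = [[None] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            nx, ny = arnold(x, y, n)
            out[nx][ny] = image[x][y]
    return out

n = 4
img = [[4 * i + j for j in range(n)] for i in range(n)]
scrambled = scramble(img)   # a bijective permutation of pixel positions
```

Because the map is a bijection on the grid, applying the inverse map (or iterating the forward map through its period) recovers the original image; the higher-dimensional variants enlarge the key space the abstract refers to.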

  • CAT: a context-aware teller for supporting tourist experiences   Order a copy of this article
    by Francesco Colace, Massimo De Santo, Saverio Lemma, Marco Lombardi, Mario Casillo 
    Abstract: The aim of this paper is to introduce a methodology for the dynamic creation of an adaptive generator of stories related to a tourist context. The proposed approach selects the most suitable contents for the user and builds a context-aware teller that can support tourists during the exploration of the context, making it more appealing and immersive. The tourist can use the system through a hybrid app. The dynamic context-aware telling engine draws its contents from a knowledge base that combines curated data with data coming from the web. The user profile is updated with information obtained during the visit and from social networks. A case study and some experimental results are presented and discussed.
    Keywords: context-aware; storyteller; social content; pervasive systems.
    DOI: 10.1504/IJCSE.2017.10013620
  • Saving energy consumption for mixed workloads in cloud platforms   Order a copy of this article
    by Dongbo Liu, Peng Xiao, Yongjian Li 
    Abstract: Virtualisation technology has been widely applied in cloud systems; however, it also introduces many energy-efficiency losses, especially where the I/O virtualisation mechanism is concerned. In this paper, we present an energy-efficiency enhanced virtual machine (VM) scheduling policy, namely Share-Reclaiming with Collective I/O (SRC-I/O), with the aim of reducing the energy-efficiency losses caused by I/O virtualisation. The SRC-I/O scheduler allows running VMs to reclaim extra CPU shares under certain conditions so as to increase CPU utilisation. Meanwhile, the SRC-I/O policy separates I/O-intensive VMs from CPU-intensive ones and schedules them in a batch manner, so as to reduce the context-switching costs of scheduling mixed workloads. Extensive experiments are conducted on various platforms using different benchmarks to investigate the performance of the proposed policy. The results indicate that, when the virtualisation platform is in the presence of mixed workloads, the SRC-I/O scheduler outperforms existing VM schedulers in terms of energy efficiency and I/O responsiveness.
    Keywords: cloud computing; virtual machine; energy efficiency; mixed workload; task scheduling.

  • The extraction of security situation in heterogeneous log based on Str-FSFDP density peak cluster   Order a copy of this article
    by Chundong Wang, Tong Zhao, Xiuliang Mo 
    Abstract: Log analysis has been widely developed for identifying intrusions at the host or network level. In order to reduce the false alarm rate in the process of security event extraction and to discover a wide range of anomalies by scrutinising various logs, an improvement of the Str-FSFDP (fast search and find of density peaks for data streams) clustering algorithm for heterogeneous log analysis is presented. Owing to its advantages in analysing attribute relationships in mixed-attribute data, the algorithm classifies log data into two types, for each of which a corresponding distance metric is designed. In order to apply Str-FSFDP to various logs, 12 attributes are defined in a unified XML format for clustering in this paper. These attributes are divided by the characteristics of each type of log and their importance in expressing a security event. To match the new micro-cluster characteristic vector introduced in the Str-FSFDP algorithm, this paper uses a time gap to improve the UHAD (unsupervised anomaly detection) framework. The time gap is designed as a threshold value based on a micro-cluster strategy. Experimental results reveal that the framework using the Str-FSFDP clustering algorithm with a time threshold can improve the aggregation rate of log events and reduce the false alarm rate. As the algorithm analyses attribute correlations, the connections between different IP addresses were tested in the experiment. This helps us to find the same attacker's exploitation traces even if they fake their IP addresses, and increases the degree of aggregation within the same event. According to our analysis of each cluster, some serious attacks in the experiment have been summarised along a time line.
    Keywords: heterogeneous log; micro cluster; mixed attributes; unsupervised anomaly detection.

  • An improved KNN text classification method   Order a copy of this article
    by Fengfei Wang, Zhen Liu, Chundong Wang 
    Abstract: A text classification method based on an improved SOM combined with KNN is introduced in this paper. In order to overcome the shortcomings of KNN in the text vector space model, this paper uses an SOM neural network to optimise text classification and, on this basis, presents an improved SOM combined with the KNN algorithm. The SOM neural network weights for each dimension of the vector space model are calculated, and the SOM performs unsupervised self-organisation and self-learning on the samples, without prior knowledge, to evaluate and classify them. Combining the SOM neural network with the KNN algorithm in this way effectively reduces the dimensionality of the vectors, improves clustering accuracy and speed, and can effectively improve the efficiency of text classification.
    Keywords: text classification; KNN; SOM; neural network.
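The KNN baseline that the paper improves can be stated in a few lines: rank training documents by similarity to the query vector and take a majority vote over the k nearest. The vectors, labels and cosine similarity below are illustrative choices, and the SOM dimensionality-reduction step is not shown:

```python
# Plain k-nearest-neighbour text classification over document vectors.
from collections import Counter
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
    return num / den if den else 0.0

def knn_classify(query, samples, k=3):
    """samples: list of (vector, label); majority vote over the k nearest."""
    ranked = sorted(samples, key=lambda s: cosine(query, s[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

train = [([1.0, 0.1, 0.0], 'sports'), ([0.9, 0.2, 0.1], 'sports'),
         ([0.0, 0.1, 1.0], 'tech'),   ([0.1, 0.0, 0.9], 'tech')]
label = knn_classify([0.8, 0.1, 0.05], train, k=3)
```

In the paper's pipeline, the SOM would first compress these vectors, so the distance computations run in a lower-dimensional space.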

  • Privacy-preserving location-based service protocols with flexible access   Order a copy of this article
    by Shuyang Tang, Shengli Liu, Xinyi Huang, Zhiqiang Liu 
    Abstract: We propose an efficient privacy-preserving, content-protecting Location-Based Service (LBS) scheme. Our proposal gives a refined data classification and uses generalised ElGamal to support flexible access to different data classes. We also make use of a Pseudo-Random Function (PRF) to protect users' position queries. Since a PRF is a lightweight primitive, our proposal enables the cloud server to locate positions efficiently while preserving the privacy of the queried position.
    Keywords: location-based services; outsourced cloud; security; privacy preserving.
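    The PRF-protected position query can be illustrated with a minimal sketch: positions are quantised to grid cells and only the PRF of the cell identifier reaches the server. This assumes HMAC-SHA256 as the PRF and a simple grid quantisation; the scheme's actual construction (including the generalised ElGamal layer and the key names used here) is not reproduced.

```python
import hmac, hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    """HMAC-SHA256 standing in for the scheme's pseudo-random function."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def cell_token(key, lat, lon, grid=0.01):
    """Quantise a position to a grid cell, then PRF the cell identifier.
    The server only ever sees opaque tokens, never raw coordinates."""
    cell = (round(lat / grid), round(lon / grid))
    return prf(key, repr(cell).encode())

key = b'shared-user-key'   # hypothetical key held by authorised users
# the server-side index maps PRF tokens of cells to stored records
index = {cell_token(key, 48.8584, 2.2945): 'record-1'}
# a later query from almost the same position falls in the same cell
hit = index.get(cell_token(key, 48.8586, 2.2946))
print(hit)
```

    Because the PRF is deterministic under a fixed key, equal cells produce equal tokens, so the server can match queries by plain dictionary lookup without learning the position.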

  • On providing on-the-fly resizing of the elasticity grain when executing HPC applications in the cloud   Order a copy of this article
    by Rodrigo Righi, Cristiano Costa, Vinicius Facco, Luis Cunha 
    Abstract: Today, we observe that cloud infrastructures are gaining more and more space to execute HPC (High Performance Computing) applications. Unlike clusters and grids, the cloud offers elasticity, which refers to the ability to enlarge or reduce the number of resources (and consequently, processes) to match as closely as possible the needs of a particular moment of the execution. To the best of our knowledge, current initiatives explore the elasticity-HPC duet by always handling the same number of resources at each scaling-in or scaling-out operation. This fixed elasticity grain commonly reveals a stair-shaped behaviour, where successive elasticity operations take place to address the load curve. In this context, this article presents GrainElastic: an elasticity model to execute HPC applications with the capacity to adapt the elasticity grain to the requirements of each elasticity operation. Its contribution is a mathematical formalism that uses historical execution traces and the ARIMA time series model to predict the required number of resources (in our case, VMs) at each reconfiguration point. Based on the proposed model, we developed a prototype that was compared with two other scenarios: (i) a non-elastic application and (ii) an elastic middleware with a fixed grain. The results presented gains of up to 30% in favour of GrainElastic, showing the relevance of adapting the elasticity grain to enhance system reactivity and performance.
    Keywords: elasticity; resource management; HPC; cloud computing; elasticity grain; adaptivity.
    DOI: 10.1504/IJCSE.2017.10013365
  • Can the hybrid colouring algorithm take advantage of multi-core architectures?   Order a copy of this article
    by João Fabrício Filho, Luis Gustavo Araujo Rodriguez, Anderson Faustino Da Silva 
    Abstract: Graph colouring is a complex computational problem that focuses on colouring all vertices of a given graph using a minimum number of colours, under the restriction that adjacent vertices may not receive the same colour. Over recent decades, various algorithms have been proposed and implemented to solve this problem. An interesting algorithm is the Hybrid Coloring Algorithm (HCA), developed in 1999 by Philippe Galinier and Jin-Kao Hao, which was widely regarded at the time as one of the best-performing algorithms for graph colouring. Nowadays, high-performance out-of-order multi-cores have emerged that execute applications faster and more efficiently. Thus, the objective of this paper is to analyse whether the HCA can take advantage of multi-core architectures in terms of performance. For this purpose, we propose and implement a parallel version of the HCA that takes advantage of all hardware resources. Several experiments were performed on a machine with two Intel(R) Xeon(R) CPU E5-2630 processors, with a total of 24 cores. The experiments showed that the parallel HCA, using multi-core architectures, is a significant improvement over the original because it achieves enhancements of up to 40% in terms of the distance to the best chromatic number found in the literature. The expected contribution of this paper is to encourage developers to take advantage of high-performance out-of-order multi-cores to solve complex computational problems.
    Keywords: metaheuristics; hybrid colouring algorithm; graph colouring problem; architecture of modern computers.
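    For readers unfamiliar with the problem, the colouring constraint can be made concrete with the greedy baseline and the conflict check that any colouring heuristic needs. This is not the HCA itself (which hybridises tabu search with a crossover operator), just the basic machinery.

```python
def greedy_colouring(adj):
    """Assign each vertex the smallest colour unused by its neighbours."""
    colour = {}
    for v in sorted(adj):
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

def conflicts(adj, colour):
    """Number of edges whose endpoints share a colour (0 = proper colouring)."""
    return sum(colour[u] == colour[v]
               for u in adj for v in adj[u] if u < v)

# 5-cycle: its chromatic number is 3
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colour = greedy_colouring(adj)
print(max(colour.values()) + 1, conflicts(adj, colour))
```

    Metaheuristics such as the HCA search for proper colourings (zero conflicts) with fewer colours than this greedy baseline can reach on harder graphs.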

  • Learning pattern of hurricane damage levels using semantic web resources   Order a copy of this article
    by Quang-Khai Tran, Sa-kwang Song 
    Abstract: This paper proposes an approach for hurricane damage level prediction using semantic web resources and matrix completion algorithms. Based on the statistical unit node set framework, streaming data from five hurricanes and damage levels from 48 counties in the USA were collected from the SRBench dataset and other web resources, and then trans-coded into matrices. At a time t, the pattern of possible highest damage levels at 6 hours into the future was estimated using a multivariate regression procedure based on singular value decomposition. We also applied the Soft-Impute algorithm and the k-nearest-neighbours concept to improve the statistical unit node set framework in this research domain. Results showed that the model could deal with inaccurate, inconsistent and incomplete streaming data that were highly sparse, learn future damage patterns and perform forecasting in near real time. It was able to estimate the damage levels in several scenarios even when two-thirds of the relevant weather information was unavailable. The contributions of this work can promote the applicability of the semantic web in the context of climate change.
    Keywords: hurricane damage; statistical unit node set; matrix completion; SRBench dataset; streaming data.
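    The k-nearest-neighbours idea for coping with missing streaming values can be sketched as below: a missing entry is filled from the rows most similar on the features both rows actually observed. The toy data are invented and SVD/Soft-Impute itself is not reproduced.

```python
def dist(a, b):
    """Mean squared distance over the features both rows observed."""
    shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    if not shared:
        return float('inf')
    return sum((x - y) ** 2 for x, y in shared) / len(shared)

def knn_impute(rows, k=2):
    """Fill each missing entry (None) with the mean of that feature
    over the k nearest rows that did observe it."""
    filled = [row[:] for row in rows]
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if v is None:
                cand = sorted((dist(row, r), r[j])
                              for i2, r in enumerate(rows)
                              if i2 != i and r[j] is not None)
                vals = [val for _, val in cand[:k]]
                filled[i][j] = sum(vals) / len(vals) if vals else None
    return filled

rows = [[1.0, 2.0, 3.0],
        [1.1, None, 3.1],   # a missing weather reading
        [5.0, 9.0, 8.0]]
val = knn_impute(rows)[1][1]
print(val)
```

    With k=2 the fill is the mean of both observed values; shrinking k to 1 would copy the single most similar row, which is closer to how sparse streams are usually repaired before matrix completion.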

  • CUDA GPU libraries and novel sparse matrix-vector multiplication implementation and performance enhancement in unstructured finite element computations   Order a copy of this article
    by Richard Haney, Ram V. Mohan 
    Abstract: The efficient solution of systems of linear and non-linear equations arising from sparse matrix operations is a ubiquitous challenge for computing applications, one that can be exacerbated by the employment of heterogeneous architectures such as CPU-GPU computing systems. Efficient implementation and computational performance of the solution of sparse systems of linear equations is a common need in many unstructured finite element-based computations for physics-based modelling problems. This paper presents our implementation of a novel sparse matrix-vector multiplication (a significant compute load in the iterative solution via preconditioned conjugate gradient methods) employing LightSpMV with the Compressed Sparse Row (CSR) format, and the resulting performance characteristics. An unstructured finite element-based computational simulation involving multiple calls to an iterative preconditioned conjugate gradient algorithm for the solution of a linear system of equations, on a single CPU-GPU computing system using NVIDIA Compute Unified Device Architecture libraries, is employed for the results discussed in the present paper. The matrix-vector product implementation is examined within the context of a resin transfer molding simulation code. Results from the present work can be applied without loss of generality to many other unstructured finite element-based computational modelling applications in science and engineering that employ solutions of sparse linear and non-linear systems of equations on CPU-GPU architectures. The computational performance analysed indicates that LightSpMV can boost performance for these computational modelling applications. This work also investigates potential improvements in the LightSpMV algorithm using CUDA 3.5 intrinsics, which result in an additional performance boost of 1%. While this may not be significant, it supports the idea that LightSpMV can potentially be used for other full-solution finite element-based computational implementations.
    Keywords: general purpose GPU computing; sparse matrix-vector; finite element method; CUDA; performance analysis.
    DOI: 10.1504/IJCSE.2017.10011618
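    The CSR layout and the SpMV loop at the heart of the paper can be shown in plain Python for clarity; LightSpMV implements the same access pattern with CUDA thread- and warp-level parallelism over the rows.

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for A in Compressed Sparse Row form:
    row i owns the entries values[row_ptr[i]:row_ptr[i+1]],
    whose column indices are in col_idx over the same range."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [1, 0, 3]]  stored without its zeros
values  = [4.0, 1.0, 2.0, 1.0, 3.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0])
print(y)
```

    Because each output row depends only on its own slice of `values`, the outer loop parallelises naturally on a GPU, which is why CSR SpMV dominates the cost profile of conjugate gradient solvers.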
  • Rational e-voting based on network evolution in the cloud   Order a copy of this article
    by Tao Li, Shaojing Li 
    Abstract: Physically distributed voters can vote online through an electronic voting (e-voting) system. The system can outsource the counting work to the cloud when it is overloaded. However, this kind of outsourcing may lead to security problems concerning anonymity, privacy, fairness, etc. Provided that servers in the cloud have no incentive to deviate from the e-voting system, these security problems can be effectively solved. In this paper, we assume that servers in the cloud are rational and try to maximise their utilities, and we look for incentives for rational servers not to deviate from the e-voting system. Here, no deviation means that rational servers prefer to cooperate in the e-voting system. Simulation results of our evolution model show that the cooperation level is high after a certain number of rounds. Finally, we put forward a rational e-voting protocol based on the above results and prove that the system is secure under proper assumptions.
    Keywords: electronic voting; utility; cloud computing; rational secret sharing.

  • Water contamination monitoring system based on big data: a case study   Order a copy of this article
    by Gaofeng Zhang, Yingnan Yan, Yunsheng Tian, Yang Liu, Yan Li, Qingguo Zhou, Rui Zhou, Kuan-Ching Li 
    Abstract: Water plays a vital role in people's lives, and individuals cannot survive without it. However, water contamination has become a serious issue with the development of industry and agriculture, and a threat to people's daily lives. Moreover, the amount of data to process has become excessively complex and huge in the big data era, making data management an increasingly difficult task. There is an urgent need for a system that identifies major changes in water quality by monitoring and managing water quality variables. In this paper, we develop a data monitoring system named Monitoring and Managing Data Center (MMDC) for monitoring, downloading, sharing and time-series analysis based on big data technology. To reflect the real hydrological ecosystem, water quality variable data collected from Taihu Lake in China are used to verify the effectiveness of MMDC. Results show that MMDC is effective for the monitoring and management of massive data. Although this investigation focuses on Taihu Lake, MMDC is applicable as a general monitoring system for other similar natural resources.
    Keywords: water contamination; big data; MMDC; monitoring; data analysis.
    DOI: 10.1504/IJCSE.2017.10011736
  • Passive image autofocus by using direct fuzzy transform   Order a copy of this article
    by Ferdinando Di Martino, Salvatore Sessa 
    Abstract: We present a new passive autofocusing algorithm based on fuzzy transforms. In a previous work, a localised variation of the variance operator was proposed based on the concept of fuzzy subspaces of the image: the fuzzy C-means and conditional fuzzy C-means algorithms are applied to detect the fuzzy subspaces. The direct fuzzy transform is used to extract the mean values of the image intensity in a fuzzy subspace, and a weighted sum of the local variance operators obtained in each subspace is then calculated. We propose a new approach based on the generalised fuzzy C-means algorithm, where the number of fuzzy subspaces is obtained by using the partition coefficient and exponential separation validity indexes. Comparisons show that our method is more robust with respect to the localised variation of the variance operator.
    Keywords: image autofocusing; image contrast; variance; FCM; fuzzy transform.
    DOI: 10.1504/IJCSE.2017.10011885
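    The "mean value extraction" role of the direct fuzzy transform can be sketched in one dimension: with a triangular fuzzy partition of the domain, each F-transform component is simply the membership-weighted mean of the signal. The 1-D signal is invented; the paper applies the same idea per fuzzy subspace of a 2-D image.

```python
def triangular_basis(n_nodes, length):
    """Triangular membership functions forming a uniform fuzzy
    partition of the index range [0, length-1]."""
    step = (length - 1) / (n_nodes - 1)
    nodes = [k * step for k in range(n_nodes)]
    def A(k, t):
        return max(0.0, 1.0 - abs(t - nodes[k]) / step)
    return A

def direct_ftransform(signal, n_nodes=3):
    """Each component is the membership-weighted mean of the signal."""
    A = triangular_basis(n_nodes, len(signal))
    comps = []
    for k in range(n_nodes):
        num = sum(A(k, t) * v for t, v in enumerate(signal))
        den = sum(A(k, t) for t in range(len(signal)))
        comps.append(num / den)
    return comps

comps = direct_ftransform([1.0, 1.0, 1.0, 5.0, 5.0])
print(comps)
```

    The components act as smoothed local intensity means; an autofocus measure then compares the signal's variance against these local means instead of a single global mean.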
  • Arrhythmia recognition and classification through deep learning based approach   Order a copy of this article
    by Rui Zhou, Xue Li, Binbin Yong, Zebang Shen, Chen Wang, Qingguo Zhou, Yunshan Cao, Kuan-Ching Li 
    Abstract: Arrhythmia is a cardiac condition caused by abnormal electrical activity of the heart, which can be life-threatening. The electrocardiogram (ECG) is the principal diagnostic tool used to detect arrhythmias or heart abnormalities, and it contains information about the different types of arrhythmia. However, owing to the complexity and non-linearity of ECG signals, such as the presence of noise, the time dependence of ECG signals and the irregularity of the heartbeat, it is troublesome to analyse ECG signals manually. Moreover, the interpretation of ECG signals is subjective and might vary among experts in the field. Therefore, an automatic, high-precision ECG recognition method is important for arrhythmia detection. To this end, this paper proposes a method for arrhythmia classification based on a deep learning approach called Long Short-Term Memory (LSTM), in which five classes of arrhythmia as recommended by the Association for the Advancement of Medical Instrumentation (AAMI) are analysed. The method has been tested on the MIT-BIH Arrhythmia Database with a number of useful performance evaluation measures, showing that it has a promising and better performance than other artificial intelligence methods used.
    Keywords: electrocardiogram signal; long short-term memory; arrhythmia classification; artificial intelligence; deep learning.
    DOI: 10.1504/IJCSE.2017.10011740
  • Publicly verifiable function secret sharing   Order a copy of this article
    by Qiang Wang, Fucai Zhou, Su Peng, Jian Xu 
    Abstract: Function Secret Sharing (FSS) allows a dealer to split a secret function into n sub-functions, described by n evaluation keys, such that only a combination of all of these keys can reconstruct the secret function. However, the secret cannot be recovered correctly if some sharers deviate from the intended behaviour. To address this problem, we propose a new primitive called Publicly Verifiable Function Secret Sharing (PVFSS), in which any client can verify the validity of the secret in constant time. Furthermore, we define three important properties that are an essential part of our scheme: public delegation, public verification and high efficiency. Finally, we construct a PVFSS scheme for point functions, prove its security, and analyse its performance in two major directions: key length and algorithm efficiency. The analysis validates that our proposed scheme is asymptotically comparable to FSS. It would be applicable to cloud computing.
    Keywords: PVFSS; cloud computing; high efficiency; public delegation; public verification.
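    The correctness property of sharing a point function can be illustrated naively: split the full truth table of f(x) = β·[x = α] into additive shares modulo a prime, so that each share alone is uniformly random while all shares together reconstruct f. Real FSS keys are succinct (logarithmic in the domain), so this full-table sketch shows only the semantics, not the construction.

```python
import random

P = 2 ** 31 - 1   # prime modulus for additive sharing

def share_point_function(alpha, beta, domain, n=2, seed=None):
    """Naively split f(x) = beta if x == alpha else 0 into n additive
    shares. Each of the first n-1 shares is uniformly random; the last
    one is chosen so that all shares sum to f(x) mod P at every x."""
    rng = random.Random(seed)
    shares = [[rng.randrange(P) for _ in range(domain)] for _ in range(n - 1)]
    last = []
    for x in range(domain):
        f = beta if x == alpha else 0
        last.append((f - sum(s[x] for s in shares)) % P)
    return shares + [last]

def evaluate(shares, x):
    """Combining every party's evaluation recovers f(x)."""
    return sum(s[x] for s in shares) % P

keys = share_point_function(alpha=5, beta=42, domain=8, seed=1)
print(evaluate(keys, 5), evaluate(keys, 3))
```

    Verifiability, the paper's contribution, would additionally let anyone check that the shares really encode a well-formed point function without reconstructing it.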

  • Parallel context-aware multi-agent tourism recommender system   Order a copy of this article
    by Richa Singh, Punam Bedi 
    Abstract: The presence of millions of users and items makes real-time filtering a time-consuming process in recommender systems. In context-aware recommender systems, the choices of users depend on the contextual information as well as the available items. This helps to reduce the user-item data to some extent, but the rapid change in the interests of a user under different contexts puts an extra load on recommender systems. To address this problem, we present a parallel approach for context-aware recommender systems using a multi-agent system that greatly accelerates processing time. A General Purpose Graphics Processing Unit (GPGPU) is used to exploit the parallel behaviour of the system along with CUDA (Compute Unified Device Architecture) and JCuda. The proposed algorithm works in both offline and online phases. Contextual filtering and the multi-agent environment help to keep the system updated with the context of the user. A prototype of the system is developed using JCuda, JADE and Java technologies for the tourism domain. The performance of the presented system is compared with a context-aware recommender system without parallel processing with respect to processing time and scalability, as well as precision, recall and F-measure. The results show a significant speedup for the presented system over the non-parallel context-aware recommender system.
    Keywords: multi-agent system; recommender system; context aware; parallel processing; tourism.
    DOI: 10.1504/IJCSE.2017.10010189
  • Graph databases for openEHR clinical repositories   Order a copy of this article
    by Samar El Helou, Shinji Kobayashi, Goshiro Yamamoto, Naoto Kume, Eiji Kondoh, Shusuke Hiragi, Kazuya Okamoto, Hiroshi Tamura, Tomohiro Kuroda 
    Abstract: The archetype-based approach has now been adopted by major EHR interoperability standards. Soon, owing to an increase in EHR adoption, more health data will be created and frequently accessed. Previous research shows that conventional persistence mechanisms such as relational and XML databases have scalability issues when storing and querying archetype-based datasets. Accordingly, we need to explore and evaluate new persistence strategies for archetype-based EHR repositories. To address the performance issues expected to occur with the increase of data, we proposed an approach using labelled property graph databases for implementing openEHR clinical repositories. We implemented the proposed approach using Neo4j and compared it with an Object Relational Mapping (ORM) approach using Microsoft SQL Server. We evaluated both approaches over a simulation of a pregnancy home-monitoring application in terms of required storage space and query response time. The results show that the proposed approach provides a better overall performance for clinical querying.
    Keywords: openEHR; graph database; EHR; database; performance; archetypes; reference model; EHR repository; archetype-based storage; query response time.
    DOI: 10.1504/IJCSE.2017.10017366
  • Kernel-based tensor discriminant analysis with fuzzy fusion for face recognition   Order a copy of this article
    by Xiaozhang Liu, Hangyu Ruan 
    Abstract: This paper proposes a novel kernel-based image subspace learning method for face recognition, by encoding a face image as a tensor of second order (matrix). First, we propose a kernel-based discriminant tensor criterion, called kernel bilinear fisher criterion (KBFC), which is designed to simultaneously pursue two projection vectors to maximise the interclass scatter and at the same time minimise the intraclass scatter in its corresponding subspace. Then, a score level fusion method is presented to combine two separate projection results to achieve classification tasks. Experimental results on the ORL and UMIST face databases show the effectiveness of the proposed approach.
    Keywords: kernel; tensor discriminant; bilinear discriminant; matrix representation; face recognition.
    DOI: 10.1504/IJCSE.2017.10012617
  • Modelling of advanced persistent threat attack monitoring based on the artificial fish swarm algorithm   Order a copy of this article
    by Biaohan Zhang 
    Abstract: In recent years, the Advanced Persistent Threat (APT) has become one of the important factors threatening network security. Aiming at the APT attack defence problem, this paper proposes an APT attack monitoring method based on the principle of the artificial fish swarm algorithm. The attack monitoring model is established by imitating the behaviour of an artificial fish swarm. The model is used to dynamically monitor the environment, with the APT attack index modelled as the food concentration, so as to locate the position of the highest APT attack index. The experimental results show that the monitoring model designed by this method can effectively monitor and forecast the attack target, and also has good expansibility and practicability.
    Keywords: artificial fish swarm algorithm; advanced persistent threat attack; monitoring model.

  • Multilayer ensemble of ELMs for image steganalysis with multiple feature sets   Order a copy of this article
    by Punam Bedi, Veenu Bhasin 
    Abstract: A multilayer ensemble of Extreme Learning Machines (ELMs) for multi-class image steganalysis is proposed in this paper. The proposed ensemble consists of three levels and uses multiple feature sets extracted from images. The first two layers form sub-ensembles, one for each of the feature sets. Each feature set is partitioned and used with multiple ELMs at level 1. These feature sets, along with the output of the level-1 ELMs, are used by different ELMs at level 2 to classify images into multiple classes. To combine the results from the sub-ensembles, a stacking technique is used: the results of the level-2 ELMs are used as input for the last-level ELM. The fast learning process of ELMs aids the speedy execution of the proposed method. The performance of the proposed method is compared with existing steganalysis methods based on individual feature sets and on a two-level ensemble. The experimental study demonstrates that the proposed method classifies images into multiple classes with higher accuracy, which has been confirmed using a t-test at 99% confidence.
    Keywords: steganalysis; extreme learning machine; Markov random process; ensemble of ELMs.
    DOI: 10.1504/IJCSE.2017.10010576
  • An anchor node selection mechanism-based node localisation for mines using wireless sensor networks   Order a copy of this article
    by Kangshun Li, Hui Wang, Ying Huang 
    Abstract: To tackle the low localisation accuracy of wireless sensor network (WSN) nodes in mines, a localisation algorithm is proposed that improves the localisation accuracy of Received Signal Strength Indication (RSSI) using an anchor node selection mechanism. The localisation mainly includes three phases. First, the anchor node RSSI values received from an unknown node are sorted from high to low. Second, the four anchor nodes with the highest RSSI values are selected by a Gaussian elimination method; these nodes are not in the same plane and form a prismatic shape, and the distance from any one node to the plane formed by the other three is not less than a certain threshold. Finally, the least squares method is used to estimate the coordinates of the unknown nodes, realising their precise localisation. The simulation results show that the proposed algorithm greatly improves the localisation accuracy compared with other traditional localisation algorithms.
    Keywords: underground tunnel; received signal strength indication; anchor node selection; least squares method; Gauss elimination method.
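    The first two phases (ranging from RSSI and picking the strongest anchors) can be sketched with the standard log-distance path-loss model. The reference power, path-loss exponent and anchor readings below are invented parameters, and the Gaussian elimination and least-squares steps of the paper are not reproduced.

```python
def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.5):
    """Log-distance path-loss model:
    RSSI(d) = RSSI(d0) - 10*n*log10(d/d0), solved for d."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

def pick_anchors(readings, k=4):
    """Keep the k anchors with the strongest RSSI
    (i.e. the shortest estimated ranges)."""
    return sorted(readings, key=readings.get, reverse=True)[:k]

# hypothetical dBm readings from five anchor nodes
readings = {'a1': -45.0, 'a2': -60.0, 'a3': -50.0, 'a4': -70.0, 'a5': -48.0}
chosen = pick_anchors(readings)
d = rssi_to_distance(-45.0)
print(chosen, round(d, 2))
```

    The selected anchors and their estimated ranges then feed the least-squares position estimate; in a tunnel, restricting the fit to the strongest (least noisy) anchors is what improves accuracy.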

  • Allocation of energy-efficient tasks in cloud using dynamic voltage frequency scaling   Order a copy of this article
    by Sambit Kumar Mishra, Md Akram Khan, Sampa Sahoo, Bibhudatta Sahoo, Sanjay Kumar Jena 
    Abstract: Nowadays, the expanding computational capabilities of cloud systems rely on the minimisation of the absorbed power to make them sustainable and economically productive. Power management of cloud data centres has received great attention from industry and academia, since their operational cost is expensive owing to their high energy consumption. Issues about the distribution of energy in the system, such as energy saving and energy consumption, have been found to be crucial. One of the core approaches for the conservation of energy in cloud data centres is task scheduling. Task allocation in a heterogeneous environment is a well-known NP-hard problem, owing to which researchers have proposed various heuristic techniques for it. In this paper, a technique based on dynamic voltage frequency scaling (DVFS) is proposed for optimising energy consumption in the cloud environment. The basic idea is a trade-off between energy consumption and the configuration of different types of hosts or virtual machines. We formally describe a model that includes various subsystems and assess the implementation of the algorithm in the heterogeneous environment. The resulting analysis is discussed after comparing the proposed method with some standard algorithms.
    Keywords: cloud computing; big data; dynamic voltage frequency scaling; task allocation; energy consumption; virtual machine; virtualisation.
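    The energy lever that DVFS pulls can be made concrete with the textbook dynamic-power relation P = C·V²·f (not the paper's full model; the voltage/frequency pairs below are hypothetical operating points).

```python
def dynamic_energy(work_cycles, freq_ghz, volt, capacitance=1.0):
    """E = P * t with P = C * V^2 * f and t = cycles / f, so the
    dynamic energy for a fixed workload reduces to C * V^2 * cycles
    (in arbitrary units)."""
    power = capacitance * volt ** 2 * freq_ghz
    time = work_cycles / freq_ghz
    return power * time

# the same task at two hypothetical DVFS operating points
e_high = dynamic_energy(1e9, freq_ghz=2.0, volt=1.2)
e_low  = dynamic_energy(1e9, freq_ghz=1.0, volt=0.9)
print(e_low < e_high)
```

    Since lowering the frequency allows a lower voltage and energy scales with V², the slower operating point spends less energy on the same work at the cost of a longer runtime; the scheduling problem is choosing these points per task without violating deadlines.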
    DOI: 10.1504/IJCSE.2017.10017137
  • A malware variants detection methodology with an opcode-based feature learning method and a fast density-based clustering algorithm   Order a copy of this article
    by Hui Yin, Jixin Zhang, Zheng Qin 
    Abstract: Malware is one of the most terrible and major security threats facing the internet today, and can be defined as any type of malicious code intended to harm a computer or network. As malware variants may be equipped with sophisticated mechanisms to bypass traditional detection systems, in this paper we propose an approach that can automatically, quickly, and accurately detect malware variants. In our approach, we present an asynchronous architecture for automated training and detection. Under this architecture, to improve the detection speed while retaining accuracy, we propose an information entropy-based feature extraction method to extract a few but very useful features, and a distance-based weight learning method to weight these features. To further improve the detection speed, we propose a fast density-based clustering algorithm. We evaluate our approach with a number of Windows-based malware instances belonging to six large families, and our experiments demonstrate that our automated malware variant detection method achieves high accuracy with a significant speedup in comparison with other state-of-the-art approaches.
    Keywords: distance-based weight learning; fast density-based clustering; information entropy; malware variants.
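    One example of an entropy-based feature of the kind the abstract refers to is the Shannon entropy of an opcode histogram (the opcode sequences below are invented; the paper's exact feature definition is not reproduced).

```python
import math
from collections import Counter

def opcode_entropy(opcodes):
    """Shannon entropy (in bits) of the opcode frequency distribution.
    Skewed distributions score low; near-uniform ones score high."""
    counts = Counter(opcodes)
    total = len(opcodes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

typical = ['mov', 'add', 'mov', 'cmp', 'jmp', 'mov', 'add', 'ret']
uniform = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']   # 8 distinct symbols
print(round(opcode_entropy(typical), 3), opcode_entropy(uniform))
```

    Entropy compresses a whole opcode distribution into one number, which is why a handful of such features can keep the clustering fast without discarding the statistical signature of a family.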

  • Optimised tags with time attenuation recommendation algorithm based on tripartite graphs network   Order a copy of this article
    by Ming Zhang, Wei Chen 
    Abstract: Social recommendation has attracted increasing attention in recent years owing to the potential value of social relations in recommender systems. Social tags play an important role in improving recommendation accuracy. However, garbage tags may lead to data matrix sparseness and affect the accuracy and performance of the recommendation system. To optimise the social tags in the recommendation system, the tags are sorted by a popularity ranking method with a time attenuation model in order to remove the garbage tags. The time attenuation model accounts for the variation of tags over time. Then a novel recommendation algorithm with the optimised social tags is proposed, based on the complete tripartite graph network. This method considers the preference information of users and items, and generates recommendation items for users on the basis of collaborative filtering. Experimental results show that the proposed algorithm predicts recommendation items more accurately than other existing approaches.
    Keywords: tags optimisation; tripartite graphs network; time attenuation model; social recommendation.
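    Time-attenuated popularity ranking can be sketched with a simple exponential decay (the half-life, tag names and timestamps are invented; the paper's attenuation function may differ).

```python
def tag_popularity(events, now, half_life=30.0):
    """Each tagging event contributes 2^(-age / half_life) to its tag's
    score, so stale 'garbage' tags decay away instead of dominating
    the ranking by raw count. Returns tags sorted by score, best first."""
    scores = {}
    for tag, day in events:
        age = now - day
        scores[tag] = scores.get(tag, 0.0) + 2 ** (-age / half_life)
    return sorted(scores, key=scores.get, reverse=True)

# five old events for one tag versus three recent events for another
events = [('vintage', 0)] * 5 + [('ai', 170), ('ai', 175), ('ai', 178)]
ranking = tag_popularity(events, now=180)
print(ranking)
```

    By raw count 'vintage' would win 5 to 3; with attenuation the recent tag ranks first, and tags falling below a score threshold can be pruned before building the tripartite graph.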

  • Probabilistic rough-set-based band selection method for hyperspectral data classification   Order a copy of this article
    by Li Min, Wang Lei, Deng Shaobo 
    Abstract: This paper proposes an innovative band selection algorithm called the probabilistic rough-set-based band selection (PRSBS) algorithm. The proposed algorithm is an efficient supervised band selection algorithm because it needs to calculate only the first-order significance measure. The main novelty of the PRSBS algorithm lies in its criterion function, which measures the effectiveness of the considered band. The algorithm uses a probabilistic distribution dependency as the relevance measure between the bands and class labels, which can effectively measure the uncertainty of both the positive and the boundary samples in a dataset. We compared the proposed PRSBS with the most relevant band selection algorithm, RSBS, on three different hyperspectral datasets; the experimental results show that PRSBS obtains better results than RSBS. Moreover, the PRSBS algorithm runs significantly faster than the RSBS algorithm, which makes it a good choice for band selection in hyperspectral image datasets.
    Keywords: band selection; probabilistic rough set; hyperspectral image; classification.

  • A universal designated multi verifiers content extraction signature scheme   Order a copy of this article
    by Min Wang, Yuexin Zhang, Jinhua Ma, Wei Wu 
    Abstract: A notion combining the content extraction signature and the universal designated verifier signature was put forth by Lin in 2012. Specifically, it allows an extracted signature holder to designate the signature to a prospective verifier. However, existing designs become inefficient when multiple verifiers are involved. To improve the efficiency, in this paper, we extend the notion to the Universal Designated Multi Verifiers Content Extraction Signature (UDMVCES). Implementing our new scheme, the extracted signature holder can efficiently designate the signature to multiple verifiers. Additionally, we provide the security notions and prove the security of the proposed scheme in the random oracle model. To illustrate the efficiency of our UDMVCES scheme, we analyse its performance. The analysis shows that the computation costs and signature lengths of the new scheme are independent of the number of verifiers.
    Keywords: content extraction signature; universal designated multi verifiers signature; extracted signature; random oracle model.

  • Dynamic input domain reduction for test data generation with iterative partitioning   Order a copy of this article
    by Esmaeel Nikravan, Saeed Parsa 
    Abstract: A major difficulty in test data generation for white-box testing is detecting the domain of input variables covering a certain path. To this aim, a new concept, domain coverage, is introduced in this article. In search of appropriate input variable subdomains covering a desired path, the domains are randomly partitioned until subdomains whose boundaries satisfy the path constraints are found. When partitioning, priority is given to those subdomains whose boundary variables do not satisfy the path constraints. Representing the relation between the subdomains and their parents as a directed acyclic graph, an Euler/Venn reasoning system can be applied to select the most appropriate subdomains. To evaluate the proposed path-oriented test data generation method, the results of applying it to six known benchmark programs (Triangle, GCD, Calday, Shellsort, Quicksort and Heapsort) are presented.
    Keywords: random testing; test data generation; Euler/Venn diagram; directed acyclic graph.

  • Multi-class instance-incremental framework for classification in fully dynamic graphs   Order a copy of this article
    by Hardeo Kumar Thakur, Anand Gupta, Ritvik Shrivastava, Sreyashi Nag 
    Abstract: Existing work in the area of graph classification is mostly restricted to static graphs. These static classification models prove ineffective in several real-life scenarios that require an approach capable of handling data of a dynamic nature. Further, the limited work in the domain of dynamic graphs has mainly focused on solely incremental graphs, which fail to accommodate Fully Dynamic Graphs (FDG). Hence, in this paper, we propose a comprehensive framework targeting multi-class classification in fully dynamic graphs by using the efficient Weisfeiler-Lehman graph kernel (W-L) with a multi-class Support Vector Machine (SVM). The framework iterates through each update using the instance-incremental method while retaining all historical data in order to ensure higher accuracy. Reliable validation metrics are used for the model parameter selection and output verification. Experimental results over four case studies on real-world data demonstrate the efficacy of our approach.
    Keywords: fully dynamic graph; dynamic graph; graph classification; multi-class classification.
    DOI: 10.1504/IJCSE.2017.10016991
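    The Weisfeiler-Lehman kernel the framework relies on can be illustrated with its core step: each node's label is repeatedly compressed together with the sorted multiset of its neighbours' labels, and graphs are compared by the bags of labels this produces. A minimal sketch (using Python's built-in `hash` for label compression, which is only stable within a single process):

```python
from collections import Counter

def wl_iteration(adj, labels):
    """One W-L step: a node's new label is a compression (here, hash)
    of its own label plus the sorted multiset of neighbour labels."""
    return {v: hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
            for v in adj}

def wl_feature(adj, labels, iters=2):
    """Bag of labels accumulated across iterations; the W-L kernel of
    two graphs is a dot product between two such bags."""
    bag = Counter(labels.values())
    for _ in range(iters):
        labels = wl_iteration(adj, labels)
        bag.update(labels.values())
    return bag

# path graph 0-1-2 with identical initial labels
adj = {0: [1], 1: [0, 2], 2: [1]}
labels = {0: 'x', 1: 'x', 2: 'x'}
bag = wl_feature(adj, labels, iters=1)
print(len(bag))
```

    After one iteration the uniform labels split into two refined labels (degree-1 endpoints versus the degree-2 middle node), so the bag holds three distinct labels. In the instance-incremental setting, each graph update triggers recomputation of these bags before retraining the SVM.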
  • Assessment of nested-parallel task model under real-time scheduling on multi-core processors   Order a copy of this article
    by Mahesh Lokhande, Mohammad Atique 
    Abstract: Real-time applications contain numerous time-bound parallel tasks with enormous computations. Parallel models, unlike sequential models, have the capability to handle intra-task parallelism and accomplish such tasks on time or earlier. Previous researchers presented task models for parallel tasks, but not for nested-parallel tasks. This paper deals with the real-time scheduling of periodic nested-parallel tasks with implicit deadlines on multi-core processors. Initially, the focus is on a nested-parallel task model. Next, a novel task disintegration technique is studied, where the MAMs ratio is defined to categorise the segments. It is theoretically proved that the discussed disintegration technique achieves a speedup factor of 4.30 and 3.40 when the disintegrated tasks are scheduled under partitioned DM (Deadline Monotonic) and global EDF (Earliest Deadline First) scheduling, respectively. Further, considering the overhead factor (β) for non-preemptive global EDF scheduling, the disintegration technique achieves a speedup factor of 3.73 (for β=1). The proposed disintegration technique is assessed through simulations, indicating the adequacy of the derived speedup factors.
    Keywords: nested-parallel tasks; real-time scheduling; partitioned DM scheduling; EDF scheduling; multi-core processors; task disintegration; speedup factor.
    DOI: 10.1504/IJCSE.2017.10011790
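The paper's MAMs-based disintegration is specific to its model, but the partitioned setting it targets can be illustrated with a generic first-fit decreasing partitioner: per core, implicit-deadline EDF is feasible exactly when total utilisation stays at or below 1. A textbook baseline, not the authors' technique:

```python
def first_fit_partition(tasks, num_cores):
    """First-fit decreasing partitioning by utilisation.
    tasks: list of (wcet, period) pairs with implicit deadlines.
    A core accepts a task while its total utilisation stays <= 1,
    the exact per-core EDF feasibility bound.
    Returns a list of task lists per core, or None if packing fails."""
    cores = [[] for _ in range(num_cores)]
    load = [0.0] * num_cores
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        u = wcet / period
        for i in range(num_cores):
            if load[i] + u <= 1.0:
                cores[i].append((wcet, period))
                load[i] += u
                break
        else:
            return None  # no core can host this task
    return cores
```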
  • Recognition of landslide disasters with extreme learning machine   Order a copy of this article
    by Guanyu Chen, Xiang Li, Wenyin Gong 
    Abstract: The landslides induced by the Wenchuan earthquake are great in number, so landslide recognition and investigation must be conducted in the early stage of large construction planning in the disaster area. In recent years, studies on image recognition have focused on the extreme learning machine (ELM) algorithm. Based on the preprocessing of remote sensing images, this paper conducts landslide recognition on remote sensing images through ELM classification combined with the colour and texture features of ground objects. Comparison experiments with the support vector machine (SVM) algorithm show that the recognition accuracy of the ELM algorithm is not much different from that of the SVM algorithm, but the ELM trains in considerably less time, a decisive advantage.
    Keywords: geological disaster; remote sensing image; extreme learning machine; landslide recognition.
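The training-speed advantage reported for the ELM comes from its structure: hidden weights are random and never trained, and only the output weights are solved in closed form. A hedged NumPy sketch (names and shapes are illustrative):

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train an extreme learning machine.
    X: (n_samples, n_features); T: (n_samples, n_outputs) targets.
    Hidden weights W, b are random and fixed; only the output
    weights beta are solved in closed form (least squares)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)        # random hidden activations
    beta = np.linalg.pinv(H) @ T  # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

No iterative optimisation is involved, which is why training is so much faster than for an SVM or a back-propagated network.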

  • Graffiti-writing recognition with fine-grained information   Order a copy of this article
    by Jiashuang Xu, Zhangjie Fu 
    Abstract: Contactless HCI (Human-Computer Interaction) has become a new trend owing to the emergence of novel intelligent terminals. Existing interaction systems usually adopt depth cameras, motion controllers, and radio-frequency devices. The common drawback of these approaches is that all participants are required to follow the unistroke writing standard for data acquisition. The uniformity of the writing rule simplifies the data acquisition stage, but it breaks the integrity of the handwriting system. In practice, writing habits vary among people: it is observed that eight capitalised letters of the alphabet possess more than one writing pattern. Thus, we are motivated to propose a more adaptive, contactless graffiti-writing recognition system with CSI (Channel State Information) derived from Wi-Fi signals. The discrete wavelet transform is used for denoising. We choose a sliding window to calculate the MAD (Mean Absolute Deviation) to detect the start and end points. We extract the unique CSI waveform caused by the writing action to represent each letter. To cater for more users' writing customs and improve the universality of the system, we train separate HMMs (Hidden Markov Models) for the eight letters and conduct cross-validation for testing. The average detection accuracy reaches 94.5%. The average recognition accuracy for the 26-letter model is 85.96% when the number of training samples is 100, collected from five subjects. The real-time recognition efficiency measured in characters per minute is 11.97 (31 characters in 155.24 s).
    Keywords: air-write recognition; wireless sensing; channel state information.
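The MAD-based start/end detection described above can be sketched as a sliding-window threshold on a 1-D CSI amplitude stream. Window size and threshold factor here are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def detect_activity(signal, win=32, k=3.0):
    """Locate the start and end of a writing action in a 1-D CSI
    amplitude stream: compute the mean absolute deviation (MAD)
    in a sliding window and flag windows whose MAD exceeds k times
    the median window MAD. Returns (start, end) sample indices,
    or None if no activity is found."""
    n = len(signal) - win + 1
    mad = np.array([
        np.mean(np.abs(signal[i:i + win] - np.mean(signal[i:i + win])))
        for i in range(n)
    ])
    thresh = k * np.median(mad)
    active = np.flatnonzero(mad > thresh)
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1]) + win
```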

  • A new neural architecture for feature extraction of remote sensing data   Order a copy of this article
    by Mustapha Si Tayeb, Hadria Fizazi 
    Abstract: The paper presents a novel method for the classification of remote sensing data. The proposed approach comprises two main steps: 1) an Extractor Multi-Layer Perceptron (EMLP) is used for feature extraction from the remote sensing data; then 2) the data resulting from the EMLP are classified using a Support Vector Machine (SVM). The main contribution of this work is the EMLP method, based on the Multi-Layer Perceptron (MLP), whose role is to create a dataset more representative of the classes than the original dataset. To situate and evaluate our proposed approach, we applied it to three datasets, namely Statlog Landsat Satellite, Urban Land Cover and Landsat TM Oran, using several measures, for example classification rate, classification error, precision, recall and F-measure. The experimental results show that the proposed approach (EMLP-SVM) is more efficient and powerful than the basic methods (MLP and SVM) and existing state-of-the-art classification methods.
    Keywords: classification methods; feature extraction; remote sensing data; extractor multi-layer perceptron; support vector machine; supervised learning.

  • Parallelisation of practical shared sampling alpha matting with OpenMP   Order a copy of this article
    by Tien-Hsiung Weng, Chi-Ching Chiu, Huimin Lu 
    Abstract: In the modern filmmaking industry, image matting is one of the common tasks in visual effects and a necessary intermediate step in computer vision. It pulls the foreground object from the background of an image by estimating alpha values. However, matting high-resolution images can be significantly slow owing to its complexity and to computation that is proportional to the size of the unknown region. To improve performance, we implement a parallel alpha matting code with OpenMP from existing sequential code for running on multicore servers. We present and discuss the algorithm and experimental results from a parallel application developer's perspective. The development takes little effort, yet the results show a significant performance improvement for the entire program.
    Keywords: OpenMP; image matting; multicores; parallel programming.

  • A novel coverless text information hiding method based on double-tags and twice-send   Order a copy of this article
    by Xiang Zhou, Xianyi Chen, Fasheng Zhang, Ningning Zheng 
    Abstract: Recently, coverless text information hiding (CTIH) has attracted the attention of an increasing number of researchers because of its high security. However, many problems remain to be solved, for example retrieval efficiency and hiding capacity. In existing CTIH methods, the secret information is embedded into one carrier with one label to ensure the success rate of hiding. In this paper, we propose a novel CTIH method based on double tags and twice-send, in which the double tags in a text are obtained by an odd-even adjudgement. A reverse index is first created to improve retrieval efficiency; characters are then transformed into binary numbers, which are employed as location tags to determine the secret information in the received texts. Finally, the success rate of hiding is improved by sending the document twice. The experimental results show that the proposed method improves hiding capacity and efficiency compared with existing CTIH algorithms.
    Keywords: coverless information hiding; double tags; twice-send.

  • Secure search service based on Word2vec in the public cloud   Order a copy of this article
    by Yangen Liu, Zhangjie Fu 
    Abstract: With the continuous development of information technology, many information domains have grown explosively. The number of cloud server users continues to grow, because the cloud is flexible and economical. To protect data privacy, data owners encrypt their private data before outsourcing them to the public cloud. How to query the data quickly becomes a new problem when a large amount of encrypted data is stored in the public cloud. Most existing solutions generate index vectors based on dictionaries, but these vectors do not reflect the semantic information of the articles. In this paper, we propose two secure retrieval schemes based on Word2vec (SSSW-1 and SSSW-2). By training a Word2vec model, we generate index vectors directly for keywords extracted from the data documents. The index vectors also reflect the semantic relationships of the documents, which improves retrieval accuracy. Subsequently, we outsource the encrypted data and index to the public cloud. The cloud server returns documents in order of similarity score according to the search request; the cosine measure is used to calculate similarity scores. Experiments based on real data show that the two schemes are effective and feasible.
    Keywords: encrypted cloud data; Word2vec; secure search; cloud computing.
    DOI: 10.1504/IJCSE.2018.10017005
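Once index and query vectors exist, the ranking step reduces to cosine similarity. A small sketch of that final scoring stage (the Word2vec training and encryption steps are omitted; names are illustrative):

```python
import numpy as np

def rank_documents(query_vec, index_vecs):
    """Rank documents by cosine similarity to the query vector.
    query_vec: (d,) array; index_vecs: (n_docs, d) array.
    Returns (order, sorted_scores), best match first."""
    q = query_vec / np.linalg.norm(query_vec)
    M = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    scores = M @ q                 # cosine similarity per document
    order = np.argsort(-scores)    # descending
    return order, scores[order]
```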
  • Fast CU size decision based on texture-depth relationship for depth map encoding in 3D-HEVC   Order a copy of this article
    by Liming Chen, Zhaoqing Pan, Xiaokai Yi, Yajuan Zhang 
    Abstract: Because many advanced encoding techniques are introduced in 3D-HEVC, it achieves higher encoding efficiency than HEVC. However, the encoding complexity of 3D-HEVC increases significantly as these high-complexity coding techniques are used. In this paper, a new fast CU size decision algorithm based on the texture-depth relationship is proposed to reduce depth map encoding complexity, exploiting the strong correlation between the texture and depth maps in motion characteristics and background regions. Both kinds of map tend to choose the same CU depth as their best depth level. By building a one-to-one match for collocated largest coding units (LCUs), the texture-encoding information can be used to predict the depth level of the depth map. Experimental results show that the proposed method achieves 41.89% time saving on average, with a negligible drop of 0.04 dB in BDPSNR and a small increase of 2.29% in BDBR.
    Keywords: 3D-HEVC; early termination; CU split; PU mode; depth map.

  • Aligning molecular sequences using hybrid bioinspired algorithm in GPU   Order a copy of this article
    by Jayapriya Jayakumar, Michael Arock 
    Abstract: Explicating the functionality of the basic cell requires the study of bioinformatics. To better understand the structural and functional information of molecules, sequence analysis is considered the root domain. Here, aligning the sequences is the first step, and it is an NP-complete problem, like many problems in biology. Owing to the growth of molecular data in biology, there is a demand for efficient approaches to the sequence alignment problem, and prior studies show a trade-off between accuracy and computational time. Focusing on the latter, this paper proposes a new parallel hybridised bio-inspired approach (PGWOGO) that does not sacrifice accuracy. A Grey Wolf Optimiser is hybridised with genetic operators, and the parallel phases are implemented on a Quadro 4000 graphics processing unit. New crossover and mutation operators, namely horizontal crossover and a local gap-shuffle mutation operator between aligned blocks, are employed. The performance of the proposed algorithm is evaluated using CUPS (cell updates per second) and compared with state-of-the-art techniques. The results show that the proposed algorithm yields better alignments than other techniques.
    Keywords: GPU; alignment; hybrid bioinspired; GWO; genetic operators; crossover; mutation.

  • Deep learning for collective anomaly detection   Order a copy of this article
    by Mohiuddin Ahmed, Al-Sakib Khan Pathan 
    Abstract: Deep learning has been performing well in a number of application domains. Inspired by its popularity in domains such as image processing and speech recognition, in this paper we explore the effectiveness of deep learning and other supervised learning algorithms for collective anomaly detection. Recently, collective anomaly detection has become popular for DoS (Denial of Service) attack detection; however, existing approaches are unsupervised in nature and, as a result, often have high false alarm rates. Therefore, to reduce false alarm rates, we experiment with a deep learning method that is supervised in nature. Our experimental results on the UNSW-NB15 and KDD Cup 1999 datasets show that deep learning implemented using H2O achieves ≈97% recall for collective anomaly detection, outperforming a wide range of unsupervised techniques. To the best of our knowledge, this paper is the first to address the collective anomaly detection problem using deep learning.
    Keywords: deep learning; collective anomaly; DoS attack; traffic analysis.
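Since the headline number is recall, it is worth being precise about the metrics involved: recall counts how many true attack records the detector flags, while the false alarm rate counts how many normal records are mislabelled as attacks. A small self-contained sketch:

```python
def recall(y_true, y_pred, positive=1):
    """Of all true attack records, the fraction the detector flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if tp + fn else 0.0

def false_alarm_rate(y_true, y_pred, positive=1):
    """Of all normal records, the fraction wrongly flagged as attacks."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return fp / (fp + tn) if fp + tn else 0.0
```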

  • Experimental investigation and CFD simulation of power consumption for mixing in the gyro shaker   Order a copy of this article
    by P.A. Abdul Samad, P.R. Shalij 
    Abstract: Good mixing of ingredients is key to improving process quality in the manufacture of several products. The gyro shaker is a dual-rotation mixer commonly used for mixing highly viscous fluids. In this work, CFD simulation of multiphase mixing in the gyro shaker is carried out to obtain numerical solutions. Simulations of three different mixing models, namely the Eulerian granular model, the mixture model and the volume of fluid (VOF) model, are compared. The Reynolds number and power number based on a characteristic velocity were derived for the gyro shaker. Experiments were conducted to validate the simulated mixing power using the torque method and the viscous dissipation method. The viscous dissipation method demonstrates a smaller deviation from the experimental data than the torque method. Among the three simulation models, the multiphase mixture model shows the smallest deviation from the experimental data. A comparison of the flow fields of the different mixing models is also carried out.
    Keywords: computational fluid dynamics; characteristic velocity; Eulerian granular; gyro shaker; mixture model; multiphase; power consumption; power number; viscous dissipation; VOF; volume of fluid.

  • Optimisation model of price changes after knowledge transfer in the big data environment   Order a copy of this article
    by Chuanrong Wu, Evgeniya Zapevalova, Deming Zeng 
    Abstract: Big data knowledge and private knowledge are the two dominant types of knowledge that an enterprise needs for new product innovation in a big data environment. Big data knowledge can help enterprises make decisions, trim costs and lift sales. Private knowledge is usually core patent knowledge, which can help enhance product quality. The decline in product cost and the increased quality of a new product after knowledge transfer may prompt pricing decisions for an enterprise's new product: should the enterprise cut prices, keep prices unchanged, or raise prices after knowledge transfer? It is important to analyse the impact of a new product's price change on knowledge transfer activities in the big data environment. By considering the changes in product costs, market share and profits after knowledge transfer caused by price changes to a new product, an optimisation model of price changes after knowledge transfer in the big data environment is presented. The model can assess the impact that a price change will have on the expected profits of an enterprise that has engaged in knowledge transfer in the big data environment. The experimental results are consistent with previous studies and actual economic conditions, and the model is deemed valid. It can support pricing decisions about new products for enterprises in the big data environment.
    Keywords: big data; knowledge transfer; optimisation model; price change; price decision.

  • Predicting new composition relations between web services via link analysis   Order a copy of this article
    by Mingdong Tang, Fenfang Xie, Wei Liang, Yanmin Xia 
    Abstract: With the wide application of Service-Oriented Architecture (SOA) and Service-Oriented Computing (SOC), the past decade has witnessed rapid growth in the number of web services on the internet. Against this background, combining different web services to create new applications has attracted great interest from developers. However, according to the latest statistics, only a small number of popular services are frequently used by developers, and the usage rates of most web services are rather low. Helping service users discover appropriate web services and promoting service composition has thus become a significant need. In this paper, we propose a link-based approach to predicting new composition relations between web services. The approach explores known composition relations and similarity relations among web services. To measure composition or similarity degrees, several link-based methods are exploited, and two reasonable heuristic rules are developed that integrate the existing composition and similarity relations for service composition prediction. Case studies and experiments based on real web service datasets validate the proposed approach.
    Keywords: service composition; composition relations; similarity relations; link prediction; web services; API; Mashup.
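One of the simplest link-based scores such an approach could build on is the common-neighbour count between two services in the known composition graph. A generic heuristic, not necessarily the exact measures used in the paper:

```python
from collections import defaultdict

def common_neighbour_scores(edges, candidates):
    """Score candidate service pairs by the number of shared
    composition partners, a basic link-prediction heuristic.
    edges: iterable of (service, service) composition links.
    candidates: iterable of (u, v) pairs to score."""
    nbrs = defaultdict(set)
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    return {(u, v): len(nbrs[u] & nbrs[v]) for u, v in candidates}
```

Pairs with high scores are the ones most plausibly composable next; richer variants weight neighbours (e.g. Adamic-Adar) instead of counting them.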

  • Research on product design knowledge organisation model based on granularity principle   Order a copy of this article
    by Youyuan Wang, Weiwei Qian, Lu Zhao 
    Abstract: To address the weak discernibility of knowledge demands in the product design process, a knowledge organisation model based on the granularity principle is put forward. The paper applies knowledge units and knowledge points to describe product design knowledge, adopts the granularity principle to organise product design knowledge by granulation, manages the classification, association and inference of knowledge points according to task requirements and structures, formalises the related knowledge, and ultimately provides a knowledge service in the form of knowledge units. Through a case analysis, the method is proven effective in improving the relevance of knowledge and the efficiency of the knowledge service.
    Keywords: knowledge organisation; granularity principle; product design.

  • A novel clustering algorithm based on the deviation factor model   Order a copy of this article
    by Jungan Chen, Chen Yinyin, Yang Dongyong 
    Abstract: For classical clustering algorithms, it is difficult to find clusters with non-spherical shapes or varied size and density. Many methods have been proposed in recent years to overcome this problem, such as introducing more representative points per cluster, considering both interconnectivity and closeness, and adopting density-based methods. However, the density defined in DBSCAN is determined by minPts and Eps, which is not the best way to describe the data distribution of a cluster. In this paper, a deviation factor model is proposed to describe the data distribution, and a novel clustering algorithm based on an artificial immune system is presented. The experimental results show that the proposed algorithm is more effective than DBSCAN, k-means, etc.
    Keywords: clustering algorithm; DBSCAN; artificial immune system.
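The limitation the deviation factor model targets is visible in DBSCAN itself: a single global (Eps, minPts) pair fixes one density scale for the whole dataset. A compact pure-Python DBSCAN to make that concrete (illustrative, not the paper's code; neighbour counts include the point itself):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points; returns a label per point
    (-1 = noise). One global (eps, min_pts) pair fixes a single
    density scale, which is the limitation discussed above."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisional noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:   # core point: keep expanding
                queue.extend(k for k in nb if labels[k] is None)
    return labels
```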

  • Multi-keywords carrier-free text steganography method based on Chinese Pinyin   Order a copy of this article
    by Yuling Liu, Jiao Wu, Guojiang Xin 
    Abstract: By combining big data with the characteristics of steganography, carrier-free steganography was proposed to resist all steganalysis attacks. A novel method, multi-keywords carrier-free text steganography based on Chinese Pinyin, is introduced in this paper. In the proposed method, hidden tags are selected from the Pinyin combinations of two words. In the hiding process, POS (Part of Speech) tagging is used to hide the number of keywords. The redundancy of hidden tags in the extraction process is eliminated by ensuring the uniqueness of each hidden tag in each stego-text. Meanwhile, joint retrieval is used to hide multiple keywords. Experimental results show that the proposed method performs well in hiding capacity, success rate of hiding, extraction accuracy and time efficiency, given appropriate hidden tags and large-scale text data.
    Keywords: carrier-free steganography; big text data; multi-keywords; Chinese Pinyin; POS tagging.

  • Collaborative filtering-based recommendation system for big data   Order a copy of this article
    by Jian Shen, Tianqi Zhou, Lina Chen 
    Abstract: The collaborative filtering algorithm is widely used in the recommendation systems of e-commerce websites (Wong et al., 2016); it analyses a large amount of users' historical behaviour data to explore users' interests and recommend appropriate products. In this paper, we focus on how to design a reliable and highly accurate algorithm for movie recommendation. Notably, the algorithm is not limited to film recommendation and can be applied in many other areas of e-commerce. We implement a movie recommendation system in Java on Ubuntu. Benefiting from the MapReduce framework and an item-based recommendation algorithm, the system can handle large datasets. The experimental results show that the system achieves high efficiency and reliability on large datasets.
    Keywords: big data; collaborative filtering; e-commerce; movie recommendation; MapReduce framework.
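The item-based step of such a recommender reduces to computing similarities between item rating vectors; the MapReduce version distributes exactly this computation. A single-machine sketch of the cosine-similarity core on toy data (not the paper's Java/MapReduce implementation):

```python
import math
from collections import defaultdict

def item_similarities(ratings):
    """Cosine similarity between items.
    ratings: dict user -> {item: rating}.
    Returns {(item_a, item_b): similarity} for pairs with at least
    one co-rating user."""
    items = defaultdict(dict)            # invert to item -> {user: rating}
    for user, prefs in ratings.items():
        for item, r in prefs.items():
            items[item][user] = r
    sims = {}
    names = list(items)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            common = set(items[a]) & set(items[b])
            if not common:
                continue
            dot = sum(items[a][u] * items[b][u] for u in common)
            na = math.sqrt(sum(v * v for v in items[a].values()))
            nb = math.sqrt(sum(v * v for v in items[b].values()))
            sims[(a, b)] = dot / (na * nb)
    return sims
```

In the MapReduce variant, mappers emit co-rating pairs per user and reducers accumulate the dot products and norms.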

  • Communication optimisation for intermediate data of MapReduce computing model   Order a copy of this article
    by Yunpeng Cao, Haifeng Wang 
    Abstract: MapReduce is a typical computing model for the processing and analysis of big data. A MapReduce job produces a large amount of intermediate data after the Map phase, which results in heavy intermediate-data communication across rack switches during the Shuffle process and degrades the computing performance of heterogeneous clusters. In order to optimise the intermediate data communication of Map-intensive jobs, pre-run scheduling information of MapReduce jobs is extracted as features, and job classification is realised by machine learning. Jobs with active intermediate data communication are mapped into one rack to preserve the communication locality of intermediate data, while jobs with inactive communication are deployed to nodes sorted by computing performance. The experimental results show that the proposed communication optimisation scheme works well for Shuffle-intensive jobs, with an improvement that can reach 4-5%. With larger amounts of input data, the scheme is robust and can adapt to heterogeneous clusters; in a multi-user application scenario, intermediate data communication can be reduced by 4.1%.
    Keywords: MapReduce computing model; big data processing; communication optimisation; intermediate data; machine learning.

  • Evaluating the trustworthiness of BPEL processes based on data dependency and XBFG   Order a copy of this article
    by Chunling Hu, Cuicui Liu, Bixin Li 
    Abstract: Composite services implement value-added functionality by composing service components of various granularities. Trust is an important criterion for judging whether a composite service behaves as expected. There is a great need for a flexible trust evaluation method for composite services, which can guide service selection and the trust-based optimisation and evolution of composite services. In this paper, a data-dependency-based trust evaluation approach for composite services in the Business Process Execution Language (BPEL) is proposed. Firstly, we derive define-use pairs of variables to identify data dependency between service components in BPEL processes modelled by the eXtensible BPEL Flow Graph (XBFG). Dependency links, covering both direct and indirect data dependencies, are then used to evaluate the trust of these service components. Furthermore, on the basis of the BPEL structure and the XBFG, reduction rules are proposed to evaluate the global trust of BPEL processes. Experimental results demonstrate that the proposed approach is effective for the trust evaluation of BPEL composite services and remains stable as the number of service components in a BPEL process grows.
    Keywords: trust evaluation; data dependency; dependency link; reduction rules; XBFG.

  • Assessing classification complexity of datasets using fractals   Order a copy of this article
    by André Luiz Marasca, Marcelo Teixeira, Dalcimar Casanova 
    Abstract: Supervised classification is a mechanism used in machine learning to associate classes with objects from datasets. Depending on the dimension and the internal structure of the data, classification may become complex. In this paper, we claim that the complexity level of a given dataset can be estimated using fractal analysis. A novel fractal measure, called the transition border, is proposed to estimate the chaos behind the distribution of labelled points. Its correlation with the success rate is tested by comparison against results obtained from other supervised classification methods. Results suggest that this approach can be used to measure the complexity of a classification task on real-valued datasets with three dimensions. The proposed method may also be useful in other science domains where fractal analysis is applicable.
    Keywords: supervised classification; fractal analysis; chaotic datasets.
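The paper's transition border measure is not reproduced here, but the fractal-analysis machinery it builds on can be illustrated with the classic box-counting dimension estimate: cover the data with boxes at several scales and fit the slope of log(box count) against log(1/box size). A standard technique, not the proposed measure:

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of 2-D points lying in
    the unit square: slope of log N(s) vs log s, where N(s) is the
    number of occupied boxes when the square is split into s x s boxes."""
    xs, ys = [], []
    for s in scales:
        size = 1.0 / s
        boxes = {(int(x / size), int(y / size)) for x, y in points}
        xs.append(math.log(s))
        ys.append(math.log(len(boxes)))
    n = len(xs)                       # least-squares slope
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

A space-filling point set yields a slope near 2, a line near 1; sets between those extremes signal more intricate structure.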

  • Partial-duplicate image retrieval based on HSV colour space for coverless information hiding   Order a copy of this article
    by Ningsheng Zhao, Zhili Zhou 
    Abstract: In conventional image steganography approaches, modification traces are left in the designated cover images when embedding secret information; consequently, the secret information can be detected by steganalysis tools that capture these traces. In this paper, a coverless image steganography approach based on partial-duplicate image retrieval is proposed to hide a secret image in a set of natural images without modification. Instead of modifying a designated image to embed secret information, the partial-duplicate images of the secret image are retrieved from a large-scale image database to hide it. First, we equally divide any given query or database image into a set of image blocks. Then, for a given query image, we search for its partial-duplicate images in the large-scale database according to the similarity between image blocks, computed by extracting and comparing the HSV colour features of these blocks. Finally, the retrieved partial-duplicate images are used to represent and hide the secret image. Moreover, we also propose an encryption algorithm based on chaotic maps to ensure the security of our approach. Both theoretical analysis and experimental results demonstrate that the proposed coverless image steganography approach has strong resistance to existing steganalysis tools, good security, and high robustness to common image attacks.
    Keywords: information hiding; coverless image steganography; HSV; image retrieval; partial-duplicate image.
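The block-matching step reduces to comparing HSV colour statistics of image blocks. A hedged sketch using quantised histograms and histogram intersection; the bin counts and the similarity measure are illustrative choices, and the paper's exact features may differ:

```python
import numpy as np

def hsv_histogram(block, bins=(8, 4, 4)):
    """Quantised HSV histogram of an image block.
    block: (h, w, 3) array with H, S, V channels already in [0, 1].
    Returns a normalised 1-D histogram of length prod(bins)."""
    idx = [np.clip((block[..., c] * bins[c]).astype(int), 0, bins[c] - 1)
           for c in range(3)]
    flat = (idx[0] * bins[1] + idx[1]) * bins[2] + idx[2]
    hist = np.bincount(flat.ravel(), minlength=bins[0] * bins[1] * bins[2])
    return hist / hist.sum()

def block_similarity(h1, h2):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    return float(np.minimum(h1, h2).sum())
```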

  • A routing strategy with energy optimisation based on community in mobile social networks   Order a copy of this article
    by Gaocai Wang, Nao Wang, Ying Peng, Shuqiang Huang 
    Abstract: In current mobile networks, usage has drastically shifted from end-to-end communication between mobile users and base stations to message/content retrieval among mobile users, forming a so-called mobile social network. Usually, in a mobile social network, the movement of mobile users exhibits social aggregation characteristics, and mobile users who visit different communities form a community-based connected network. This paper studies the energy consumption optimisation problem of message delivery based on the social characteristics of mobile users, and proposes an optimal energy-efficiency routing strategy based on community, which minimises network energy consumption under a given delay. Firstly, the expected energy consumption and delay of message delivery in a connected network are obtained through a Markov chain. Then a comprehensive cost function for message delivery from a source node to a destination node is designed, combining energy consumption and delay, and from it an optimisation function relating a relay's message delivery to its comprehensive cost is obtained. Further, the reward function of a relay is given. Finally, the optimal expected reward of the optimal relay is derived using optimal stopping theory, realising the optimal energy-efficiency routing strategy. In simulations, the average energy consumption, average delay and average delivery ratio of the proposed routing optimisation strategy are compared with those of routing strategies in the related literature. The results show that the proposed strategy achieves lower average energy consumption, lower average delay and a higher average delivery ratio, yielding better energy consumption optimisation.
    Keywords: mobile social networks; optimal energy efficiency routing; community; optimal stopping; optimal relay.

  • A holistic IT infrastructure management framework   Order a copy of this article
    by Sergio Varga 
    Abstract: Information Systems (IS) are becoming increasingly complex, and this complexity brings issues to be solved. New technologies, products and deployment models make an IS difficult to maintain. Organisations need to deploy tools, processes and governance in their Information Technology (IT) environment to support their IS. This further increases the complexity of the IT environment and drives organisations to manage it in silos or by components. Such management prevents organisations from ensuring that the entire environment is properly managed according to what was agreed in the outsourcing contract, despite the IT frameworks available. This paper analyses and identifies these issues and proposes an IT management framework that helps organisations provide an efficient service. The service is based on an agreed scope and ensures that all contracted services are deployed accurately and completely, with proper management and awareness.
    Keywords: information systems; information technology; IT Management; ITIL; ITSM; cloud.

  • Super-sampling by learning-based super-resolution   Order a copy of this article
    by Ping Du, Jinhuan Zhang, Jun Long 
    Abstract: In this paper, we present a novel problem of intelligent image processing: how to infer a finer image, in terms of intensity levels, from a given image. We explain the motivation for this effort and present a simple technique that makes it possible to apply existing learning-based super-resolution methods to this new problem. As a result of adopting intelligent methods, the proposed algorithm needs notably little human assistance. We also verify our algorithm experimentally.
    Keywords: texture synthesis; super-resolution; image manifold.

  • An evolutionary algorithm for finding optimisation sequences: proposal and experiments   Order a copy of this article
    by João Fabrício Filho, Luis Gustavo Araujo Rodriguez, Anderson Faustino Da Silva 
    Abstract: Evolutionary algorithms are metaheuristics for solving combinatorial and optimisation problems. A combinatorial problem that is important in the context of software development consists in selecting the code transformations the compiler should apply while generating the target code. The objective of this paper is to propose and evaluate an evolutionary algorithm capable of finding an efficient sequence of optimising transformations to be used while generating the target code. The results indicate that the algorithm efficiently finds good transformation sequences and is a good option for generating databases for machine learning systems.
    Keywords: evolutionary algorithms; code optimisation; iterative compilation; machine learning.

  • An adaptive model for traffic flow optimisation in dynamic environments   Order a copy of this article
    by Rahul M V, Rajashree Shettar, K.N. Subramanya 
    Abstract: Formulating the solution as an optimisation problem has proven to be effective in developing solutions to many real-world problems. We generally obtain the best possible solution using these methods. In this work, the traffic scheduling problem has been formulated as a waiting time minimisation problem, and appropriate cost functions have been developed in pursuit of the optimal solution. A first-in first-out queuing model is used, with vehicles arriving in a Poisson process and the service time being exponentially distributed. The key feature of this model is that it adapts to the varying service and arrival rates of the lanes. These rates are forecast using a neural network model and appear in the objective function. It was observed that the use of the neural network greatly improved the robustness of the model. Although the model has been developed for a four-lane, two-way architecture, it can be generalised to any architecture. Results have been analysed by comparing the proposed method with a proportional time distribution. It is shown that the proposed model performs relatively well when there is rapid variation in arrival and service rates.
    Keywords: queuing model; isolated intersections; neural network; interior point optimisation; traffic scheduling; Poisson process; four-lane intersection.
    DOI: 10.1504/IJCSE.2019.10016802
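The queuing model above can be made concrete with a toy objective: under the M/M/1 assumptions mentioned in the abstract, the expected waiting time of each lane group depends on its arrival rate and on the share of green time it receives. The sketch below (hypothetical rates, a two-way green split and a grid search, not the authors' interior point formulation) illustrates such a waiting-time minimisation:

```python
# Illustrative M/M/1 waiting-time objective for a signalised intersection.
# All rates are hypothetical; the paper's actual cost function may differ.

def mm1_wait(lam, mu):
    """Expected waiting time in an M/M/1 queue (requires mu > lam)."""
    if mu <= lam:
        return float("inf")        # unstable queue
    return lam / (mu * (mu - lam))

def total_wait(green_fraction, lanes):
    """Sum of expected waits when each lane group's effective service
    rate is scaled by its share of green time."""
    shares = [green_fraction, 1.0 - green_fraction]
    return sum(mm1_wait(lam, mu * s)
               for (lam, mu), s in zip(lanes, shares))

# Two competing lane groups: (arrival rate, saturation service rate).
lanes = [(0.3, 1.0), (0.5, 1.0)]

# Grid search for the green split that minimises total expected wait.
best = min((total_wait(g / 100, lanes), g / 100) for g in range(1, 100))
print(best)
```

Scaling the service rate by the green-time share is a common simplification; the paper instead forecasts the rates with a neural network and solves the resulting problem with interior point optimisation.
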
  • Energy replenishment optimisation via density-based clustering   Order a copy of this article
    by Xin Gu, Jun Peng, Yijun Cheng, Xiaoyong Zhang, Kaiyang Liu 
    Abstract: This paper investigates a density-based clustering approach to achieve efficient energy replenishment in wireless rechargeable sensor networks (WRSNs). Sensor nodes with charging requests are divided into several clusters. Some of them are selected as head nodes, which a mobile charger visits; the rest are assigned to their closest head nodes. The mobile charger then serves all nodes in the same cluster simultaneously. Unlike other clustering algorithms, our proposed approach selects head nodes with high local density. The distance between high-density nodes is also taken into consideration, effectively reducing the charging delay. Simulation results show that our proposed clustering approach can achieve optimal cluster results. Moreover, compared with two other cluster-based charging methods, our approach reduces the charging delay and travel distance in both dense and sparse deployment scenarios.
    Keywords: wireless rechargeable sensor networks; clustering; mobile charger; wireless energy transfer; charging delay.
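The head selection described above (high local density, with the distance between high-density nodes also considered) is in the spirit of density-peaks clustering. A minimal sketch on hypothetical 2-D node positions, not the authors' implementation:

```python
# Density-peaks-style head selection on hypothetical node coordinates.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_heads(nodes, d_cut, k):
    # Local density: number of neighbours within the cutoff radius.
    rho = [sum(1 for q in nodes if q is not p and dist(p, q) < d_cut)
           for p in nodes]
    # Delta: distance to the nearest node of strictly higher density
    # (for top-density nodes, the farthest distance, as is conventional).
    delta = []
    for i, p in enumerate(nodes):
        higher = [dist(p, nodes[j]) for j in range(len(nodes))
                  if rho[j] > rho[i]]
        delta.append(min(higher) if higher
                     else max(dist(p, q) for q in nodes if q is not p))
    # Heads maximise the product rho * delta: dense AND far from
    # other dense nodes, which spreads heads across clusters.
    score = [r * d for r, d in zip(rho, delta)]
    return sorted(range(len(nodes)), key=lambda i: -score[i])[:k]

# Two tight groups of nodes plus one isolated node.
nodes = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (5, 5)]
heads = select_heads(nodes, d_cut=2.0, k=2)
print(heads)
```

On this toy layout the two selected heads land one in each dense group, which is the behaviour the abstract relies on to reduce charging delay.
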

  • Evolutionary ant colony algorithm using firefly-based transition for solving vehicle routing problems   Order a copy of this article
    by Rajeev Goel, Raman Maini 
    Abstract: In this paper, we propose an evolutionary optimisation algorithm that adapts the advantages of ant colony optimisation and firefly optimisation algorithms to solve the vehicle routing problem and its variants. A firefly optimisation (FA) based transition rule, along with a pheromone shaking rule, is used to escape local optima. Whereas the multi-modal nature of FA helps in exploring the search space, pheromone shaking avoids the stagnation of pheromone deposits on the exploited paths. This is expected to improve the working of the ant colony system (ACS). The performance of the proposed algorithm has been compared with that of other available meta-heuristic approaches currently used for solving vehicle routing problems on benchmark problems. Results show the consistency of the proposed approach. Moreover, its convergence rate is faster and the obtained solutions are closer to optimal than those obtained using certain other existing meta-heuristic approaches. The results also demonstrate the effectiveness of the presented algorithm over other existing FA-based algorithms for solving vehicle routing problems.
    Keywords: ant colony optimisation; evolutionary algorithms; firefly optimisation; vehicle routing problems.

  • Performance evaluation of main-memory hash joins on KNL   Order a copy of this article
    by Deyou Tang, Yazhuo Zhang, Qingmiao Zeng, Hu Chen 
    Abstract: New hardware features have propelled the design and analysis of main-memory hash joins. In previous studies, memory access has always been the primary bottleneck for hash join algorithms. However, relatively few studies are devoted to bottleneck analysis on the Knights Landing (KNL) processor. In this paper, we examine state-of-the-art hash join algorithms on KNL and analyse their bottlenecks under different workloads. The analysis and comparisons show that both memory latency and bandwidth are key to improving hash joins, and that multi-channel dynamic random access memory (MCDRAM) plays a vital role in enhancing performance. Notably, we find that hardware-oblivious hash join algorithms perform better than hardware-conscious approaches; to the best of our knowledge, a typical hardware-oblivious join achieves better performance than ever before. Through the analysis, we shed light on how the new features of KNL affect the performance of hash joins.
    Keywords: performance evaluation; main-memory; hash join; algorithm; KNL; memory latency; bandwidth; cache alignment; cache miss; prefetching; MCDRAM.
    DOI: 10.1504/IJCSE.2018.10016618
  • Feature selection with improved binary artificial bee colony algorithm for microarray data   Order a copy of this article
    by Sheng-Sheng Wang, Ru-Yi Dong 
    Abstract: In the areas of clinical and medical diagnosis, gene expression profiles are known to have latent qualities, as they denote the state of cells in molecular rankings. However, in cancer classification research, sample sizes are relatively small compared with the number of genes in the training datasets, and this scarcity of training data remains a daunting problem. Hence, an efficient gene selection algorithm for sample classification is needed to enhance predictive accuracy and to avoid the intractability caused by the extensive number of genes involved. In this article, we propose an improved binary artificial bee colony (BABC) algorithm based on the chaotic catfish effect for feature selection. A chaotic effect is added to the initialisation procedure of BABC, and chaotic catfish-bees are further introduced for new nectar exploration, improving the BABC algorithm by preventing bees from becoming trapped in local optima. This allows a search through all possible solution spaces in a short time. The results of our experiments show that this new method yields an elaborate feature simplification that achieves very precise and significant accuracy in the classification of 9 of the 11 datasets, in comparison with other feature selection methods.
    Keywords: feature selection; binary artificial bee colony; support vector machines; chaotic catfish effect.
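The chaotic initialisation step can be illustrated with a logistic map driving the initial binary food sources. This is a generic sketch of chaos-based initialisation (population size, threshold and map parameters are illustrative), not the authors' exact formulation:

```python
# Logistic-map chaotic initialisation of a binary population.
# Parameters (population size, threshold, x0, r) are illustrative.

def chaotic_binary_population(n_sources, n_features, x0=0.7, r=4.0):
    """Generate binary food sources whose bits are driven by the
    logistic map x <- r * x * (1 - x) instead of a uniform RNG."""
    population = []
    x = x0
    for _ in range(n_sources):
        source = []
        for _ in range(n_features):
            x = r * x * (1.0 - x)       # chaotic update
            source.append(1 if x > 0.5 else 0)
        population.append(source)
    return population

pop = chaotic_binary_population(n_sources=5, n_features=10)
print(pop)
```

At r = 4 the logistic map is fully chaotic, so the initial food sources are spread over the binary search space deterministically yet non-repetitively, which is the usual motivation for chaotic initialisation.
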

  • An integrated ambient intelligence system for a smart lab environment   Order a copy of this article
    by Dat Do, Scott King, Alaa Sheta, Thanh Pham 
    Abstract: The goals of the ambient intelligence system are not only to enhance the way people communicate with the surrounding environment but also to advance safety measures and enrich human lives. In this paper, we introduce an Integrated Ambient Intelligence System (IAmIS) to perceive the presence of people, identify them, determine their locations, and provide suitable interaction with them. The proposed framework can be applied in various application domains such as a smart house, authorisation, surveillance, crime prevention, and many others. The proposed system has five components: body detection and tracking, face recognition, controller, monitor system, and interaction modules. The system deploys RGB cameras and Kinect depth sensors to monitor human activity. The developed system is designed to be fast and reliable for indoor environments. The proposed IAmIS can interact directly with the environment or communicate with humans acting on the environment. Thus, the system behaves as an intelligent agent. The system has been deployed in our research lab and can recognise lab members and guests to the lab as well as track their movements and have interactions with them depending on their identity and location within the lab.
    Keywords: ambient intelligence system; awareness system; object tracking; face recognition; body tracking; Kinect.

  • An internet-of-things based security scheme for healthcare environment for robust location privacy   Order a copy of this article
    by Aakanksha Tewari, Brij Gupta 
    Abstract: Recently, various applications of the internet of things have been developed for the healthcare sector. Our contribution is to provide a secure and low-cost environment for IoT devices in healthcare. The main goal is to make patients' lives easier and more comfortable by providing them with more effective treatments. Nevertheless, we also intend to address the issues of location privacy and security that arise from the deployment of IoT devices. We propose a very simple mutual authentication protocol that provides strong location privacy by using one-way hashes, pseudo-random number generators and bitwise operations. Strong location privacy is a key factor in ensuring healthcare security. We can enforce this property by making sure that tags in the network are indistinguishable and that the protocol ensures forward secrecy. The security strength of our protocol is verified through a formal proof model for location privacy.
    Keywords: internet of things; location privacy; RFID; mutual authentication; forward secrecy; indistinguishability.

  • A fuzzy controller for an adaptive VNFs placement in 5G network architecture   Order a copy of this article
    by Sara Retal, Abdellah Idrissi 
    Abstract: In cloud computing, computation and memory resources are becoming a relevant and growing business. On the other hand, mobile network architecture faces many hurdles, including a lack of flexibility in providing enhanced services and a distributed architecture, and the expensive cost of providing a network topology that meets the needs of user equipment (UE). To cope with these problems, cloud computing is used in the mobile telecommunications market thanks to network functions virtualisation. In this paper, we develop a fuzzy controller to support virtual network function placement and provide an adaptive solution to manage and organise the network. Our approach enables the solution to adapt to UE mobility and needs in terms of quality of experience. Furthermore, it minimises the serving gateway relocation cost and the path between UEs and packet data network gateways, taking into account resource capacities. The experimental results show that our approach provides good results compared with methods from the literature.
    Keywords: cloud computing; virtual network functions placement; adaptive placement; fuzzy controller; multi-objective optimisation.

  • The key user discovery model based on user importance calculation   Order a copy of this article
    by Lei Zhang, Dandan Jiang, Ruirong Xue, Yawen Yi, Xiangfeng Luo 
    Abstract: Recently, more and more users publish their views on events in social media. Identifying influential users in social media and calculating their importance can help to analyse the impact of hot events or enterprise products in the real world. Methods based on attribute analysis select relatively simple characteristics without digging into event-targeted properties, whereas network-based methods use only user behaviour relations or content association relations to build a network, do not take user attributes into consideration, and cannot effectively calculate user importance. This paper proposes a multi-angle user importance calculation method with event-specificity. The overall importance of a user is measured by taking into account four layers: the user layer, the fan layer, the micro-blog layer and the event layer. Experimental results show that our method can effectively calculate the importance of users.
    Keywords: key user discovery; multi-layer; social media.

  • Event-triggered fault estimation for stochastic state-delay systems against adversaries with application to a DC motor system   Order a copy of this article
    by Yunji Li, Yi Gao, Quansheng Liu, Li Peng 
    Abstract: This paper is concerned with the problem of fault estimation for stochastic state-delay systems subject to adversaries under an event-triggered data-transmission framework. An adversarial fault estimator is designed for simultaneous remote state and fault estimation. Furthermore, a sufficient condition is provided for the mean-square exponential stability of the proposed event-triggered fault estimator. The corresponding event-triggered sensor data transmission scheme is designed to reduce the overall communication burden. Finally, an application example on a DC motor system is presented, and the benefits of the obtained theoretical results are demonstrated by comparative experiments.
    Keywords: fault estimation; event-triggered data-transmission scheme; time delays.

  • Dependence structure between bitcoin price and its influence factors   Order a copy of this article
    by Weili Chen, Zibin Zheng, Mingjie Ma, Jiajing Wu, Jiaquan Yao, Yuren Zhou 
    Abstract: Bitcoin is a decentralised digital currency that has attracted growing interest over recent years. Much research from different subjects has emerged because bitcoin is a multidisciplinary product. Among these studies, the interpretation of the drastic fluctuation of the bitcoin price has attracted great attention, and many influence factors of the bitcoin price have been found. However, research seldom reveals the dependence structure between the price and its influence factors. By selecting 10 interpretable influence factors from the bitcoin network and using copula theory, we find that the bitcoin price has different correlation structures with its influence factors. These findings provide new insights into the behaviour of miners, users, and coins in the bitcoin system, thus leading to meaningful implications for policymakers, investors and risk managers dealing with bitcoin and other cryptocurrencies.
    Keywords: bitcoin; price fluctuation; influence factor; dependency structure; copula function.
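A first step of such a copula-based dependence analysis is to rank-transform each series to pseudo-observations (stripping away the marginal distributions) and measure rank dependence, e.g. with Kendall's tau. A stdlib-only sketch on synthetic data, not bitcoin data:

```python
# Empirical pseudo-observations and Kendall's tau -- the first steps of
# a copula-based dependence analysis. Series below are synthetic.

def pseudo_obs(xs):
    """Rank-transform a sample to (0, 1): the empirical marginal CDF,
    which copula methods use to strip away marginal distributions."""
    n = len(xs)
    ranks = {v: r for r, v in enumerate(sorted(xs), start=1)}
    return [ranks[x] / (n + 1) for x in xs]

def kendall_tau(xs, ys):
    """Concordance-based dependence measure, invariant under the
    rank transform above (a copula property)."""
    n = len(xs)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0
            s -= (xs[i] - xs[j]) * (ys[i] - ys[j]) < 0
    return 2 * s / (n * (n - 1))

price  = [3.1, 4.0, 5.2, 4.8, 6.0, 7.5]     # synthetic price series
factor = [10,  12,  15,  14,  18,  21]      # a co-moving factor

u, v = pseudo_obs(price), pseudo_obs(factor)
print(kendall_tau(price, factor), kendall_tau(u, v))
```

Because tau depends only on ranks, it is identical on the raw series and on the pseudo-observations; fitting a parametric copula to (u, v) would then characterise the full dependence structure.
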

  • Software design of monitoring and flight simulation for UAV swarms based on OSGEarth   Order a copy of this article
    by Meimin Wu, Yuxiang Xiao, Qian Bi 
    Abstract: Real-time monitoring of unmanned aerial vehicle (UAV) swarms is critical for flight safety. In order to monitor the position and working condition of UAVs intuitively, we propose three-dimensional (3D) monitoring software for UAV swarms based on OpenSceneGraph Earth (OSGEarth). The software is built on a platform-plus-plug-ins architecture. The flight scene is constructed via 3D visualisation, and UAV nodes are updated and moved in the flight scene as data is received in real time. Meanwhile, in order to decrease the cost and improve work efficiency in the development and performance verification of UAV swarms, a simulation platform for UAV swarms is designed. The swarm behaviour algorithm is pre-designed in a Python file, which is read to parse the position data and display the flight scene. The software has been successfully applied to monitoring the flight of UAV swarms, demonstrating excellent accuracy and reliability.
    Keywords: OSGEarth; UAV swarms; real-time monitoring; 3D visualisation; swarm simulation.

  • Improved quantum secret sharing scheme based on GHZ states   Order a copy of this article
    by Ming-Ming Wang, Zhi-Guo Qu, Lin-Ming Gong 
    Abstract: With the rapid progress of quantum cryptography, secret sharing has been developed in the quantum setting for achieving a high level of security, which is known as quantum secret sharing (QSS). The first QSS scheme was proposed by Hillery et al. in 1999 [Phys. Rev. A 59, 1829 (1999)], based on entangled Greenberger-Horne-Zeilinger (GHZ) states. However, only 50% of the entangled quantum states are effective for eavesdropping detection and secret splitting in the original scheme. In this paper, we introduce a possible method, called the measurement-delay strategy, to improve the qubit efficiency of the GHZ-based QSS scheme. By using this method, the qubit efficiency of the improved QSS scheme can reach 100% for both security detection and secret distribution. The improved QSS scheme can be implemented experimentally with current technologies.
    Keywords: quantum secret sharing; efficiency; security; GHZ state.

  • Analysing research collaboration through co-authorship networks in a big data environment: an efficient parallel approach   Order a copy of this article
    by Carlos Roberto Valêncio, José Carlos De Freitas, Rogéria Cristiane Gratão De Souza, Leandro Alves Neves, Geraldo Francisco Donegá Zafalon, Angelo Cesar Colombini, William Tenório 
    Abstract: Bibliometry is the quantitative study of scientific production and enables the characterisation of scientific collaboration networks. However, with the development of science and the increase of scientific production, large collaboration networks are formed, which makes it difficult to extract bibliometrics. In this context, this work presents an efficient parallel optimisation, using multithread programming, of three bibliometrics for co-authorship network analysis: transitivity, average distance and diameter. Our experiments found that the time taken to calculate the transitivity value with the sequential approach grows 4.08 times faster than with the proposed parallel approach as the size of the co-authorship network grows. Similarly, the time taken to calculate the average distance and diameter values with the sequential approach grows 5.27 times faster than with the proposed parallel approach. In addition, we report relevant speed-up and efficiency values for the developed algorithms.
    Keywords: bibliometrics; graphs; knowledge extraction; co-authorship network; NoSQL; parallel computing.
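The parallel transitivity computation can be sketched by splitting per-node triangle and triple counting across threads. The toy version below uses Python threads purely for illustration (CPython's GIL prevents real speed-up here; a production implementation would use genuinely parallel threads, as the paper does):

```python
# Multithreaded transitivity (global clustering coefficient) sketch.
import threading

def transitivity(adj, n_threads=4):
    """Global transitivity = triangles-at-vertices / connected triples,
    with per-node work split across threads."""
    nodes = list(adj)
    triangles = [0] * n_threads
    triples = [0] * n_threads

    def worker(tid):
        for v in nodes[tid::n_threads]:          # round-robin partition
            neigh = adj[v]
            k = len(neigh)
            triples[tid] += k * (k - 1) // 2     # triples centred at v
            ns = sorted(neigh)
            for i in range(len(ns)):
                for j in range(i + 1, len(ns)):
                    if ns[j] in adj[ns[i]]:      # closed triangle at v
                        triangles[tid] += 1

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    tri, trip = sum(triangles), sum(triples)
    return tri / trip if trip else 0.0

# Toy co-authorship graph: triangle {a, b, c} plus a pendant author d.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(transitivity(adj))
```

Each thread writes only to its own slot of the `triangles`/`triples` lists, so no locking is needed; the partial counts are merged after the join.
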

  • Design of fault-tolerant majority voter for error-resilient TMR targeting micro- to nano-scale logic   Order a copy of this article
    by Mrinal Goswami, Subrata Chattopadhyay, Shiv Bhushan Tripathi, Bibhash Sen 
    Abstract: The shrinking size of transistors, driven by the increasing demand for higher density and lower power, has made VLSI circuits more vulnerable to faults. Therefore, new circuits in advanced VLSI technology have forced designers to use fault-tolerant techniques in safety-critical applications. In addition, non-permanent faults arising from the complexity of the nanocircuit or its interaction with software result in circuit malfunction. The fault-tolerant scheme in which a majority voter plays the core role, triple modular redundancy (TMR), is being implemented increasingly in digital systems. This work aims to implement a different fault-tolerant majority voter scheme for the implementation of TMR using quantum-dot cellular automata (QCA), a viable alternative nanotechnology to CMOS VLSI. The fault-masking ability of various voter designs has been analysed in detail; the fault-masking ratio of the proposed voter (FMV) is 66% considering single/multiple faults. Simulation results validate the proposed logic in QCA, which targets nano-scale devices. The proposed logic is also suitable for conventional CMOS technology, which is verified with the Cadence tool.
    Keywords: quantum dot cellular automata; triple modular redundancy; fault-tolerant majority voter; QCA defects; reliability; nanoelectronics.

  • Time series clustering using stochastic and deterministic influences   Order a copy of this article
    by Mirlei Silva, Rodrigo Mello, Ricardo Rios 
    Abstract: As part of the unsupervised machine learning area, time series clustering aims at designing methods to extract patterns from temporal data in order to organise different series according to their similarities. According to the literature, most research either performs a preprocessing step to convert time series into an attribute-value matrix to be later analysed by traditional clustering methods, or applies measures specifically designed to compute the similarity among time series. Based on such studies, we have noticed two main issues: i) clustering methods do not take into account the stochastic and deterministic influences inherent in time series from real-world scenarios; and ii) similarity measures tend to look for recurrent patterns, which may not be available in stochastic time series. In order to overcome such drawbacks, we present a new clustering approach that considers both influences, together with a new similarity measure to deal with purely stochastic time series. Experiments provided outstanding results, emphasising that time series are better clustered when their stochastic and deterministic influences are properly analysed.
    Keywords: time series; clustering; similarity measure.
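One simple way to separate the two influences before clustering, offered here only as an illustration and not as the authors' method, is to estimate the deterministic component with a centred moving average and treat the residual as the stochastic component:

```python
# Toy decomposition of a series into a smooth (deterministic) estimate
# and a residual treated as the stochastic component. Illustrative only.

def moving_average(series, window=5):
    """Centred moving average with shrinking windows at the borders."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

series = [float(i % 10) for i in range(30)]        # sawtooth signal
trend = moving_average(series)                     # deterministic estimate
residual = [s - t for s, t in zip(series, trend)]  # stochastic estimate
print(trend[:5], residual[:5])
```

A clustering method along the lines of the abstract would then compare series on the two components separately, rather than on the raw values alone.
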

  • Laius: an energy-efficient FPGA CNN accelerator with the support of a fixed-point training framework   Order a copy of this article
    by Zikai Nie, Zhisheng Li, Lei Wang, Shasha Guo, Yu Deng, Rangyu Deng, Qiang Dou 
    Abstract: With the development of convolutional neural networks (CNNs), their high computational complexity and energy consumption have become significant problems. Many CNN inference accelerators have been proposed to reduce energy consumption. Most of them are based on 32-bit floating-point matrix multiplication, where the data precision is over-provisioned for inference. This paper presents Laius, an 8-bit fixed-point LeNet inference engine implemented on FPGA. To economise FPGA resources, we propose a methodology to find the optimal bit-length for weights and biases in LeNet. We use pipelining, PE tiling and theoretical analysis to improve the performance, and we further optimise the convolutional sequence and data layout. Experimental results show that Laius achieves 44.9 Gops throughput. Moreover, with only 1% accuracy loss, Laius achieves reductions of 31.43% in delay, 87.01% in LUT consumption, 66.50% in BRAM consumption, 65.11% in DSP consumption and 47.95% in power compared with a 32-bit version of the same structure.
    Keywords: CNN accelerator; FPGA; inference engine; fixed-point training; data layout.
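The 8-bit fixed-point representation at the heart of Laius can be illustrated with a simple saturating quantiser. The bit split below (5 fractional bits) is an arbitrary example; the paper searches for the optimal bit-length for weights and biases:

```python
# Signed 8-bit fixed-point quantisation with saturation.
# The 5-bit fractional split is illustrative, not Laius's chosen format.

def to_fixed_point(x, frac_bits, total_bits=8):
    """Quantise a float to signed fixed-point with `frac_bits`
    fractional bits, saturating at the representable range, and
    return the dequantised value."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

weights = [0.731, -0.052, 1.994, -2.5]             # hypothetical weights
quantised = [to_fixed_point(w, frac_bits=5) for w in weights]
print(quantised)
```

With 5 fractional bits the representable range is [-4, 3.96875] in steps of 1/32, so in-range values are off by at most half a step; choosing the split per layer trades range against resolution, which is exactly what a bit-length search decides.
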

Special Issue on: High-Performance Information Technologies for Engineering Applications

  • Parallel data processing approaches for effective intensive care units with the internet of things   Order a copy of this article
    by N. Manikandan, S Subha 
    Abstract: Computerisation in health care is increasingly common, and the monitoring of intensive care units (ICUs) is significant and life-critical. Accurate monitoring in an ICU is essential: failing to take the right decisions at the right time may prove fatal, while a timely decision can save lives in critical situations. To increase accuracy and timeliness in ICU monitoring, two major technologies can be used, namely parallel processing through vectorisation of ICU data and data communication through the Internet of Things (IoT). With our approach, we can improve efficiency and accuracy in data processing. This paper proposes a parallel decision tree algorithm on ICU data to take faster and more accurate decisions on data selection. The use of parallelised algorithms optimises the process of collecting large sets of patient information. A decision tree algorithm is used for examining and extracting knowledge-based data from large databases. Finalised information is transferred to the concerned medical experts in medical emergencies using the IoT. The parallel decision tree algorithm is implemented with threads, and output data is stored in local IoT tables for further processing.
    Keywords: medical data processing; internet of things; ICU data; vectorisation; multicore architecture; parallel data processing.

  • Execution of scientific workflows on IaaS cloud by PBRR algorithm   Order a copy of this article
    by S.A. Sundararaman, T. SubbuLakshmi 
    Abstract: Job scheduling of scientific workflow applications in IaaS clouds is a challenging task. An optimal mapping of jobs to virtual machines is calculated considering schedule constraints such as timeline and cost. Determining the number of virtual machines required to execute the jobs is key to finding the schedule with minimal makespan and cost. In this paper, the VMPROV algorithm is proposed to find the required virtual machines, and the priority-based round robin (PBRR) algorithm is proposed for finding a job-to-resource mapping with minimal makespan and cost. The execution times of four real-world scientific application workloads under PBRR are compared with those of the MINMIN, MAXMIN, MCT and round robin algorithms. The results show that the proposed PBRR algorithm can predict the mapping of tasks to virtual machines better than the other classic algorithms.
    Keywords: cloud job scheduling; virtual machine provisioning; IaaS.
    DOI: 10.1504/IJCSE.2016.10017130
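The abstract does not detail PBRR itself; a generic sketch of a priority-based round-robin mapping of tasks to virtual machines (hypothetical task lengths, with priority taken as descending task length) might look like:

```python
# Generic priority-based round-robin task-to-VM mapping.
# Task lengths and the priority rule are illustrative assumptions,
# not the paper's PBRR definition.

def pbrr_map(tasks, n_vms):
    """Sort tasks by priority (here, descending length), then assign
    them to VMs in round-robin order."""
    order = sorted(range(len(tasks)), key=lambda i: -tasks[i])
    return {task: pos % n_vms for pos, task in enumerate(order)}

def makespan(tasks, mapping, n_vms):
    """Makespan = load of the most heavily loaded VM."""
    loads = [0.0] * n_vms
    for task, vm in mapping.items():
        loads[vm] += tasks[task]
    return max(loads)

tasks = [40, 10, 25, 5, 30]          # hypothetical task lengths
mapping = pbrr_map(tasks, n_vms=2)
print(mapping, makespan(tasks, mapping, 2))
```

Prioritising long tasks before the round-robin pass tends to spread the heavy work first, which is the usual rationale for combining a priority order with round-robin dispatch.
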
  • Development and evaluation of the cloudlet technology within the Raspberry Pi   Order a copy of this article
    by Nawel Kortas, Anis Ben Arbia 
    Abstract: Nowadays, communication devices such as laptops, computers, smartphones and personal media players have extensively increased in popularity thanks to the rich set of cloud services that they allow users to access. This paper focuses on solutions to network latency for communication devices through the use of cloudlets. This work also proposes the design of a local datacentre that allows users to connect to their data from any point and through any device by means of a Raspberry Pi. We also present performance results for the resource utilisation rate, average execution time, latency, throughput and lost packets, which demonstrate the big advantage of the cloudless application for local and distant connections. Furthermore, we evaluate cloudless by comparing it with similar services and through simulation results obtained with the CloudSim simulator.
    Keywords: cloudlets; cloud computing; cloudless; Raspberry Pi; datacentre; device communication; file-sharing services.
    DOI: 10.1504/IJCSE.2016.10008320
  • Study of runtime performance for Java-multithread PSO on multiCore machines   Order a copy of this article
    by Imed Bennour, Monia Ettouil, Rim Zarrouk, Abderrazak Jemai 
    Abstract: Optimisation meta-heuristics, such as particle swarm optimisation (PSO), require high-performance computing (HPC). The use of software parallelism and hardware parallelism is mandatory to achieve HPC. Thread-level parallelism is a common software solution for programming on multicore systems. The Java language, with important aspects such as its portability and architecture neutrality, its multithreading facilities and its distributed nature, is an interesting language for parallel PSO. However, many factors may impact runtime performance: the coding style, the thread-synchronisation levels, the harmony between the software parallelism injected into the code and the available hardware parallelism, the Java networking APIs, etc. This paper analyses Java runtime performance in handling multithread PSO over general-purpose multicore machines and networked machines. Synchronous, asynchronous, single-swarm and multi-swarm PSO variants are considered.
    Keywords: high-performance computing; particle swarm optimisation; multicore; multithread; performance; simulation.
    DOI: 10.1504/IJCSE.2016.10015696

Special Issue on: Technologies and Applications in the Big Data Era

  • Research on implementation of digital forensics in cloud computing environment   Order a copy of this article
    by Hai-Yan Chen 
    Abstract: Cloud computing is a promising next-generation computing paradigm that integrates multiple existing and new technologies. With the maturing and wide application of cloud computing technology, more and more crimes occur in cloud computing environments, so the effective investigation of evidence of these crimes is extremely important and urgently needed. Because of the characteristics of the virtual computing environment (mass storage, distribution of data, and multi-tenancy), cloud computing makes the investigation of evidence extremely hard. For this purpose, we propose a digital forensics reference model for the cloud environment. First, we divide cloud forensics into four steps and give an implementation scheme for each. Secondly, a trusted evidence collection mechanism for the cloud platform, based on a trusted evidence collection agent, is put forward. Finally, methods of using various data mining algorithms in evidence analysis are introduced. Experiments and simulation on real data show the accuracy and effectiveness of the proposed method.
    Keywords: cloud computing; digital forensics; cloud environment; digital evidence.

Special Issue on: Advanced Information Processing in Communication

  •   A Manhattan distance based binary bat algorithm vs integer ant colony optimisation for intrusion detection in audit trails   Order a copy of this article
    by Wassila Guendouzi, Abdelmadjid Boukra 
    Abstract: An intrusion detection system (IDS) monitors and analyses security activities occurring in a computer or network system. The detection method is the brain of an IDS, and it can perform either anomaly-based or misuse-based detection. The misuse mechanism aims to detect predefined attack scenarios in the audit trails, whereas the anomaly detection mechanism aims to detect deviations from normal user behaviour. In this paper, we deal with misuse detection and propose two approaches to solve the NP-hard security audit trail analysis problem. Both rely on the Manhattan distance measure to improve intrusion detection quality. The first, named enhanced binary bat algorithm (EBBA), is an improvement of the bat algorithm (BA) that uses binary coding and a fitness function, defined as the global attack risk, used in conjunction with the Manhattan distance. In this approach, new operators are adapted to the problem of interest: solution transformation, vertical permutation and horizontal permutation. The second, named enhanced integer ant colony system (EIACS), combines two metaheuristics: an ant colony system (ACS) with a new pheromone update method, and simulated annealing (SA) with a new neighbourhood generation mechanism. This approach uses integer coding and a new fitness function based on the Manhattan distance measure. Experiments on different problem sizes (small, medium and large) are carried out to evaluate the effectiveness of the two approaches. The results indicate that for small and medium sizes the two algorithms have similar detection quality; for the large problem size, EIACS performs better than EBBA.
    Keywords: intrusion detection; security audit trail analysis; combinatorial optimisation problem; NP-hard; Manhattan distance; bat algorithm; ant colony system; simulated annealing.
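As a purely illustrative sketch (not code from the paper), the Manhattan distance underlying both fitness functions compares the event counts observed in an audit trail with the counts implied by a candidate set of attack scenarios; all data below are hypothetical:

```python
def manhattan(a, b):
    """Manhattan (L1) distance between two equal-length count vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Hypothetical event counts observed in an audit trail...
observed = [4, 0, 7, 2]
# ...versus the counts implied by a candidate set of attack scenarios.
candidate = [3, 1, 7, 0]

# A lower distance means the candidate explains the trail better.
score = manhattan(observed, candidate)
```

In both EBBA and EIACS this distance is one ingredient of a richer fitness function over candidate solutions.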

Special Issue on: Computational Imaging and Multimedia Processing

  • Underwater image segmentation based on fast level set method   Order a copy of this article
    by Yujie Li, Huiliang Xu, Yun Li, Huimin Lu, Seiichi Serikawa 
    Abstract: Image segmentation is a fundamental process in image processing that has found application in many fields, such as neural image analysis and underwater image analysis. In this paper, we propose a novel fast level set method (FLSM) for underwater image segmentation that improves on traditional level set methods by avoiding the calculation of the signed distance function (SDF). The proposed method reduces computational cost and requires no re-initialisation. We also provide a fast semi-implicit additive operator splitting (AOS) algorithm to further reduce the computational complexity. The experiments show that the proposed FLSM performs well in selecting local or global segmentation regions.
    Keywords: underwater imaging; level set; image segmentation.

  • Pseudo Zernike moments based approach for text detection and localisation from lecture videos   Order a copy of this article
    by Soundes Belkacem, Larbi Guezouli, Samir Zidat 
    Abstract: Text information embedded in videos is an important clue for the retrieval and indexing of images and videos. Scene text presents challenging characteristics, mainly related to acquisition circumstances and environmental changes, resulting in low-quality videos. In this paper, we present a scene text detection algorithm based on Pseudo Zernike Moments (PZMs) and stroke features for low-resolution lecture videos. The algorithm mainly consists of three steps: slide detection, text detection and segmentation, and non-text filtering. In lecture videos, the slide region is a key object carrying almost all the important information; hence the slide region has to be extracted and segmented from other scene objects, which are considered as background, before further processing. Slide region detection and segmentation is done by applying PZMs to RGB frames. Text detection and extraction is performed using PZM segmentation over the V channel of the HSV colour space, and then stroke features are used to filter out non-text regions and remove false positives. PZMs are powerful shape descriptors; they present several strong advantages, such as robustness to noise, rotation invariance, and multilevel feature representation. The PZM-based segmentation process consists of two steps: feature extraction and clustering. First, a video frame is partitioned into equal-size windows and the coordinates of each window are normalised to a polar system; PZMs are then computed over the normalised coordinates as region descriptors. Finally, a clustering step using k-means is performed, in which each window is labelled as a text or non-text region. The algorithm is shown to be robust to illumination, low resolution and uneven luminance in compressed videos. The effectiveness of the PZM description leads to very few false positives compared with other approaches. Moreover, the resultant images can be used directly by OCR engines and no further processing is needed.
    Keywords: text localisation; text detection; pseudo Zernike moments; slide region detection.
    DOI: 10.1504/IJCSE.2016.10011674
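The windowing-and-clustering pipeline described in the abstract can be sketched as follows. This is an illustration only: per-window variance stands in for the pseudo-Zernike moment vector, and a tiny 1-D 2-means replaces the full k-means step.

```python
from statistics import pvariance

def window_descriptors(img, w):
    """Split a grey-level image (list of equal-length rows) into w x w
    windows; per-window variance stands in for the PZM feature vector."""
    h, width = len(img), len(img[0])
    feats = []
    for i in range(0, h - h % w, w):
        for j in range(0, width - width % w, w):
            feats.append(pvariance([img[r][c]
                                    for r in range(i, i + w)
                                    for c in range(j, j + w)]))
    return feats

def two_means(x, iters=20):
    """Tiny 1-D 2-means: label each window 0 (low energy) or 1 (high)."""
    lo, hi = min(x), max(x)
    for _ in range(iters):
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in x]
        lo = sum(v for v, l in zip(x, labels) if l == 0) / max(1, labels.count(0))
        hi = sum(v for v, l in zip(x, labels) if l == 1) / max(1, labels.count(1))
    return labels

# Hypothetical 8x8 frame: flat left half, striped (textured) right half.
img = [[0, 0, 0, 0, 0, 10, 0, 10] for _ in range(8)]
labels = two_means(window_descriptors(img, 4))
```

With 4x4 windows the two textured windows receive a different label from the two flat ones, mirroring the text/non-text window labelling in the paper.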

Special Issue on: ICNC-FSKD'15 Machine Learning, Data Mining and Knowledge Management

  • Genetic or non-genetic prognostic factors for colon cancer classification   Order a copy of this article
    by Meng Pan, Jie Zhang 
    Abstract: Many studies have addressed patient classification using prognostic factors or gene expression profiles (GEPs). This study tried to identify whether a prognostic factor was genetic by using GEPs. If a significant GEP difference was observed between the two statuses of a factor, the factor might be genetic. If the GEP difference was largely less significant than the survival difference, the survival difference might not be due to the genes; the factor might then be non-genetic or partly non-genetic. This was put into practice using the public dataset GSE40967, which contains GEP data of 566 colon cancer patients, tumour-node-metastasis (TNM) staging information, etc. The prognostic factors T, N, M, and TNM were observed to be non-genetic or partly non-genetic, and should be a complement to future gene expression classifiers.
    Keywords: gene expression profiles; prognostic factor; colon cancer; classification; survival.

  • A medical training system for the operation of heart-lung machine   Order a copy of this article
    by Ren Kanehira 
    Abstract: There has been a strong tendency to use Information Communication Technology (ICT) to construct various education/training systems that help students and other learners master necessary skills more easily. Among such systems, those providing operational practice are particularly welcome, in addition to conventional e-learning systems aimed mainly at textbook-like knowledge. In this study, we propose a medical training system for the operation of a heart-lung machine. Two training contents, one for basic operation and another for troubleshooting, are constructed in the system and their effects are tested.
    Keywords: computer-aided training; skill science; medical training; heart-lung machine; operation supporting; e-learning; clinic engineer.

Special Issue on: ICICS 2016 Next Generation Information and Communication Systems

  • Is a picture worth a thousand words? A computational investigation of the modality effect   Order a copy of this article
    by Naser Al Madi, Javed Khan 
    Abstract: The modality effect is a term that refers to differences in learning performance in relation to the mode of presentation. It is an interesting phenomenon that impacts education, online learning, and marketing, among many other areas of life. In this study, we use Electroencephalography (EEG Alpha, Beta, and Theta) and computational modelling of comprehension to study the modality effect in text and multimedia. First, we provide a framework for evaluating learning performance, working memory, and emotions during learning. Second, we apply these tools to investigate the modality effect computationally, focusing on text in contrast to multimedia. This study is based on a dataset that we have collected through a human experiment involving 16 participants. Our results are important for future learning systems that incorporate learning performance, working memory, and emotions in a continuous feedback system that measures and optimises learning during, rather than after, the learning process.
    Keywords: modality effect; comprehension; electroencephalography; learning; education; text; multimedia; semantic networks; recall; emotions.

  • Automated labelling and severity prediction of software bug reports   Order a copy of this article
    by Ahmed Otoom, Doaa Al-Shdaifat, Maen Hammad, Emad Abdallah, Ashraf Aljammal 
    Abstract: We target two research problems that are related to bug tracking systems: bug severity prediction and automated bug labelling. Our main aim is to develop an intelligent classifier that is capable of predicting the severity and label (type) of a newly submitted bug report through a bug tracking system. For this purpose, we build two datasets that are based on 350 bug reports from the open-source community (Eclipse, Mozilla, and Gnome). These datasets are characterised by various textual features that are extracted from the summary and description of bug reports of the aforementioned projects. Based on this information, we train a variety of discriminative models that can be used for automated labelling and severity prediction of a newly submitted bug report. A boosting algorithm is also implemented for enhanced performance. The classification performance is measured using accuracy and a set of other measures including precision, recall, F-measure and the area under the Receiver Operating Characteristic (ROC) curve. For automated labelling, the accuracy reaches around 91% with the AdaBoost algorithm and cross-validation test. On the other hand, for severity prediction, our results show that the proposed feature set has proved successful, with a classification performance accuracy of around 67% with the AdaBoost algorithm and cross-validation test. Experimental results with the variation of training set size are also presented. Overall, the results are encouraging and show the effectiveness of the proposed feature sets.
    Keywords: severity prediction; software bugs; machine learning; bug labelling.
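As an illustrative sketch only (the paper's actual feature set and toolchain are not reproduced here), boosting with one-feature decision stumps over simple textual counts, e.g. occurrences of severity-indicative words in a report's summary and description, looks like this; the features and data are hypothetical:

```python
import math

def train_adaboost(X, y, rounds=10):
    """AdaBoost with one-feature threshold stumps. X: feature vectors,
    y: labels in {-1, +1}. Returns a list of weighted stumps."""
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None  # (weighted error, feature, threshold, polarity)
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for p in (1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if (p if x[f] > t else -p) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, p)
        err, f, t, p = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, f, t, p))
        # Re-weight: boost the weight of misclassified reports.
        w = [wi * math.exp(-alpha * yi * (p if x[f] > t else -p))
             for wi, x, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict_label(model, x):
    vote = sum(a * (p if x[f] > t else -p) for a, f, t, p in model)
    return 1 if vote >= 0 else -1

# Hypothetical features: [count of crash-like words, count of typo-like words].
X = [[2, 0], [3, 1], [0, 2], [0, 3]]
y = [1, 1, -1, -1]   # +1 = severe, -1 = non-severe
model = train_adaboost(X, y, rounds=3)
```

In practice a library implementation (e.g. Weka or scikit-learn) would be used; the sketch only shows why boosting weak stumps over word-count features can separate severe from non-severe reports.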

Special Issue on: ISTA'16 Intelligent Systems Technologies and Applications

  • FS-CARS: fast and scalable context-aware news recommender system using tensor factorisation   Order a copy of this article
    by Anjali Gautam, Punam Bedi 
    Abstract: Matrix factorisation is a widely adopted approach to collaborative filtering that factorises the user-item rating matrix to generate recommendations. The user-item rating matrix can be extended to incorporate the user's context, resulting in a rating tensor that can be factorised to generate better quality context-aware recommendations. Tensor factorisation is a computationally intensive task, and the computational time can be significantly reduced using a distributed and scalable framework. This paper proposes a context-aware news recommender system that classifies news items into different categories and incorporates the user's context, resulting in a rating tensor that is then factorised to generate recommendations. News items are highly dynamic and are generated in large numbers, which can further greatly increase the computational time. To stabilise the computation time of the process, the proposed system is implemented on the distributed and scalable framework of Apache Spark using the MLlib library. The proposed recommender system is evaluated for performance and computational time.
    Keywords: context-aware RS; tensor factorisation; matrix factorisation; Apache Spark.
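The factorisation core can be illustrated with a plain, non-distributed stochastic-gradient sketch of two-dimensional matrix factorisation. This is illustrative only; it is not the paper's Spark/MLlib implementation, which additionally handles the context dimension of the tensor:

```python
import random

def mf_sgd(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02, epochs=1000):
    """Plain SGD matrix factorisation: learn user factors P and item
    factors Q so that dot(P[u], Q[i]) approximates each observed rating."""
    rng = random.Random(0)
    P = [[rng.uniform(0.01, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(0.01, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# Hypothetical ratings (user, item, value); a rank-1 pattern for clarity.
ratings = [(0, 0, 4.0), (0, 1, 2.0), (1, 0, 2.0), (1, 1, 1.0)]
P, Q = mf_sgd(ratings, n_users=2, n_items=2)
```

Extending the rating matrix with a context mode turns P and Q into three factor matrices of a tensor decomposition, which is the computation the paper distributes over Spark.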

  • Breast abnormality detection using combined texture and vascular features   Order a copy of this article
    by Sourav Pramanik, Debotosh Bhattacharjee, Mita Nasipuri 
    Abstract: This work presents an integrated approach that combines texture and vascular features for distinguishing malignancy and benignity of breast abnormalities using thermal breast images. Typically, an asymmetric isothermal pattern between the right and left breasts in the thermal breast image is an indicator of the presence of an abnormality. Therefore, we have investigated the potential of the proposed integrated feature set in asymmetry analysis. A local texture descriptor, called block variance (BV), is used here to extract the texture features. Block variance (BV) uses the variation of intensities in a local region to identify the contrast-texture in the thermal breast image. On the other hand, thermo-vascular pattern based features are identified by using a series of morphological operations. Then, these two feature sets are fused together to make a final feature vector. A five-layer, feed-forward, back-propagation neural network (FBNN) has been implemented here as a classifier. A dataset containing 60 benign and 40 malignant cases of the DMR-IR database is used here for the evaluation of the system performance. The effectiveness of the proposed fused feature set is compared against the feature sets used by Acharya et al. (2012) and Sathish et al. (2016) in terms of classification accuracy, sensitivity, specificity, PPV, and NPV. We have also investigated the potential of lateral view breast thermal images in conjunction with a frontal view for the diagnosis of breast abnormalities. Experimental results have shown that the proposed method detected malignant cases with 94% accuracy, while benign cases were detected with 100% accuracy. The overall system accuracy is obtained as 97.2%, which is comparatively better than other existing methods.
    Keywords: thermal breast image; texture feature; vascular feature; FBNN; lateral view breast thermogram.

  • Missing value imputation in DNA microarray gene expression data: a comparative study of an improved collaborative filtering method with decision tree based approach   Order a copy of this article
    by Sujay Saha, Anupam Ghosh, Saikat Bandopadhyay, Kashi Nath Dey 
    Abstract: A DNA microarray is used to study the expression levels of thousands of genes under various conditions simultaneously. Gene expression profiles generated by high-throughput microarray experiments are usually in the form of large matrices with high dimensions. Unfortunately, microarray experiments can generate datasets with multiple missing values, which significantly affects the performance of subsequent statistical analysis and machine learning algorithms. Several algorithms already exist to estimate those missing values. In this work, we first propose a modification to the existing imputation approach named Collaborative Filtering Based on Rough-Set Theory (CFBRST) (Wang and Tseng, 2012). This proposed approach (CFBRSTFDV) uses a Fuzzy Difference Vector (FDV) along with rough set based collaborative filtering to analyse historical interactions and helps to estimate the missing values. This is a suggestion-based system, working on the same principle by which items or products are suggested to an individual on Facebook or Twitter, or when looking for books on Amazon. Later on, we also propose a decision tree based approach combined with a genetic algorithm (GADTreeImpute) to impute the same missing values. We have applied our proposed algorithms to three benchmark datasets: the yeast gene expression dataset of Spellman et al. (1998), the human tumour cell dataset (GDS2932) and the human prostate cancer dataset (GDS4824). We first compare the performances of these two proposed approaches, along with some existing state-of-the-art methods, using an RMSE measure. The estimation is then also validated using a classification process, with performance measured by metrics such as classification accuracy, precision, recall, etc. Experiments show that the proposed approaches outperform the existing methods, particularly when we increase the number of missing values.
    Keywords: missing value estimation; DNA microarray; collaborative filtering; fuzzy set theory; rough set theory; decision tree; genetic algorithm.
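For orientation only, here is a minimal neighbour-based imputation sketch, the generic collaborative-filtering core without the paper's rough-set, fuzzy difference vector or decision tree machinery; the expression matrix is hypothetical:

```python
def impute_knn(matrix, k=2):
    """Fill each missing entry (None) with the mean of that column over the
    k rows most similar to the incomplete row on their shared observed
    columns (smaller mean squared difference = more similar)."""
    def dissim(r1, r2):
        common = [(a, b) for a, b in zip(r1, r2)
                  if a is not None and b is not None]
        if not common:
            return float('inf')
        return sum((a - b) ** 2 for a, b in common) / len(common)

    out = [list(row) for row in matrix]
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            if v is None:
                cands = sorted((dissim(row, r2), i2)
                               for i2, r2 in enumerate(matrix)
                               if i2 != i and r2[j] is not None)
                vals = [matrix[i2][j] for _, i2 in cands[:k]]
                if vals:
                    out[i][j] = sum(vals) / len(vals)
    return out

# Hypothetical expression matrix (rows = genes, None = missing).
expr = [[1, 2, 3], [1, 2, None], [10, 20, 30], [1, 2, 3.2]]
filled = impute_knn(expr)
```

The paper's CFBRSTFDV replaces the plain squared-difference similarity with a fuzzy-difference-vector, rough-set-based one, but the fill-from-similar-rows structure is the same.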

  • Influence maximisation in social networks   Order a copy of this article
    by P.V. Bindu, V. Tejaswi, P. Santhi Thilagam 
    Abstract: Social networks have become a strong means of communication in the past decade owing to the large number of mobile users and easily accessible internet connectivity. Social network analysis deals with studying the structure, relationships and other attributes of the network that help to provide solutions to real world problems. Some of the significant research areas under social network analysis include recommendation systems, link prediction, community detection, and influence maximisation. Influence maximisation helps in finding a few influential entities in large social networks that can be used in marketing, election campaigns, outbreak detection, and so on. Influence maximisation deals with the problem of finding a subset of nodes, called seeds, in a given social network such that it will eventually spread maximum influence in the network. This is an NP-hard problem. The aim of this paper is to provide a complete understanding of the influence maximisation problem. This paper provides a complete survey of the influence maximisation problem and covers three major aspects: i) the different types of input required; ii) the influence propagation models that map the spread of influence in the network; and iii) the approximation algorithms suggested for seed set selection. We also provide the state of the art and describe the open problems in this domain.
    Keywords: social networks; influence maximisation; information diffusion; approximation algorithms.
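To make the problem concrete, here is an illustrative sketch (not from the paper) of the classic greedy seed selection with Monte Carlo estimation under the independent cascade model, one of the propagation models such surveys cover:

```python
import random

def simulate_icm(graph, seeds, p=0.3, rng=random):
    """One independent-cascade run; returns the number of activated nodes.
    graph: {node: list of out-neighbours}; each newly active node gets one
    chance to activate each neighbour with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, p=0.3, runs=200, seed=0):
    """Kempe-style greedy: repeatedly add the node with the largest
    Monte Carlo estimate of expected spread given the seeds so far."""
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        def spread(v):
            return sum(simulate_icm(graph, seeds | {v}, p, rng)
                       for _ in range(runs)) / runs
        seeds.add(max((v for v in graph if v not in seeds), key=spread))
    return seeds

# Hypothetical network: node 0 is a hub, node 6 has a single follower.
net = {0: [1, 2, 3, 4, 5], 1: [], 2: [], 3: [], 4: [], 5: [],
       6: [7], 7: []}
```

On this toy network the greedy procedure picks the hub as the single best seed; the survey's approximation algorithms are largely refinements of this costly but (1 - 1/e)-guaranteed scheme.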

Special Issue on: IEEE ISPA-16 Parallel and Distributed Computing and Applications

  • Method of key node identification in command and control networks based on level flow betweenness   Order a copy of this article
    by Wang Yunming, Pan Cheng-Sheng, Chen Bo, Zhang Duo-Ping 
    Abstract: Key node identification in command and control (C2) networks is an appealing problem that has attracted increasing attention. Owing to the particular nature of C2 networks, traditional algorithms for key node identification suffer from high complexity and unsatisfactory adaptability. A new method of key node identification based on level flow betweenness (LFB) is proposed, which is suitable for C2 networks. The proposed method first defines LFB by analysing the characteristics of a C2 network. It then designs a key node identification algorithm based on LFB and theoretically derives the complexity of this algorithm. Finally, a number of numerical simulation experiments are carried out, and the results demonstrate that this method reduces algorithm complexity, improves identification accuracy and enhances adaptability for C2 networks.
    Keywords: command and control network; complex network; key node identification; level flow betweenness.

  • CODM: an outlier detection method for medical insurance claims fraud   Order a copy of this article
    by Yongchang Gao, Haowen Guan, Bin Gong 
    Abstract: Data is high dimensional in medical insurance claims management, and there are both dense and sparse regions in these datasets, so traditional outlier detection methods are not suitable for these data. In this paper, we propose a novel method to detect outliers corresponding to abnormal medical insurance claims. Our method consists of three core steps: feature bagging to reduce the dimensionality of the data, calculating the core of each object's k-nearest neighbours, and computing an outlier score for each object by measuring the amount of movement of the core as k is sequentially increased. Experimental results demonstrate that our method is a promising way to tackle this problem.
    Keywords: data mining; outlier detection; medical insurance claims fraud.
    DOI: 10.1504/IJCSE.2017.10008174
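A simplified sketch of the core idea, how far the centroid ("core") of an object's k nearest neighbours drifts as k increases, is given below; it omits the feature bagging step and uses hypothetical data:

```python
def knn_indices(points, i, k):
    """Indices of the k nearest neighbours of points[i] (squared Euclidean)."""
    d = sorted((sum((a - b) ** 2 for a, b in zip(points[i], q)), j)
               for j, q in enumerate(points) if j != i)
    return [j for _, j in d[:k]]

def core_shift_score(points, i, kmax=4):
    """Outlier score: total drift of the k-NN centroid ('core') as k grows
    from 1 to kmax; cores of inliers stay put, cores of outliers jump."""
    score, prev = 0.0, None
    dims = len(points[i])
    for k in range(1, kmax + 1):
        idx = knn_indices(points, i, k)
        core = tuple(sum(points[j][d] for j in idx) / k for d in range(dims))
        if prev is not None:
            score += sum((a - b) ** 2 for a, b in zip(core, prev)) ** 0.5
        prev = core
    return score

# Hypothetical claim features: two dense clusters plus one point in between,
# whose k-NN core swings back and forth between the clusters.
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5),
       (10, 0), (11, 0), (10, 1), (11, 1), (10.5, 0.5),
       (5.5, 5.0)]
```

The in-between point (index 10) receives a much larger score than any cluster member, which is the signal the method uses to flag abnormal claims.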

Special Issue on: Advanced Computer Science and Information Technology

  • MigrateSDN: efficient approach to integrate OpenFlow networks with STP-enabled networks   Order a copy of this article
    by Po-Wen Chi, Ming-Hung Wang, Jing-Wei Guo, Chin-Laung Lei 
    Abstract: Software defined networking (SDN) is a paradigm-shifting technology in networking. However, in current network infrastructures, removing existing networks to build pure SDN networks or replacing all operating network devices with SDN-enabled devices is impractical because of the time and cost involved in the process. Therefore, SDN migration, which implies the use of co-existing techniques and a gradual move to SDN, is an important issue. In this paper, we focus on how SDN networks can be integrated with legacy networks that use the spanning tree protocol (STP). Our approach demonstrates three advantages. First, our approach does not require an SDN controller to apply the STP exchange on all switches, but only on boundary switches. Second, our approach enables legacy networks to concurrently use multiple links, all but one of which would otherwise be blocked to avoid loops. Third, our approach decreases the bridge protocol data unit (BPDU) frames used in STP construction and topology change.
    Keywords: software defined networking; spanning tree protocol; network migration.

Special Issue on: PDCAT 2016 Parallel and Distributed Algorithms and Applications

  • Data grouping scheme for multi-request retrieval in MIMO wireless communication   Order a copy of this article
    by Ping He, Zheng Huo 
    Abstract: The multi-antenna data retrieval problem refers to finding an access pattern (to retrieve multiple requests using multiple antennae, where each request has multiple data items) such that the access latency of the requests retrieved by each antenna is minimised and the total access latency of the requests retrieved by all antennae remains balanced. It is therefore important that these requests are divided into multiple groups, so that each group can be retrieved by one antenna in MIMO wireless communication; this is called the data grouping problem. Few studies have focused on data grouping schemes applied to the data retrieval problem when clients equipped with multiple antennae send multiple requests. Therefore, this paper proposes two data grouping algorithms (HOG and HEG) that are applied in data retrieval such that the requests can be reasonably classified into multiple groups. Experiments show that the proposed schemes are more efficient than some existing schemes.
    Keywords: mobile computing; data broadcast; indexing; data scheduling; data retrieval; data grouping.

  • Improved user-based collaborative filtering algorithm with topic model and time tag   Order a copy of this article
    by Liu Na, Lu Ying, Tang Xiao-jun, Li Ming-xia, Chunli Wang 
    Abstract: Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users is mostly calculated from ratings, without considering explicit properties of the users involved. Since users' tags can directly reflect their preferences to some extent, we propose a collaborative filtering algorithm using a topic model, called UITLDA, in this paper. The UITLDA model consists of two parts: the first is the active user with its items, and the second is the active user with its tags. We form a topic model from each of these two parts. The two topic distributions constrain each other and are integrated into a new topic distribution. This model not only increases user similarity, but also reduces the density of the matrix. In the prediction computation, we also introduce a time-decay function to increase the precision. Experiments show that the proposed algorithm achieves better performance than the baseline on the MovieLens datasets.
    Keywords: collaborative filtering; LDA; topic model; time tag.
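The time-weighting idea can be sketched as follows; this is an illustrative exponential decay, and the paper's exact time function and UITLDA topic model are not reproduced:

```python
import math

def time_decay(t_now, t_rated, half_life=30.0):
    """Weight of a rating made at day t_rated, halving every half_life days."""
    return math.exp(-math.log(2) * (t_now - t_rated) / half_life)

def predict_rating(ratings, sims, user, item, t_now, half_life=30.0):
    """User-based CF prediction in which each neighbour's rating is
    discounted by its age. ratings: {(u, i): (value, day)};
    sims: {(user, neighbour): similarity}."""
    num = den = 0.0
    for (u, i), (r, day) in ratings.items():
        if i == item and u != user:
            w = sims.get((user, u), 0.0) * time_decay(t_now, day, half_life)
            num += w * r
            den += abs(w)
    return num / den if den else 0.0

# Hypothetical data: two equally similar neighbours; the recent rating (5)
# should dominate the old one (1).
ratings = {(1, 'm1'): (5.0, 99), (2, 'm1'): (1.0, 9)}
sims = {(0, 1): 1.0, (0, 2): 1.0}
```

With equal similarities, an un-decayed prediction would be the plain average (3.0); the decay pulls it toward the fresher rating.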

  • Improving runtime performance and energy consumption through balanced data locality with NUMA-BTLP and NUMA-BTDM static algorithms for thread classification and thread type-aware mapping   Order a copy of this article
    by Iulia Știrb 
    Abstract: Extending compilers such as LLVM with NUMA-aware optimisations significantly improves runtime performance and energy consumption on NUMA systems. The paper presents the NUMA-BTDM algorithm, a compile-time thread-type dependent mapping algorithm that performs the mapping uniformly, based on the type assigned to each thread by the NUMA-BTLP algorithm following a static analysis of the code. First, the compiler inserts architecture-dependent code into the program that detects at runtime the characteristics of the underlying Intel processor. The mapping is then performed at runtime (using specific function calls from the PThreads library) depending on these characteristics, following a compile-time mapping analysis that gives the CPU affinity of each thread. NUMA-BTDM allows the application to customise, control and optimise the thread mapping, and achieves balanced data locality on NUMA systems for C parallel code that combines PThreads-based task parallelism with OpenMP-based loop parallelism.
    Keywords: thread mapping; task parallelism; loop parallelism; compiler optimisations; NUMA systems; performance improvements; energy consumption improvements.
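As a loose illustration of locality-balanced placement (not the paper's NUMA-BTLP/NUMA-BTDM algorithms, which operate on compiler analysis results and issue PThreads affinity calls), a toy mapper might co-locate threads that share data while balancing node load; the sharing pairs below are hypothetical:

```python
def map_threads(shared_pairs, n_nodes, cores_per_node):
    """Greedily co-locate threads that share data on the same NUMA node,
    falling back to the least-loaded node; returns {thread: node}."""
    node_of, load = {}, [0] * n_nodes

    def place(t, node):
        if t not in node_of and load[node] < cores_per_node:
            node_of[t] = node
            load[node] += 1

    for a, b in shared_pairs:
        if a in node_of:
            place(b, node_of[a])
        elif b in node_of:
            place(a, node_of[b])
        else:
            node = load.index(min(load))  # least-loaded node
            place(a, node)
            place(b, node)
    return node_of

# Hypothetical sharing graph: threads 0-1 and 2-3 each share data.
mapping = map_threads([(0, 1), (2, 3)], n_nodes=2, cores_per_node=2)
```

Each sharing pair lands on one node, and the two pairs land on different nodes, which is the "balanced data locality" outcome the paper targets (in C, the placement itself would be applied via `pthread_setaffinity_np`).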

  • Accumulative energy-based seam carving for image resizing   Order a copy of this article
    by Yuqing Lin, Jiawen Lin, Yuzhen Niu, Haifeng Zhang 
    Abstract: With the diversified development of digital devices, such as computers, mobile phones and televisions, how to resize an image or video to adapt to different display screens has become a hot topic. Seam carving does well in image resizing most of the time; however, it sometimes produces discontinuities in the image content or impaired salient objects. Therefore, we propose an accumulative energy-based seam carving method for image resizing. We distribute the energy of each pixel on the seam to its adjacent eight-connected pixels to avoid the extreme concentration of seams. In addition, we add image saliency and edge information to the energy function to reduce distortion. To compute more efficiently, we use a parallel computing environment as well. Experimental results show that, compared with existing methods, our method avoids content discontinuity and distortion while better maintaining the shape of the salient objects.
    Keywords: image resizing; seam carving; optimal seam; accumulative energy; saliency detection; edge detection.
    DOI: 10.1504/IJCSE.2018.10014036
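The standard seam-carving dynamic programme, plus a sketch of the accumulative-energy step that spreads a removed seam's energy onto neighbouring pixels, can be illustrated as follows (the paper's full energy function also includes saliency and edge terms, omitted here):

```python
def min_vertical_seam(energy):
    """Dynamic programme for the minimum-energy 8-connected vertical seam;
    returns one column index per row."""
    h, w = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for i in range(1, h):
        cost.append([energy[i][j] + min(cost[i - 1][max(0, j - 1):j + 2])
                     for j in range(w)])
    j = min(range(w), key=lambda c: cost[-1][c])  # cheapest bottom cell
    seam = [j]
    for i in range(h - 1, 0, -1):                 # backtrack upwards
        j = min(range(max(0, j - 1), min(w, j + 2)),
                key=lambda c: cost[i - 1][c])
        seam.append(j)
    return seam[::-1]

def distribute_seam_energy(energy, seam, share=0.5):
    """Accumulative-energy step: push part of each removed seam pixel's
    energy onto its horizontal neighbours, steering later seams away
    from the same area."""
    w = len(energy[0])
    for i, j in enumerate(seam):
        for dj in (-1, 1):
            if 0 <= j + dj < w:
                energy[i][j + dj] += share * energy[i][j]

e = [[3.0, 1.0, 3.0], [3.0, 1.0, 3.0]]
seam = min_vertical_seam(e)        # the cheap middle column
distribute_seam_energy(e, seam)    # its neighbours become more expensive
```

The `share` factor and the restriction to horizontal neighbours are simplifying assumptions; the paper distributes to all eight-connected neighbours.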

Special Issue on: Smart X 2016 Smart Everything

  • An adaptive feature combination method based on ranking order for 3D model retrieval   Order a copy of this article
    by Qiang Chen, Bin Fang, Yinong Chen, Yan Tang 
    Abstract: Directly combining several complementary features may increase the retrieval precision for 3D models. However, in most cases, the weights need to be set manually and empirically. In this paper, we propose a new schema for automatically choosing the proper weights for different features on each database. The proposed schema uses the ranking order of the retrieval results, and it is invariant to magnitude scaling. We choose the best feature as the standard one, and the relevance values between the standard and other features are the weights for feature combination. Furthermore, we propose an improved re-ranking algorithm to further improve the retrieval performance. Experiments show that the proposed method can automatically choose proper weights for different features, and the results with existing features exceed the benchmark.
    Keywords: 3D retrieval; re-ranking; ranking order; feature combination.

Special Issue on: Cyberspace Security Protection, Risk Assessment and Management

  • CSCAC: one constant-size CPABE access control scheme in trusted execution environment   Order a copy of this article
    by Yongkai Fan, Shengle Liu, Gang Tan, Xiaodong Lin 
    Abstract: With the popularity of versatile mobile devices, there have been increasing concerns about their security. How to protect sensitive data is an urgent issue to be solved. CPABE (ciphertext-policy attribute-based encryption) is a practical method for encrypting data that can use a user's attributes to encrypt sensitive data. In this paper, we propose a CSCAC (Constant-Size CPABE Access Control) model that uses the trusted execution environment to manage the dynamic key generated from attributes. The original data is encrypted with a symmetric storage key, and the storage key is then encrypted under an AND-gate access policy. Only a user who possesses a set of attributes that satisfies the access policy can recover the storage key. The security analysis shows that the design of this access control scheme reduces the burden and risk associated with a single authority. Our experimental results indicate that the proposed scheme is more secure and effective than the traditional access scheme.
    Keywords: constant-size ciphertext; access control; trusted execution environment; attribute-based encryption; security.

  • Recognising continuous emotions in dialogues based on DISfluencies and non-verbal vocalisation features for a safer network environment   Order a copy of this article
    by Huan Zhao, Xiaoxiao Zhou, Yufeng Xiao 
    Abstract: With the development of networks and social media, audio and video have become more popular ways to communicate. Audio and video can spread information that creates negative effects, e.g. negative sentiment with suicidal tendencies, or threatening messages that make people panic. In order to keep the network environment safe, it is necessary to recognise emotion in dialogue. To improve the recognition of continuous emotion in dialogue, we propose to combine DISfluencies and non-verbal vocalisations (DIS-NV) features with a bidirectional long short-term memory (BLSTM) model to predict continuous emotion. DIS-NV features are effective emotion features, including filled-pause, filler, stutter, laughter and breath features. BLSTM can learn from past information and use future information. State-of-the-art recognition attains 62% accuracy; our experimental method increases accuracy to 76%.
    Keywords: continuous emotion; BLSTM; dialogue; knowledge-inspired features; safe network environment; DIS-NV; AVEC2012; discretisation; speech emotion recognition; LLD.

  • PM-LPDR: a prediction model for lost packets based on data reconstruction on lossy links in sensor networks   Order a copy of this article
    by Zeyu Sun, Guozeng Zhao 
    Abstract: During the data gathering process in sensor networks, many transmitted data packets can be lost owing to limited node energy and the influence of data redundancy, which seriously undermines transmission reliability. In order to solve this problem, a prediction model is proposed for lost packets based on data reconstruction on lossy links in sensor networks. In this model, a packet lost on a lossy link is modelled as a random loss during transmission, and the matching type of the lost packet can be further predicted. Retransmission is adopted for data recovery when a random packet loss is predicted, while prediction algorithms based on time sequences are employed for data recovery when a random packet loss cannot be predicted. Simulation results show that this model can effectively alleviate the influence of lost packets on lossy links. The operation of the whole system can still be guaranteed with this model when the packet loss probability of the network is lower than 15%, while the relative error of data reconstruction remains between 0.17 and 0.22% when the packet loss probability is higher than 22%. The prediction model thus exhibits strong stability and flexibility.
    Keywords: sensor networks; lossy links; matching for lost packets; data redundancy; data reconstruction.

  • An information network security policy learning algorithm based on Sarsa with optimistic initial values   Order a copy of this article
    by Fang Wang, Renjun Feng, Haiyan Chen, Fei Zhu 
    Abstract: With the widespread application of artificial intelligence and automation, more and more devices are monitored by computer systems. In many cases, multiple management control information systems compose a comprehensive information system network. As the scale of the network grows and its topology becomes more and more sophisticated, a fixed-mode network control policy is no longer feasible: such policies were designed for small and simple networks and often lack the ability to deal with dynamic environments and to handle security policy tasks. We therefore propose an online network security policy learning algorithm based on Sarsa with optimistic initial values. The algorithm consists of two parts, one acting as the defence agent and the other as the attacking agent. The defence agent learns and improves the system protection policy by fighting against simulated attacks from the attacking agent. It uses the Sarsa method to improve its defence policy in an online mode, drawing on historical experience. The use of optimistic initial values speeds up training.
    Keywords: information network; optimistic initial values; Sarsa; network defence; risk control.
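The on-policy learning the abstract relies on can be sketched as tabular Sarsa with optimistic initial Q-values. The toy `env_step` interface, the state/action spaces and all parameter values below are illustrative assumptions of this sketch, not details from the paper.

```python
import random

def sarsa_optimistic(env_step, n_states, n_actions, episodes=200,
                     alpha=0.1, gamma=0.9, epsilon=0.1, q_init=10.0):
    """Tabular Sarsa with optimistic initial Q-values.

    `env_step(s, a)` must return (next_state, reward, done); it stands in
    for the defence agent's interaction with the simulated attacker.
    """
    # Optimistic initialisation encourages early exploration of all actions.
    Q = [[q_init] * n_actions for _ in range(n_states)]

    def policy(s):
        if random.random() < epsilon:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])

    for _ in range(episodes):
        s, a = 0, policy(0)
        done = False
        while not done:
            s2, r, done = env_step(s, a)
            a2 = policy(s2)
            # On-policy Sarsa update: the target uses the action actually taken next.
            Q[s][a] += alpha * (r + gamma * Q[s2][a2] * (not done) - Q[s][a])
            s, a = s2, a2
    return Q
```

Because every Q-value starts high, the greedy policy tries under-explored defence actions first, which is the mechanism behind the shortened training time.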

  • Evaluation of borrower's credit of P2P loan based on adaptive particle swarm optimisation BP neural network   Order a copy of this article
    by Sen Zhang, Yuping Hu, Chunmei Wang 
    Abstract: Personal credit assessment is the main method to reduce the credit risk of P2P online loans. In this paper, an adaptive mutation operator is used to reinitialise particles with a certain probability, and the global search capability of the particle swarm optimisation algorithm is used to optimise the weights and thresholds of the BP neural network; on this basis, a method that combines adaptive mutation particle swarm optimisation with a BP neural network model is proposed to evaluate a borrower's credit for P2P network loans. Simulation experiments show that the AMPSO-BP neural network model has higher prediction accuracy, a smaller error variation range, better fitting ability and greater robustness than the BP neural network model in P2P network loan borrower credit evaluation.
    Keywords: P2P network loan; personal credit; BP neural network; particle swarm algorithm; adaptive mutation.
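A minimal sketch of the adaptive-mutation particle swarm described above, minimising an arbitrary fitness function. In the paper that function would be the BP network's training error over its flattened weights and thresholds; here it is any callable, and the mutation probability, swarm size and search bounds are illustrative assumptions.

```python
import random

def ampso(fitness, dim, n_particles=20, iters=100, p_mut=0.05,
          w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Adaptive-mutation PSO minimising `fitness` over `dim` dimensions."""
    rnd = random.uniform
    X = [[rnd(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal bests
    pf = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                     # global best

    for _ in range(iters):
        for i in range(n_particles):
            if random.random() < p_mut:
                # Adaptive mutation: reinitialise the particle to escape
                # premature convergence, as the abstract describes.
                X[i] = [rnd(lo, hi) for _ in range(dim)]
                V[i] = [0.0] * dim
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = fitness(X[i])
            if f < pf[i]:
                P[i], pf[i] = X[i][:], f
                if f < gf:
                    G, gf = X[i][:], f
    return G, gf
```

The mutation step only discards a particle's current position, never the personal or global bests, so the search cannot regress while still regaining diversity.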

  • An approach for public cloud trustworthiness assessment based on users evaluation and performance indicators   Order a copy of this article
    by Guosheng Zhou, Wei Du, Hanchao Lin, Xiaowei Yan 
    Abstract: Aiming at the problem of how to quantify the trustworthiness of public cloud services, this paper proposes a method to evaluate the trustworthiness of public clouds based on users' subjective evaluations and objective monitoring values of performance indicators. According to the objective values obtained from monitoring organisations, the objective trustworthiness is calculated using a multi-attribute decision-making method based on the ideal point (TOPSIS). Meanwhile, qualitative evaluations by users are quantified by exploiting the cloud model. Finally, a comprehensive assessment of the trustworthiness of a cloud service is formed from the above results. To meet personalised requirements, users can configure the weights of the parameters included in the proposed algorithms, ranking cloud services for specific needs. Compared with other assessment algorithms, both subjective and objective sources are taken into account, a customised weight scheme for users is provided, the model and algorithms are designed, and a prototype is developed.
    Keywords: public cloud services; trustworthiness model; comprehensive trustworthiness; TOPSIS; cloud model.
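The TOPSIS step of the assessment can be sketched as follows, with rows as cloud services and columns as monitored indicators. The example weights and the benefit/cost flags in the test are assumptions of this sketch, standing in for the user-configured weights the abstract mentions.

```python
def topsis(matrix, weights, benefit):
    """TOPSIS closeness scores: rows = cloud services, columns = indicators.

    `benefit[j]` is True for larger-is-better indicators (e.g. availability)
    and False for smaller-is-better ones (e.g. response time).
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise each column, then apply the user-configured weights.
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Positive and negative ideal points, per column direction.
    pos = [max(V[i][j] for i in range(m)) if benefit[j]
           else min(V[i][j] for i in range(m)) for j in range(n)]
    neg = [min(V[i][j] for i in range(m)) if benefit[j]
           else max(V[i][j] for i in range(m)) for j in range(n)]
    d_pos = [sum((V[i][j] - pos[j]) ** 2 for j in range(n)) ** 0.5
             for i in range(m)]
    d_neg = [sum((V[i][j] - neg[j]) ** 2 for j in range(n)) ** 0.5
             for i in range(m)]
    # Closeness to the ideal solution; higher means more trustworthy.
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]
```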

  • A novel computational model for SRAM PUF min-entropy estimation   Order a copy of this article
    by Dongfang Li, Qiuyan Zhu, Hong Wang, Zhihua Feng, Jianwei Zhang 
    Abstract: Min-entropy is the standard for quantifying the uncertainty of a security key source in the worst case. It indicates the upper bound on the length of the security key that can be extracted from its source. Openly published min-entropy estimation methods are all based on experimental data or statistical tests to obtain the underlying probability distribution of the security key source; they require huge numbers of samples and are therefore not feasible from an engineering perspective. Aiming at optimising computational complexity, this paper proposes a novel model for SRAM PUF min-entropy estimation based on the generic coupling relationship between entropy and the average energy consumption of the SRAM cell. The model investigates min-entropy evaluation derived from the average energy consumption of the memory cell during the power-up stage via simulation. We apply the model to estimate the min-entropy of the IS62WV51216BLL SRAM chip. The experimental results demonstrate that the accuracy of the proposed min-entropy estimation model is on a par with that of conventional methods, while its computational efficiency surpasses them by a large margin.
    Keywords: min-entropy; SRAM; PUF; estimation model; entropy-energy coupling.
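For reference, the quantity being estimated is defined per cell as H_min = -log2(p_max). A direct computation from per-cell power-up probabilities looks like the sketch below; the paper derives those probabilities from the cell's average power-up energy rather than measuring them directly, which this sketch does not model.

```python
import math

def min_entropy(p_ones):
    """Total min-entropy of independent SRAM power-up bits.

    `p_ones[i]` is the probability that cell i powers up as 1.
    Per cell: H_min = -log2(max(p, 1 - p)); totals add for
    independent cells (an assumption of this sketch).
    """
    return sum(-math.log2(max(p, 1.0 - p)) for p in p_ones)
```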

  • Study on learning resource authentication in MOOCs based on blockchain   Order a copy of this article
    by Dai Yonghui, Li Guowei, Xu Bo 
    Abstract: In MOOCs, the authentication of learning resources is a matter of great concern for learners and teachers. Its construction faces the challenges of information security and privacy protection. Considering that the blockchain has the advantages of decentralisation, autonomy and tamper-resistance, this paper provides a solution to implement the construction based on blockchain technology, which includes the system architecture, experimental validation and key technologies such as decentralised transactions and tamper-resistance. The results prove the technical feasibility and safety reliability of blockchain for learning resource management in MOOCs.
    Keywords: learning resource; massive open online courses; blockchain; decentralised transaction; tamper-resistant.

  • LWE-based multi-authority attribute-based encryption scheme with hidden policies   Order a copy of this article
    by Qiuting Tian, Dezhi Han, Xingao Liu, Xueshan Yu 
    Abstract: Most attribute-based encryption (ABE) schemes are based on bilinear maps, which leads to relatively high computation and storage costs, and they cannot resist quantum attacks. In view of the above problems, an LWE-based multi-authority attribute-based encryption scheme with hidden policies is proposed. Firstly, the scheme uses lattice theory to replace the previous bilinear maps, supports multiple authorities managing different attribute sets, and uses the SampleLeft algorithm to extract keys for the authenticated users in the system. Secondly, the Shamir secret sharing mechanism is introduced to construct an access tree structure that supports AND, OR and THRESHOLD operations on attributes, which improves the flexibility of the system. At the same time, access policies can be completely hidden so as to protect the privacy of users. Finally, the scheme is proved secure against chosen-plaintext attack under the standard model. Compared with existing related schemes, the sizes of the public parameters, master secret key, users' private keys and ciphertext are all optimised to some degree. The scheme is therefore well suited to data encryption in cloud environments.
    Keywords: attribute-based encryption; learning with errors; hidden policies; lattices; standard model.
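The Shamir secret sharing primitive from which the scheme builds its THRESHOLD gates can be sketched over a prime field as follows. The field choice and the (k, n) values in the test are illustrative assumptions, and the lattice-based parts of the scheme are not reproduced here.

```python
import random

PRIME = 2 ** 127 - 1  # a Mersenne prime; the field choice is an assumption

def split(secret, k, n):
    """Shamir (k, n) sharing: any k shares reconstruct the secret,
    while k - 1 shares reveal nothing about it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse via Fermat's little theorem (PRIME is prime).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```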

  • A secure hierarchical community detection algorithm   Order a copy of this article
    by Wei Zhu, Osama Hosam 
    Abstract: In complex networks, the security of real communities is low. Moreover, the structure of a community network is hierarchical and overlapping, so a community network cannot be divided into secure structures accurately. To address this issue, this work presents a hierarchical community detection algorithm. First, a secure community clustering model is built. On the basis of the hierarchical structure, the bridge joints between communities can be found. After that, secure clustering is performed. Finally, communities are detected based on the hierarchical and overlapping features. The experiments show that the proposed algorithm improves computation speed, the detected complex network communities have clear structure, and the security performance in terms of probability of coincidence is encouraging.
    Keywords: complex network; community; hierarchical; overlap.

  • Efficient security credential management for named data networking   Order a copy of this article
    by Bo Deng 
    Abstract: As a promising future internet architecture, Named Data Networking (NDN) shifts the network focus from where the data is to what content it carries. In NDN, communication is driven by a consumer requesting data by specifying its name or name prefix, and is secured by the producer signing the data and optionally encrypting the content. Therefore, during NDN communications, consumers may need to fetch digital certificates, themselves named data, in order to verify a data packet's signature; a chain of certificates may be involved in verifying some signatures. Maintaining those certificates efficiently, especially when the number of certificates is large, is challenging. This paper proposes a novel mechanism to store certificates efficiently, based on the NDN certificate naming convention. According to our experimental results, the proposed approach can reduce memory consumption by almost 80% while providing faster lookup.
    Keywords: NDN; security; certificate; management; hash.
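A minimal sketch of a name-hash certificate store in the spirit of the abstract; the class name, the name-normalisation rule and the SHA-256 choice are assumptions of this sketch, not the paper's design.

```python
import hashlib

class CertStore:
    """Certificate store keyed by a hash of the hierarchical certificate
    name, giving O(1) average lookup and fixed-size keys regardless of
    name length."""

    def __init__(self):
        self._table = {}

    @staticmethod
    def _key(name):
        # Normalise the hierarchical name (/org/dept/key-id) before hashing,
        # so variants with stray leading/trailing slashes collide as intended.
        return hashlib.sha256(name.strip('/').encode()).digest()

    def put(self, name, cert_bytes):
        self._table[self._key(name)] = cert_bytes

    def get(self, name):
        return self._table.get(self._key(name))
```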

  • Intrusion detection approach for cloud computing based on improved fuzzy c-means clustering algorithm   Order a copy of this article
    by Xuchong Liu, Jiuchuan Lin, Xin Su, Yi Zheng 
    Abstract: Recently, cloud computing has become more and more important on the internet. Meanwhile, network attackers target this platform and launch various attacks to threaten the security of cloud computing. Some researchers have proposed the fuzzy c-means clustering algorithm (FCM) to detect such attacks. However, FCM has some limitations, such as low detection accuracy, low precision and slow convergence speed when detecting intrusions in the cloud computing scenario. In this paper, we propose an intrusion detection approach based on an objective-function-optimised FCM algorithm. The approach uses a kernel function to improve the optimisation ability of the FCM algorithm. It then uses the Lagrange multiplier approach to calculate the cluster centres and membership matrix, which optimises the objective function of the FCM algorithm and reduces algorithm complexity. Simulation experiments show that our approach achieves higher detection accuracy and precision in detecting intrusions into a cloud computing network, and has great advantages in convergence performance.
    Keywords: cloud computing; intrusion detection; network attack; objective function optimisation; Lagrange multiplier approach.
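The alternating updates that the Lagrange multiplier derivation yields for standard FCM are sketched below on 1-D data. The paper's kernelised distance is replaced here by a plain Euclidean one, so this illustrates the baseline algorithm, not the proposed variant.

```python
import random

def fcm(data, c, m=2.0, iters=50, init=None):
    """Fuzzy c-means on 1-D data. The closed-form updates below follow
    from optimising the FCM objective under the sum-to-one membership
    constraint with Lagrange multipliers."""
    centers = init[:] if init else random.sample(data, c)
    n = len(data)
    for _ in range(iters):
        # Membership update: u_ij proportional to 1 / d(x_i, v_j)^(2/(m-1)).
        U = []
        for x in data:
            d = [abs(x - v) or 1e-12 for v in centers]  # avoid division by zero
            U.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for j in range(c)])
        # Centre update: mean of the data weighted by u_ij^m.
        centers = [sum(U[i][j] ** m * data[i] for i in range(n))
                   / sum(U[i][j] ** m for i in range(n)) for j in range(c)]
    return centers, U
```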

  • A network traffic-aware mobile application recommendation system based on network traffic cost consideration   Order a copy of this article
    by Xin Su, Yi Zheng, Jiuchuan Lin, Xuchong Liu 
    Abstract: A large number of mobile applications (or apps) of different types are offered to end users via app markets. Existing mobile app recommender systems generally recommend the most popular mobile apps to mobile users to facilitate the proper selection of mobile apps. However, these apps normally generate network traffic, which consumes users' mobile data plans and may even cause potential security issues. Therefore, more and more mobile users are hesitant or even reluctant to use the mobile apps recommended by the mobile app markets. To fill this crucial gap, this paper proposes a mobile app recommendation approach that provides app recommendations by considering both app popularity and traffic cost. To achieve this goal, this paper first estimates an app network traffic score based on a bipartite graph. Then, it proposes a flexible approach based on benefit-cost analysis, which recommends apps by maintaining a balance between the apps' popularity and the traffic cost concern. Finally, the approach is evaluated with extensive experiments on a large-scale dataset collected from Google Play. The experimental results clearly validate the effectiveness and efficiency of our approach.
    Keywords: mobile apps; network traffic cost; recommendation approach.
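A minimal sketch of the popularity-versus-traffic trade-off: `lam` stands in for the benefit-cost balance the paper tunes, and the assumption that both scores are normalised to [0, 1] is ours, not the paper's.

```python
def recommend(apps, lam=0.5, top_k=3):
    """Rank apps by a linear benefit-cost trade-off.

    `apps` maps app name -> (popularity, traffic_score), both assumed
    normalised to [0, 1]. Higher `lam` favours popularity; lower `lam`
    penalises network-traffic cost more heavily.
    """
    def score(name):
        popularity, traffic = apps[name]
        return lam * popularity - (1.0 - lam) * traffic

    return sorted(apps, key=score, reverse=True)[:top_k]
```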

  • Design of DDoS attack detection system based on intelligent bee colony algorithm   Order a copy of this article
    by Xueshan Yu, Dezhi Han, Gongjun Yin, Zhenxin Du, Qiuting Tian 
    Abstract: With big data applications gaining popularity, DDoS (distributed denial of service) has become an increasingly serious network security issue. In response to the problem of DDoS attack detection in a big data environment, a DDoS attack detection system based on traffic reduction and EABC_elite (an intelligent artificial bee colony algorithm) is designed. The system combines the traffic reduction algorithm and the intelligent bee colony algorithm to reduce data traffic according to the idea of abnormal extraction. It uses traffic feature distribution entropy and a generalised likelihood ratio discrimination factor to jointly detect the characteristics of DDoS attack data streams, in order to achieve accurate DDoS attack data flow detection quickly and efficiently. The experimental results show that the traffic detection demand of this system is greatly reduced, and that its running time and DDoS detection accuracy are clearly better than those of the separate traffic reduction algorithm and of the traffic reduction algorithm combined with the common artificial bee colony algorithm.
    Keywords: DDoS attack; intelligent bee colony algorithm; traffic feature distribution entropy; traffic segmentation algorithm; generalised likelihood comparison.
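The traffic feature distribution entropy used as one of the two detection signals can be sketched as follows. The choice of source addresses as the feature and the alarm threshold are illustrative assumptions, and the generalised likelihood ratio factor is not reproduced here.

```python
import math
from collections import Counter

def traffic_entropy(addresses):
    """Shannon entropy of a traffic feature distribution, e.g. the
    source addresses seen in one observation window."""
    counts = Counter(addresses)
    total = len(addresses)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_suspicious(addresses, threshold=1.0):
    """A flood concentrated on few addresses skews the distribution,
    so entropy dropping below the threshold is the alarm condition."""
    return traffic_entropy(addresses) < threshold
```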

  • Hybrid design for cloud data security using a combination of AES, ECC and LSB steganography   Order a copy of this article
    by Osama Hosam, Muhammad Hammad Ahmed 
    Abstract: The ever-growing popularity of cloud systems is unleashing a revolutionary change in information technology. Parallel and flexible services offered by cloud technology are making it the ultimate solution for individuals as well as for organisations of all sizes. The grave security concerns present in the cloud must be addressed to protect the data and privacy of the huge number of cloud users. We present a hybrid solution to tackle the key management problem. The data in the cloud is encrypted with AES under a private key. The 256-bit AES key is then encrypted with ECC. The ECC-encrypted key is embedded in the user's image with LSB steganography. If the user decides to share cloud data with a second user, he only needs to embed the AES key in the second user's image. Using steganography, ECC and AES, we can achieve a strong security posture and efficient key management and distribution for multiple users.
    Keywords: cloud security; encryption; ECC; AES; steganography; public key; private key.
    DOI: 10.1504/IJCSE.2018.10016054
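The LSB embedding step of the hybrid design can be sketched as follows. In the scheme the payload would be the ECC-encrypted AES key and the pixels the user's cover image; the flat list of 8-bit pixel values here is an assumption of this sketch.

```python
def embed_lsb(pixels, payload):
    """Hide `payload` bytes in the least significant bits of `pixels`
    (a flat list of 0-255 values), changing each used pixel by at most 1."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError('cover image too small for payload')
    # Clear each pixel's LSB and write one payload bit into it.
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + pixels[len(bits):]

def extract_lsb(pixels, n_bytes):
    """Read back `n_bytes` of payload from the pixels' LSBs."""
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(sum(bit << (7 - i)
                     for i, bit in enumerate(bits[k * 8:(k + 1) * 8]))
                 for k in range(n_bytes))
```

Because only the LSB plane changes, the cover image is visually indistinguishable from the stego image, which is what lets the encrypted key travel unnoticed.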

Special Issue on: IJCSE PDCAT'17 Parallel Computations and Applications

  • User preferences-oriented cloud service selection in multi-cloud environment   Order a copy of this article
    by Li Liu, Letian Yang, Qi Fan 
    Abstract: Service selection based on user preference is a challenge owing to the diversity of user demands and preferences in the multi-cloud environment. Few works have clearly reviewed the existing approaches to user preference-oriented service selection in the multi-cloud environment. In this paper, we first develop a taxonomy for user preference-oriented service selection according to architecture and algorithms. Then, considering the actual situation of uncertain user demands and fuzzy preferences, a cloud service selection method is proposed based on user preference and credibility evaluation. The user preference is expressed by combining semantic terms and attribute comparison. Experiments show that our method performs better in terms of user preference and credibility.
    Keywords: multi-cloud; service selection; credibility evaluation; user preference; intuitionistic fuzzy sets.

  • The loading-aware energy saving scheme for EPON networks   Order a copy of this article
    by Chien-Ping Liu, Ho-Ting Wu, Kai-Wei Ke 
    Abstract: This paper proposes a loading-aware energy saving mechanism for Ethernet passive optical networks (EPONs), aiming to provide satisfactory energy saving, delay performance and transmission efficiency jointly for the optical network unit (ONU) component in EPON networks. This energy saving scheme measures upstream and downstream traffic loading continuously to identify the traffic load of each ONU, then classifies the load of each ONU as either a low or a high loading level. The proposed scheme allows an ONU to transmit packets only when the system stays in high-load conditions; otherwise the ONU changes to one of the power saving modes. The scheme thereby allows an ONU to accumulate queued packets in low-load scenarios and stay in power saving mode for a longer duration. However, in order to avoid long queueing delay of high-priority packets, the ONU will not switch to power saving mode if the queue of high-priority packets is not empty on either the upstream or the downstream channel. Compared with a previously proposed tri-mode energy saving scheme, which imposed strict restrictions on ONUs staying in energy saving modes, this design achieves an energy saving improvement without suffering noticeable delay performance degradation. The simulation results show that the proposed scheme is able to achieve a good balance between energy saving and delay performance with a proper parameter setting.
    Keywords: EPON; energy saving; delay performance; loading aware.

  • Using RFID technology to develop an intelligent equipment lock management system   Order a copy of this article
    by Yeh-cheng Chen, Hun-Ming Sun, Ruey-shun Chen, S. Felix Wu 
    Abstract: The equipment lock has served as an important tool for the power company to protect electricity metering equipment. However, the conventional equipment lock has two potential problems: vandalism and counterfeiting. To control and track potential illegal behaviour, human labour and paperwork are required for the related operations, resulting in the consumption of a large amount of human resources and high maintenance costs. This research focuses on applying RFID technology to the traditional equipment lock; with mobile and electronic technology, we are able to strengthen the management and operating convenience of the lock and provide solutions for anti-counterfeiting and spoilage detection, so that national energy can be properly protected and fairly distributed. The integration of the RFID data interface and mobile sensing devices will enhance the power company's control of electricity meters and other metering devices, and will also provide accurate and mobile support with an interactive mode, real-time display by the system, and real-time information services. It will serve as the last-mile management tool for electrical equipment and improve the development of intelligent electric grids.
    Keywords: radiofrequency identification; equipment lock management; near field communication; power company.

  • Managing changes to a packet-processing virtual machines instruction set architecture over time   Order a copy of this article
    by Ralph Duncan 
    Abstract: We describe an approach to deploying only those bytecodes that can be executed by the current operating system and hardware resources in an environment that combines parallelism, processor heterogeneity and software-defined networking (SDN) capabilities. Packet processing's escalating speed requirements necessitate parallel processing and heterogeneous, specialised processors acting as accelerators. We use bytecodes for a virtual machine to drive the dissimilar processors, with interpreters running in parallel. Since processors and SDN are evolving, bytecodes must evolve as well. We must execute reliable programs for packet processing, so we need to deploy only bytecodes that the interpreters and system resources can support. Our solution combines: (a) correlating supported features, interpreter versions and hardware variants in a manifest file; (b) instrumenting a compiler to recognise key feature use; (c) carrying detected feature data in an object module section; and (d) running a checking tool at various stages to prevent compiling or deploying a bytecode that cannot be correctly executed. The scheme has handled deprecating features and adding a broad variety of new features, and has been stress-tested by significant changes in hardware variants.
    Keywords: compatibility; parallel processing; network processing; bytecodes; instruction set; reliability.
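The deployment check described in steps (a)-(d) can be sketched as a comparison of the features a bytecode uses against a manifest. The manifest layout and field names below are assumptions of this sketch, not the paper's file format.

```python
def check_deploy(used_features, manifest, interp_version, hw_variant):
    """Return the features that block deployment on the given target.

    `used_features` comes from the object-module section the compiler
    emits; `manifest` maps feature name -> {'min_interp': version,
    'hw': set of supported hardware variants}. An empty result means
    the bytecode is safe to deploy.
    """
    return [f for f in used_features
            if f not in manifest
            or interp_version < manifest[f]['min_interp']
            or hw_variant not in manifest[f]['hw']]
```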

  • Spectro-temporal features for environmental sound classification   Order a copy of this article
    by KhineZar Thwe, Mie Mie Thaw 
    Abstract: This paper proposes a 2N_BILBP feature extraction method based on spectro-temporal features for sound event classification. Spectro-temporal features have a pattern similar to texture features in image processing, so the concept of texture features is applied in this digital signal processing field. This paper uses the two-neighbour bidirectional local binary pattern (2N_BILBP) for feature extraction, and compares 2N_BILBP with the previous method, the bidirectional local binary pattern. Firstly, the input audio is converted into a spectrogram using the short-time Fourier transform, and then gammatone filtering is applied. The resulting gammatone-like spectrogram is then used to extract features, which form the feature vector. Finally, the input audio is labelled using this feature vector. Evaluation is carried out on three benchmark datasets: the ESC-10, ESC-50 and UrbanSound8K datasets.
    Keywords: local binary pattern; sound event classification; audio event classification; texture features; spectro-temporal features; ESC-10 dataset; ESC-50 dataset; UrbanSound8K dataset.
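The texture-feature idea carries over from images as follows: a basic 8-neighbour LBP computed over a spectrogram-like 2-D array. The paper's 2N_BILBP variant adds a two-neighbour, bidirectional comparison that this baseline sketch does not reproduce.

```python
def lbp_2d(img):
    """Basic 8-neighbour local binary pattern over a 2-D array
    (e.g. a gammatone-like spectrogram). Each interior cell becomes an
    8-bit code: one bit per neighbour, set when neighbour >= centre."""
    H, W = len(img), len(img[0])
    # Clockwise neighbour offsets starting at the top-left corner.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = [[0] * (W - 2) for _ in range(H - 2)]
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            c = img[i][j]
            code = 0
            for k, (di, dj) in enumerate(offs):
                code |= (img[i + di][j + dj] >= c) << k
            out[i - 1][j - 1] = code
    return out
```

A histogram of these codes over the whole spectrogram would then serve as the texture-style feature vector the abstract describes.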

  • A privacy-preserving cloud-based data management system with efficient revocation scheme   Order a copy of this article
    by Shih-Chien Chang, Ja-Ling Wu 
    Abstract: Many data management systems, for various reasons, delegate their high computational workload and storage requirements to public cloud service providers. It is well known that once we entrust our tasks to a cloud server, we may face several threats, such as privacy infringement with regard to users' attribute information; therefore, an appropriate privacy-preserving mechanism is a must for constructing a secure cloud-based data management system (SCBDMS). Designing a reliable SCBDMS with server-enforced revocation ability is a very challenging task, even when the server works under the honest-but-curious threat model. Existing data management systems seldom provide privacy-preserving revocation services, especially when the tasks are outsourced to a third party. In this work, with the aid of oblivious transfer and the newly proposed stateless lazy re-encryption (SLREN) mechanism, a SCBDMS with secure, reliable and efficient server-enforced attribute revocation is built. Compared with related works, experimental results show that in the newly constructed SCBDMS, the storage requirement of the cloud server and the communication overheads between the cloud server and system users are largely reduced, owing to the late involvement of SLREN.
    Keywords: privacy-preserving; lazy re-encryption; revocation.