Forthcoming articles

International Journal of High Performance Computing and Networking (IJHPCN)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with this Open Access icon are freely available and openly accessible to all, without any restriction except those stated in their respective CC licences.

International Journal of High Performance Computing and Networking (66 papers in press)

Regular Issues

  • On Parallelisation of image dehazing with OpenMP   Order a copy of this article
    by Tien-Hsiung Weng, Yi-Siang Chen, Huimin Lu 
    Abstract: In this paper, we present our experience in designing and implementing a parallel image-dehazing code with OpenMP, developed from an existing fast sequential version. The aim of this work is to present a case-study analysis of developing parallel haze removal that makes practical and efficient use of shared-memory multi-core servers. We discuss the implementation techniques and the program improvements that may be needed to support parallel application developers with similar high-performance goals. Preliminary studies and experiments with the haze removal program were executed on multi-core shared-memory platforms, and the results show that the performance of the proposed parallel code is promising.
    Keywords: OpenMP; image haze removal; multicores; parallel programming.
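    The abstract does not reproduce the dehazing kernel itself. As an illustration only (assuming a dark-channel-style dehazing method, which the abstract does not confirm), the sketch below shows the kind of per-pixel loop such codes spend their time in; in a C implementation, the outer row loop is the natural target for an OpenMP `parallel for`:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over the colour channels and a patch window.
    Each output pixel depends only on a small input window, so the rows
    of the outer loop are independent -- exactly the shape of loop that
    OpenMP's `parallel for` distributes across cores."""
    h, w, _ = img.shape
    mins = img.min(axis=2)              # min over colour channels
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):                  # OpenMP would split these rows
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

Because iterations share no state, a shared-memory parallelisation of this loop needs no synchronisation beyond the implicit barrier at the end.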

  • A self-adaptive quantum steganography algorithm based on QLSb modification in watermarked quantum images   Order a copy of this article
    by Qu Zhiguo 
    Abstract: As an important research branch of quantum information hiding, quantum steganography embeds secret information into quantum images for covert communication by integrating quantum secure communication technology with classical steganography. In this paper, based on the Novel Enhanced Quantum Representation (NEQR), a novel quantum steganography algorithm is proposed to transfer secret information by virtue of quantum watermarked images. To achieve this goal, the Least Significant Qubit (LSQb) of the quantum carrier image is replaced with the secret information by implementing a quantum circuit. Compared with previous quantum steganography algorithms, the communicating parties can recover secret information that has been tampered with, and the tamperers can be located effectively. In the experiments, the Peak Signal-to-Noise Ratios (PSNRs) are calculated for different quantum watermarked images and quantum watermarks, which demonstrate that the imperceptibility of the algorithm is good and that the embedded secret information can be recovered by virtue of its self-adaptive mechanism.
    Keywords: quantum steganography; quantum least significant bit; watermarked quantum carrier image.
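    The quantum circuit itself cannot be run classically in a few lines, but the classical analogue of LSQb replacement is ordinary least-significant-bit substitution. A minimal sketch (illustrative only; the paper's scheme additionally provides self-adaptive recovery and tamper localisation):

```python
def embed_lsb(pixels, bits):
    # Replace the least significant bit of each pixel value with a secret bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    # Read the least significant bit back out of each pixel value.
    return [p & 1 for p in pixels]
```

The quantum version performs the analogous swap on the least significant qubit of each NEQR-encoded pixel.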

  • Distributed continuous KNN query over moving objects   Order a copy of this article
    by Xiaolin Yang, Zhigang Zhang, Yilin Wang, Cheqing Jin 
    Abstract: Continuous k-nearest neighbour (CKNN) queries over moving objects have been widely studied in many fields. However, existing centralised solutions no longer scale, and distributed solutions suffer from the problems of index maintenance, high communication cost and query latency. In this paper, we first propose a distributed hybrid indexing strategy that combines the SSGI (Spatial-temporal Sensitive Grid Index) and the DQI (Dynamic Quad-tree Index). The SSGI is used to locate the spatial range that contains the final results, and the DQI is used for data partitioning. We then introduce an algorithm named HDCKNN to implement CKNN queries. In contrast to existing work, HDCKNN can achieve the final result in one round of iteration, while existing methods require at least two rounds. Extensive experiments show that the proposed method is more efficient than state-of-the-art algorithms.
    Keywords: moving objects; continuous k-nearest neighbour query; distributed query processing; hybrid indexing.
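    As a rough illustration of grid-based pruning (a simplified stand-in for the SSGI/DQI hybrid described above, not the authors' implementation), a uniform grid lets a kNN query inspect only nearby cells instead of all objects:

```python
from collections import defaultdict
from math import hypot

class GridIndex:
    """Toy uniform grid index: objects are hashed to cells, and a kNN
    query scans outward ring by ring until k candidates are found.
    (The ring expansion here is approximate; an exact version would
    scan one extra ring before stopping.)"""
    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def insert(self, oid, x, y):
        key = (int(x // self.cell), int(y // self.cell))
        self.cells[key].append((oid, x, y))

    def knn(self, x, y, k):
        cx, cy = int(x // self.cell), int(y // self.cell)
        found, r = [], 0
        while len(found) < k and r < 100:
            for i in range(cx - r, cx + r + 1):
                for j in range(cy - r, cy + r + 1):
                    if max(abs(i - cx), abs(j - cy)) == r:   # new ring only
                        found.extend(self.cells.get((i, j), []))
            r += 1
        found.sort(key=lambda o: hypot(o[1] - x, o[2] - y))
        return [o[0] for o in found[:k]]
```

Partitioning such cells across servers is what turns this idea into a distributed index.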

  • A cross-layer QoS model for space DTN   Order a copy of this article
    by Aiguo Chen, Xuemei Li, Guoming Lu, Guangchun Luo 
    Abstract: Delay/disruption tolerant networking (DTN) technology is considered a new solution for highly stressed communications in space environments. An IP-based DTN network can support more flexible communication services, which is one of the research hotspots of space networking. At the same time, optimising the allocation of limited network resources and guaranteeing the QoS of different services in a DTN network remains an important and difficult problem. To address this challenge, a cross-layer QoS model that considers application-, network- and node-layer QoS requirements and resource limitations is proposed in this paper. In addition, a comprehensive admission control scheme that ensures productivity and fairness is employed. The results of the experiments and analyses conducted demonstrate the benefits of this approach.
    Keywords: DTN; QoS model; cross-layer; space network.
    DOI: 10.1504/IJHPCN.2017.10004929
     
  • Mimicry honeypot: an evolutionary decoy system   Order a copy of this article
    by Leyi Shi, Yuwen Cui, Han Xu, Honglong Chen, Deli Liu 
    Abstract: Motivated by the mimic-and-evolve phenomenon in species rivalry, we present the novel concept of a mimicry honeypot, which can bewilder adversaries by comprehensively exploiting protective coloration, warning coloration and mimicry evolution according to changes in the network circumstance. The paper first defines protective coloration and warning coloration for cyber defence, formalises the mimicry honeypot model, discusses the critical issues of environment perception and mimicry evolution, and implements a mimicry prototype using a web service platform and a genetic algorithm. Afterwards, we perform experiments with the mimicry prototype deployed both in our private campus network and on the internet. Our empirical study demonstrates that the mimicry honeypot is more efficient than a traditional decoy system.
    Keywords: mimicry honeypot; warning coloration; protective coloration; evolution; genetic algorithm.

  • Evaluation of directive-based performance portable programming models   Order a copy of this article
    by M. Graham Lopez, Wayne Joubert, Veronica Vergara Larrea, Oscar Hernandez, Azzam Haidar, Stanimire Tomov, Jack Dongarra 
    Abstract: We present an extended exploration of the performance portability of directives provided by OpenMP 4 and OpenACC to program various types of node architecture with attached accelerators, both self-hosted multicore and offload multicore/GPU. Our goal is to examine how successful OpenACC and the newer offload features of OpenMP 4.5 are for moving codes between architectures, and we document how much tuning might be required and what lessons we can learn from these experiences. To do this, we use examples of algorithms with varying computational intensities for our evaluation, as both compute and data access efficiency are important considerations for overall application performance. To better understand fundamental compute vs. bandwidth bound characteristics, we add the compute-bound Level 3 BLAS GEMM kernel to our linear algebra evaluation. We implement the kernels of interest using various methods provided by newer OpenACC and OpenMP implementations, and we evaluate their performance on various platforms including both x86_64 and Power8 with attached NVIDIA GPUs, x86_64 multicores, self-hosted Intel Xeon Phi KNL, as well as an x86_64 host system with Intel Xeon Phi coprocessors. We update these evaluations with the newest version of the NVIDIA Pascal architecture (P100), Intel KNL 7230, Power8+, and the newest supporting compiler implementations. Furthermore, we present in detail what factors affected the performance portability, including how to pick the right programming model, its programming style, its availability on different platforms, and how well compilers can optimise and target multiple platforms.
    Keywords: OpenMP 4; OpenACC; performance portability; programming models.
    DOI: 10.1504/IJHPCN.2017.10009064
     
  • Genuine and secure public auditing scheme for the outsourced data   Order a copy of this article
    by Jianhong Zhang 
    Abstract: The most common concerns for users of cloud storage are data integrity, confidentiality and availability, so various data integrity auditing schemes for cloud storage have been proposed in the past few years, some of which achieve privacy-preserving public auditing, data sharing and group dynamics, or support data dynamics. However, as far as we know, no practical auditing scheme yet exists that can simultaneously realise all of the functions above; in addition, in all the existing schemes, block authentication tags (BATs) are computed by the data owner to enable data integrity auditing. Nevertheless, computing BATs is an arduous task for a resource-constrained data owner. In this paper, we propose a novel privacy-preserving public auditing scheme for shared data in the cloud, which can also support data dynamic operations and group dynamics. Our scheme has the following advantages: (1) we introduce proxy signatures into the existing auditing scheme to reduce the cloud user's computation burden; (2) by introducing a Lagrange interpolating polynomial, our scheme realises identity privacy preservation without increasing computation cost or communication overhead, and moreover makes group dynamics simple; (3) it realises practical and secure dynamic operations on shared data by combining the Merkle hash tree with an index-switch table that we construct; (4) to protect data privacy and resist active attacks, the cloud storage server hides the actual proof information by embedding its private key in the proof generation process. Theoretical analysis demonstrates our scheme's security, and experimental results show that our scheme not only has low computational and communication overhead for data verification but can also complete group dynamics quickly.
    Keywords: cloud computing; self-certified cryptography; integrity checking; security proof; provably secure; random oracle model; cryptography.

  • Distributed admission control algorithm for random access wireless networks in the presence of hidden terminals   Order a copy of this article
    by Ioannis Marmorkos, Costas Constantinou 
    Abstract: We address the problem of admission control for wireless clients in WLANs, taking into account collisions between competing access points and explicitly considering the effect of hidden terminals, which play a prominent role in optimised client association. We propose an efficient, distributed admission control algorithm in which the wireless client node decides locally which access point it will associate with in order to maximise its link throughput. The client can choose to optimise either its uplink or its downlink throughput, depending on the type of traffic it predominantly intends to exchange with the network. The proposed approach takes into account the full contention resolution of the RTS/CTS IEEE 802.11 medium access control protocol and increases the total throughput of the whole network. Finally, the proposed algorithm can also serve as the basis for the development of efficient traffic-offloading protocols in heterogeneous 5G networks.
    Keywords: admission control; IEEE802.11; multi-cell; WiFi offloading.
    DOI: 10.1504/IJHPCN.2017.10013362
     
  • Modelling geographical effect of user neighborhood on collaborative web service QoS prediction   Order a copy of this article
    by Zhen Chen, Limin Shen, Dianlong You, Huihui Jia, Chuan Ma, Feng Li 
    Abstract: QoS prediction is the task of predicting the unknown QoS value of an active user for a web service that he/she has not accessed previously, in order to support appropriate web service recommendation. Existing studies adopt collaborative filtering methods for QoS prediction, but the inherent issues of data sparsity and cold-start in collaborative filtering have not been resolved satisfactorily, and the role of geographical context is also underestimated. Through data analysis on a public real-world dataset, we observe that there exists a positive correlation between a user's QoS values and the ratings of geographical neighborhoods. Based on this observation, we model the geographical effect of user neighborhood on QoS prediction and propose a unified matrix factorisation model that capitalises on the advantages of geographical-neighborhood and latent-factor approaches. Experimental results exhibit the significance of geographical context in modelling user features and demonstrate the feasibility and effectiveness of our approach in improving QoS prediction performance.
    Keywords: web service; QoS prediction; collaborative filtering; geographical effect; matrix factorisation.
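    The latent-factor core of such a model can be sketched as plain matrix factorisation trained by stochastic gradient descent (the paper's geographical-neighborhood term is omitted here; all names and parameters are illustrative):

```python
import numpy as np

def mf_train(ratings, n_users, n_items, f=4, lr=0.01, reg=0.02,
             epochs=500, seed=0):
    """Fit user and item latent factors on observed (user, item, qos)
    triples; a prediction is the dot product U[u] @ V[i]. The paper's
    unified model adds a geographical-neighborhood term on top of this."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, f))
    V = 0.1 * rng.standard_normal((n_items, f))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - U[u] @ V[i]                 # prediction error
            U[u] += lr * (e * V[i] - reg * U[u])
            V[i] += lr * (e * U[u] - reg * V[i])
    return U, V
```

Sparsity is handled naturally: only observed entries drive the updates, and missing QoS values are predicted from the learned factors.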

  • Researches on data encryption scheme based on CP-ASBE of cloud storage   Order a copy of this article
    by Xiaohui Yang, Wenqing Ding 
    Abstract: Ciphertext-policy attribute-set based encryption (CP-ASBE) based on a single authorisation centre can easily become the security bottleneck of a system. With the support of trusted measurement technology, a novel CP-ASBE method based on multiple attribute authorities (AAs) is proposed to solve this problem, and an encryption scheme is designed for cloud storage, which includes data storage, data access, data encryption and a trusted measurement scheme for AAs. The security performance and time cost of the encryption scheme are simulated, and the results show that the scheme can improve the security of users' data in the cloud storage environment.
    Keywords: cloud storage; CP-ASBE; authorisation centre; attribute authority; trusted measurement.

  • Deferred virtual machine migration   Order a copy of this article
    by Xiaohong Zhang, Jianji Ren, Zhizhong Liu, Zongpu Jia 
    Abstract: The rapid growth in demand for cloud services has led to the deployment of huge numbers of virtual machines. Online virtual machine (VM) migration techniques offer cloud providers a means to reduce power consumption while maintaining quality of service. However, the efficiency of online migration is suboptimal, since it is degraded by the execution of redundant migrations. To alleviate this problem, we introduce a load-aware VM migration technique. The key idea is to defer migrations and perform a quick analysis of the loads on target servers before launching any migration. This consolidation is applied in two steps: first, all migrations are tested, and a set of candidates suspected to be redundant is formed and postponed for a short time; then the servers are analysed and only a subset of the migration candidates is activated. Our selection mechanism is conservative in the sense that it avoids selecting and activating VM migrations that are likely to cause harmful overloading of the target servers. This conservative migration policy leads to an overall effective execution of virtual machines. Our experimental results demonstrate the usefulness and effectiveness of our method.
    Keywords: cloud computing; virtual machine migration; virtual machine consolidation; power consumption.
    DOI: 10.1504/IJHPCN.2017.10013423
     
  • NFV deployment strategies in SDN network   Order a copy of this article
    by Chia-Wei Tseng, Po-Hao Lai, Bo-Sheng Huang, Li-Der Chou, Meng-Chiou Wu 
    Abstract: The emergence of the internet has resulted in the expansion of complicated network architectures. Accordingly, the traditional network architecture can no longer meet the demands of new and rapidly changing network services. With the emergence of software defined networking (SDN) and network functions virtualisation (NFV), these technologies can now transform the current complicated network architecture into a programmable, virtualised and standardised managed architecture. This study aims to design and implement rapid deployment strategies for NFV services in SDN. Six different rapid deployment technologies are addressed, based on linked-clone and full-clone cases. These technologies can accelerate deployment by enabling NFV with intelligent configuration. Experimental results show that the proposed parallel clone strategy in the linked-clone scenario exhibits better time efficiency than the other strategies.
    Keywords: software defined network; network functions virtualisation; rapid deployment; network management.

  • A new localisation strategy with wireless sensor networks for tunnel space model   Order a copy of this article
    by Ying Huang, Yezhen Luo 
    Abstract: Because three-dimensional distance vector hop (DV-Hop) localisation suffers large errors in the tunnel space model, an improved three-dimensional DV-Hop fixed-node localisation strategy based on wireless sensor networks is proposed for the geometric model of tunnel space. This strategy analyses the deficiencies of the traditional three-dimensional DV-Hop algorithm in hop counting and distance calculation between two WSN nodes; it employs the relationship between beacon nodes and distance to modify the DV-Hop hop count, and then uses a differential method to correct the distance error from the unknown node to the beacon node. According to the tunnel space model, a selection mechanism is introduced, and the three optimal beacon nodes are selected to locate the unknown nodes and further improve the localisation accuracy. The traditional three-dimensional DV-Hop hop-count calculation neglects the distance between nodes, and the improved strategy corrects the distance between unknown nodes and beacon nodes. The experimental results show that the improved DV-Hop localisation strategy greatly improves localisation accuracy and can be widely used to solve Internet of Things localisation problems.
    Keywords: tunnel space model; three-dimensional distance vector hop; received signal strength indicator; localisation algorithm.
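    The classical DV-Hop estimate that the corrections above build on can be sketched as follows (this is standard DV-Hop, not the paper's improved version):

```python
from math import dist  # Euclidean distance, Python 3.8+

def avg_hop_distance(beacons, hops):
    """DV-Hop step 2: estimate the average length of one hop as
    (sum of inter-beacon distances) / (sum of inter-beacon hop counts).
    `beacons` maps id -> (x, y, z); `hops[i][j]` is the hop count
    between beacons i and j."""
    num = den = 0.0
    for i in beacons:
        for j in beacons:
            if i != j:
                num += dist(beacons[i], beacons[j])
                den += hops[i][j]
    return num / den

def estimate_distance(avg_hop, hop_count):
    # DV-Hop step 3: an unknown node's estimated distance to a beacon.
    return avg_hop * hop_count
```

The improved strategy corrects both inputs to this estimate: the hop counts and the per-hop distance.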

  • Path self-deployment algorithm of three-dimensional space in directional sensor networks   Order a copy of this article
    by Li Tan, Chaoyu Yang, Minghua Yang, Xiaojiang Tang 
    Abstract: In contrast to two-dimensional directional sensor networks, three-dimensional directional sensor networks have greater complexity and diversity. The external environment and sensor limitations affect target monitoring and coverage. Adjustment strategies provide a better auxiliary guide in the process of self-deployment, while strengthening the coverage rate of the monitoring area and the monitoring capability of sensor nodes. To address these issues, we propose a path self-deployment algorithm, TPSA (Three-dimensional Path Self-deployment Algorithm). The concept of virtual force is extended from two to three dimensions and includes target path control. A node obtains location information about the monitoring target and target path during initialisation, calculates the virtual forces they exert, and finally obtains its next movement location and direction. We analyse the self-deployment process for both static and polymorphic nodes. The simulation results verify that the proposed algorithm enables better node control in the deployment process and improves the efficiency of sensor node deployment.
    Keywords: directional sensor networks; path self-deployment; three-dimensional deployment; virtual force.
    DOI: 10.1504/IJHPCN.2017.10013346
     
  • A secure reversible chaining watermark scheme with hidden group delimiter for wireless sensor networks   Order a copy of this article
    by Baowei Wang, Qun Ding, Xiaodu Gu 
    Abstract: Chaining watermarks are considered one of the most practical methods for verifying data integrity in wireless sensor networks. However, the synchronisation points (SPs) or group delimiters (GDs), which are indispensable for keeping the sender and receivers synchronised, have been the biggest bottlenecks of existing methods: 1) if the SPs are tampered with, the false negative rate rises to 50%, making the authentication meaningless; 2) the additional GDs are easily detected by adversaries. We propose a more secure reversible chaining watermark scheme, called RWC, to authenticate data integrity in WSNs. RWC has the following characteristics: 1) fragile watermarks are embedded in a dynamic grouping chaining way to verify data integrity; 2) hidden group delimiters are designed to synchronise the sending and receiving sides in case the SPs are tampered with; 3) a difference-expansion-based reversible watermark algorithm achieves lossless authentication. The experimental results show that RWC can authenticate the sensory data without distortion and significantly improves the ability to detect various attacks.
    Keywords: chaining watermark; hidden group delimiter; reversible watermark; data integrity authentication; wireless sensor networks.
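    The difference-expansion step mentioned above is classical (Tian's reversible watermarking). A minimal sketch of embedding one bit in a pair of sample values and recovering both the bit and the original pair losslessly:

```python
def de_embed(x, y, bit):
    """Difference-expansion embedding: hide one bit in a value pair."""
    l = (x + y) // 2          # integer average
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the hidden bit and the original pair exactly."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1
    return (l + (h + 1) // 2, l - h // 2), bit
```

Reversibility is what lets the receiver restore the sensory data with no distortion after authentication.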

  • A new segmentation algorithm based on evolutionary algorithm and bag of words model   Order a copy of this article
    by Kangshun Li, Weiguang Chen, Ying Huang, Shuling Yang 
    Abstract: Crop disease and insect pest detection and recognition using machine vision can provide precise diagnosis and preventive suggestions. However, agricultural pest and disease identification based on traditional bag-of-words (BOW) models has high complexity and only moderate effectiveness. This paper presents a histogram quadric segmentation algorithm based on an evolutionary algorithm, which observes the features (colour, texture) of disease spots and draws on the guided filtering algorithm. This process aims to obtain the precise positions of disease spots in images. Dense SIFT, which extracts features, and a spatial pyramid, which maps image features to a high-spatial-resolution space, are simultaneously applied to the recognition of crop diseases and insect pests in the BOW model. The experimental results show that the new segmentation algorithm can effectively locate the positions of disease spots in corn images, and the improved BOW model substantially increases the recognition accuracy of crop diseases and insect pests.
    Keywords: evolutionary algorithm; disease spot segmentation; image recognition; diseases and insect pests.

  • A jamming detection method for multi-hop wireless networks based on association graph   Order a copy of this article
    by Xianglin Wei, Qin Sun 
    Abstract: Jamming attacks are a great challenge for researchers because such attacks can severely damage the quality of service (QoS) of multi-hop wireless networks (MHWNs). Therefore, how to detect and distinguish multiple jamming attacks, and thus restore network service, has been a hot topic in recent years. Different jamming attacks cause different network status changes in an MHWN. Based on this observation, a jamming detection algorithm based on an association graph is put forward in this paper. The proposed algorithm consists of two phases: a learning phase and a detection phase. In the learning phase, different symptoms are extracted from samples collected in both jamming and jamming-free scenarios, and a symptom-attack association graph is built. In the detection phase, the association graph is used to detect the jamming attacks that lead to the symptoms observed by a particular network node. A series of simulation experiments on NS3 validates that the proposed method can efficiently detect and classify typical jamming attacks, such as reactive, random and constant jamming.
    Keywords: jamming detection; multi-hop wireless network; association graph.

  • Secure deduplication of encrypted data in online and offline environments   Order a copy of this article
    by Hua Ma, Linchao Zhang, Zhenhua Liu, Enting Dong 
    Abstract: Deduplication is a critical technology for saving cloud storage space. In particular, client-side deduplication can save both storage and bandwidth. However, there are security risks in the existing client-side deduplication schemes, such as file-proof replay attacks and online/offline brute-force attacks. Moreover, these schemes do not consider offline deduplication. To solve the above problems, we present a secure client-side deduplication scheme for encrypted data in online and offline environments. In our scheme, we mix a dynamic coefficient with the randomly selected original file, so that a new file proof is produced in each challenge. In the offline case, we introduce a trusted third party as a checker to run the proof of ownership with an uploader. The main difference between online and offline deduplication is the input value, which ensures that the program can be used efficiently, so the cost of storage and design is reduced. Furthermore, the proposed scheme can resist online and offline brute-force attacks, relying on a per-client rate-limiting method and a high-collision hash function, respectively. Notably, the security of the proposed scheme relies only on a secure cryptographic hash function.
    Keywords: deduplication; proof of ownership; online/offline brute-force attack; file proof replay attack.
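    The idea of mixing a fresh random value with the file, so that a recorded proof cannot be replayed in a later challenge, can be sketched as a simple hash-based challenge-response (illustrative only; the paper's construction with its dynamic coefficient is more involved):

```python
import hashlib
import secrets

def new_challenge():
    # Verifier side: a fresh random nonce per challenge, so an
    # eavesdropped proof is useless against a future challenge.
    return secrets.token_bytes(16)

def prove_ownership(file_bytes, nonce):
    # Prover side: the proof binds the whole file to this challenge.
    return hashlib.sha256(nonce + file_bytes).hexdigest()

def verify(file_bytes, nonce, proof):
    return hashlib.sha256(nonce + file_bytes).hexdigest() == proof
```

A replayed proof fails because it was computed over a different nonce.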

  • An efficient temporal verification algorithm for intersectant constraints in scientific workflow   Order a copy of this article
    by Lei Wu, Longshu Li, Xuejun Li, Futian Wang, Yun Yang 
    Abstract: It is usually essential to conduct temporal verification to ensure the overall temporal correctness, and hence usefulness, of execution results in a scientific workflow. To improve the efficiency of temporal verification, many efforts have been dedicated to selecting more effective and efficient checkpoints along scientific workflow execution, so that temporal violations can be found and handled in time. Most checkpoint selection strategies involve temporal constraints, but the relations among those constraints have been ignored. The only existing strategy that considers relations among temporal constraints addresses the nested situation only and is based solely on temporal dependency. However, the intersectant situation and non-dependency relationships among temporal constraints must also be considered. In this paper, a constraint-adjustment-based algorithm for efficient temporal verification in scientific workflows, involving both temporal dependency and temporal reverse-dependency, is presented for the intersectant situation. Simulations show that the temporal verification efficiency of our algorithm is significantly improved.
    Keywords: scientific workflow; temporal verification; temporal constraint; temporal consistency; intersectant situation; temporal dependency; temporal reverse-dependency.
    DOI: 10.1504/IJHPCN.2017.10016275
     
  • Load forecasting for cloud computing based on wavelet support vector machine   Order a copy of this article
    by Wei Zhong, Yi Zhuang, Jian Sun, Jingjing Gu 
    Abstract: Because tasks submitted by users in the cloud computing environment have random and nonlinear characteristics, it is very difficult to forecast the load in a cloud data centre. In this paper, we combine the wavelet transform and the support vector machine (SVM) to propose a wavelet support vector machine load forecasting (WSVMLF) model for cloud computing. The model uses the wavelet transform to analyse the cycle and frequency of the input data and combines this with the nonlinear regression characteristics of the SVM, so that the task load can be modelled more accurately. A WSVMLF algorithm is then proposed, which improves the accuracy of cloud load prediction. Finally, the Google cloud computing centre dataset was selected to test the proposed model. Comparative experimental results show that the proposed algorithm outperforms similar forecasting algorithms in both performance and accuracy.
    Keywords: cloud computing; wavelet transform; support vector machine; load forecasting algorithm.
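    The wavelet side of such a model separates a load series into trend and fluctuation components before regression. A one-level Haar transform (an illustrative choice of basis; the abstract does not state which wavelet is used) can be sketched as:

```python
import numpy as np

def haar_level(x):
    """One level of the Haar wavelet transform: a low-frequency
    approximation (trend) and a high-frequency detail (fluctuation).
    The WSVMLF idea is to feed such multi-scale components, rather
    than the raw load series, into an SVM regressor."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail
```

The transform is invertible, so no information is lost in the decomposition; the even samples, for instance, are recovered as (approx + detail) / sqrt(2).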

  • A new method of text information hiding based on open channel   Order a copy of this article
    by Yongjun Ren, Wenyue Ma, Xiaohua Wang 
    Abstract: Text is one of the most frequently and widely used information carriers. In contrast to several other carriers, a new method of text information hiding based on an open channel is put forward in this article. Public normal texts are used to transmit information directly, without any changes, in order to hide it. At the same time, this hidden information is negotiated with the intended recipient to produce a session key. The text, including the location of the hidden information and the associated hiding rules, delivers information through the session key. Moreover, this method evades steganalytic testing by attackers unless they break the session key, which ensures the hidden information's safety. The instruction protocol for hidden information serves as a basic tool in the proposed method. In this paper, the idea of implicit authentication in the MTI protocol family (the key agreement protocols constructed by Matsumoto, Takashima and Imai) is adapted into an identity-based protocol without pairing. Moreover, the proposed protocol is provably secure in the CK model (Canetti-Krawczyk model) for text information hiding.
    Keywords: text information hiding; big text data; open channel.

  • Visual vocabulary tree-based partial-duplicate image retrieval for coverless image steganography   Order a copy of this article
    by Yan Mu, Zhili Zhou 
    Abstract: Traditional image steganographic approaches embed the secret message into covers by modifying their contents. The modification traces left in the cover therefore cause some damage to it, especially when more messages are embedded. More importantly, the modification traces make successful steganalysis possible. In this paper, visual vocabulary tree-based partial-duplicate image retrieval for coverless image steganography is proposed to embed secret messages without any modification. The main idea of our method is to retrieve a set of duplicates of a given secret image as stego-images from a natural image database. The images in the database are divided into a number of image patches and then indexed by the features extracted from those patches. We search for duplicates of the secret image in the image database to obtain the stego-images, each of which shares one similar image patch with the secret image. When a receiver obtains these stego-images, our method can recover the secret image approximately by using the designed protocols. Experimental results show that our method not only resists existing steganalysis tools, but also has high capacity.
    Keywords: coverless image steganography; robust hashing algorithm; vocabulary tree; image retrieval; stego-image; image database; high capacity.

  • Telecom customer clustering via glowworm swarm optimisation algorithm   Order a copy of this article
    by YanLi Liu, Mengyu Zhu 
    Abstract: The glowworm swarm optimisation (GSO) algorithm is a novel algorithm that simultaneously computes multiple optima of multimodal functions. Data-clustering techniques are classification algorithms with a wide range of applications. In this paper, the GSO algorithm is used for telecom customer clustering. We extract customer consumption data by means of the RFM (recency, frequency, monetary) model and cluster the standardised data automatically using the GSO algorithm's synchronous optimisation ability. Compared with the K-means clustering algorithm, the GSO approach can automatically generate the number of clusters and use the RFM model to effectively reduce the amount of data to be processed. The results of the experiments demonstrate that the GSO-based clustering technique is a promising technique for data clustering problems.
    Keywords: glowworm swarm optimisation; customers' subdivision; data clustering.
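
The RFM step described above can be sketched as follows: recency, frequency and monetary features are extracted from customer records and z-score standardised before clustering. The record layout and values below are invented for illustration, not taken from the paper:

```python
from datetime import date

# Hypothetical customer records: (customer_id, last_purchase, n_purchases, total_spend)
records = [
    ("c1", date(2024, 1, 10), 12, 340.0),
    ("c2", date(2023, 11, 2), 3, 90.0),
    ("c3", date(2024, 2, 20), 25, 1200.0),
]
today = date(2024, 3, 1)

# RFM features: recency (days since last purchase), frequency, monetary value
rfm = [((today - r[1]).days, r[2], r[3]) for r in records]

def standardise(column):
    """z-score standardisation, as commonly applied before clustering."""
    mean = sum(column) / len(column)
    std = (sum((x - mean) ** 2 for x in column) / len(column)) ** 0.5 or 1.0
    return [(x - mean) / std for x in column]

# standardise each feature column, then transpose back to one row per customer
columns = list(zip(*rfm))
standardised = list(zip(*(standardise(list(c)) for c in columns)))
```

The standardised rows are what a clustering algorithm such as GSO or K-means would then operate on.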

  • Efficient algorithm for on-line data retrieval with one antenna in wireless networks   Order a copy of this article
    by Ping He, Shuli Luan 
    Abstract: Given a set of requested data items and a set of multiple channels, the data retrieval problem is to find a data retrieval sequence for downloading all requested data items from these channels within a reasonable time. Most existing schemes apply to an offline environment, in which the client has prior knowledge of the wireless data broadcast, such as the set of broadcast channels and the broadcast times of the data items. An online data retrieval algorithm, however, does not know this information. This paper therefore proposes an online algorithm (MRLR) that selects the most recent and longest unretrieved channel for retrieving all requested data items, and achieves a competitive ratio of (k-1) against the optimal offline data retrieval algorithm, where k is the number of channels. To address the many redundant channel switches in MRLR, the paper proposes two online randomised marker algorithms (RM and MRM), which add two flags to mark, respectively, the channel with a requested data item and the retrieved channel without a requested data item. The RM and MRM algorithms achieve competitive ratios of (log(k-1)+k/2) and k/2, respectively. By comparing the competitive ratios and experimental results of the proposed online algorithms, we observe that the performance of online data retrieval in wireless networks is improved.
    Keywords: wireless data broadcast; on-line data retrieval; data schedule; indexing.
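
To make the online setting concrete, here is a toy simulation (not the paper's MRLR algorithm) of a client downloading requested items from cyclic broadcast channels while counting elapsed ticks and channel switches; the schedules and the hopping rule are illustrative assumptions:

```python
# Toy wireless broadcast: each channel cyclically repeats its schedule,
# broadcasting one item per tick.  The client stays tuned to a channel,
# downloads any requested item it sees, and hops away once the channel
# carries nothing else that it wants.
schedules = {
    0: ["a", "x", "b"],
    1: ["y", "c", "z"],
    2: ["d", "w", "v"],
}

def online_retrieve(schedules, wanted):
    """Return (ticks elapsed, channel switches) to fetch all wanted items."""
    wanted = set(wanted)
    channel, switches, t = 0, 0, 0
    while wanted:
        item = schedules[channel][t % len(schedules[channel])]
        if item in wanted:
            wanted.remove(item)
            # hop to the next channel that still carries a wanted item
            while wanted and not wanted & set(schedules[channel]):
                channel = (channel + 1) % len(schedules)
                switches += 1
        t += 1
    return t, switches
```

On this example, `online_retrieve(schedules, ["a", "b", "c", "d"])` returns `(7, 2)`: seven ticks and two channel switches. Reducing such switches is exactly what the RM and MRM marker flags target.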

  • An efficient 3D point clouds covariance descriptor for object classification with mismatching correction algorithm   Order a copy of this article
    by Heng Zhang, Bin Zhuang 
    Abstract: We introduce a new covariance descriptor combining object visual information (colour, gradient, depth, etc.) and geometric information (3D coordinates, normal vectors, Gaussian curvature, etc.) for a mobile robot with an RGB-D camera to process point cloud data. An improved mismatching correction algorithm is applied to correct mismatched feature points in the 3D point clouds. The descriptor can quickly match the feature points of the point clouds in the surrounding environment and realise object classification. Experimental results show that this descriptor has the advantages of compactness and flexibility compared with previous descriptors, and greatly reduces the required storage space. At the same time, the instance and category recognition accuracies of the proposed descriptor reach 94.6% and 86.8%, respectively, which are higher than those of previous methods for object recognition in 3D point clouds.
    Keywords: object classification; point clouds; covariance descriptor; mismatching correction.

  • Graph-based model and algorithm for minimising big data movement in a cloud environment   Order a copy of this article
    by Yassir Samadi, Mostapha Zbakh, Claude Tadonki 
    Abstract: In this paper, we discuss load balancing and data placement strategies in heterogeneous cloud environments. Load balancing is crucial in large-scale data processing applications, especially in a distributed heterogeneous context such as the cloud. The main goal of data placement strategies is to improve the overall performance through the reduction of data movements among the participating datacentres, which are geographically distributed, taking into account their characteristics such as processing speed, storage capacity and data dependency. Load balancing and efficient data placement on cloud systems are critical problems, which are difficult to cope with simultaneously, especially in the emerging heterogeneous clusters. In this context, we propose a threshold-based load balancing algorithm, which first balances the load between datacentres, and afterwards minimises the overhead of data exchanges. The proposed approach is divided into three phases. First, the dependencies between the datasets are identified. Second, the load threshold of each datacentre is estimated based on its processing speed and storage capacity. Third, the load balancing between the datacentres is managed through the threshold parameters. The heterogeneity of the datacentres, together with the dependencies between the datasets, are both taken into account. Our experimental results show that our approach can efficiently reduce the frequency of data movement and keep a good load balance between the datacentres.
    Keywords: graph model; big data; cloud computing; load balancing; data placement; data dependency.
    DOI: 10.1504/IJHPCN.2018.10013848
     
  • Processed RGB-D SLAM based on HOG-Man algorithm   Order a copy of this article
    by Yanli Liu, Mengyu Zhu 
    Abstract: SLAM (simultaneous localisation and mapping) is the key to achieving autonomous control of robots and a significant topic in the field of mobile robotics. Aiming at 3D modelling of a complex indoor environment, this paper presents a fast three-dimensional SLAM method for mobile robots. On the basis of the HOG-Man algorithm, which is the core of the RGB-D SLAM algorithm, open-source software combining an RGB-D sensor such as Kinect with a wheeled mobile robot is used to obtain the odometry data; location information is then matched through image feature extraction, and finally the map is optimised by the HOG-Man algorithm. The feasibility and effectiveness of the proposed method are verified by experiments in an indoor environment.
    Keywords: RGB-D SLAM; mobile robot; HOG-Man algorithm; Kinect.

  • A new revocable reputation evaluation system based on blockchain   Order a copy of this article
    by Haoxuan Li, Hui Huang, Shichong Tan, Ning Zhang, Xiaotong Fu 
    Abstract: A reputation evaluation system, as the publisher and analyser of evaluations, is an important influencing factor for users and sellers in online business. Traditional reputation evaluation systems require a third party to carry out the analysis and publication. However, the third party often exposes the identity of the users and leaks their information. As far as we know, all existing revocable reputation evaluation systems are based on the third-party model. In this paper, we present a new reputation evaluation system based on blockchain. Compared with traditional reputation evaluation systems, our system removes the third party and allows users to modify their own evaluation information. Moreover, the users' privacy can also be protected. The experimental performance demonstrates that the overhead of the system is acceptable, and that the system is feasible and efficient.
    Keywords: reputation system; cryptography protocol; blockchain; linkable ring signature; smart contract.

  • A comparative study on automatic parallelisation tools and methods to improve their usage   Order a copy of this article
    by S. Prema, R. Jehadeesan, B.K. Panigrahi 
    Abstract: Automatic parallelisation assists users in parallelising serial code even without knowledge of the application. Auto-parallelisers focus on loop-level parallelisation and dependence analysis. Apart from the dependences present within a loop, specific coding complexities make code not amenable to parallelisation owing to the limitations of the parallelising tool. To overcome these programming complications, we explore the possibility of minimal manual intervention to make the code amenable to parallelisation. This paper provides a study of currently available auto-parallelisers and their competence in parallelising different programming features. The pitfalls faced by these tools are unveiled and categorised for detailed analysis. A solution-based approach in the form of coding changes circumvents the pitfalls and achieves efficient parallelisation. It also underlines the overall capability of the tools in supporting programming features during parallelisation.
    Keywords: automatic parallelisation; OpenMP parallel programming; coding complexities; loop parallelisation.
    DOI: 10.1504/IJHPCN.2019.10018358
     
  • FollowMe: a mobile crowd-sensing platform for spatial-temporal data Sharing   Order a copy of this article
    by Mingzhong Wang 
    Abstract: Mobile crowd sensing is a promising solution for massive data collection with public participation. Besides the challenges of user incentives and of diversified data sources and quality, the requirement of sharing spatial-temporal data makes the privacy of contributors one of the priorities in the design and implementation of a sound crowdsourcing platform. In this paper, FollowMe is introduced as a use case of a mobile crowd sensing platform to explain possible design guidelines and solutions to address these challenges. The incentive mechanisms are discussed according to both the quantity and quality of users' contributions. Then, a k-anonymity based solution is applied to protect contributors' privacy in both scenarios of trustworthy and untrustworthy crowdsourcers. Thereafter, a reputation-based filtering solution is proposed to detect fake or malicious reports, and finally a density-based clustering algorithm is introduced to find hotspots, which can help the prediction of future events. Although FollowMe is designed around the virtual world of the popular mobile game Pokémon Go, the design guidelines and solutions discussed are applicable to mobile crowd sensing platforms in general.
    Keywords: mobile crowd sensing; spatial-temporal data; crowdsourcing; privacy; k-anonymity; hotspot; reputation; incentive mechanism.
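
The k-anonymity idea used for contributor privacy can be sketched as follows: location reports are generalised to coarse grid cells, and a cell is released only when at least k contributors fall into it. The grid-based generalisation and the parameter names are illustrative assumptions, not the paper's exact mechanism:

```python
from collections import defaultdict

def k_anonymise(reports, k=3, cell=0.1):
    """Generalise (lat, lon) reports to coarse grid cells and suppress any
    cell containing fewer than k contributors -- a minimal k-anonymity sketch."""
    buckets = defaultdict(list)
    for lat, lon in reports:
        key = (round(lat // cell), round(lon // cell))
        buckets[key].append((lat, lon))
    released = []
    for key, group in buckets.items():
        if len(group) >= k:  # each released report is k-indistinguishable
            released.extend([key] * len(group))
    return released
```

A report in a sparsely populated cell is withheld entirely, so an untrustworthy crowdsourcer never sees a location that fewer than k contributors share.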

  • A novel graph compression algorithm for data-intensive scientific networks   Order a copy of this article
    by Xiao Lin, Haizhou Du, Shenshen Chen 
    Abstract: As one of the world's leading scientific and data-intensive computing grids, the Worldwide LHC Computing Grid (WLCG) faces the challenge of improving its computing efficiency and network utilisation. To achieve this goal, WLCG needs an important piece of information: the network topology graphs of participating computing grids. Directly collecting such information from all of the grids, however, would cause high communication overhead and raise many security issues. In this paper, we address these issues by proposing a novel algorithm to compress such a large network topology into a compact, equivalent network topology. We formally define our problem, develop a novel, efficient topology-compression algorithm and evaluate its performance using real-world network topologies. Our results show that our algorithm not only achieves a much higher topology compression ratio than state-of-the-art topology transformation algorithms, but also leads to up to 100x reduction in computation time.
    Keywords: network topology; data-intensive; compression; shortest path tree; weighted graph.

  • Outlier detection of time series with a novel hybrid method in cloud computing   Order a copy of this article
    by Qi Liu, Zhen Wang, Xiaodong Liu, Nigel Linge 
    Abstract: With the development of science and technology, cloud computing has attracted increasing attention in different fields. Meanwhile, outlier detection for data mining in cloud computing plays an increasingly significant role in different research domains, and substantial research effort has been devoted to outlier detection, including distance-based, density-based and clustering-based methods. However, the existing methods require high computation time. Therefore, an improved outlier detection algorithm with higher detection performance is presented. The proposed method is an improved spectral clustering algorithm (SKM++) suited to handling outliers. Pruning the data reduces the computational complexity, and the distance-based Manhattan distance (distm) is combined with it to obtain an outlier score. Finally, the method confirms the outliers by extreme value analysis. This paper validates the presented method by experiments on real data collected by sensors and by comparison against existing approaches. The experimental results show that the proposed method improves on those approaches.
    Keywords: cloud computing; data mining; outlier detection; spectral clustering; Manhattan distance.
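
The distance-based scoring step combined with the clustering can be illustrated as follows: each point is scored by its Manhattan distance to the nearest cluster centroid, and the largest scores mark outlier candidates. This is a generic sketch of the idea, not the SKM++ algorithm itself, and the sample points are invented:

```python
def manhattan(p, q):
    """L1 (Manhattan) distance between two points."""
    return sum(abs(a - b) for a, b in zip(p, q))

def outlier_scores(points, centroids):
    """Score each point by its Manhattan distance to the nearest centroid;
    the largest scores are flagged as outlier candidates."""
    return [min(manhattan(p, c) for c in centroids) for p in points]

points = [(1, 1), (1, 2), (9, 9), (50, 50)]
centroids = [(1, 1.5), (9, 9)]
scores = outlier_scores(points, centroids)
# the extreme point (50, 50) receives by far the highest score
```

Extreme value analysis would then threshold these scores to confirm the outliers.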

  • A priority-based queuing system for P2P-SIP call communications control   Order a copy of this article
    by Mourad Amad, Djamil Aissani, Razika Bouiche, Nouria Madi 
    Abstract: Given the shortcomings of fundamental existing solutions for VoIP communications (e.g. SIP) based on centralisation, both academia and industry have initiated research projects focused on the integration of P2P paradigms into SIP communication systems (P2P-SIP). P2P-SIP builds an overlay network to provide efficient, interoperable and flexible SIP-based services. In this paper, we propose a new model for critical calls, which takes into consideration the priority of some specific requests (e.g. emergency calls). The proposed model is generic with respect to the underlying P2P and physical architectures. For illustration purposes, we consider Gnutella, as a representative of unstructured P2P networks, and Chord, as a representative of structured P2P networks. In order to validate the proposed solution, an M/M/1 queuing model is considered. Performance evaluations show that the preliminary results are globally satisfactory and that our proposed model, under certain conditions, is relevant.
    Keywords: VoIP; P2P-SIP; calls control; priority; queuing systems.
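
For the queuing analysis, the textbook non-preemptive two-class priority M/M/1 formulas give a feel for how prioritising emergency calls trades off against normal calls. These are the standard formulas, not necessarily the exact model validated in the paper, and the rates are invented:

```python
def priority_mm1_waits(lam_hi, lam_lo, mu):
    """Mean waiting times in a non-preemptive two-class priority M/M/1 queue
    (standard textbook formulas; parameter values below are illustrative)."""
    rho_hi, rho_lo = lam_hi / mu, lam_lo / mu
    rho = rho_hi + rho_lo
    assert rho < 1, "the queue must be stable"
    r = rho / mu  # mean residual work an arriving call finds in service
    w_hi = r / (1 - rho_hi)                # emergency (high-priority) calls
    w_lo = r / ((1 - rho_hi) * (1 - rho))  # normal calls
    return w_hi, w_lo

# e.g. 1 emergency and 3 normal calls per unit time, service rate 5
w_hi, w_lo = priority_mm1_waits(1.0, 3.0, 5.0)
```

With these rates the emergency class waits 0.2 time units on average versus 1.0 for normal calls, illustrating the effect of prioritisation.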

  • Budget-aware task scheduling technique for efficient management of cloud resources   Order a copy of this article
    by Mokhtar A. Alworafi, Atyaf Dhari, Sheren A. El-Booz, Suresha Mallappa 
    Abstract: Cloud computing technology offers many services using the pay-per-use concept, where the user gets to specify constraints such as the budget. Task scheduling algorithms are therefore the most preferred option under a budget constraint, which is used to improve metrics such as makespan and cost. In this paper, we propose a Budget-Aware Scheduling (BAS) model to schedule tasks based on the budget constraint. At first, the VMs that meet the budget are labelled and the task priority is determined. Next, the task attributes are checked and the tasks are assigned to the resources that meet the budget constraint, to keep the makespan as low as possible with minimal cost for resource usage. The experiments demonstrate that the proposed model outperforms other algorithms by reducing the average makespan, the average response time and the cost of resources, while increasing resource usage and provider profit.
    Keywords: cloud computing; scheduling; budget constraint; budget-aware scheduling; makespan; provider profit.
    DOI: 10.1504/IJHPCN.2018.10014669
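
A greedy budget-constrained placement in the spirit of the abstract can be sketched as follows: VMs above the per-task budget are filtered out, and each task goes to the affordable VM that finishes it earliest. The record layout and the policy details are illustrative assumptions, not the BAS model itself:

```python
def budget_aware_schedule(tasks, vms, budget):
    """Greedy sketch: keep only VMs whose price fits the budget, then place
    each task (longest first) on the affordable VM that finishes it earliest."""
    affordable = [v for v in vms if v["price"] <= budget]
    ready = {v["name"]: 0.0 for v in affordable}  # time each VM becomes free
    plan = []
    for length in sorted(tasks, reverse=True):
        vm = min(affordable, key=lambda v: ready[v["name"]] + length / v["mips"])
        ready[vm["name"]] += length / vm["mips"]
        plan.append((length, vm["name"]))
    return plan, max(ready.values())

vms = [{"name": "v1", "mips": 10, "price": 2},
       {"name": "v2", "mips": 5, "price": 1},
       {"name": "v3", "mips": 20, "price": 9}]
plan, makespan = budget_aware_schedule([100, 50, 50], vms, budget=5)
```

Here the fast but expensive v3 is excluded by the budget, and the three tasks finish with a makespan of 15 time units on v1 and v2.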
     
  • Sparse reconstruction of piezoelectric signal for phased array structural health monitoring   Order a copy of this article
    by Yajie Sun, Feihong Gu, Sai Ji 
    Abstract: Structural health monitoring technology has been widely used in the detection and identification of plate structure damage. Ultrasonic phased array technology has become an important method for structural health monitoring because of its flexible beam scanning and strong focusing performance. However, a large number of phased array signals will be produced, which creates difficulties in storage, transmission and processing. Therefore, provided the signal is sparse, compressive sensing theory can achieve signal acquisition at a much lower sampling rate than the traditional Nyquist sampling theorem requires. Firstly, a sparse orthogonal transformation is used to obtain the sparse representation. Then, a measurement matrix is used for the projection observation. Finally, a reconstruction algorithm is used for sparse reconstruction. In this paper, experimental verification on an anti-rust aluminium plate is carried out. The experiments show that the proposed method is useful for reconstructing phased array structural health monitoring signals.
    Keywords: structural health monitoring; ultrasonic phased array; compressive sensing; matching pursuit algorithm.
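
A generic reconstruction algorithm of the matching pursuit family named in the keywords is sketched below (orthogonal matching pursuit); it is the textbook procedure for recovering a sparse signal from projections, not the paper's specific pipeline:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily add the dictionary column most
    correlated with the residual, then re-fit the coefficients by least
    squares -- the generic recovery step behind compressive sensing."""
    residual, support = y.copy(), []
    x = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```

Given measurements y = Φx of a sparse x, the loop rebuilds x one support index at a time; with an orthonormal measurement matrix the recovery is exact.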

  • Malicious webpages detection using feature selection techniques and machine learning   Order a copy of this article
    by Dharmaraj Patil, Jayantrao Patil 
    Abstract: Today, the popularity of the World Wide Web (WWW) and its use in online banking, e-commerce and social networking have attracted cyber-criminals who exploit vulnerabilities for illegitimate benefit. Attackers use web pages to mount different types of attack, such as drive-by downloads, phishing, spamming and malware distribution, to exploit legitimate users and misuse their identities. In recent years, many researchers have provided significant and effective solutions to detect malicious web pages; however, owing to the ever-changing nature of cyber attacks, there are still many open issues. This paper proposes a methodology for the effective detection of malicious web pages using feature selection methods and machine learning classifiers. Our methodology consists of three modules: 1) feature selection; 2) training; and 3) classification. To evaluate the proposed methodology, six state-of-the-art feature selection methods and eight supervised machine learning classifiers are used. Experiments are performed on a balanced binary dataset using the feature selection methods and machine learning classifiers. Using feature selection, the classifiers achieved detection accuracy of 94-99% and above, an error rate of 0.19-5.55%, an FPR of 0.006-0.094, an FNR of 0.000-0.013 and minimal system overhead. Our multi-model system, using a majority voting classifier and the Wrapper+Naive Bayes feature selection method with the GreedyStepwise search technique and only 15 features, achieved the highest accuracy of 99.15%, an FPR of 0.017 and an FNR of 0.000. The experimental analysis shows that our approach outperforms 18 well-known anti-virus and anti-malware software products in terms of detection accuracy, with an overall accuracy of 99.15%.
    Keywords: malicious web pages; feature selection; machine learning; web security; cyber security.

  • Greedily assemble tandem repeats for next generation sequences   Order a copy of this article
    by Yongqing Jiang, Jinhua Lu, Jingyu Hou, Wanlei Zhou 
    Abstract: Eukaryotic genomes contain high volumes of intronic and intergenic regions in which repetitive sequences are abundant. These repetitive sequences represent challenges in genomic assignment of short read sequences generated through next generation sequencing and are often excluded in analysis thus losing valuable genomic information. Here we present a method, known as TRA (Tandem Repeat Assembler), for the assembly of repetitive sequences by constructing contigs directly from paired-end reads. Using an experimentally acquired dataset for human chromosome 14, tandem repeats > 200 bp were assembled. Alignment of the contigs to the human genome reference (GRCh38) revealed that 84.3% of tandem repetitive regions were correctly covered. For tandem repeats, this method outperformed state-of-the-art assemblers by generating correct N50 of contigs up to 512 bp.
    Keywords: tandem repeat; assembly; NGS.
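
The N50 metric used above to evaluate the assembled contigs is computed as follows: sort the contig lengths in decreasing order and report the length at which the running total first covers half of the assembled bases:

```python
def n50(contig_lengths):
    """N50: the contig length L such that contigs of length >= L together
    cover at least half of the total assembled bases."""
    total = sum(contig_lengths)
    covered = 0
    for length in sorted(contig_lengths, reverse=True):
        covered += length
        if 2 * covered >= total:
            return length
```

For example, for contig lengths [512, 400, 300, 200, 100] the N50 is 400, since 512 + 400 already covers more than half of the 1512 assembled bases.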

  • GeaBase: a high-performance distributed graph database for industry-scale applications   Order a copy of this article
    by Zhisong Fu, Zhengwei Wu, Houyi Li, Yize Li, Xiaojie Chen, Xiaomeng Ye, Benquan Yu, Xi Hu 
    Abstract: Graph analytics has been gaining traction rapidly in the past few years. It has a wide array of application areas in industry, ranging from e-commerce, social network and recommendation systems to fraud detection and virtually any problem that requires insights into data connections, not just data itself. In this paper, we present GeaBase, a new distributed graph database that provides the capability to store and analyse graph-structured data in real-time at massive scale. We describe the details of the system and the implementation, including a novel update architecture, called Update Center (UC), and a new language that is suitable for both graph traversal and analytics. We also compare the performance of GeaBase to a widely used open-source graph database, Titan. Experiments show that GeaBase is up to 182x faster than Titan in our testing scenarios. We also achieved 22x higher throughput on social network workloads in comparison.
    Keywords: graph database; distributed database; high performance.

  • Parallel big image data retrieval by conceptualised clustering and un-conceptualised clustering   Order a copy of this article
    by Ja-Hwung Su, Chu-Yu Chin, Jyun-Yu Li, Vincent S. Tseng 
    Abstract: Content-based image retrieval is a hot topic that has been studied for a few decades. Although a number of studies have recently been proposed on this topic, it is still hard to achieve high retrieval performance for big image data. To address this issue, in this paper we propose a parallel content-based image retrieval method that efficiently retrieves the relevant images by un-conceptualised clustering and conceptualised clustering. For un-conceptualised clustering, the un-conceptualised image data is automatically divided into a number of sets, while the conceptualised image data is divided into multiple sets by conceptualised clustering. Based on the clustering index, a depth-first-search strategy is performed to retrieve the relevant images by parallel comparisons. Through experimental evaluations on a large image dataset, the proposed approach is shown to improve the performance of content-based image retrieval substantially in terms of efficiency.
    Keywords: content-based image retrieval; un-conceptualised clustering; conceptualised clustering; big data; parallel computation.

  • Exponential stability of big data in networked control systems for a class of uncertain time-delay and packet dropout   Order a copy of this article
    by Huaiyu Zheng, Shigang Liu, Fengjie Sun 
    Abstract: This paper studies the problem of exponential stability for a networked control system with uncertain time-delay and packet dropout. The controller gain, designed to obtain a better result, is assumed to have additive and multiplicative gain variations. The networked control system is assumed to have an uncertain time-delay of no more than one sampling period, as well as packet dropout. Using Lyapunov theory and a linear matrix inequality formulation, we obtain a sufficient condition for stability of the asynchronous dynamical system under all admissible uncertainties and packet dropouts. Finally, a simulation example illustrates the effectiveness of the approach.
    Keywords: networked control systems; data packet dropout; Lyapunov function; linear matrix inequalities; exponential stability.
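
As background for the Lyapunov/LMI analysis, the standard discrete-time formulation (not the paper's exact conditions, which also fold in the delay and dropout terms) seeks a matrix $P \succ 0$ such that a quadratic Lyapunov function decays geometrically along the closed loop $x_{k+1} = \bar{A} x_k$:

```latex
V(x_k) = x_k^{\top} P x_k, \qquad
V(x_{k+1}) \le \beta^{2}\, V(x_k), \quad 0 < \beta < 1,
```

which is equivalent to feasibility of the linear matrix inequality $\bar{A}^{\top} P \bar{A} - \beta^{2} P \prec 0$; a feasible $P$ certifies exponential stability with decay rate $\beta$.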

  • Fault-tolerant flexible lossless cluster compression method for monitoring data in smart grid   Order a copy of this article
    by Zhijian Qu, Hanlin Wang, Xiang Peng, Ge Chen 
    Abstract: Big data in smart grid dispatch monitoring systems is susceptible to processing delays and slow response times. Hence, a new fault-tolerant flexible lossless cluster compression method is proposed. This paper presents a five-tuple (S, D, O, T, M) model and builds a monitoring data processing platform based on Hive. The dispatch host and monitoring servers are deployed in a cloud computing environment, where the data nodes are compressed by the Deflate, Gzip, BZip2 and Lzo lossless compression methods, respectively. Taking the power dispatch automation system of the Long-hai line as an example, experimental results show that the cluster lossless compression ratio of BZip2 is greater than 81%; when the data reach twelve million records, the compression ratio can be further improved to a certain extent by using the RCFile storage format of Hive, which has significant flexibility. Therefore, the new method proposed in this paper can improve the flexibility and fault tolerance of big monitoring data processing in the smart grid.
    Keywords: cloud computing; smart grid; cluster lossless compression; fault-tolerant.

  • Combined bit map representation and its applications to query processing of resource description framework on GPU   Order a copy of this article
    by Chantana Chantrapornchai, Chidchanok Choksuchat 
    Abstract: Resource Description Framework (RDF) is a common representation in semantic web context, including the web data sources and their relations in the URI form. With the growth of data accessible on the internet, the RDF data currently contains millions of relations. Thus, answering a semantic query requires going through large amounts of data relations, which is time consuming. In this work, we present a representation framework, Combined Bit Map (CBM) representation, which compactly represents RDF data while helping to speed up semantic query processing using Graphics Processing Units (GPUs). Since GPUs have limited memory size, without compaction the RDF data cannot be entirely stored in the GPU memory; the CBM structure enables more RDF data to reside in the GPU memory. Since GPUs have many processing elements, their parallel use speeds up RDF query processing. The experimental results show that the proposed representation can reduce the size of RDF data by 70%. Furthermore, the search time on this representation using the GPU is 60% faster than with conventional implementation.
    Keywords: graphic processing unit; semantic web; query processing; parallel processing; bit map.
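
The core idea of a bit-map encoding for RDF can be sketched as follows: dictionary-encode URIs to integers and keep, per (predicate, object) pair, a bitset of subject ids. This toy index illustrates the principle only; the paper's CBM layout and its GPU kernels are more elaborate:

```python
class CBMSketch:
    """Toy bit-map index for RDF triples: URIs are dictionary-encoded to
    integer ids, and each (predicate, object) pair keeps a bitmap of subject
    ids (a Python int used as a bitset)."""

    def __init__(self):
        self.ids = {}      # URI -> integer id
        self.bitmaps = {}  # (predicate_id, object_id) -> subject bitset

    def _id(self, uri):
        return self.ids.setdefault(uri, len(self.ids))

    def add(self, s, p, o):
        key = (self._id(p), self._id(o))
        self.bitmaps[key] = self.bitmaps.get(key, 0) | (1 << self._id(s))

    def subjects(self, p, o):
        bits = self.bitmaps.get((self.ids[p], self.ids[o]), 0)
        return {uri for uri, i in self.ids.items() if bits >> i & 1}
```

For example, after adding the triples (alice, knows, bob) and (carol, knows, bob), `subjects("knows", "bob")` answers the query with two bit tests, which is the kind of compact, uniform operation that maps well onto GPU threads.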

  • A DSL for elastic component-based cloud application   Order a copy of this article
    by Saddam Hocine Hiba, Meriem Belguidoum 
    Abstract: The deployment of component-based applications in cloud system environments is becoming more and more complex. It is expected to provide elasticity in order to allow a deployed application to scale dynamically and meet variation in demand while ensuring a certain level of Quality of Service (QoS). However, there are still some open issues associated with the elasticity management. A conceptual model of elasticity management enabling the description of deployment and application constraints, properties and elasticity strategies at different levels (depending on the internal application architecture or on the cloud infrastructure and platform) in an automatic way is needed. In this paper, we propose a domain-specific language (DSL) based on a metamodel, which precisely specifies three main views: the cloud service models, the automatic elasticity management strategies and the internal cloud application architecture. We illustrate, through a case study, the MAPE-K based approach using different scenarios of automatic elasticity management.
    Keywords: cloud computing; elasticity management; component-based application; MDA; DSL; MAPE-K.

  • Selection of effective probes for an individual to identify P300 signal generated from P300 BCI speller   Order a copy of this article
    by Weilun Wang, Goutam Chakraborty 
    Abstract: P300 is a strong event related potential (ERP) generated in the brain and observed on the scalp when an unusual event happens. To decipher the P300 signal, we have to use the properties of P300 to distinguish P300 signals from non-P300 signals. In this work, we used data collected from a P300 BCI speller with 128 probes. The conventional BCI speller uses eight probes at pre-defined locations on the skull. Though P300 is strong in the parietal region of the brain, the location of the strongest signal varies from person to person. The idea is that, by optimising probe locations for an individual, we could reduce the number of probes required. The way the raw brainwave signals are processed also affects the classification accuracy. We designed an algorithm to analyse the raw signals, and achieved over 81% classification accuracy on average with only three probes, from only one target stimulus and one non-target stimulus.
    Keywords: event related potential; probes reduction; P300 amplitude; brain computer interface.

  • An efficient approach to optimise I/O cost in data-intensive applications using inverted indexes on HDFS splits   Order a copy of this article
    by Narinder Seera, S. Taruna 
    Abstract: Hadoop is prominent for its scalable and distributed computing capabilities coupled with the Hadoop Distributed File System (HDFS). The Hadoop MapReduce framework is extensively used for exploratory big data analytics by business-intelligence applications and machine learning tools. The analytic queries executed by these applications often include multiple ad hoc queries and aggregate queries with selection predicates. The cost of executing these queries grows considerably as the size of the dataset grows. The most effective strategy to improve query performance in such applications is to process only the relevant data, keeping irrelevant data aside, which can be done using index structures. This strategy reduces the overall cost of running applications, which comes from the amount of I/O to be processed and the amount of data to be transferred among the nodes of the cluster. This paper attempts to improve query performance by avoiding full scans of data files, using custom indexes created on HDFS data. The algorithms used in this paper create inverted indexes on HDFS input splits. We show how query processing in MapReduce jobs can benefit in terms of performance by employing these custom indexes. The experiments demonstrate that queries executed using indexed data run 1.5 times faster than traditional queries that do not use any index structures.
    Keywords: inverted index; MapReduce; I/O cost; HDFS; input splits.
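
The split-level inverted index can be sketched as follows: each term maps to the set of input splits containing it, so a selection query opens only the matching splits instead of scanning every file. The record format here is an invented simplification of the paper's Hadoop implementation:

```python
from collections import defaultdict

def build_split_index(splits):
    """Inverted index over input splits: term -> set of split ids containing it."""
    index = defaultdict(set)
    for split_id, records in enumerate(splits):
        for record in records:
            for term in record.split():
                index[term].add(split_id)
    return index

# three input splits, each a list of text records
splits = [["alpha beta", "beta gamma"], ["delta"], ["alpha delta"]]
index = build_split_index(splits)
relevant = index["alpha"]  # only splits 0 and 2 need to be read
```

A MapReduce job with a predicate on `alpha` would then schedule map tasks only for the splits in `relevant`, skipping the rest entirely.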

  • Generic data storage-based dynamic mobile app for standardised electronic health records database   Order a copy of this article
    by Shivani Batra, Shelly Sachdeva, Subhash Bhalla 
    Abstract: Standardisation plays an important role in making healthcare applications adaptable worldwide. It uses archetypes for semantic interoperability. In addition to interoperability, a mechanism to handle future evolution is a primary concern for market sustainability. An application should possess dynamism in terms of the front end (user interface) as well as the back end (database) to build a future-proof system. The current research aims to extend the functionality of prior work on Healthsurance with search-efficient generic storage and validation support. At the application level, the graphical user interface is built dynamically using the knowledge provided by standards in terms of archetypes. At the database level, a generic storage structure is provided with improved searching capabilities to support faster access, to capture dynamic knowledge evolution and to handle sparseness. A standardised format and content help to raise the credibility of the data and maintain a uniform, specific set of constraints used to evaluate the user's health. The architecture proposed in the current research enables the implementation of a mobile app based on the archetype paradigm that can avoid reimplementation of systems, supports migrating databases and allows the creation of future-proof systems.
    Keywords: standardised electronic health records; generic database; sparseness; frequent evolution; mobile application.

  • A novel ECC-based lightweight authentication protocol for internet of things devices   Order a copy of this article
    by Aakanksha Tewari, Brij Gupta 
    Abstract: In spite of being a promising technology that will make our lives much easier, we cannot be oblivious to the fact that the internet of things (IoT) is not safe from online threats and attacks. Thus, along with the growth of the IoT, we also need to work on these aspects. Taking into account the limited resources of these devices, security mechanisms should be of low complexity and should not hinder the actual functionality of the device. In this paper, we propose an ECC-based lightweight authentication protocol for IoT devices that deploy RFID tags at the physical layer. ECC is a very efficient public key cryptography mechanism, as it provides privacy and security with low computation overhead. We also present a security and performance analysis to verify the strength of our proposed approach. We have verified the security and the authentication session execution of our protocol using a Promela model and the SPIN tool.
    Keywords: security; authentication; internet of things; RFID.

  • Coverless information hiding method based on the keyword   Order a copy of this article
    by Hongyong Ji, Zhangjie Fu 
    Abstract: Information hiding is an important technique for securing the transmission of information. However, existing text-based information hiding methods find it difficult to resist the various forms of steganography detection. Coverless text information hiding, a new information-hiding technique, is presented in this paper to ensure data security. The presented method first selects an appropriate natural text as the information carrier from big data, and then generates the plaintext automatically from the natural text and a prior agreement. The method we propose can resist many existing types of steganalysis. The experimental results show that the method is effective.
    Keywords: coverless information hiding; big data; Chinese mathematical expression; word segmentation.
    DOI: 10.1504/IJHPCN.2019.10021104
     
  • Measurement method of carbon dioxide using spatial decomposed parallel computing   Order a copy of this article
    by Nan Liu, Weipeng Jing, Wenlong Song, Jiawei Zhang 
    Abstract: Exploiting the weak absorption of carbon dioxide (CO2) in the ultraviolet and visible (UV-VIS) bands, a measurement method based on spatially decomposed parallel computing of traditional differential optical absorption spectroscopy (DOAS) is proposed to measure the CO2 vertical column concentration in the ambient atmosphere. First, the American Standard Profile is used to define the solar absorption spectrum, and the acquisition of the incident light converged by the telescope is described by observed parameters. On this basis, a spectrometer line model is established. Atmospheric radiative transfer is then simulated using parallel computing, which markedly reduces the computational complexity while balancing the interference that participates in the fitting. Simulation analyses show that the proposed method reduces the computational complexity, cutting run time by 1.18 s compared with IMLM and IMAP-DOAS in the same configuration. The proposed method also increases accuracy, with its inversion error reduced by 5.3% and its residual reduced by 0.8% compared with conventional DOAS. The spatially decomposed parallel computing method has advantages in processing CO2 and can be further used in carbon sink research.
    Keywords: differential optical absorption spectroscopy; DOAS; ultraviolet and visible band; spatial decomposed parallel computing method; vertical column concentration; spectrometer; fitting.
    DOI: 10.1504/IJHPCN.2019.10021105
     
  • Design and implementation of an Openflow SDN controller in NS-3 discrete-event network simulator   Order a copy of this article
    by Ovidiu Mihai Poncea, Andrei Sorin Pistirica, Florica Moldoveanu, Victor Asavei 
    Abstract: The NS-3 simulator comes with the Openflow protocol and a rudimentary controller for classic layer-2 bridge simulations. This controller lacks basic functionality provided by real SDN controllers, making it inadequate for experimentation in the scientific community. In this paper we propose a new controller with an architecture and functionality similar to those of full-fledged controllers, yet simple, extensible and easy to modify - characteristics that suit simulators.
    Keywords: networking; software-defined networking; SDN controller; NS-3; simulators.
    DOI: 10.1504/IJHPCN.2019.10021106
     
  • Exploring traffic condition based on massive taxi trajectories   Order a copy of this article
    by Dongjin Yu, Jiaojiao Wang, Ruiting Wang 
    Abstract: As increasing volumes of urban traffic data become available, more and more opportunities arise for data-driven analysis that can lead to improvements in traffic conditions. In this paper, we focus on a particularly important type of urban traffic dataset: taxi trajectories. With GPS devices installed, moving taxis become valuable sensors of traffic conditions. However, analysing these GPS data presents many challenges owing to their complex nature. We propose a new approach to exploring traffic conditions based on massive taxi trajectories. First, we match the locations of moving taxis to the road network according to the recorded GPS data. We then treat the trajectory of each moving taxi as a document and identify traffic topics through textual topic-modelling techniques. Finally, we cluster trajectories based on these traffic topics to explore the traffic conditions. The effectiveness of our approach is illustrated by a case study with a large trajectory dataset acquired from 3,743 taxis in a city.
    Keywords: vehicle trajectory; map matching; traffic regions; latent Dirichlet allocation; LDA; trajectory clustering; visualisation.
    DOI: 10.1504/IJHPCN.2016.10011059
     
  • 1.25 Gbits/s-message experimental transmission utilising chaos-based fibre-optic secure communications over 143 km   Order a copy of this article
    by Hongxi Yin, Qingchun Zhao, Dongjiao Xu, Xiaolei Chen, Ying Chang, Hehe Yue, Nan Zhao 
    Abstract: Chaotic optical secure communications (COSC) are a fast hardware encryption technique at the physical layer. For practical applications, high-speed long-haul message transmission is always the goal. In this paper, we report experimentally a long-haul COSC scheme in which the bit rate reaches 1.25 Gbits/s and the transmission distance 143 km. In addition, a distinct low-cost advantage is obtained by using off-the-shelf optical components, with no dispersion-compensating fibre (DCF) or forward error correction (FEC) required. To the best of our knowledge, this is the first experimental demonstration of the longest transmission distance in a COSC system. Our results show that high-quality chaotic synchronisation can be maintained in both the time and frequency domains, even after 143 km of transmission; the bandwidth of the transmitter is enlarged by external optical injection, which enables 2.5 Gbits/s secure message transmission up to 25 km. In addition, the effects of device parameters on COSC are discussed in supplementary detail.
    Keywords: long-haul; high-speed; chaotic optical secure communications; semiconductor laser.
    DOI: 10.1504/IJHPCN.2019.10021102
     
  • Optimisation of ANFIS using mine blast algorithm for predicting strength of Malaysian small medium enterprises   Order a copy of this article
    by Kashif Hussain, Mohd. Najib Mohd. Salleh, Abdul Mutalib Leman 
    Abstract: The adaptive neuro-fuzzy inference system (ANFIS) is popular among fuzzy inference systems, as it is widely applied in business and economics. Many studies have trained ANFIS parameters using metaheuristic algorithms, but very few have tried optimising its rule-base. The auto-generated rules, produced by grid partitioning, comprise both potential and weak rules, which increases the complexity of the ANFIS architecture and adds computational cost. Pruning weak rules therefore optimises the rule-base. However, reducing the complexity and increasing the accuracy of an ANFIS network needs an effective training and optimisation mechanism. This paper proposes an efficient technique for optimising the ANFIS rule-base without compromising accuracy. The recently developed mine blast algorithm (MBA) is used to optimise ANFIS. The ANFIS optimised by MBA is employed to predict the strength of Malaysian small and medium enterprises (SMEs). Results show that MBA optimised the ANFIS rule-base and trained its parameters more efficiently than the genetic algorithm (GA) and particle swarm optimisation (PSO).
    Keywords: adaptive neuro-fuzzy inference system; neuro-fuzzy; fuzzy system; mine blast algorithm; rule optimisation; small and medium enterprises; SME; Malaysia.
    DOI: 10.1504/IJHPCN.2019.10021103
     
  • Keyword guessing on multi-user searchable encryption   Order a copy of this article
    by Zhen Li, Minghao Zhao, Han Jiang, Qiuliang Xu 
    Abstract: Multi-user searchable encryption enables a client to perform keyword search over encrypted data while supporting authorisation management. Most such schemes are constructed using public key encryption. However, public key encryption with keyword search is vulnerable to the keyword guessing attack. Consequently, a secure channel is necessarily involved for transferring secret information, which imposes a severe extra burden. This vulnerability is recognised in traditional searchable encryption, but whether it also exists in the multi-user setting has remained undecided. In this paper, we first point out that the keyword guessing attack is also a problem in multi-user searchable encryption without the supposed secure channel. Through an in-depth investigation of several recently proposed schemes and a simulation of the keyword guessing attack on them, we show that none of these schemes can resist the attack. We give a comprehensive security definition and pose some open problems.
    Keywords: cloud computing; keyword guessing; searchable encryption; multi-user.
    DOI: 10.1504/IJHPCN.2017.10007944
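The attack discussed above exploits the fact that keywords are drawn from a small dictionary, so anyone able to run the scheme's public test can brute-force a captured trapdoor offline. A toy Python sketch of that logic (a salted hash stands in for the real public-key test; all names are illustrative, not from any scheme in the paper):

```python
import hashlib

def trapdoor(keyword, server_public="pk"):
    # Stand-in for a searchable-encryption trapdoor: deterministic,
    # publicly testable, keyed only by public information.
    return hashlib.sha256((server_public + keyword).encode()).hexdigest()

def guess_keyword(captured_trapdoor, dictionary, server_public="pk"):
    # Offline exhaustive search over a low-entropy keyword dictionary.
    for candidate in dictionary:
        if trapdoor(candidate, server_public) == captured_trapdoor:
            return candidate
    return None  # keyword not in the attacker's dictionary
```

The point of the illustration: because the test function needs no secret, the attack succeeds whenever the keyword space is small enough to enumerate.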
     
  • Semi-supervised dimensionality reduction based on local estimation error   Order a copy of this article
    by Xianfa Cai, Jia Wei, Guihua Wen, Zhiwen Yu, Yongming Cai, Jie Li 
    Abstract: The construction of a graph is extremely important in graph-based semi-supervised learning. However, it is unstable owing to its sensitivity to the choice of the neighbourhood parameter and the inaccuracy of the edge weights. Inspired by the good performance of local learning methods, this paper proposes a semi-supervised dimensionality reduction based on local estimation error (LEESSDR) algorithm, which applies local learning projections (LLP) to semi-supervised dimensionality reduction. The algorithm sets the edge weights by minimising the local estimation error and can effectively preserve both the global and the local geometric structure of the data. Since LLP does not require its input space to be locally linear, even if it is nonlinear, LLP maps it to a feature space using kernel functions and then obtains the local estimation error in the feature space. The effectiveness of the proposed method is verified on two popular face databases, with promising classification accuracy and favourable robustness.
    Keywords: local learning projections; LLP; side-information; semi-supervised learning; graph construction.
    DOI: 10.1504/IJHPCN.2019.10021107
     
  • Securing SDN controller and switches from attacks   Order a copy of this article
    by Udaya Tupakula, Vijay Varadharajan, Preeti Mishra 
    Abstract: In this paper, we propose techniques for securing the SDN controller and the switches from malicious end-host attacks. Our model makes use of trusted computing and introspection-based intrusion detection to deal with attacks in SDN. We have developed a security application for the SDN controller that validates the state of the switches in the data plane and enforces security policies to monitor virtual machines at the system-call level and detect attacks. We have developed a feature extraction method, named vector of n-grams, which represents the traces efficiently without losing the ordering of system calls. The flows from malicious hosts are dropped before they are processed by the switches or forwarded to the SDN controller. Hence, our model protects the switches and the SDN controller from attacks.
    Keywords: SDN security; trusted computing; virtual machine introspection; VMI; machine learning; security attack.
    DOI: 10.1504/IJHPCN.2019.10021108
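The "vector of n-grams" feature extraction described above can be sketched as follows (a hypothetical minimal version, not the authors' code): each length-n window of consecutive system calls becomes one feature, counted into a fixed-order vector, so call ordering is preserved.

```python
from collections import Counter

def ngram_vector(trace, n=3, vocabulary=None):
    """trace: list of system-call names in execution order.
    Returns (count vector, vocabulary used for its ordering)."""
    grams = [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]
    counts = Counter(grams)
    if vocabulary is None:
        vocabulary = sorted(counts)  # fixed order so vectors are comparable
    return [counts.get(g, 0) for g in vocabulary], vocabulary
```

In practice the vocabulary would be fixed from training traces, so vectors from different virtual machines share the same feature order.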
     
  • Fast graph centrality computation via sampling: a case study of influence maximisation over OSNs   Order a copy of this article
    by Rui Wang, Min Lv, Zhiyong Wu, Yongkun Li, Yinlong Xu 
    Abstract: Graph centrality computation, e.g., finding the most important vertices in a graph, can incur a high time cost as graphs grow. To address this challenge, this paper presents a sampling-based framework to speed up the computation of graph centrality. As a use case, we study the problem of influence maximisation (IM), which asks for the k most influential nodes in a graph, those triggering the largest influence spread, and present an IM-RWS algorithm. We experimentally compare IM-RWS with the state-of-the-art influence maximisation algorithms IMM and IM-RW, and the results show that our solution brings a significant improvement in efficiency as well as some improvement in empirical accuracy. In particular, our algorithm can solve the influence maximisation problem in graphs containing millions of nodes within tens of seconds, with an even better result in terms of influence spread.
    Keywords: random walk; sampling; graph centrality; online social networks; influence maximisation.
    DOI: 10.1504/IJHPCN.2019.10021109
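As a minimal illustration of the sampling idea (not the IM-RWS algorithm itself), node importance can be estimated from the visit frequencies of many short random walks, with the top-k most-visited nodes taken as seed candidates:

```python
import random
from collections import Counter

def rw_centrality(graph, walks=1000, length=5, seed=0):
    """graph: dict node -> list of out-neighbours.
    Returns a Counter of visit frequencies over sampled random walks."""
    rng = random.Random(seed)
    nodes = list(graph)
    visits = Counter()
    for _ in range(walks):
        v = rng.choice(nodes)        # random start node
        for _ in range(length):
            visits[v] += 1
            if not graph[v]:         # dead end: stop this walk
                break
            v = rng.choice(graph[v])
    return visits

def top_k_seeds(graph, k, **kw):
    # Greedy pick of the k most frequently visited nodes.
    return [v for v, _ in rw_centrality(graph, **kw).most_common(k)]
```

On a star graph the centre dominates the visit counts, matching the intuition that sampling concentrates on high-centrality vertices.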
     
  • Non-intrusive load monitoring and its challenges in a NILM system framework   Order a copy of this article
    by Qi Liu, Min Lu, Xiaodong Liu, Nigel Linge 
    Abstract: With increasing energy demand and electricity prices, researchers are showing more and more interest in residential load monitoring. To feed back each individual appliance's energy consumption rather than the whole-house consumption, non-intrusive load monitoring (NILM) is a good way for residents to respond to time-of-use pricing and save electricity. In this paper, we discuss the system framework of NILM and analyse the challenges in every module. In addition, we study and compare the public datasets and accuracy metrics of non-intrusive load monitoring techniques.
    Keywords: non-intrusive load monitoring; data acquisition; event detection; feature extraction; load disaggregation.
    DOI: 10.1504/IJHPCN.2019.10021110
     
  • A similarity algorithm based on hamming distance used to detect malicious users in cooperative spectrum sensing   Order a copy of this article
    by Libin Xu, Pin Wan, Yonghua Wang, Ting Liang 
    Abstract: Collaborative spectrum sensing (CSS) methods have been proposed to improve sensing performance. However, studies rarely take security into account, and CSS methods are vulnerable to potential attacks from malicious users (MUs). Most existing MU detection methods are reputation-based and become incapable when the attack model is intelligent. In this paper, a Hamming distance check (HDC) is proposed to detect MUs. The Hamming distance between all pairs of sensing nodes is calculated. Because the reports from MUs differ from those of honest users (HUs), we can find the MUs and exclude them from the fusion process. A new trust factor (TF) is proposed to increase the weight of trustworthy nodes in the final decision. The proposed algorithm can effectively detect MUs without prior knowledge. In addition, it performs better than existing approaches.
    Keywords: malicious user; attack; Hamming distance; cognitive radio networks.
    DOI: 10.1504/IJHPCN.2019.10021111
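The check described above can be illustrated with a short Python sketch (the threshold and report format are illustrative, not from the paper): each node submits a binary decision vector over several sensing rounds, and a node whose average Hamming distance to the other nodes is unusually large is flagged as malicious.

```python
def hamming(a, b):
    # Number of positions where the two binary reports disagree.
    return sum(x != y for x, y in zip(a, b))

def flag_malicious(reports, threshold):
    """reports: dict node -> list of 0/1 sensing decisions per round.
    Flags nodes whose mean distance to all others exceeds `threshold`."""
    nodes = list(reports)
    flagged = []
    for u in nodes:
        avg = sum(hamming(reports[u], reports[v])
                  for v in nodes if v != u) / (len(nodes) - 1)
        if avg > threshold:
            flagged.append(u)
    return flagged
```

Honest nodes observing the same channel produce similar vectors, so their pairwise distances stay small while an attacker's reports stand out.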
     

Special Issue on: IEEE TrustCom-16 Trust Computing and Communications

  • A trust-based evaluation model for data privacy protection in cloud computing   Order a copy of this article
    by Wang Yubiao, Wen Junhao, Zhou Wei 
    Abstract: To address quality and privacy protection problems, this paper proposes a trust-based evaluation model for data privacy protection in cloud computing (TEM-DPP). To make the final trust evaluation values more practical, the model introduces a comprehensive trust evaluation, composed of direct trust and recommendation trust. Service attributes and a combined-weights method are used to calculate the direct trust, reflecting its timeliness and rationality. To protect data security, we propose a data protection method based on a normal cloud model for data privacy protection. The customer satisfaction, decay time, transaction amount and a penalty factor are then used to update the direct trust. Simulation results show that the cloud service trust evaluation model can not only adapt to dynamic changes in the environment, but also ensure the actual quality of service. It can improve service requesters' satisfaction and has a certain resilience against fraudulent entities.
    Keywords: cloud service; trust; privacy-aware; evaluation model.

Special Issue on: Recent Advances in Security and Privacy for Big Data

  • A mathematical model for intimacy-based security protection in social networks without violation of privacy   Order a copy of this article
    by Hui Zheng, Jing He, Yanchun Zhang, Junfeng Wu 
    Abstract: Protection against spam, fraud and phishing is becoming increasingly important in social network applications. Online social network providers such as Facebook and MySpace collect data from users, including their relationship and education statuses. While these data are used to provide users with convenient services, improper use of them, such as spam advertisement, can be annoying and even harmful. Worse, if these data are stolen or illegally gathered, users may be exposed to fraud and phishing. To protect individual privacy, we employ an intimacy algorithm that does not itself violate privacy, and we detect spammers by identifying unusual intimacy patterns. In this paper we therefore propose a mathematical model for intimacy-based security protection in social networks without violation of privacy. The feasibility and effectiveness of our model are verified both theoretically and experimentally.
    Keywords: social network; privacy protection; intimacy; spam detection.

Special Issue on: CloudTech'17 Advances in Big Data and Cloud Computing

  • Adaptive and concurrent negotiation for an efficient cloud provisioning   Order a copy of this article
    by Aya Omezzine, Narjès Bellamine, Said Tazi, Gene Cooperman 
    Abstract: Business providers offer highly scalable applications to end-users. To run users' requests efficiently, business providers must make the right decisions about request placement on virtual resources. Efficient provisioning that satisfies users and optimises the provider's profit becomes a challenging task owing to the dynamicity of the cloud, and becomes harder still under inflexible take-it-or-leave-it service level agreements. Negotiation-based approaches are promising solutions for dealing with such conflicts: through negotiation, users and providers may find a satisfactory schedule. However, reaching a compromise between the two parties is a cumbersome task owing to workload constraints at negotiation time, and the majority of existing approaches reject users' requests when negotiation fails. In this paper, we propose a novel adaptive negotiation approach that keeps renegotiating concurrently with those users as the workload changes. Experiments show that our approach maximises the provider's profit, increases the number of accepted users, and improves customer satisfaction.
    Keywords: cloud computing; cloud provisioning; service level agreement; user satisfaction; adaptive negotiation; renegotiation.

Special Issue on: ICCIDS 2018 High-Performance Computing for Computational Intelligence

  • Wavelet-based arrhythmia detection of ECG signals and performance measurement using diverse classifiers   Order a copy of this article
    by Ritu Singh, Rajesh Mehta, Navin Rajpal 
    Abstract: The diagnosis of cardiovascular arrhythmias needs accurate predictive models to test for abnormalities in the functioning of the heart. The proposed work presents a comparative analysis of different classifiers, such as k-nearest neighbour (KNN), support vector machine (SVM), back-propagation neural network (BPNN), feed-forward neural network (FFNN) and radial basis function neural network (RBFNN), combined with the discrete wavelet transform (DWT), to assess electrocardiograms (ECGs). The ECG record sets of the MIT-BIH dataset are employed to test the efficacy of the different classifiers. For the DWT, different wavelets such as Daubechies, Haar, symlet, biorthogonal, reverse biorthogonal and coiflet are used for feature extraction, and their performances are compared. The foremost Daubechies wavelet is demonstrated in detail in this paper. SVM and RBFNN showed 100% accuracy with reduced-dataset testing times of 0.0025 s and 0.0174 s, respectively, whereas BPNN, FFNN and KNN provided 95.5%, 97.7% and 84.0% accuracy with testing times of 0.0176 s, 0.0189 s and 0.0033 s, respectively. The proposed scheme enables efficient selection of the wavelet and best-suited classifier for timely analysis of cardiac disturbances.
    Keywords: ECG; MIT-BIH; DWT; BPNN; FFNN; KNN; RBFNN; SVM.
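As a hint of the feature-extraction step above, here is a single-level Haar DWT in plain Python (the paper compares several wavelet families; Haar is simply the easiest to write out). The approximation and detail coefficients would serve as the classifier features.

```python
import math

def haar_dwt(signal):
    """One Haar decomposition level; signal length must be even.
    Returns (approximation coefficients, detail coefficients)."""
    assert len(signal) % 2 == 0
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail
```

The 1/sqrt(2) normalisation keeps the transform orthonormal, so the signal's energy is preserved across the two coefficient bands.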

  • A novel fuzzy convolutional neural network for recognition of handwritten Marathi numerals   Order a copy of this article
    by Deepak Mane, Uday Kulkarni 
    Abstract: Pattern classification is the task of designing a method that maps inputs to their matching output classes. A novel fuzzy convolutional neural network (FCNN) is proposed in this paper for recognition of handwritten Marathi numerals. FCNN uses fuzzy set hyperspheres as a pattern classifier, mapping inputs to classes represented by combinations of fuzzy set hyperspheres. Given labelled classes, the designed model proved efficient, with 100% accuracy on the training set. Two major factors improve the learning algorithm of FCNN: first, dominant features are extracted from numeral image patterns using a customised convolutional neural network (CCNN); second, supervised clustering creates new fuzzy hyperspheres based on the distance-measurement learning rules of the fuzzy hypersphere neural network (FHSNN), with pattern classification done by the fuzzy membership function. Performance evaluation of the model on large datasets of Marathi numerals shows it to be superior to the traditional CNN model. The obtained results demonstrate that FCNN learning rules can serve as a useful representation for a variety of pattern classification problems.
    Keywords: fuzzy hypersphere neural network; convolutional neural network; pattern classification; supervised clustering.

Special Issue on: CSS 2018 Smart Monitoring and Protection of Data-Intensive Cyber-Physical Critical Infrastructures

  • Security in the internet of things: botnet detection in software-defined networks by deep learning techniques   Order a copy of this article
    by Ivan Letteri, Giuseppe Della Penna, Giovanni De Gasperis 
    Abstract: The diffusion of the Internet of Things (IoT) is making cyber-physical smart devices an element of everyone's life, but it also exposes them to malware designed for conventional web applications, such as botnets. Botnets are among the most widespread and dangerous kinds of malware, so their detection is an important task. Many works in this context exploit general malware detection techniques and rely on old or biased traffic samples, making their results not completely reliable. Moreover, software-defined networking (SDN), which is increasingly replacing conventional networking, especially in the IoT, limits the features that can be used to detect botnets. We propose a botnet detection methodology based on deep learning techniques, tested on a new, SDN-specific dataset with high (up to 97%) classification accuracy. Our algorithms have been implemented on two state-of-the-art frameworks, Keras and TensorFlow, so we are confident that our results are reliable and easily reproducible.
    Keywords: cyber-physical devices; internet of things; software-defined networking; botnet detection; machine learning; neural networks; deep learning; network security.

Special Issue on: Advances in Information Security and Networks

  • Dynamic combined with static analysis for mining network protocols' hidden behaviour
    by YanJing Hu 
    Abstract: Unknown protocols' hidden behaviour is becoming a new challenge in network security. This paper takes both the captured messages and the binary code that implements the protocol as the objects of study. Dynamic taint analysis combined with static analysis is used for protocol analysis. First, we monitor and analyse the protocol program as it parses a message on HiddenDisc, a virtual-platform prototype system developed by ourselves, and record the protocol's public behaviour; then, based on our proposed hidden-behaviour perception and mining algorithm, we statically analyse the protocol's hidden-behaviour trigger conditions and hidden-behaviour instruction sequences. According to the trigger conditions, new protocol messages carrying the sensitive information are generated, and the hidden behaviours are executed by dynamic triggering. The HiddenDisc prototype system can sense, trigger and analyse a protocol's hidden behaviour. Based on the statistical analysis results, we propose a method for evaluating protocol execution security. The experimental results show that the present method can accurately mine a protocol's hidden behaviour and can evaluate an unknown protocol's execution security.
    Keywords: protocol reverse analysis; protocols' hidden behaviour; protocol message; protocol software.