Forthcoming articles

International Journal of Grid and Utility Computing

International Journal of Grid and Utility Computing (IJGUC)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer feeds that provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Grid and Utility Computing (82 papers in press)

Regular Issues

  • A novel test case generation method based on program structure diagram   Order a copy of this article
    by Mingcheng Qu, XiangHu Wu, YongChao Tao, GuanNan Wang, ZiYu Dong 
    Abstract: At present, embedded software testing suffers from lag, a lack of visualisation and low efficiency, and depends heavily on the testers' manual test design, so neither the quality of the test cases nor the quality of the testing can be guaranteed. In this paper, a software program structure diagram model is established and verified, and the test points are planned manually. Finally, we fill in the contents of the test items, generate the corresponding set of test cases according to the algorithm, and save them into a database for management. This method can improve the reliability and efficiency of testing, ensure visual tracking and management of test cases, and provide strong support for test case planning and generation.
    Keywords: program structure diagram; test item planning; test case generation.

  • A dynamic cloud service selection model based on trust and service level agreements in cloud computing   Order a copy of this article
    by Yubiao Wang, Junhao Wen, Quanwang Wu, Lei Guo, Bamei Tao 
    Abstract: For high-quality and trusted service selection problems, we propose a dynamic cloud service selection model (DCSSM). Cloud service resources are divided into different service levels by Service-Level Agreement Management (SLAM), and each SLAM instance manages some cloud service registration information. In order to make the final trust evaluation values more practical, the model computes a comprehensive trust value consisting of direct trust and indirect trust. First, combined weights are formed from a subjective weight and an objective weight; the subjective weight is calculated using an analytic hierarchy process method based on rough sets. Direct trust also takes transaction time and transaction amount into account, yielding an accurate direct trust value. Second, indirect trust considers the similarity of users' trust evaluations, and comprises the indirect trust of friends and the indirect trust of strangers. Finally, when a transaction is completed, a dynamic update of direct trust is performed. The model is simulated using CloudSim and compared with three other methods; the experimental results show that DCSSM performs better than all three.
    Keywords: dynamic cloud service; trust; service-level agreement; selection model; combining weights.
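    The trust combination the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the decay factor, the weight values and the data layout are all assumptions made for the example.

```python
import math

def direct_trust(transactions, now, decay=0.1):
    """Time-decayed, amount-weighted average of past ratings.

    Each transaction is (rating in [0, 1], amount, timestamp); newer and
    larger transactions contribute more, as the abstract suggests.
    """
    num = den = 0.0
    for rating, amount, t in transactions:
        w = amount * math.exp(-decay * (now - t))  # recency x amount weight
        num += w * rating
        den += w
    return num / den if den else 0.0

def comprehensive_trust(direct, indirect_friends, indirect_strangers,
                        w_direct=0.6, w_friends=0.3, w_strangers=0.1):
    """Combine direct and indirect trust with (assumed) combined weights."""
    return (w_direct * direct
            + w_friends * indirect_friends
            + w_strangers * indirect_strangers)
```

    The paper derives the subjective part of these weights with a rough-set analytic hierarchy process; here they are simply fixed constants.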

  • Research on regression test method based on multiple UML graphic models   Order a copy of this article
    by Mingcheng Qu, Xianghu Wu, Yongchao Tao, Guannan Wang, Ziyu Dong 
    Abstract: Most existing graph-based regression testing schemes are aimed at a single given UML graph and are not flexible in regression testing. This paper proposes a regression testing method that applies to modifications of multiple kinds of UML graphical models. The parts of the modified UML model structure that must be retested are determined by domain-of-influence analysis of the effect of the modification on the range of test cases generated from the model, and test cases are then regenerated automatically. The method is shown to achieve a high logical coverage rate. Because it fully considers all kinds of dependencies, it does not limit the types of UML model modification, and offers greater openness and comprehensiveness.
    Keywords: regression testing; multiple UML graphical models; domain analysis.

  • Logic programming as a service in multi-agent systems for the Internet of Things   Order a copy of this article
    by Roberta Calegari, Enrico Denti, Stefano Mariani, Andrea Omicini 
    Abstract: The widespread diffusion of low-cost computing devices, along with improvements of cloud computing platforms, is paving the way towards a whole new set of opportunities for Internet of Things (IoT) applications and services. Varying degrees of intelligence are required for supporting adaptation and self-management: yet, they should be provided in a lightweight, easy to use and customise, highly-interoperable way. In this paper we explore Logic Programming as a Service (LPaaS) as a novel and promising re-interpretation of distributed logic programming in the IoT era. After introducing the reference context and motivating scenarios of LPaaS as an effective enabling technology for intelligent IoT, we define the LPaaS general architecture, and discuss two different prototype implementations - as a web service and as an agent in a multi-agent system (MAS), both built on top of the tuProlog system, which provides the required interoperability and customisation. We finally showcase the LPaaS potential through two case studies, designed as simple examples of the motivating scenarios.
    Keywords: IoT; logic programming; multi-agent systems; pervasive computing; LPaaS; artificial intelligence; interoperability.

  • Cognitive workload management on globally interoperable network of clouds   Order a copy of this article
    by Giovanni Morana, Rao Mikkilineni, Surendra Keshan 
    Abstract: A new computing paradigm using distributed intelligent managed elements (DIME) and the DIME network architecture (DNA) is used to demonstrate a globally interoperable public and private cloud network deploying cloud-agnostic workloads. The workloads are cognitive and capable of autonomously adjusting their structure to maintain the desired quality of service. DNA is designed to provide a control architecture for workload self-management of non-functional requirements, addressing rapid fluctuations in either workload demand or available resources. Using DNA, a transaction-intensive three-tier workload is migrated from a physical server to a virtual machine hosted in a public cloud without interrupting the service transactions. After migration, cloud-agnostic inter-cloud and intra-cloud auto-scaling, auto-failover and live migration are demonstrated, again without disrupting the user experience or losing transactions.
    Keywords: cloud computing; datacentre; manageability; DIME; DIME network architecture; cloud agnostic; cloud native.

  • Towards autonomous creation of service chains on cloud markets   Order a copy of this article
    by Benedikt Pittl, Irfan Ul-Haq, Werner Mach, Erich Schikuta 
    Abstract: Today, cloud services such as virtual machines are traded directly at fixed prices between consumers and providers on platforms such as Amazon EC2. The development of Amazon's EC2 spot market shows that dynamic cloud markets are gaining popularity. Hence, autonomous multi-round bilateral negotiations, also known as bazaar negotiations, are a promising approach for trading cloud services on future cloud markets, and they play a vital role in composing service chains. Based on a formal description, we characterise such service chains and derive different negotiation types, which we implement in a simulation environment and evaluate by executing different market scenarios. To this end, we developed three negotiation strategies for cloud resellers. Our simulation results show that cloud resellers, as well as their negotiation strategies, have a significant impact on the resource allocation of cloud markets: very high as well as very low markups reduce a reseller's profit.
    Keywords: cloud computing; cloud marketplace; IaaS; bazaar negotiation; SLA negotiation; cloud service chain; cloud reseller; multi-round negotiation; cloud economics.

  • Cache replication for information centric networks through programmable networks   Order a copy of this article
    by Erick B. Nascimento, Edward David Moreno, Douglas D. J. De Macedo 
    Abstract: Software Defined Networking (SDN) is an approach that decouples the control function from the data transmission function and makes the network directly programmable. In parallel, Information Centric Networking (ICN) promotes the use of information through in-network caching and multipart communication. Owing to their programmable characteristics, these projects are developed to make networks flexible, solve traffic problems, and transfer content through a scalable network structure with simple management. The premise of SDN that supports ICN, besides decoupling, is the flexibility of network configurations to reduce segment overhead caused by the retransmission of duplicate files over the same segment. Based on this, an architecture is designed to provide reliable content that can be replicated in the network (Trajano et al., 2016). The ICN architecture of this proposal stores information in a logical volume for later access and can connect with remote controllers to store files reliably in cloud environments.
    Keywords: software defined networking; information centric network; programmability; flexibility; management; storage; controller.

  • Winning the war on terror: using social networking tools and GTD to analyse the regularity of terrorism activities   Order a copy of this article
    by Xuan Guo, Fei Xu, Zhi-ting Xiao, Hong-guo Yuan, Xiaoyuan Yang 
    Abstract: In order to grasp the temporal and spatial characteristics and activity patterns of terrorist attacks in China, and thereby devise effective counter-terrorism strategies, two different intelligence sources were analysed by means of social network analysis and mathematical statistics. First, using the social network analysis tool ORA, we build a terrorist-activity meta-network from text information, extract the four categories of persons, places, organisations and time, and analyse the characteristics of the network's key nodes; the meta-network is then decomposed into four binary subnets (person-organisation, person-location, organisation-location and organisation-time) to analyse the temporal and spatial characteristics of terrorist activities. Next, using the GTD dataset to analyse the characteristics of terrorist attacks in China from 1989 to 2015, the geo-spatial and temporal distributions of terrorist events are summarised. Combined with data visualisation, the earlier results of the social network analysis of open-source text are verified and compared. Finally, the paper puts forward some suggestions on counter-terrorism prevention strategy in China.
    Keywords: social network analysis; GTD; meta-network; ORA; counter-terrorism; terrorism activities.

  • Model-based deployment of secure multi-cloud applications   Order a copy of this article
    by Valentina Casola, Alessandra De Benedictis, Massimiliano Rak, Umberto Villano, Erkuden Rios, Angel Rego, Giancarlo Capone 
    Abstract: The wide diffusion of cloud services, offering functionalities in different application domains and addressing different computing and storage needs, opens up the possibility of building multi-cloud applications that rely upon heterogeneous services offered by multiple cloud service providers (CSPs). This flexibility not only enables an efficient usage of existing resources, but also makes it possible, in some cases, to cope with specific requirements in terms of security and performance. On the downside, resorting to multiple CSPs requires a huge amount of time and effort for application development. The MUSA framework enables a DevOps approach to developing multi-cloud applications with the desired security Service Level Agreements (SLAs). This paper describes the MUSA Deployer models and tools, which aim at decoupling multi-cloud application modelling and development from application deployment and cloud service provisioning. With MUSA tools, application designers and developers can easily express and evaluate security requirements and, subsequently, deploy the application automatically by acquiring cloud services and by installing and configuring software components on them.
    Keywords: cloud security; multi-cloud deployment; automated deployment.

  • Improving the MXFT scheduling algorithm for a cloud computing context   Order a copy of this article
    by Paul Moggridge, Na Helian, Yi Sun, Mariana Lilley, Vito Veneziano, Martin Eaves 
    Abstract: In this paper, the Max-min Fast Track (MXFT) scheduling algorithm is improved and compared against a selection of popular algorithms. The improved versions of MXFT are called Min-min Max-min Fast Track (MMMXFT) and Clustering Min-min Max-min Fast Track (CMMMXFT); the key difference is using min-min for the fast track. Experimentation revealed that, despite min-min's characteristic of prioritising small tasks at the expense of overall makespan, the overall makespan was not adversely affected, and the benefits of prioritising small tasks were identified in MMMXFT. Experiments were conducted using a simulator, with the exception of one real-world experiment, which identified challenges faced by algorithms that rely on accurate execution time prediction.
    Keywords: cloud computing; scheduling algorithms; max-min.
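    For readers unfamiliar with the min-min heuristic that MMMXFT adopts for its fast track, a minimal sketch follows. It is illustrative only; MMMXFT itself adds the fast-track and clustering machinery described in the paper.

```python
def min_min(task_times, n_machines):
    """Classic min-min: task_times[t][m] is the execution time of task t
    on machine m. Repeatedly schedule the task with the smallest minimum
    completion time, which favours small tasks."""
    ready = [0.0] * n_machines           # machine ready times
    unscheduled = set(range(len(task_times)))
    schedule = []
    while unscheduled:
        best = None                      # (completion time, task, machine)
        for t in unscheduled:
            for m in range(n_machines):
                c = ready[m] + task_times[t][m]
                if best is None or c < best[0]:
                    best = (c, t, m)
        c, t, m = best
        ready[m] = c
        unscheduled.remove(t)
        schedule.append((t, m))
    return schedule, max(ready)          # assignments and makespan
```

    On a toy instance of three tasks and two machines, the smallest tasks are scheduled first, which is exactly the behaviour the abstract notes can hurt makespan in general.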

  • Novel algorithms, emergent approaches and applications for distributed computing   Order a copy of this article
    by Ilias Savvas, Douglas Dyllon Jeronimo De Macedo 
    Abstract: The track on the Convergence of Distributed Clouds, Grids and their Management (CDCGM) started in 2009 to discuss the evolution of cloud computing with respect to infrastructure providers, who began creating next-generation, service-friendly hardware, and service developers, who began embedding business service intelligence in their computing infrastructure. In recent years, the state of the art of cloud computing architecture has evolved towards greater complexity by piling new layers of management on top of the many layers that already exist. Moreover, in the last ten years, the problem of scaling and managing distributed applications in the cloud has taken on a new dimension, especially regarding tolerance to workload variation and proactive scaling of available computing resource pools, with particular emphasis on big data management and recent technologies. This article presents significant extensions of selected papers from CDCGM 2017. These papers describe advances in current distributed and cloud computing practice, dealing with modern techniques of parallel computing, cognitive workload management for cloud computing, emergent cloud XaaS, information service networks and the IoT.
    Keywords: cloud computing; grid computing; SLA; data science.

  • An intelligent water drops based approach for workflow scheduling with balanced resource utilisation in cloud computing   Order a copy of this article
    by Mala Kalra, Sarbjeet Singh 
    Abstract: The problem of finding optimal solutions for scheduling scientific workflows in cloud environment has been thoroughly investigated using various nature-inspired algorithms. These solutions minimise the execution time of workflows, but they may result in severe load imbalance among Virtual Machines (VMs) in cloud data centres. Cloud vendors desire the proper utilisation of all the VMs in the data centres to have efficient performance of the overall system. Thus load balancing of VMs becomes an important aspect while scheduling tasks in a cloud environment. In this paper, we propose an approach based on the Intelligent Water Drops (IWD) algorithm to minimise the execution time of workflows while balancing the resource utilisation of VMs in the cloud computing environment. The proposed approach is compared with a variety of well-known heuristic and meta-heuristic techniques using three real-time scientific workflows, and experimental results show that the proposed algorithm performs better than these existing techniques in terms of makespan and load balancing.
    Keywords: workflow scheduling; intelligent water drops algorithm; cloud environment; evolutionary computation; directed acyclic graphs; load balancing; balanced resource utilisation; optimisation technique.
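    The bi-objective trade-off the abstract describes, minimising makespan while balancing VM utilisation, can be illustrated with a toy fitness function. The weighting scheme and the imbalance measure below are assumptions made for illustration, not the paper's formulation.

```python
def fitness(makespan, vm_loads, alpha=0.7):
    """Toy bi-objective fitness for steering a search such as IWD:
    a weighted sum of makespan and the degree of load imbalance
    (spread of VM loads relative to the average load)."""
    avg = sum(vm_loads) / len(vm_loads)
    imbalance = (max(vm_loads) - min(vm_loads)) / avg if avg else 0.0
    return alpha * makespan + (1 - alpha) * imbalance
```

    A candidate schedule with perfectly balanced loads is penalised only for its makespan; skewed loads add a second penalty term, so the search is pulled towards both objectives.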

  • The energy consumption laxity based algorithm to perform computation processes in virtual machine environments   Order a copy of this article
    by Tomoya Enokido, Dilawaer Duolikun, Makoto Takizawa 
    Abstract: In information systems, server cluster systems equipped with virtual machines are widely used to realise scalable, high-performance computing systems such as cloud computing systems. In order to satisfy application requirements, such as a deadline constraint for each application process, the processing loads of the virtual machines in a server cluster have to be balanced with one another. In addition to achieving the performance objectives, the total electric energy consumed by a server cluster to perform application processes has to be reduced, as discussed in green computing. In this paper, the energy consumption laxity based (ECLB) algorithm is proposed to allocate computation-type application processes to virtual machines in a server cluster so that both the total electric energy of the cluster and the response time of each process can be reduced. We evaluate the ECLB algorithm, in terms of the total electric energy of a server cluster and the response time of each process, against the basic round-robin (RR) algorithm. Evaluation results show that both the average total electric energy of the cluster and the average response time of each process are smaller in the ECLB algorithm than in the RR algorithm.
    Keywords: green computing; virtual machines; energy-efficient server cluster systems; power consumption models; energy-efficient load-balancing algorithms.

  • A new bimatrix game model with fuzzy payoffs in credibility space   Order a copy of this article
    by Cunlin Li, Ming Li 
    Abstract: Uncertainty theory, based on expert evaluation and non-additive measures, is introduced to explore bimatrix games with uncertain payoffs. An uncertainty space based on the axioms of uncertain measures is presented, some basic characteristics of uncertain events are described, and the expected values of uncertain variables in this space are given. A new model of the bimatrix game with uncertain payoffs is established and its equivalent strategy is given. We then develop an expected model of uncertain bimatrix games and define their uncertain equilibrium strategy. By using the expected values of the uncertain variables, we transform the model into a linear programme, and the expected equilibrium strategy of the uncertain bimatrix game is identified by solving linear equations.
    Keywords: bimatrix game; uncertain measure; expected Nash equilibrium strategy.
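    The expected-value reduction the abstract describes can be illustrated in miniature: replace each uncertain payoff by its expected value, then look for equilibria of the resulting crisp bimatrix game. Only pure strategies are checked here, by enumeration; the paper solves the general case via linear programming.

```python
def expected_matrix(uncertain):
    """Replace each uncertain payoff, given as a list of
    (value, weight) pairs, by its expected value."""
    return [[sum(v * w for v, w in cell) for cell in row]
            for row in uncertain]

def pure_equilibria(A, B):
    """Pure-strategy Nash equilibria of the crisp bimatrix game (A, B):
    (i, j) where neither player gains by deviating unilaterally."""
    eq = []
    for i in range(len(A)):
        for j in range(len(A[0])):
            row_best = all(A[i][j] >= A[k][j] for k in range(len(A)))
            col_best = all(B[i][j] >= B[i][l] for l in range(len(A[0])))
            if row_best and col_best:
                eq.append((i, j))
    return eq
```

    With prisoner's-dilemma payoffs, enumeration recovers the familiar mutual-defection equilibrium.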

  • Data analysis of CSI 800 industry index by using factor analysis model   Order a copy of this article
    by Chunfen Xu 
    Abstract: This paper studies the linkage among the industries based on CSI 800 industry index, which provides mass complicated data for industry research. Factor analysis, a useful data analysis tool, allows researchers to investigate concepts that are not easily measured directly by collapsing a large number of variables into a few interpretable underlying factors. Firstly, data of ten industries in the period from September 2009 to March 2017 is collected from CSI 800 Index and correlational analyses are conducted. Secondly, this paper establishes an appropriate evaluation system, and then uses factor analysis to do dimension reduction. Finally, some characteristics and trends in various industries are obtained.
    Keywords: CSI 800 index; correlation analysis; factor analysis; dimension reduction.

  • A real-time matching algorithm using sparse matrix   Order a copy of this article
    by Aomei Li, Wanji Jiang, Po Ma, Jiahui Guo, Dehui Dai 
    Abstract: Aiming at the shortcomings of traditional image feature matching algorithms, which are computationally expensive and time-consuming, this paper presents a real-time feature matching algorithm. First, the algorithm constructs sparse matrices with the Laplace operator and applies Laplace weighting. The feature points are then detected by the FAST feature point detector; the SURF algorithm is used to assign an orientation and descriptor to each feature for rotation invariance, and a Gaussian pyramid is used to achieve scale invariance. Second, candidate match pairs are extracted by brute-force matching and purified using the Hamming distance and a symmetry test. Finally, the RANSAC algorithm is used to obtain the optimal matrix, and an affine invariance check is used to verify the matching result. The algorithm is compared with classical feature point matching algorithms, showing that it achieves high real-time performance while guaranteeing matching precision.
    Keywords: sparse matrices; Laplace weighted; FAST; SURF; symmetry method; affine invariance check.
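    The Hamming-distance and symmetry (cross-check) purification step can be sketched in isolation. The descriptors here are toy bit strings; in the paper they come from the SURF-style description stage, and the threshold is an assumption.

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count('1')

def symmetric_matches(desc_a, desc_b, max_dist=16):
    """Keep a match (i, j) only if i's nearest neighbour in B is j AND
    j's nearest neighbour in A is i (the symmetry test), and the
    distance is below a threshold (the Hamming purification)."""
    def nearest(d, pool):
        return min(range(len(pool)), key=lambda k: hamming(d, pool[k]))
    matches = []
    for i, d in enumerate(desc_a):
        j = nearest(d, desc_b)
        if nearest(desc_b[j], desc_a) == i and hamming(d, desc_b[j]) <= max_dist:
            matches.append((i, j))
    return matches
```

    Surviving pairs would then be handed to RANSAC for the geometric verification stage described above.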

  • How do checkpoint mechanisms and power infrastructure failures impact on cloud applications?   Order a copy of this article
    by Guto Leoni Santos, Demis Gomes, Djamel Sadok, Judith Kelner, Elisson Rocha, Patricia Takako Endo 
    Abstract: With the growth of cloud computing usage by commercial companies, providers of this service are looking for ways to improve and estimate the quality of their services. Failures in the power subsystem represent a major risk of cloud data centre unavailability at the physical level. At the same time, software-level mechanisms (such as application checkpointing) can be used to maintain application consistency after a downtime and to improve availability. However, understanding how failures at the physical level impact on application availability, and how software-level mechanisms can improve data centre availability, is a challenge. This paper analyses the impact of power subsystem failures on cloud application availability, as well as the impact of checkpoint mechanisms used to recover the system from software-level failures. To this end, we propose a set of stochastic models to represent the cloud power subsystem, the cloud application, and the checkpoint-based recovery mechanisms. To evaluate data centre performance, we also model request arrivals and processing times as a queue, and feed this model with real data acquired from experiments on a real testbed. To verify which components of the power infrastructure have the greatest impact on data centre availability, we perform a sensitivity analysis. The results of the stationary analysis show that the choice of checkpoint mechanism does not have a significant impact on the observed metrics; on the other hand, improving the power infrastructure yields performance and availability gains.
    Keywords: cloud data centre; checkpoint mechanisms; availability; performance; stochastic models.

  • A review of intrusion detection approaches in cloud security systems   Order a copy of this article
    by Satyapal Singh, Mohan Kubendiran, Arun Kumar Sangaiah 
    Abstract: Cloud computing is a technology that allows the delivery of services, storage, network, computing power, etc., over the internet. The on-demand and ubiquitous nature of this technology makes it easy to use and widely available. However, for this very reason, cloud services, platforms and infrastructure are targets for attackers; the most common attack tries to take control of one or more virtual machine instances running in the cloud. Since cloud and networking technologies go hand in hand, it is essential to keep such malicious attempts at bay. Intrusion detection systems are software that can detect potential intrusions within or outside a secure cloud environment. In this paper, we study different intrusion detection systems that have previously been proposed in order to mitigate, or in the best case eliminate, the threats posed by such intrusions.
    Keywords: cloud computing; virtualisation; intrusion detection; networking; cyber attacks; Blockchain.

  • A novel web image retrieval method: bagging weighted hashing based on local structure information   Order a copy of this article
    by Li Huanyu 
    Abstract: Hashing is widely used in approximate nearest neighbour (ANN) search problems, especially in web image retrieval. An excellent hashing algorithm can help users search and retrieve their web images more conveniently, quickly and accurately. In order to overcome several deficiencies of ITQ in image retrieval, we use ensemble learning to address the image retrieval problem. An elastic ensemble framework is proposed to guide the hashing design, along with three important principles: high precision, high diversity and optimal weight prediction. Based on this, we design a novel hashing method called BWLH. In BWLH, first, the local structure information of the original data is extracted to construct the local structure data, improving the similarity-preserving ability of the hash bits. Second, a weighted matrix is used to balance the variance of different bits. Third, bagging is exploited to increase diversity across hash tables. Extensive experiments show that BWLH handles the image retrieval problem effectively and performs better than several state-of-the-art methods at the same hash code length on the CIFAR-10 and LabelMe datasets. Finally, search-by-image, a web-based use case of the proposed method, is presented to show how BWLH can be used in a web-based environment.
    Keywords: web image retrieval; hashing; ensemble learning; local structure information; weighted.
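    The bit-weighting and bagging ideas can be illustrated with a toy weighted Hamming distance. The weights here are arbitrary constants; in BWLH they are learned so as to balance the variance of the hash bits.

```python
def weighted_hamming(h1, h2, weights):
    """Weighted Hamming distance: each differing bit contributes its
    weight instead of 1, so informative bits count for more."""
    return sum(w for b1, b2, w in zip(h1, h2, weights) if b1 != b2)

def bagged_distance(tables_a, tables_b, table_weights):
    """Average the weighted distances over several bagged hash tables,
    each with its own bit-weight vector."""
    total = sum(weighted_hamming(a, b, w)
                for a, b, w in zip(tables_a, tables_b, table_weights))
    return total / len(table_weights)
```

    Ranking candidates by this bagged, weighted distance is the retrieval-time counterpart of the ensemble construction the abstract describes.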

  • A fog computing model for pervasive connected healthcare in smart environments   Order a copy of this article
    by Philip Moore, Hai Van Pham 
    Abstract: Healthcare provision faces many challenges resulting from the growing demand for health services, advances in medical technologies, and the availability of increasingly complex treatment options. These challenges are exacerbated by the increasing complexity of the medical conditions experienced by an ageing demographic and the related demand for social care provision. Addressing them requires effective patient management, which may be achieved using autonomic health monitoring systems; however, such monitoring has so far been limited to `smart-homes'. `Smart-homes' may exist within `smart-cities', and here we consider both concepts as they relate to the healthcare domain. We propose extending the `smart-home' to a wider `smart-environment' which conflates `smart-homes' with the `smart-city' (in an interconnected environment) based on the fog computing paradigm. We introduce our Fog Computing Model, which incorporates fog- and cloud-based computing for low-latency systems that enable comprehensive `real-time' patient monitoring with situational awareness, pervasive consciousness, and related data analytic solutions. Illustrative scenarios predicated on the monitoring of patients with dementia are presented, along with a `real-world' example of a proposed `smart-environment'. Context-awareness and decision support are considered, with a proposed implementation strategy and a `real-world' case study in the healthcare domain. While the proposed fog model and implementation strategy are predicated on the healthcare domain, we argue that the Fog Computing Model will generalise to other medical conditions and domains of interest.
    Keywords: fog computing; connected health; e-hospital; smart-health; smart environments; context-awareness; situational awareness; pervasive systems; decision-support systems.

  • An integrated incentive and trust-based optimal path identification in ad hoc on-demand multipath distance vector routing for MANET   Order a copy of this article
    by Abrar Omar Alkhamisi, Seyed M. Buhari, George Tsaramirsis, Mohammed Basheri 
    Abstract: A Mobile Ad hoc Network (MANET) can exist and work well only when the mobile nodes behave cooperatively in packet routing. To reduce the hazards posed by malicious nodes and enhance the security of the network, this paper extends the Ad hoc On-demand Multipath Distance Vector (AOMDV) routing protocol into an Integrated Incentive and Trust based optimal path identification in AOMDV (IIT-AOMDV) for MANETs. To improve the security and reliability of packet forwarding over multiple routes in the presence of potentially malicious nodes, the proposed IIT-AOMDV routing protocol integrates an Intrusion Detection System (IDS) with a Bayesian Network (BN) based trust and payment model. The IDS uses the empirical first- and second-hand trust information of the BN, and it underpins a cuckoo search algorithm that maps QoS and trust values into a single fitness metric, tuned according to the presence of malicious nodes in the network. Moreover, the payment system stimulates the nodes to cooperate effectively in routing and improves routing performance. Finally, simulation results show that IIT-AOMDV improves detection accuracy and throughput by 20% and 16.6%, respectively, compared with the existing AOMDV integrated with an IDS (AID).
    Keywords: mobile ad hoc network; intrusion detection system; trust; attack; optimal path identification; isolation.

  • Performance analysis of data fragmentation techniques on a cloud server   Order a copy of this article
    by Nelson Santos, Salvatore Lentini, Enrico Grosso, Bogdan Ghita, Giovanni Masala 
    Abstract: Advances in virtualisation and distributed computing have made the cloud paradigm very popular among users and providers. It allows companies to save costs on infrastructure and maintenance and to focus on the development of products. However, this fast-growing paradigm has raised some concerns among users, such as the integrity and security of their data, particularly in environments where users rely entirely on providers to secure it. This paper explores different techniques to fragment data on the cloud and prevent direct unauthorised access to it, and examines their performance on a cloud instance, considering the total time to perform the operation, including the upload, download and reconstruction of the data. Results from this experiment indicate that fragmentation algorithms perform better than encryption; moreover, combining encryption with fragmentation increases security, at the cost of performance.
    Keywords: cloud security; data fragmentation; data security; privacy in cloud computing.
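    As a generic illustration of the fragmentation idea the abstract compares against encryption — a minimal sketch, not any of the algorithms evaluated in the paper — data can be dispersed across n fragments by byte interleaving, so that no single fragment exposes a contiguous run of the original:

```python
def fragment(data: bytes, n: int) -> list:
    """Disperse data across n fragments by round-robin byte interleaving.

    No single fragment contains a contiguous run of the original bytes;
    all n fragments are needed to reconstruct the input.
    """
    return [data[i::n] for i in range(n)]


def reassemble(fragments: list) -> bytes:
    """Invert fragment(): re-interleave the fragments into the original."""
    n = len(fragments)
    out = bytearray(sum(len(f) for f in fragments))
    for i, frag in enumerate(fragments):
        out[i::n] = frag
    return bytes(out)
```

    Measured end-to-end, a scheme of this kind trades the cost of n uploads and downloads against avoiding per-byte cryptographic work, which is the style of comparison the abstract reports.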

  • Large scale data processing software and performance instabilities within HEP grid environments   Order a copy of this article
    by Olga Datskova, Wedong Shi 
    Abstract: The scale of large processing tasks run within scientific grid and cloud environments has introduced a need for stability guarantees from geographically distributed resources, to ensure that failures are detected and handled pre-emptively. Performance inefficiencies within stacked service environments are a challenge to detect, as failures may stem from multiple causes, often requiring expert intervention. Reliability guarantees for such systems become paramount as recovery costs from failures reach a prohibitive scale. While individual services implement fault-tolerance and recovery procedures, the behaviour of interacting failures within tightly coupled systems is not well defined. Online reporting and classification of performance fluctuations can help experts, central services and users to target service areas where optimisations can be introduced. This paper describes an approach for modelling performance states for production tasks running within the ALICE grid. We first provide an overview of the ALICE data and software workflow, focusing on the computational profile of production jobs. A data centre event state is then developed, based on data centre job, computing, storage and user behaviour. Using cross-site analysis, we then train classifiers to label service domain states. This work addresses the question of analysing failures in the context of operational instabilities occurring within production grid environments running large-scale data processing tasks. The results show that operational issues can be detected and described according to the principal service layers involved. This can guide users, central services and data centre experts to take action before service failures take effect.
    Keywords: HEP; grid computing; large scale data processing; performance modelling; failure; fault-tolerance; software development.

  • The analysis of man at the end attack behaviour in software defined network   Order a copy of this article
    by Abeer Eldewahi, Alzubair Hassan, Khalid ElBadawi, Bazara Barry 
    Abstract: Software defined networking (SDN) is an emerging technology that decouples the control plane from the data plane in its network architecture. This architecture exposes new threats that are absent in traditional IP networks. The man at the end (MATE) attack is one of the most serious attacks against SDN controllers. A MATE attacker carries out malicious activities by exploiting the nature of the request and reply messages exchanged between the controller and switches. This paper proposes a new detection method for the MATE attack. We also use the spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege (STRIDE) model within a four-dimensional classification model to determine which attacks can be considered MATE attacks. Furthermore, we characterise the behaviour of a MATE attacker in SDN after control has been taken from the controller, to aid in the detection and prevention of the MATE attack.
    Keywords: software defined network; MATE attack behaviour; four-dimensional model; STRIDE model.

  • Detection and mitigation of collusive interest flooding attack on content centric networking   Order a copy of this article
    by Tetsuya Shigeyasu, Ayaka Sonoda 
    Abstract: With the development of ICT (Information and Communications Technology), the spread of consumer devices such as notebook PCs, smartphones and other information devices makes it easy for users to access the internet. Users with these devices use services such as e-mail and SNS (Social Network Services). NDN (Named Data Networking), the most popular such network architecture, has been proposed to realise the concept of Content Centric Networking (CCN). However, it has also been reported that NDN is vulnerable to the CIFA (Collusive Interest Flooding Attack). In this paper, we propose a novel distributed algorithm for detecting CIFA in order to preserve the availability of NDN. The results of computer simulations confirm that our proposal can detect and mitigate the effects of CIFA effectively.
    Keywords: named data networking; content centric data acquisition; collusive interest flooding attack; malicious prediction.

  • A new overlay P2P network for efficient routing in group communication with regular topologies   Order a copy of this article
    by Abdelhalim Hacini, Mourad Amad 
    Abstract: This research paper presents a new overlay P2P network that provides an efficient and optimised lookup process. The lookup process of the proposed solution reduces the number of overlay hops, and consequently the latency of content lookup, between any pair of nodes. The overlay network is constructed on top of the physical network without any centralised control and with a two-level hierarchy. The architecture is based on regular topologies, namely pancake graphs and skip graphs. The focus of all topology construction schemes is to reduce the cost of the lookup process (number of hops and delay) and consequently improve search performance for P2P applications deployed on the overlay network. Performance evaluations of our proposed scheme show that the results obtained are globally satisfactory.
    Keywords: P2P networking; pancake graphs; skip graphs; routing optimisation.

  • A smart networking and computing-aware orchestrator to enhance QoS on cloud-based multimedia services   Order a copy of this article
    by Rodrigo Moreira, Flavio Silva, Pedro Frosi Rosa, Rui Aguiar 
    Abstract: Rich-media applications deployed on the cloud lead the use of the internet by people and organisations around the world. Networking and computing resource management has become an important requirement for achieving high user QoS. The advent of software-defined networking and network function virtualisation brings new possibilities for addressing carrier environment challenges, making QoS enhancement possible. The literature does not offer a smart and flexible solution that provides scalability with a holistic view of networking and computing resources, taking into account the different ways to enhance QoS. In this work, we present a smart orchestrator capable of interacting with the network, computing resources and applications hosted on a cloud. By supporting different machine learning algorithms, our solution delivers better QoS through improvements in aspects such as network resilience, bandwidth allocation based on real-time traffic patterns, and an end-to-end QoS mechanism for event-driven scenarios. The solution interacts in an agnostic way with different applications, cloud operating systems, and the network. As a separate control-plane entity, the orchestrator is capable of operating across different domains. The solution orchestrates applications, virtual functions and cloud resources, providing elasticity and enhanced network QoS. Our experimental evaluation in a large-scale testbed shows the orchestrator's capability to provide a smart decrease in jitter using AI techniques.
    Keywords: software-defined networking; network function virtualisation; QoS; machine learning; cloud computing.

  • An efficient content sharing scheme using file splitting and differences between versions in hybrid peer-to-peer networks   Order a copy of this article
    by Toshinobu Hayashi, Shinji Sugawara 
    Abstract: This paper proposes an efficient content sharing strategy using file splitting and differences between versions in hybrid Peer-to-Peer (P2P) networks. In this strategy, when a user requests a content item and obtaining the requested version directly is expensive, he/she can instead retrieve another version of the content item together with the difference from the requested version. This way of content sharing can be expected to achieve effective and flexible operation. Furthermore, efficient use of a peer's storage capacity is achieved by splitting each replica of a content item into several small blocks and storing them separately across multiple peers.
    Keywords: content sharing; file splitting; difference of versions; hybrid peer-to-peer.

  • Hardware support for thread synchronisation in an experimental manycore system   Order a copy of this article
    by Alessandro Cilardo, Mirko Gagliardi, Daniele Passaretti 
    Abstract: This paper deals with the problem of thread synchronisation in manycore systems. In particular, it considers the open-source GPU-like architecture developed within the MANGO H2020 project. The thread synchronisation hardware relies on a distributed master and on a lightweight control unit to be deployed within the core. It does not rely on memory access for exchanging synchronisation information since it uses hardware-level messages. The solution supports multiple barriers for different application kernels possibly being executed simultaneously. The results for different NoC sizes provide indications about the reduced synchronisation times and the area overheads incurred by our solution.
    Keywords: networks on chip; synchronisation; manycore systems.

  • Identifying journalistically relevant social media texts using human and automatic methodologies   Order a copy of this article
    by Nuno Guimaraes, Filipe Miranda, Alvaro Figueira 
    Abstract: Social networks have provided the means for constant connectivity and fast information dissemination. In addition, real-time posting has allowed a new form of citizen journalism, where users report events from a witness perspective. Therefore, information propagates through the network at a faster pace than traditional media can report it. However, relevant information is only a small percentage of all the content shared. Our goal is to develop and evaluate models that can automatically detect journalistic relevance. To do so, we need solid and reliable ground-truth data with a significantly large number of annotated posts, so that the models can learn to detect relevance across its full spectrum. In this article, we present and compare two different methodologies: an automatic and a human approach. Results on a test dataset labelled by experts show that models trained with the automatic methodology tend to perform better than those trained using human-annotated data.
    Keywords: relevance detection; machine learning; text mining; crowdsourcing task.

  • Dijkstra algorithm based ray tracing for tunnel-like structures   Order a copy of this article
    by Kazunori Uchida 
    Abstract: This paper deals with ray tracing in a closed space, such as a tunnel or underground structure, using a newly developed simulation method based on the Dijkstra algorithm (DA). The essence of this method is to modify the proximity-node matrix obtained by the DA through three procedures: path selection, path linearisation and line-of-sight (LOS) checking. The proposed method can be applied to ray tracing in complicated structures, ranging from open spaces such as random rough surfaces (RRS) or urban areas to closed spaces such as tunnels or underground structures. In the case of a closed space, however, more detailed treatment is required than for an open space since, especially at grazing angles of incidence, we have to take into account the effects of floor, ceiling and side walls not only locally but also globally. In this paper we propose an effective LOS-check procedure to handle this difficult situation. Numerical examples are shown for traced rays as well as total link-cost distributions in sinusoidal and cross-type tunnels.
    Keywords: Dijkstra algorithm; discrete ray tracing; LOS check; propagation in closed space.
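    The DA step underlying the method can be illustrated with a standard heap-based Dijkstra search over a link-cost graph. This is a generic sketch only — it does not model the paper's proximity-node matrix or its three modification procedures, and the node names and costs are hypothetical:

```python
import heapq


def dijkstra(adj, src):
    """Single-source shortest link-cost search.

    adj: dict mapping node -> list of (neighbour, cost) pairs.
    Returns (distances, predecessors) for path recovery.
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev


def path(prev, src, dst):
    """Recover the node sequence src -> dst from the predecessor map."""
    seq = [dst]
    while seq[-1] != src:
        seq.append(prev[seq[-1]])
    return seq[::-1]
```

    In a ray-tracing setting the recovered node sequences would then be post-processed (e.g. linearised and LOS-checked), which is where the paper's contribution lies.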

  • Fog computing with original data reference function   Order a copy of this article
    by Tsukasa Kudo 
    Abstract: In recent years, as large amounts of data are transferred to cloud servers with the evolution of the Internet of Things, problems such as network bandwidth restrictions and delays in sensor feedback control have arisen. To address these limitations, fog computing has been proposed, in which the primary processing of sensor data is performed at a fog node and only the results are transferred to the cloud server. However, with this method, when the original sensor data are needed for analysis at the cloud server, the data are missing. To address this problem, this paper proposes a data model in which the original sensor data are stored at the fog node in a distributed database. Furthermore, the performance of this data model is evaluated, showing that references to the original data from the cloud server can be executed efficiently, particularly in the case of multiple fog nodes.
    Keywords: Internet of Things; fog computing; edge computing; distributed database; NoSQL database; MongoDB; data model.

  • Don't lose the point, check it: is your cloud application using the right strategy?   Order a copy of this article
    by Demis Gomes, Glauco Gonçalves, Patricia Endo, Moises Rodrigues, Judith Kelner, Djamel Sadok, Calin Curescu 
    Abstract: Users pay to run their applications on cloud infrastructure, and in return they expect high availability and minimal data loss in case of failure. From the cloud provider's perspective, any hardware or software failure must be detected and recovered from as quickly as possible to maintain users' trust and avoid financial losses. From the users' perspective, failures must be transparent and must not impact the running of their applications. To be able to recover a failed application, cloud providers need to perform checkpoints, periodically saving application data that can be restored during a failover process. Currently, a checkpoint service can be implemented in many different ways; each has its own features and presents different performance results. Thus, the main question to be answered is: what is the best checkpoint strategy according to the users' requirements? In this paper, we perform experiments with different checkpoint service strategies to understand how these are affected by the available computing resources. We also provide a discussion of the relation between service availability and the checkpoint service.
    Keywords: checkpoint; failover; performance evaluation; SAF standard.
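    As a generic illustration of the checkpoint/restore cycle the abstract discusses — not any of the SAF-standard strategies evaluated in the paper; the file layout and state shape here are made up — a minimal crash-safe checkpoint can be sketched as:

```python
import os
import pickle
import tempfile


def checkpoint(state, path):
    """Persist application state atomically: write to a temp file, then
    rename, so a crash mid-write never leaves a truncated checkpoint."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX


def restore(path, default=None):
    """Failover path: reload the last successfully written checkpoint,
    falling back to a default initial state if none exists."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default
```

    The checkpoint interval then becomes the tuning knob: frequent checkpoints cost throughput during normal operation, while sparse ones lengthen recovery and increase data loss on failover, which is the trade-off the paper measures.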

  • Implementation of a high presence immersive traditional crafting system with remote collaborative work support   Order a copy of this article
    by Tomoyuki Ishida, Yangzhicheng Lu, Akihiro Miyakawa, Kaoru Sugita, Yoshitaka Shibata 
    Abstract: A high presence immersive traditional crafting system was developed to provide users, who interact with the system through head-mounted displays, with a highly realistic traditional crafting presentation experience that allows moving functions, such as free walk-through and teleportation. Users can also interactively operate traditional craft objects in space. In addition, the system supports collaborative work in a virtual space shared by remote users. To evaluate the effectiveness of this system, a questionnaire survey was administered to 124 subjects, who provided overwhelmingly positive responses regarding all functions. However, there is still room for improvement in the operability and relevancy of the system.
    Keywords: collaborative virtual environment; head-mounted display; Japanese traditional crafts; interior simulation.

  • A configurable and executable model of Spark Streaming on Apache YARN   Order a copy of this article
    by Jia-Chun Lin, Ming-Chang Lee, Ingrid Chieh Yu, Einar Broch Johnsen 
    Abstract: Streams of data are produced today at an unprecedented scale. Efficient and stable processing of these streams requires a careful interplay between the parameters of the streaming application and of the underlying stream processing framework. Today, finding these parameters happens by trial and error on the complex, deployed framework. This paper shows that high-level models can help to determine these parameters by predicting and comparing the performance of streaming applications running on stream processing frameworks with different configurations. To demonstrate this approach, this paper considers Spark Streaming, a widely used framework to leverage data streams on the fly and provide real-time stream processing. Technically, we develop a configurable and executable model to simulate both the streaming applications and the underlying Spark stream processing framework. Furthermore, we model the deployment of Spark Streaming on Apache YARN, which is a popular open-source distributed software framework for big data processing. We show that the developed model provides a satisfactory accuracy for predicting performance by means of empirical validation.
    Keywords: modelling; simulation; Spark Streaming; Apache YARN; batch processing; stream processing; ABS.

  • Models for hyper-converged cloud computing infrastructure planning   Order a copy of this article
    by Carlos Melo, Jamilson Dantas, Jean Araujo, Paulo Maciel, Rubens Matos, Danilo Oliveira, Iure Fé 
    Abstract: The data centre concept has evolved, mainly owing to the need to reduce expenses for the physical space required to store, provide and maintain large computational infrastructures. The software-defined data centre (SDDC) is a result of this evolution. Through the SDDC, any service can be hosted by virtualising more reliable and easier-to-maintain hardware resources. Nowadays, many services and resources can be provided in a single rack, or even a single machine, with availability similar to that of silo-based environments, considering their deployment costs. One of the ways to apply the SDDC concept to a data centre is through hyper-convergence. Among the main contributions of this paper are the behavioural models developed for the availability and capacity-oriented availability evaluation of silo-based, converged and hyper-converged cloud computing infrastructures. The results obtained may help stakeholders to choose between converged and hyper-converged environments, which provide similar availability, with hyper-convergence having the advantage of lower deployment costs.
    Keywords: hyper-convergence; dependability models; dynamic reliability block diagrams; SDDC; DRBD; virtualisation; capacity-oriented availability; deployment cost; redundancy; cloud computing; OpenStack.

  • Architecture for diversity in the implementation of dependable and secure services using the state machine replication approach   Order a copy of this article
    by Caio Costa, Eduardo Alchieri 
    Abstract: The dependability and security properties of a system can be impaired by a system failure or by an opponent that exploits its vulnerabilities, respectively. An alternative to mitigate this risk is the implementation of fault- and intrusion-tolerant systems, in which the system properties are ensured even if some of its components fail (e.g., because of a software bug or a failure in the runtime environment) or are compromised by a successful attack. State Machine Replication (SMR) is widely used to implement these systems. In SMR, servers are replicated and client requests are deterministically executed in the same order by all replicas, so that the system behaviour remains correct even if some replicas are compromised, since the correct replicas mask the misbehaviour of the faulty ones. Unfortunately, existing solutions for SMR do not consider diversity in the implementation, and all replicas execute the same software. Consequently, the same attack or software bug could compromise the whole system. To circumvent this problem, this work proposes an architecture that allows diversity in the implementation of dependable and secure services using the SMR approach. The goal is not to implement different versions of an SMR library for different programming languages, which demands many resources and is very expensive. Instead, the proposed architecture uses an underlying SMR library and provides the means to implement and execute service replicas (the application code) in different programming languages. The main problems addressed by the proposed architecture are twofold: (1) communication among different languages; and (2) data representation. The proposed architecture was integrated into the SMR library BFT-SMaRt, and a set of experiments showed its practical feasibility.
    Keywords: diversity; security; dependability; state machine replication.

  • Target exploration by Nomadic Levy walk on unit disk graphs   Order a copy of this article
    by Kouichirou Sugihara, Naohiro Hayashibara 
    Abstract: Random walks play an important role in computer science, covering a wide range of topics in theory and practice, including networking, distributed systems, and optimisation. Levy walk is a family of random walks in which step lengths are drawn from a power-law distribution. There have been many recent reports on Levy walk in the context of target detection in swarm robotics, the analysis of human walk patterns, and the modelling of animal foraging behaviour. According to these results, it is known to be an efficient method for searching a two-dimensional plane. However, most of these works assume a continuous plane. In this paper, we propose a variant of the Homesick Levy walk, called the Nomadic Levy walk, and analyse the behaviour of the algorithm with respect to the cover ratio on unit disk graphs. We also compare the Nomadic Levy walk and the Homesick Levy walk on the target search problem. Our simulation results indicate that the proposed algorithm is significantly more efficient for sparse target detection on unit disk graphs than the Homesick Levy walk, and that it also improves the cover ratio. Moreover, we analyse the impact of the movement of the sink (home position) on the efficiency of target exploration.
    Keywords: random walk; Levy walk; target search; unit disk graphs; DTN; autonomic computing; bio-inspired algorithms.
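    A power-law step-length walk of the kind described above can be sketched as follows. This is a generic illustration on a continuous plane with made-up parameters, not the paper's unit-disk-graph algorithm; the `homesick` probability crudely mimics the return-to-home behaviour of the Homesick/Nomadic variants:

```python
import math
import random


def levy_walk(steps, alpha=1.5, l_min=1.0, home=(0.0, 0.0), homesick=0.0, rng=None):
    """2-D walk whose step lengths follow a power law P(l) ~ l^(-alpha).

    With probability `homesick` the walker jumps back to `home` (the sink);
    homesick=0.0 gives a plain Levy walk. Returns the visited positions.
    """
    rng = rng or random.Random(42)
    x, y = home
    trace = [(x, y)]
    for _ in range(steps):
        if rng.random() < homesick:
            x, y = home
        else:
            # inverse-transform sampling from the Pareto-type tail
            l = l_min * rng.random() ** (-1.0 / (alpha - 1.0))
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x, y = x + l * math.cos(theta), y + l * math.sin(theta)
        trace.append((x, y))
    return trace
```

    The heavy tail means occasional very long steps interleave with many short ones, which is what makes Levy-type search efficient for sparse targets compared with Brownian-style walks.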

  • Resource auto-scaling for SQL-like queries in the cloud based on parallel reinforcement learning   Order a copy of this article
    by Mohamed Mehdi Kandi, Shaoyi Yin, Abdelkader Hameurlain 
    Abstract: Cloud computing is a technology that provides on-demand services in which the number of assigned resources can be automatically adjusted. A key challenge is how to choose the right number of resources so that the overall monetary cost is minimised. This problem, known as auto-scaling, has been addressed in some existing works, but most of them are dedicated to web applications. In such applications, it is assumed that queries are atomic and that each uses a single resource for a short period of time. However, this assumption does not hold for database applications. A query, in this case, contains many dependent and long-running tasks, so several resources are required. In this work we propose an auto-scaling method based on reinforcement learning, coupled with placement-scheduling. In the experimental section, we show the advantage of coupling auto-scaling with placement-scheduling by comparing our work with an existing auto-scaling method.
    Keywords: cloud computing; auto-scaling; resource allocation; parallel reinforcement learning.
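    To illustrate the general reinforcement-learning framing of auto-scaling — emphatically not the paper's method — a toy tabular Q-learning loop can treat the load level as state, resource adjustments as actions, and a reward that penalises both resource cost and unmet demand. Every constant below is an arbitrary assumption for the sketch:

```python
import random

ACTIONS = (-1, 0, 1)  # remove one / keep / add one resource


def step(load, resources, action):
    """Apply an action; reward penalises resource cost and unmet demand."""
    resources = max(1, resources + action)
    unmet = max(0, load - resources)  # SLA-violation proxy
    reward = -(0.5 * resources + 2.0 * unmet)
    return resources, reward


def train(episodes=1000, rng=None):
    """Tabular Q-learning over (load, resources, action) entries."""
    rng = rng or random.Random(0)
    q = {}
    for _ in range(episodes):
        load, resources = rng.randint(0, 4), 1
        for _ in range(20):
            if rng.random() < 0.1:  # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q.get((load, resources, x), 0.0))
            nxt_res, r = step(load, resources, a)
            nxt_load = max(0, min(4, load + rng.choice((-1, 0, 1))))
            best_next = max(q.get((nxt_load, nxt_res, b), 0.0) for b in ACTIONS)
            key = (load, resources, a)
            q[key] = q.get(key, 0.0) + 0.2 * (r + 0.9 * best_next - q.get(key, 0.0))
            load, resources = nxt_load, nxt_res
    return q
```

    For SQL-like workloads, the state and action spaces would be far richer (dependent tasks, placement decisions), which is precisely why the paper couples the learner with placement-scheduling.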

  • A cloud-based approach to dynamically manage service contracts for local public transportation   Order a copy of this article
    by Antonella Longo, Marco Zappatore, Mario Bochicchio 
    Abstract: Public contracts regulate how public services are managed by stakeholders. However, current technological trends are creating a significant gap between the pace at which service data are produced and the pace at which contracts change. The increased availability of service data can be exploited in public procurement processes by fostering novel approaches to contract management, making contracts more dynamic and improving the Quality of Service (QoS) delivered to customers. In this paper, a cloud-based approach for assessing the QoS of Local Transportation Services (LTSs) in the Apulia Region (Southern Italy) is proposed. Service Level Agreements (SLAs) between providers and the Regional Authority, as well as the minimal guaranteed QoS levels between providers and passengers, are modelled as contracts enacted via a cloud-based system, which gathers data from sensors and passengers. In this way, changes in contract conditions to improve the perceived and delivered QoS can be agreed and facilitated based on data. In order to validate the pilot case, a set of quality indicators and service levels grounded in the European and Italian regulatory frameworks has been considered.
    Keywords: public contracts; cloud computing; quality of service; quality of experience; local public transportation.
    DOI: 10.1504/IJGUC.2020.10021245
  • On the design and development of emulation platforms for NFV-based infrastructures   Order a copy of this article
    by Vinicius Fulber Garcia, Thales Nicolai Tavares, Leonardo Da Cruz Marcuzzo, Carlos Raniery Paula Dos Santos, Giovanni Venancio De Souza, Elias Procopio Duarte Junior, Muriel Figueredo Franco, Lucas Bondan, Lisandro Zambenedetti Granville, Alberto Egon Schaeffer-Filho, Filip De Turck 
    Abstract: Network Functions Virtualisation (NFV) presents several advantages over traditional network architectures, such as flexibility, security, and reduced CAPEX/OPEX. In traditional middleboxes, network functions are usually executed on specialised hardware (e.g., firewall, DPI). Virtual Network Functions (VNFs), on the other hand, are executed on commodity hardware, employing Software Defined Networking (SDN) technologies (e.g., OpenFlow, P4). Although platforms for prototyping NFV environments have emerged in recent years, they still have limitations that hinder the evaluation of NFV scenarios such as fog computing and heterogeneous networks. In this work, we present NIEP: a platform for designing and testing NFV-based infrastructures and VNFs. NIEP consists of a network emulator and a platform for the development of Click-based VNFs. NIEP provides a complete NFV emulation environment, allowing network operators to test their solutions in a controlled scenario prior to deployment in production networks.
    Keywords: NFV; VNF; emulation; platform; infrastructure; Click; Mininet; network.

  • Evaluation of navigation based on system optimal traffic assignment for connected cars   Order a copy of this article
    by Weibin Wang, Minoru Uehara, Haruo Ozaki 
    Abstract: Recently, many cars have become connected to the internet. In the near future, almost all cars will be connected cars. Such a connected car will automatically drive according to a navigation system. Conventional car navigation systems are based on user equilibrium (UE) traffic assignment. However, system optimal (SO) traffic assignment is better than UE traffic assignment. To realise SO traffic assignment, complete traffic information is required. When connected cars become ubiquitous, all traffic information will be gathered into the cloud. Therefore, a cloud-based navigation system can provide SO-based navigation to connected cars. An SO-based navigation method in which the cloud collects traffic information from connected cars, computes SO traffic assignments, and recommends SO routes to users was recently proposed. In this paper, we evaluate this SO-based navigation method in detail.
    Keywords: system optimal traffic assignment; connected cars; intelligent transportation system.

  • Towards a secure and lightweight network function virtualisation environment   Order a copy of this article
    by Marco De Benedictis, Antonio Lioy, Paolo Smiraglia 
    Abstract: Cloud computing has deeply affected the structure of modern ICT infrastructures. It represents an enabling technology for novel paradigms, such as Network Function Virtualisation (NFV), which proposes the virtualisation of network functions to enhance the flexibility of networks and to reduce the costs of infrastructure management. Besides potential benefits, NFV inherits the limitations of traditional virtualisation where the isolation of resources comes at the cost of a performance overhead. Lightweight forms of virtualisation, such as containers, aim to mitigate this limitation. Furthermore, they allow the agile composition of complex services. These characteristics make containers a suitable technology for NFV environment. A major concern towards the exploitation of containers is security. Since containers provide less isolation than virtual machines, they can expose the whole host to vulnerabilities. In this work, we investigate container-related threats and propose a secure design for a virtual network function deployed in a lightweight NFV environment.
    Keywords: security; lightweight virtualisation; container; network function virtualisation; NFV; mandatory access control; SELinux; Docker.

  • A spatial access method approach to continuous k-nearest neighbour processing for location-based services   Order a copy of this article
    by Wendy Osborn 
    Abstract: In this paper, two strategies for handling continuous k-nearest neighbour queries in location-based services are proposed. CKNN1 and CKNN2 use a validity (i.e. safe) region approach to minimise the number of query requests that need to be sent to the server. They also use a two-dimensional spatial access method for both validity region selection and in-structure searching. The latter feature ensures that new searches for a validity region need not begin from the root. Both strategies are evaluated and compared against repeated nearest neighbour search, using both randomly and exponentially distributed point sets. Results show that both approaches achieve significant performance gains, especially in reducing the number of queries that must be sent from the client to the server.
    Keywords: location-based services; continuous nearest neighbour queries; spatial access methods.
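    The validity-region idea can be illustrated with a simple conservative bound — a sketch only, since the paper derives its regions from a spatial access method rather than the flat scan used here. While the client stays within half the distance gap between the k-th and (k+1)-th nearest neighbours, the k-NN answer cannot change, so no new server request is needed:

```python
import math


def knn_with_safe_radius(points, q, k):
    """Return the k nearest points to q and a conservative safe radius.

    By the triangle inequality, moving the query by d changes each
    point's distance by at most d, so the k-th/(k+1)-th ordering is
    preserved while d stays below half their distance gap.
    """
    by_dist = sorted(points, key=lambda p: math.dist(p, q))
    knn = by_dist[:k]
    if len(by_dist) > k:
        radius = (math.dist(by_dist[k], q) - math.dist(by_dist[k - 1], q)) / 2.0
    else:
        radius = float("inf")  # fewer than k+1 points: answer never changes
    return knn, radius
```

    A client would re-issue the query to the server only after leaving this region, which is exactly the request-suppression effect the abstract measures.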

  • Scheduling communication-intensive applications on Mesos   Order a copy of this article
    by Alessandro Di Stefano, Antonella Di Stefano, Giovanni Morana 
    Abstract: In recent years, the widespread use of container technologies has significantly altered the interactions between cloud service providers and their customers when developing and offering services. The shift away from virtual-private-server scenarios in infrastructure-as-a-service environments requires drastic changes to the deployment strategies adopted by service providers. This also opens many questions as to what information must be supplied by customers and how to improve the performance of user applications, especially in the case of communication-intensive applications. In this work, the authors propose the adoption of a new framework for Mesos clusters that aims to improve the deployment strategies for communication-intensive applications. Coope4M is based on the partitioning of the user application graph via the isolation index parameter, obtained from user knowledge of the degree of communication between the application's components.
    Keywords: Mesos; cluster placement strategy; containers deployment strategy; containers; isolation index; cloud computing.

  • Assessing distributed collaborative recommendations in different opportunistic network scenarios   Order a copy of this article
    by Lucas Nunes Barbosa, Jonathan Gemmell, Miller Horvath, Tales Heimfarth 
    Abstract: Mobile devices are common throughout the world, even in countries with limited internet access and even when natural disasters disrupt access to a centralised infrastructure. This access allows for the exchange of information at an incredible pace and across vast distances. However, this wealth of information can frustrate users as they become inundated with irrelevant or unwanted data. Recommender systems help alleviate this burden. In this work, we propose a recommender system where users share information via an opportunistic network. Each device is responsible for gathering information from nearby users and computing its own recommendations. An exhaustive empirical evaluation was conducted on two different datasets. Scenarios with different node densities, velocities and data exchange parameters were simulated. Our results show that in a relatively short time when a sufficient number of users are present, an opportunistic distributed recommender system achieves results comparable to that of a centralised architecture.
    Keywords: opportunistic networks; recommender systems; mobile ad hoc networks.
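The per-device computation described in the abstract can be sketched as follows. This is an illustrative toy (all names are hypothetical), not the authors' system: each node stores the rating profiles it has gathered during opportunistic contacts and computes similarity-weighted predictions locally.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two sparse rating dicts {item: rating}."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den

class Device:
    """A node that gathers rating profiles opportunistically and
    computes its own recommendations, with no central server."""

    def __init__(self, own_ratings):
        self.profiles = {"self": dict(own_ratings)}  # profiles seen so far

    def encounter(self, peer_id, peer_ratings):
        """Store a copy of a nearby user's ratings during a contact."""
        self.profiles[peer_id] = dict(peer_ratings)

    def predict(self, item):
        """Similarity-weighted average over locally stored profiles."""
        me = self.profiles["self"]
        num = den = 0.0
        for pid, prof in self.profiles.items():
            if pid == "self" or item not in prof:
                continue
            w = cosine_sim(me, prof)
            num += w * prof[item]
            den += abs(w)
        return num / den if den else None
```

In this sketch the quality of recommendations grows with the number of contacts, which mirrors the paper's observation that enough encounters make the distributed results approach a centralised baseline.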

  • A methodology for automated penetration testing of cloud applications   Order a copy of this article
    by Valentina Casola, Alessandra De Benedictis, Massimiliano Rak, Umberto Villano 
    Abstract: Security assessment is a very time- and money-consuming activity. It needs specialised security skills and, furthermore, it is not fully integrated into the software development life-cycle. One of the best solutions for the security testing of an application relies on the use of penetration testing techniques. Unfortunately, penetration testing is a typically human-driven procedure that requires a deep knowledge of the possible attacks and of the hacking tools that can be used to launch the tests. In this paper, we present a methodology that enables the automation of penetration testing techniques based on both application-level models, used to represent the application architecture and its security properties in terms of applicable threats, vulnerabilities and weaknesses, and on system-level models, adopted to automatically generate and execute the penetration testing activities. The proposed methodology can be easily integrated into a continuous integration development process and aid software developers in evaluating security.
    Keywords: cloud application security assessment; cloud application penetration testing; automated penetration testing modelling; automated penetration testing execution.

  • Towards virtual infrastructure allocation on multiple IaaS providers with survivability and reliability requirements   Order a copy of this article
    by Anderson Schwede Raugust, Wilton Jaciel Loch, Felipe Rodrigo De Souza, Maurício Aronne Pillon, Charles Christian Miers, Guilherme Piêgas Koslovski 
    Abstract: The cloud computing paradigm consolidated the on-demand provisioning of virtual resources. However, the diversity of services, prices, data centres, and geographical footprints has turned the clouds into a complex and heterogeneous environment. There are several Infrastructure as a Service (IaaS) providers differentiated by the provisioning costs and availability figures. Owing to management complexity, the survivability and reliability aspects are often disregarded by tenants, eventually resulting in heavy losses due to unavailability of services that are hosted on Virtual Infrastructures (VIs). We present an alternative to improve VIs' survivability and reliability, which considers the use of replicas and the spreading of virtual resources atop providers, regions, and zones. Replicas are used to achieve a user-defined reliability level while the controlled spreading of VI components decreases the probability of full outages. In addition, our proposal performs a cost-effective allocation. We formulate the VI allocation with survivability and reliability requirements as a Mixed Integer Program (MIP). Then, three strategies to solve the formulation are proposed. First, the binary constraints are relaxed to obtain a Linear Program (LP), and the LP solution is given as input for the Simulated Annealing (SA) technique. Likewise, two GPU-accelerated algorithms are proposed to speed up the allocation of large-scale scenarios. Simulation results with different reliability requests indicate an increase in survivability without inflating costs. With regard to allocation runtime when processing real data from multiple providers, the GPU-tailored algorithms found solutions in less than 1 second.
    Keywords: allocation; reliability; survivability; availability; virtual infrastructure; IaaS providers.
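As a rough illustration of the metaheuristic strategy mentioned in the abstract, the following sketch uses simulated annealing on a toy replica placement so that replicas spread across zones. The zones, prices and co-location penalty are invented for illustration and are not the paper's MIP formulation.

```python
import math
import random

def anneal(initial, cost, neighbour, t0=1.0, cooling=0.95, steps=500, seed=42):
    """Generic simulated annealing: always accept improving moves, accept
    worsening moves with probability exp(-delta / T), cool T geometrically."""
    rng = random.Random(seed)
    state, best = initial, initial
    c = cbest = cost(initial)
    t = t0
    for _ in range(steps):
        cand = neighbour(state, rng)
        cc = cost(cand)
        if cc < c or rng.random() < math.exp(-(cc - c) / max(t, 1e-9)):
            state, c = cand, cc
            if c < cbest:
                best, cbest = state, c
        t *= cooling
    return best, cbest

# Toy instance: place three replicas of a VI component across zones.
COSTS = {"z1": 3.0, "z2": 2.0, "z3": 2.5}          # hypothetical zone prices

def placement_cost(assign):
    """Provisioning cost plus a survivability penalty for co-located replicas."""
    base = sum(COSTS[z] for z in assign)
    penalty = 10.0 * (len(assign) - len(set(assign)))
    return base + penalty

def move(assign, rng):
    """Neighbour: reassign one randomly chosen replica to a random zone."""
    a = list(assign)
    a[rng.randrange(len(a))] = rng.choice(list(COSTS))
    return tuple(a)

best, best_cost = anneal(("z1", "z1", "z1"), placement_cost, move)
```

Starting from all replicas in one zone, the search drives the assignment towards full spreading, which is the survivability effect the paper targets.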

  • Preferential charging for government authorised emergency electrical vehicles   Order a copy of this article
    by Raziq Yaqub, Fahd Shifa, Fasih-Ud Din 
    Abstract: The proliferation of Electrical Vehicles (EVs) is exponential. However, the power grid is not able to support simultaneous charging of several EVs owing to limited power production capabilities and an old distribution infrastructure. Scheduled charging is one of the most advocated solutions; however, it is not viable for emergency vehicles. This paper proposes a priority charging service for government authorised emergency EVs. To enable this proposal, a complete solution that includes the architecture as well as the protocol suite is suggested. To realise such a service, the paper suggests a major functional entity called a Priority Charging Server, i.e. a database server where authorised emergency EV IDs are registered and their records are maintained. The paper also proposes modifications in the IEC 15118 and IEC 61850 protocol suites, which provide communication between the vehicle and the grid. The solution covers both roaming and non-roaming scenarios, i.e. a priority charging request may originate from an authorised emergency EV in a Home Utility Network as well as in a Visiting Utility Network. The paper concludes with a MATLAB-based proof-of-concept simulation.
    Keywords: priority charging; roaming; non-roaming; electric vehicle; protocols; priority server; AAA server.

  • Testing of network security systems through DoS, SQL injection, reverse TCP and social engineering attacks   Order a copy of this article
    by Arianit Maraj, Ermir Rogova, Genc Jakupi 
    Abstract: Cyber-attacks are happening with ever-increasing frequency against organisations, with the goal of gaining access to their sensitive information. These attacks can cause huge damage to governmental, non-governmental, healthcare, financial and other organisations. Nowadays, web applications are used to access sensitive information, so they have become a preferred target for attackers trying to reach sensitive data. It is therefore of paramount importance for organisations to implement robust security policies to protect sensitive data from being compromised. First and foremost, measures should be taken to prevent these attacks, and the best way to prevent cyber-attacks is to test security systems before attacks happen. The most frequent types of attack are SQL (Structured Query Language) injection, DoS (Denial of Service), reverse TCP (Transmission Control Protocol) and social engineering attacks. In this paper, we use penetration testing techniques for testing the security of computer systems and networks. We analyse firewalls and other protective systems and their role in security. Various scenarios are used for testing security systems through DoS, SQL injection and reverse TCP attacks. Using penetration testing techniques, we try to determine the best solution for protecting sensitive data within the governmental network of Kosovo. We also tackle the issue of social engineering attacks on networks.
    Keywords: cyber-security; denial of service; SQL injection; reverse TCP; social engineering; penetration testing.

  • Research on the relationship between geomantic omen and housing choice in the big data era   Order a copy of this article
    by Lin Cheng 
    Abstract: In order to make optimal housing-choice decisions based on geomantic omen, modern information technology from the big data era is applied to confirm the relationship between geomantic omen and housing choice. Firstly, geomantic theory and residential district planning decisions are discussed. The function, core content and goal of geomantic theory are analysed, as is the importance of geomantic theory for the site selection, orientation, spacing and indoor environment of a residential region. The environment of an urban residence includes the following elements: roads, water bodies, plants and other environmental elements. Based on four principles, geomantic theory can humanise the layout of the road system; the flow direction of water, the distribution of dynamic and static water, the water area, and the layout and composition of water bodies should likewise be designed based on geomantic theory. Secondly, a big data-mining algorithm based on grey relational theory is studied: linear big data are pre-processed, grey relational theory is used to construct the data-mining algorithm, and a weight-selection procedure is designed. Thirdly, a big data relational analysis algorithm is put forward, whose analysis procedure includes three aspects: pre-processing of original data, processing of environmental parameters, and calculation of relational degree. Finally, three residential districts are used as examples to carry out grey relational analysis of geomantic theory and housing choice, and the results verify the effectiveness of the big data-mining algorithm. In addition, geomantic culture matters more for residents' satisfaction than housing choice alone, and geomantic theory can support the development of good commercialised living environments.
    Keywords: big data; housing choice; grey relational analysis.
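The relational-degree calculation at the core of grey relational analysis can be sketched as follows. This is the standard textbook form with distinguishing coefficient rho = 0.5, not the paper's exact pre-processing pipeline.

```python
def grey_relational_degree(reference, comparison, rho=0.5):
    """Grey relational degree between a reference sequence and a comparison
    sequence (both assumed already normalised to comparable scales).
    rho is the distinguishing coefficient, conventionally 0.5."""
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:  # identical sequences relate perfectly
        return 1.0
    # Grey relational coefficient at each point, then the mean as the degree.
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Ranking candidate districts then amounts to computing the degree of each district's indicator sequence against the ideal reference sequence and sorting.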

  • Research on hardware-in-the-loop simulation of single point suspension system based on fuzzy PID   Order a copy of this article
    by Jie Yang, Tao Gao, Shengli Yuan, Heng Shi, Zhenli Zhang 
    Abstract: The stability control of a maglev train is one of the core problems in maglev train technology, and achieving it is of great scientific value in the field of magnetic levitation. On this basis, the suspension control strategy of a single-point maglev system is studied, and the control strategy is verified by experiments on a magnetic levitation ball system (MLBS). Considering the structure of the multi-group independent control system of the maglev train suspension frame, and in order to improve the overall stability control of the suspension frame, the coupling relationship between the subsystems is established by introducing a suspension response deviation compensator. Finally, the effect of the single-point suspension control system is discussed, and the cooperative control of the suspension frame is simulated in MATLAB. Simulation and analysis show that each subsystem has good anti-jamming ability and that the suspension system achieves balanced and stable control under different interference signals, which provides a reference for further study of maglev train suspension control.
    Keywords: magnetic suspension; fuzzy PID; maglev train bogie; composite control.

  • A study on fog computing architectures and energy consumption approaches regarding QoS requirements   Order a copy of this article
    by Amel Ksentini, Maha Jebalia, Sami Tabbane 
    Abstract: The Internet of Things (IoT) is increasingly being adopted by both individuals and businesses. Data is gathered for processing from local sources, machines, smart objects, vehicles, healthcare devices, remote surveillance cameras, predictive maintenance, real-time customer information, etc. Cloud computing provides suitable hardware and software for data processing, such as storage and computing. Thus, the integration of IoT with cloud capabilities may offer several benefits for many applications. However, challenges persist for some use-cases, such as delay-sensitive services, owing to the huge amount of information collected by IoT devices that must be processed by cloud servers. Fog computing has attracted many researchers in recent years since it is expected to overcome several limits and challenges of cloud computing concerning quality of service (QoS) requirements, such as latency, real-time processing, bandwidth and location awareness. This is because data processing may be located at the edge of the network when fog computing is invoked, instead of sending information on a longer round-trip to cloud servers. Nevertheless, researchers still have to deal with several issues, namely at the architectural level and regarding the energy aspect. In this paper, we investigate fog system architectures and energy consumption approaches reported in the literature, while considering QoS requirements in the synthesis. A system model is then introduced with a potential solution for QoS management in the fog computing environment.
    Keywords: fog computing; IoT; architecture; QoS; energy consumption.

  • Study on NVH robustness evaluation method of high mileage automobile based on systematic sampling   Order a copy of this article
    by Jianqiang Xiong, Le Yuan, Dingding Liao, Jun Wu 
    Abstract: At present, research on automobile riding comfort focuses primarily on the NVH performance of new automobiles, with less attention paid to analysing and evaluating the NVH characteristics of high mileage automobiles. Based on statistical principles, this paper presents a robust evaluation method, based on systematic sampling, for the stability of high mileage automobile NVH characteristics, and expounds its basic idea and main implementation steps. The paper focuses on analysing the NVH characteristics of high mileage automobiles and on how to evaluate the robustness of their NVH, providing a new direction for research into automobile riding comfort.
    Keywords: automobile vibration and noise; evaluation method; high mileage automobile; systematic sampling.

  • Success factor analysis for cloud services: a comparative study on software as a service   Order a copy of this article
    by Dietmar Nedbal, Mark Stieninger 
    Abstract: The emergence of cloud computing has been triggering fundamental changes in the information technology landscape for years. The proliferation of cloud services gave rise to novel types of business model, the complexity of which results from numerous different factors critical to a successful adoption. However, when it comes to improvement activities by cloud service providers, owing to their multifacetedness, the challenge lies in figuring out where to start. Furthermore, the acuteness of actions to be taken varies among different settings. Thus, we propose success factor analysis as an approach to prioritise improvement activities according to their acuteness, which is thereby indicated by the gap between the priority and the actual performance of a particular factor. Results show that the factors with the overall highest gap are security and safety, trust, and costs. Overall, the strengths of cloud services are seen in technical features leading to a good ease of use, a positively perceived usefulness, and a broad availability.
    Keywords: success factor analysis; cloud computing; software as a service; cloud services; survey.

  • Data access control algorithm based on fine-grained cloud storage   Order a copy of this article
    by Qiaoge Xu 
    Abstract: With the development of network storage and cloud computing, cloud storage security has become a critical problem in cloud security technology. Customer data confidentiality must be ensured in an untrusted storage environment, and legitimate customer data must be protected from tampering. In order to ensure cloud storage security and achieve fine-grained data access control, a new fine-grained data access control algorithm is established based on the CP-ABE algorithm. The basic theory of the CP-ABE algorithm is studied in depth, and its flowchart is presented. Then a fine-grained cloud storage control scheme based on digital envelopes is put forward. In the structure of the new scheme, a trusted third party generates the public parameters and master key of the system, the data owner holds the original plaintext data of the client, normal users can read digital envelopes stored on the cloud storage server, and the cloud service provider (CSP) offers data storage for the user. The construction process of the new scheme is given, and the corresponding algorithm is designed. The new scheme reduces the user management complexity of the CSP while keeping the fine-grained access control and flexibility of the original scheme. A fine-grained access privilege tree is also designed to improve the robustness of the data access control algorithm and to describe the encryption strategy. Simulation analysis shows that the proposed data access control algorithm can effectively improve ciphertext search efficiency and achieve fine-grained access control in a cloud storage environment.
    Keywords: data access control algorithm; fine-grained cloud storage; searching efficiency.
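The access privilege tree mentioned in the abstract can be illustrated with a minimal evaluator for threshold-gate policies. The attribute names and the policy below are hypothetical, and real CP-ABE enforces such a policy cryptographically during decryption rather than by a boolean check.

```python
def satisfies(node, attributes):
    """Evaluate a CP-ABE-style access tree. A leaf is an attribute string;
    an internal node is (threshold, children). An AND gate has threshold
    equal to the number of children, an OR gate has threshold 1."""
    if isinstance(node, str):
        return node in attributes
    threshold, children = node
    return sum(satisfies(child, attributes) for child in children) >= threshold

# Hypothetical policy: (doctor AND cardiology) OR admin
POLICY = (1, [(2, ["doctor", "cardiology"]), "admin"])
```

Threshold gates subsume AND and OR, which is what gives the access structure its fine granularity: a (2, [a, b, c]) node, for instance, expresses "any two of a, b, c".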

  • Multi-objective optimisation of traffic signal control based on particle swarm optimisation   Order a copy of this article
    by Jian Li 
    Abstract: In order to relieve traffic jams, an improved particle swarm optimisation is applied to the multi-objective optimisation of traffic signal control. Firstly, a multi-objective optimisation model of traffic signals is constructed considering queue length, vehicle delay and exhaust emission; the multi-objective function is transformed into a single objective function through three weighted indexes. The vehicle delay and queue length models under traffic signal control are constructed by combining the Webster model and the Highway Capacity Manual delay model, and a vehicle exhaust emission model under traffic signal control is also constructed. The objective function and constraint conditions are then confirmed. Secondly, an improved particle swarm optimisation algorithm is established by combining the traditional particle swarm algorithm with a genetic algorithm. The mathematics of the particle swarm algorithm is studied in depth, and particles are endowed with a hybrid probability that is assigned randomly and independently of fitness value. In every iteration, a number of particles are selected according to the hybrid probability and placed in a mating pool, and the positions of offspring particles are calculated from the weighted positions of their parent particles. The value of the inertia factor is regulated by a nonlinear inertia-weight decrement function. Finally, simulation analysis is carried out using an intersection as the research object, with through-traffic flows ranging from 300 to 450 pcu and left-turn flows ranging from 250 to 380 pcu. The optimal performance index is obtained, and the new multi-objective optimisation model gives better optimal results, and hence a better traffic control effect, than the traditional multi-objective optimisation algorithm.
    Keywords: particle swarm optimisation; traffic signal control; intersection.
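A minimal sketch of particle swarm optimisation with a nonlinear (here quadratic) inertia-weight decrement, in the spirit of the improvement described above. The bounds, coefficients and test function are illustrative assumptions, and the hybrid/genetic crossover step of the paper is omitted.

```python
import random

def pso(cost, dim, n=20, iters=100, wmax=0.9, wmin=0.4, c1=1.5, c2=1.5,
        vmax=4.0, seed=1):
    """Minimise `cost` over [-10, 10]^dim; the inertia weight w decreases
    nonlinearly (quadratically) from wmax to wmin over the iterations."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for t in range(iters):
        w = wmin + (wmax - wmin) * (1.0 - t / iters) ** 2  # nonlinear decrement
        for i in range(n):
            for d in range(dim):
                v = (w * vel[i][d]
                     + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                     + c2 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))  # velocity clamping
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

The quadratic decrement keeps w high (exploration) longer early on and drops it faster near the end (exploitation), one common alternative to the linear schedule.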

  • Policies and mechanisms for enhancing the resource management in cloud computing: a performance perspective   Order a copy of this article
    by Mitali Bansal, Sanjay Kumar Malik, Sanjay Kumar Dhurandher, Issac Woungang 
    Abstract: Resource management is among the critical challenges in cloud computing since it can affect its performance, cost and functionality. In this paper, a survey of the policies and mechanisms for enhancing resource management in cloud computing is proposed. From a performance perspective, several resource management schemes for cloud computing are investigated and qualitatively compared in terms of various parameters, such as performance, response time, scalability, pricing factor, throughput and accuracy, providing a fundamental knowledge base for researchers in the cloud computing area. We also classify various cloud computing techniques based on policies such as capacity allocation, admission control, load balancing and energy optimisation. Furthermore, we divide the defined techniques on the basis of parameters such as time span (low, medium, high), reliability, performance and availability, to name a few.
    Keywords: cloud computing; resource management; load balancing; policies and mechanisms; performance perspective.

  • Domo Farm 4.0   Order a copy of this article
    by Silvia Angeloni 
    Abstract: The paper explains and discusses an innovative agricultural appliance, based on vertical farming and hydroponics. The innovative and smart model was launched by a brilliant woman, winner of several awards. Applying her engineering skills, the female entrepreneur has set up a modern company, where technology and agriculture are perfectly integrated in a sustainable way to prevent negative and damaging environmental effects. Recently, the company has developed an automatic hydroponic greenhouse appliance for empowering individuals to grow crops at home. The household hydroponic appliance is based on sensors and smart technologies. The environmental and economic benefits and potentiality of the innovative appliance are highlighted.
    Keywords: big data; Domo Farm 4.0; internet of things; RobotFarm; sensors; smart energy management; smart farm; sustainability.

  • A hybrid collaborative filtering recommendation algorithm: integrating content information and matrix factorisation   Order a copy of this article
    by Jing Wang, Arun Sangaiah, Wei Liu 
    Abstract: Matrix factorisation is one of the most popular techniques in recommendation systems. However, matrix factorisation still suffers from the cold start problem. Moreover, there are too many parameters in the matrix factorisation model, producing a complicated computation. In this paper, we present a hybrid recommendation algorithm that integrates user and item content information with matrix factorisation. First, based on user or item content information, similar user or item neighbour sets are generated. Through these neighbour sets, user or item rating preferences can be evaluated in advance. Incorporating user and item preferences into the matrix factorisation model, we obtain the final prediction model. Finally, the momentum stochastic gradient descent method is used to optimise parameter learning. Experimental results on a real dataset show that our algorithm yields the best performance in terms of MAE and RMSE when compared with other classical matrix factorisation recommendation algorithms.
    Keywords: recommender system; collaborative filtering; matrix factorisation; momentum stochastic gradient descent.
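The momentum-SGD training of a plain matrix factorisation model can be sketched as below. This covers only the baseline R ≈ P·Qᵀ part, not the content-based preference terms that are the paper's contribution; hyperparameter values are illustrative.

```python
import random

def train_mf(ratings, n_users, n_items, k=2, lr=0.005, mom=0.8, reg=0.02,
             epochs=500, seed=0):
    """Factorise a sparse rating list [(user, item, rating), ...] into
    user factors P and item factors Q using momentum SGD with L2 regularisation."""
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    vP = [[0.0] * k for _ in range(n_users)]  # momentum buffers
    vQ = [[0.0] * k for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - sum(P[u][f] * Q[i][f] for f in range(k))  # prediction error
            for f in range(k):
                gp = -e * Q[i][f] + reg * P[u][f]
                gq = -e * P[u][f] + reg * Q[i][f]
                vP[u][f] = mom * vP[u][f] - lr * gp   # velocity update
                vQ[i][f] = mom * vQ[i][f] - lr * gq
                P[u][f] += vP[u][f]
                Q[i][f] += vQ[i][f]
    return P, Q

def predict(P, Q, u, i):
    """Predicted rating is the dot product of the two factor vectors."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))
```

The momentum term accumulates past gradients, which typically speeds convergence on the flat regions of the factorisation loss compared with plain SGD.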

  • Classification of cognitive algorithms for managing services used in cloud computing   Order a copy of this article
    by Lidia Ogiela, Makoto Takizawa, Urszula Ogiela 
    Abstract: This paper presents a new idea of cognitive systems dedicated to cloud computing: the background, introduction and description of service management procedures and algorithms dedicated to cloud computing and its infrastructure. Cognitive methods are based on semantic description and interpretation procedures, where semantic analysis is used to extract the meaning of the analysed data; such meaning-oriented analysis is also possible in management processes and can be applied in different application areas. The described idea is dedicated to secure service management procedures, especially at the cloud and fog stages. The proposed algorithms of cognitive service management are presented and described by use of semantic aspects. At both the cloud and fog stages it is possible to realise management procedures by applying secure methods and protocols, and this paper presents service management protocols in the cloud and in the fog, together with sharing techniques for data security in cloud computing.
    Keywords: cognitive algorithms; fog and cloud computing; service management protocols; cognitive data security.

  • Performance analysis of StaaS on IoT devices in fog computing environment using embedded systems   Order a copy of this article
    by José Dos Santos Machado, Danilo Souza Silva, Raphael Fontes, Adauto Menezes, Edward Moreno, Admilson Ribeiro 
    Abstract: This work presents the concept of fog computing, its theoretical contextualisation and related works, and analyses the use of fog computing to provide StaaS (Storage as a Service) on IoT devices using embedded systems platforms, comparing the results with those obtained by a high-performance server. In this article, we use OwnCloud and the SmashBox benchmark (for data transfer analysis). The results show that implementing this service on embedded systems devices can be a good alternative for mitigating the data-storage limitations that currently affect IoT devices.
    Keywords: fog computing; cloud computing; IoT; embedded systems; StaaS.

  • Model for generation of social network considering human mobility and interaction   Order a copy of this article
    by Naoto Fukae, Hiroyoshi Miwa, Akihiro Fujihara 
    Abstract: The structure of an actual network in the real world often has the scale-free property, whereby the degree distribution follows a power law. A generation mechanism for a human relations network must consider human mobility and interactions because, in general, a person moves around, meets other people, and forms human relations stochastically. However, few models so far consider human mobility. In this paper, we propose a mathematical model generating a human relations network, as fundamental research on a usage model for utility computing. We show by numerical experiments that a network generated by the proposed model has the scale-free property, that its clustering coefficient follows a power law, and that its average distance is small. This means that the proposed model can explain the mechanism generating an actual human relations network.
    Keywords: human relations network; scale-free; human mobility; human interactions; homesick Levy walk; network generation model.
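A toy version of the move-and-meet mechanism can be sketched as follows. It uses a simple random walk as a stand-in for the homesick Lévy walk of the paper and is only meant to illustrate how mobility plus encounters yields a relations graph.

```python
import random

def generate_contact_network(n_agents=50, size=20, steps=2000, seed=7):
    """Agents walk randomly on a toroidal grid; whenever two agents occupy
    the same cell, an undirected 'human relation' edge is recorded."""
    rng = random.Random(seed)
    pos = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_agents)]
    edges = set()
    for _ in range(steps):
        # Each agent takes one random step (torus wrap-around).
        pos = [((x + rng.choice((-1, 0, 1))) % size,
                (y + rng.choice((-1, 0, 1))) % size) for x, y in pos]
        # Agents sharing a cell meet and form a relation.
        cells = {}
        for i, p in enumerate(pos):
            cells.setdefault(p, []).append(i)
        for members in cells.values():
            for a in range(len(members)):
                for b in range(a + 1, len(members)):
                    edges.add((members[a], members[b]))
    return edges
```

In the paper's model the mobility pattern (homesick Lévy walk) biases who meets whom, which is what shapes the degree distribution towards the scale-free property; a plain random walk, as here, does not guarantee that.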

  • Algorithmic node classification in AND/OR mobile workflow graph   Order a copy of this article
    by Ihtisham Ali, Susmit Bagchi 
    Abstract: Next-generation data-intensive applications in various fields of science and engineering employ complex workflow graph execution models in dynamic networks. However, in dynamic networks, heterogeneity and the mobility of nodes result in low efficiency owing to end-to-end delay in the execution of complex workflow graphs. Supporting such data-intensive workflows and optimising their performance require analysis of complex workflow graphs in order to meet objectives such as deadlines and fast execution. A major limitation of current workflow models is the lack of structural stability to visualise a complex workflow graph with node mobility. In this paper, we address this problem by proposing a hybrid AND/OR mobile workflow graph (MWG) model to visualise a fully conditioned complex workflow graph with node mobility. Moreover, this paper proposes a nodes validity detection (NVD) algorithm for classifying the total number of nodes in the AND/OR MWG, and a nodes criticality detection (NCD) algorithm to identify the set of critical nodes in the AND/OR MWG. The proposed algorithms enable efficient analysis, mapping and scheduling of complex workflow graphs in a dynamic network environment. The NVD and NCD algorithms are implemented in Java and evaluated on a testbed. A regression analysis of the projected algorithmic performance is presented, along with a detailed comparative analysis of the evaluation metrics.
    Keywords: workflow graph; dynamic networks; mobile node; nodes classification; critical node.

Special Issue on: Recent Developments in Parallel, Distributed and Grid Computing for Big Data

  • GPU accelerated video super-resolution using transformed spatio-temporal exemplars   Order a copy of this article
    by Chaitanya Pavan Tanay Kondapalli, Srikanth Khanna, Chandrasekaran Venkatachalam, Pallav Kumar Baruah, Kartheek Diwakar Pingali, Sai Hareesh Anamandra 
    Abstract: Super-resolution (SR) is the method of obtaining a high-resolution (HR) image or image sequence from one or more low-resolution (LR) images of a scene. Super-resolution has been an active area of research in recent years owing to its applications in defence, satellite imaging, video surveillance and medical diagnostics. In a broad sense, SR techniques can be classified into external database driven and internal database driven approaches. The training phase in the first approach is computationally intensive, as it learns the LR-HR patch relationships from huge datasets, while the test procedure is relatively fast. In the second approach, the super-resolved image is constructed directly from the available LR image, eliminating the need for any learning phase, but the testing phase is computationally intensive. Recently, Huang et al. (2015) proposed a transformed self-exemplar internal database technique which takes advantage of the fractal nature of an image by expanding the patch search space using geometric variations. This method fails if there is no patch redundancy within and across image scales, and also if it fails to detect the vanishing points (VP) used to determine the perspective transformation between the LR image and its subsampled form. In this paper, we expand the patch search space by taking advantage of the temporal dimension of image frames in the scene video, and we also use an efficient VP detection technique by Lezama et al. (2014). We are thereby able to successfully super-resolve even the failure cases of Huang et al. (2015) and achieve an overall improvement in PSNR. We also focused on reducing the computation time by exploiting the embarrassingly parallel nature of the algorithm, achieving a speedup of 6x on multi-core, up to 11x on GPU, and around 16x on a hybrid multi-core and GPU platform by parallelising the proposed algorithm. Using our hybrid implementation, we achieved a 32x super-resolution factor in limited time. We also demonstrate superior results for the proposed method compared with current state-of-the-art SR methods.
    Keywords: super-resolution; self-exemplar; perspective geometry; temporal dimension; vanishing point; GPU; multicore.

  • Energy-efficient fuzzy-based approach for dynamic virtual machine consolidation   Order a copy of this article
    by Anita Choudhary, Mahesh Chandra Govil, Girdhari Singh, Lalit K. Awasthi, Emmanuel S. Pilli 
    Abstract: In a cloud environment, overload leads to performance degradation and Service Level Agreement (SLA) violation, while underload results in inefficient use of resources and needless energy consumption. Dynamic Virtual Machine (VM) consolidation is considered an effective solution to both overload and underload problems. However, dynamic VM consolidation is not a trivial solution, as it can itself lead to violation of the negotiated SLA owing to the runtime overheads of VM migration. Further, dynamic VM consolidation approaches need to answer several questions: (i) when to migrate a VM, (ii) which VM to migrate, and (iii) where to migrate the selected VM. In this work, efforts are made to develop a comprehensive approach that achieves better solutions to these problems. In the proposed approach, future forecasting methods for host overload detection are explored; a fuzzy logic based VM selection approach that enhances the performance of the VM selection strategy is developed; and a VM placement algorithm based on destination CPU use is also developed. The performance evaluation of the proposed approaches is carried out on the CloudSim toolkit using the PlanetLab dataset. The simulation results exhibit significant improvements in the number of VM migrations, energy consumption, and SLA violations.
    Keywords: cloud computing; virtual machines; dynamic virtual machine consolidation; exponential smoothing; fuzzy logic.
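The forecasting-based host overload detection step can be illustrated with single exponential smoothing, which matches the abstract's "exponential smoothing" keyword; the threshold and smoothing factor below are illustrative assumptions, not the paper's tuned values.

```python
def forecast_next(utilisation, alpha=0.5):
    """Single exponential smoothing over a host's CPU-utilisation history;
    the final smoothed value serves as the one-step-ahead forecast."""
    s = utilisation[0]
    for u in utilisation[1:]:
        s = alpha * u + (1 - alpha) * s
    return s

def is_overloaded(utilisation, threshold=0.8, alpha=0.5):
    """Flag a host as (about to be) overloaded when the forecast exceeds
    the utilisation threshold, triggering VM selection and migration."""
    return forecast_next(utilisation, alpha) > threshold
```

Forecasting rather than reacting to the current reading is what lets consolidation migrate VMs before an SLA violation occurs instead of after.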

Special Issue on: Emergent Peer-to-Peer Network Technologies for Ubiquitous and Wireless Networks

  • An improved energy efficient multi-hop ACO-based intelligent routing protocol for MANET   Order a copy of this article
    by Jeyalaxmi Perumaal, Saravanan R 
    Abstract: A Mobile Ad-hoc Network (MANET) consists of a group of mobile nodes that communicate without any supporting centralised structure. Routing in a MANET is difficult because of its dynamic features, such as high mobility, constrained bandwidth and link failures due to energy loss. The objective of the proposed work is to implement an intelligent routing protocol. Selection of the best hops is mandatory to provide good throughput in the network; therefore Ant Colony Optimisation (ACO) based intelligent routing is proposed. The ACO technique selects the best intermediate hops for intelligent routing, which greatly reduces network delay and link failures by validating the co-ordinator nodes. The best co-ordinator nodes are selected as intermediate hops in the intelligent routing path. The performance is evaluated using the NS2 simulation tool, and the metrics considered for evaluation are the delivery and loss rates of sent data, throughput, network lifetime, delay and energy consumption.
    Keywords: ant colony optimisation; intelligent routing protocol; best co-ordinator nodes; MANET.
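The core ACO machinery behind such a protocol can be sketched generically. This is not the paper's exact protocol: the heuristic term (here, residual energy of a neighbour) and the parameter values are illustrative assumptions; only the pheromone-weighted choice rule and the evaporate-then-deposit update are standard ACO.

```python
# Generic ACO next-hop sketch (assumed heuristic, not the paper's protocol).
# Probability of choosing neighbour j is proportional to
# pheromone[j]**alpha * heuristic[j]**beta.

def next_hop_probs(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Return a probability distribution over candidate next hops."""
    weights = {j: (pheromone[j] ** alpha) * (heuristic[j] ** beta)
               for j in pheromone}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def evaporate_and_deposit(pheromone, path, rho=0.1, q=1.0):
    """Evaporate everywhere, then deposit on links of a successful path."""
    for j in pheromone:
        pheromone[j] *= (1.0 - rho)
    for j in path:
        pheromone[j] += q
    return pheromone
```

Links on frequently successful routes accumulate pheromone and attract more traffic, while links that fail (e.g. through energy loss) evaporate away, which is how ACO routing adapts to MANET dynamics.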

  • Analysis of spectrum handoff schemes for cognitive radio networks considering secondary user mobility   Order a copy of this article
    by K.S. Preetha, S. Kalaivani 
    Abstract: There has been a gigantic spike in the usage and development of wireless devices since wireless technology came into existence. This has contributed to a very serious problem of spectrum unavailability, or spectrum scarcity. The solution to this problem comes in the form of cognitive radio networks, where secondary users (SUs), also known as unlicensed users, make use of the spectrum in an opportunistic manner. The SU uses the spectrum in a manner such that the primary or licensed user (PU) doesn't face interference above a threshold level of tolerance. Whenever a PU comes back to reclaim its licensed channel, the SU using it needs to perform a spectrum handoff (SHO) to another channel that is free of PUs. This way of functioning is termed spectrum mobility, which can be achieved by means of SHO. Initially, the SUs continuously sense the channels to identify an idle channel. Errors in channel sensing are possible, so a detection theory is put forth to analyse the spectrum sensing errors with the receiver operating characteristic (ROC), considering false alarm probability, miss detection and detection probability. In this paper, we meticulously investigate and analyse the probability of spectrum handoff (PSHO), and hence the performance of spectrum mobility, with Lognormal-3 and Hyper-Erlang distribution models, considering SU call duration and the residual time of availability of spectrum holes as measurement metrics designed for tele-traffic analysis.
    Keywords: cognitive radio networks; detection probability; probability of a miss; SNR; false alarm probability; primary users; secondary users.

  • Link survivability rate-based clustering for QoS maximisation in VANET   Order a copy of this article
    by D. Kalaivani, P.V.S.S.R. Chandra Mouli Chandra Mouli 
    Abstract: The clustering technique is used in VANETs to manage and stabilise topology information. The major requirements of this technique are data transfer through the group of nodes without disconnection, node coordination, minimised interference between nodes, and reduction of the hidden terminal problem. Data communication for the nodes in a cluster is performed by a cluster head (CH). The major issues in clustering approaches are improper definition of the cluster structure and maintenance of the cluster structure in a dynamic network. To overcome these issues, a link- and weight-based clustering approach is developed, along with a distributed dispatching information table (DDIT) that reuses significant information to avoid data transmission failure. In this paper, the clustering algorithm is designed on the basis of the relative velocity of two same-directional vehicles, forming a cluster with a number of nodes in a VANET. The CH is then selected based on the link survival rate of the vehicle to provide emergency messages to the different vehicles in the cluster, along with the stored data packet information in the DDIT table for fault prediction. Finally, an efficient medium access control (MAC) protocol is used to prioritise messages so as to avoid spectrum shortage for emergency messages in the cluster. The comparative analysis of the proposed link-based CH selection with DDIT (LCHS-DDIT) against existing methods, such as clustering-based cognitive MAC (CCMAC), multichannel CR ad-hoc network (MCRAN) and dedicated short-range communication (DSRC), proves the effectiveness of LCHS-DDIT regarding throughput, packet delivery ratio and routing control overhead with minimum transmission delay.
    Keywords: vehicular ad-hoc networks; link survival rate; control channel; service channel; medium access control; roadside unit; on-board unit.
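A link-survival-based CH election of the kind described can be sketched as follows. The survival formula (time until relative motion carries a neighbour out of radio range) and all names are illustrative assumptions, not the paper's actual metric.

```python
# Illustrative link-survival sketch (assumed formula, not the paper's metric).

def link_survival_time(gap, rel_speed, radio_range):
    """Seconds until the inter-vehicle gap grows beyond radio range.
    gap: current distance (m); rel_speed: separation speed (m/s),
    <= 0 means the vehicles are closing or holding distance."""
    if rel_speed <= 0:
        return float("inf")  # link is stable for same-speed vehicles
    return (radio_range - gap) / rel_speed

def elect_cluster_head(vehicles, radio_range):
    """vehicles: dict id -> list of (gap, rel_speed) to its neighbours.
    Elect the vehicle whose weakest link survives the longest."""
    return max(vehicles, key=lambda v: min(
        link_survival_time(g, s, radio_range) for g, s in vehicles[v]))

vehicles = {"a": [(50, 5), (100, 10)], "b": [(50, 5), (20, 1)]}
```

Electing on the worst-case link rather than the average biases the choice towards vehicles that can keep the whole cluster connected, which is what stabilises the topology.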

Special Issue on: Emerging Scalable Edge Computing Architectures and Intelligent Algorithms for Cloud-of-Things and Edge-of-Things

  • A survey on fog computing and its research challenges   Order a copy of this article
    by Jose Dos Santos Machado, Edward David Moreno, Admilson De Ribamar Lima Ribeiro 
    Abstract: This paper reviews fog computing, a new paradigm of distributed computing, and presents its concept, characteristics and application areas. It performs a literature review on the problems of its implementation and analyses its research challenges, such as security issues, operational issues and standardisation. We show and discuss that many questions still need to be researched in academia before its implementation becomes a reality, but it is clear that its adoption is inevitable for the internet of the future.
    Keywords: fog computing; edge computing; cloud computing; IoT; distributed computing; cloud integration to IoT.

  • Hybrid coherent encryption scheme for multimedia big data management using cryptographic encryption methods   Order a copy of this article
    by Stephen Dass, J. Prabhu 
    Abstract: In today's world of technology, data plays an imperative role in many different technical areas. Ensuring data confidentiality, integrity and security over the internet, across different media and applications, is a challenging task. Data generated by multimedia and IoT devices is another huge source of big data on the internet. When sensitive and confidential data are accessed by attackers, serious security and privacy threats arise. Data encryption is the mechanism to forestall this issue. Many encryption techniques are used for multimedia and IoT, but when massive volumes of data are involved, the computational challenges grow. This paper designs and proposes a new coherent encryption algorithm that addresses the issue of IoT and multimedia big data. The proposed system achieves a strong cryptographic effect without holding much memory and allows easy performance analysis. Handling huge data with the help of a GPU is included in the proposed system to make data processing more efficient. The proposed algorithm is compared with other symmetric cryptographic algorithms, such as AES, DES, 3-DES, RC6 and MARS, based on architecture, flexibility, scalability and security level, as well as on computational running time and throughput for both encryption and decryption processes. The avalanche effect of the proposed system is calculated to be 54.2%. The proposed framework secures multimedia against real-time attacks better than the existing systems.
    Keywords: big data; symmetric key encryption; analysis; security; GPU; IoT; multimedia big data.

  • A study on data deduplication schemes in cloud storage   Order a copy of this article
    by Priyan Malarvizhi Kumar, Usha Devi G, Shakila Basheer, Parthasarathy P 
    Abstract: Digital data is growing at an immense rate day by day, and finding efficient storage and security mechanisms is a challenge. Cloud storage has already gained popularity because of the huge data storage capacity that cloud service providers make available to users through storage servers. When many users upload data to the cloud, much of it can be redundant, which wastes storage space and transmission bandwidth. To guarantee efficient storage, handling this redundant data is very important, and this is done through deduplication. The major challenge for deduplication is that most users upload data in encrypted form for privacy and security. There are many prevailing mechanisms for deduplication, some of which handle encrypted data as well. The purpose of this paper is to survey the existing deduplication mechanisms in cloud storage and to analyse the methodologies used by each of them.
    Keywords: deduplication; convergent encryption; cloud storage.
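Convergent encryption, named in the keywords, is the standard trick that reconciles encryption with deduplication: the key is derived from the content itself, so identical plaintexts always yield identical ciphertexts that the server can deduplicate. A minimal sketch follows; the SHA-256 counter-mode keystream here stands in for a real block cipher and is an illustrative choice.

```python
import hashlib

# Sketch of convergent encryption (keystream construction is illustrative).

def convergent_key(data):
    """Key = hash of the plaintext, so equal files share a key."""
    return hashlib.sha256(data).digest()

def keystream(key, n):
    """Deterministic n-byte keystream derived from the key (counter mode)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(data):
    """Return (key, ciphertext); XOR stands in for a real cipher here."""
    key = convergent_key(data)
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    return key, ct
```

Because two users encrypting the same file produce byte-identical ciphertexts, the storage server can keep a single copy indexed by its hash; the survey's trade-off is that this determinism also leaks equality of files.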

Special Issue on: INCOS 2018 Applied Soft Computing for Optimisation and Parallel Applications

  • An enhanced jaya algorithm for solving nurse scheduling problem   Order a copy of this article
    by Ahmed Ali, Walaa El-Ashmawi 
    Abstract: The Nurse Scheduling Problem (NSP) is one of the main optimisation problems that require an efficient assignment of a number of nurses to a number of shifts in order to cover a hospital's planning-horizon demands. NSP is an NP-hard problem subject to a set of hard and soft constraints. Such problems can be efficiently solved by optimisation algorithms, such as meta-heuristic algorithms. In this paper, we enhance one of the most recent meta-heuristic algorithms, called jaya, for solving the NSP. The enhanced algorithm is called EJNSP (Enhanced Jaya for Nurse Scheduling Problem). EJNSP focuses on maximising the nurses' preferences about shift requests and minimising under- and over-staffing. EJNSP has two main strategies. First, it randomly generates an initial effective schedule that satisfies the set of constraints. Second, it uses swap operators to satisfy the set of soft constraints and achieve an effective schedule. A set of experiments has been applied to benchmark datasets with different numbers of nurses and shifts. The experimental results demonstrate that the EJNSP algorithm achieves effective results for solving the NSP, minimising under- and over-staffing and satisfying the nurses' preferences.
    Keywords: nurse scheduling problem; meta-heuristic algorithms; jaya optimisation algorithm.
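The canonical (continuous) jaya update that EJNSP builds on moves each solution towards the population's best member and away from its worst, with no algorithm-specific tuning parameters. The sketch below shows it on a toy sphere function; the NSP itself is discrete and needs the swap operators described in the abstract, so this is only the underlying update rule, not EJNSP.

```python
import random

# Canonical jaya step: x' = x + r1*(best - |x|) - r2*(worst - |x|),
# with greedy acceptance of the candidate only if it improves fitness.

def jaya_step(pop, fitness):
    best = min(pop, key=fitness)
    worst = max(pop, key=fitness)
    new_pop = []
    for x in pop:
        r1, r2 = random.random(), random.random()
        cand = [xi + r1 * (bi - abs(xi)) - r2 * (wi - abs(xi))
                for xi, bi, wi in zip(x, best, worst)]
        # Keep the better of the old solution and the candidate
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    return new_pop

def sphere(x):
    """Toy minimisation target (optimum 0 at the origin)."""
    return sum(xi * xi for xi in x)

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
init_best = min(sphere(x) for x in pop)
for _ in range(100):
    pop = jaya_step(pop, sphere)
```

Greedy acceptance makes the best fitness in the population monotonically non-increasing, which is what makes jaya attractive as a base for a scheduling heuristic.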

  • An adaptive technique for cost reduction in cloud data centre environment   Order a copy of this article
    by Hesham Elmasry, Ayman Khedr, Mona Nasr 
    Abstract: The growing interest in using cloud computing has increased the energy consumption of data centres, which has become a critical issue. High energy consumption not only translates to high operational cost but also reduces the profit margin of cloud providers and leads to high carbon emissions, which are not environmentally friendly. Therefore, we need energy-saving solutions to minimise the negative impact of cloud computing. To this end, we propose an Energy Saving Load Balancing (ESLB) technique that can save energy in the cloud server while maintaining the Service Level Agreement (SLA), which covers quality of service and resource usage between the cloud service provider and cloud customers. This paper presents the ESLB technique to enhance response time and resource usage, and to reduce both energy consumption and carbon dioxide emissions so as to mitigate the negative impact of cloud computing on the environment.
    Keywords: data centre; energy efficiency; green cloud computing; load balancing; quality of service.

  • Novel mobile palmprint databases for biometric authentication   Order a copy of this article
    by Mahdieh Izadpanahkakhk, Seyyed Mohammad Razavi, Mehran Taghipour-Gorjikolaie, Seyyed Hamid Zahiri, Aurelio Uncini 
    Abstract: Mobile palmprint biometric authentication has attracted a lot of attention as an interesting analytics tool for representing discriminative features. Despite the advances in this technology, there are some challenges, including the lack of sufficient data and of templates invariant to rotation, illumination and translation. In this paper, we provide two mobile palmprint databases and address the aforementioned challenges via deep convolutional neural networks. To the best of our knowledge, this paper is the first study in which mobile palmprint images were acquired in special views and then evaluated via deep learning training algorithms. To evaluate our mobile palmprint images, some well-known convolutional neural networks are applied to the verification task. Using these architectures, the best results achieved are a classification cost of 0.118 in the training phase and a classification accuracy of 0.925 in the test phase, obtained in the 1-to-1 matching procedure.
    Keywords: training algorithms; biometric authentication; palmprint verification; mobile devices; deep learning; convolutional neural network; feature extraction.
    DOI: 10.1504/IJGUC.2019.10019524

  • IoT-based intensive care secure framework for patient monitoring and tracking   Order a copy of this article
    by Lamia Omran, Kadry Ezzat, Alaa Bayoumi, Ashraf Darwich, Aboul Hassanien 
    Abstract: This paper aims to design a prototype of a real-time patient monitoring system. The proposed framework is used to quantify the physical parameters of the patient, such as body temperature, heart rate and ECG, observed with the assistance of sensors. The collected data are sent to the cloud, then to the nurse station, the specialist and the patient's tablet or the web application. In this framework, the patient's health is checked consistently and the data obtained are transmitted through the networks. If any irregularity is noticed in the patient's signs, it is sent to nurses and doctors for suggestions to help the patient. The system is implemented with an Arduino advanced controller, and simulation results are obtained. The smart intensive care unit (SICU) provides a new way of monitoring the health of patients in order to improve healthcare systems and patients' care and safety. The cloud system is provided by a group of micro-services hosted on many servers that simulate a small cloud system. The patient's data are secured in this framework by using an OAuth server to authenticate the users and the sensors and to generate tokens. This system can assist doctors and nurses in their missions and improve the healthcare system.
    Keywords: intensive care unit; internet of things; patient health monitoring; cloud; Arduino; security.

  • An autonomic mechanism based on ant colony pattern for detecting the source of incidents in complex enterprise systems   Order a copy of this article
    by Kamaleddin Yaghoobirafi, Eslam Nazemi 
    Abstract: The variability and complexity of modern information systems have motivated many research studies on autonomic computing and self-adaptation paradigms. Although the final goal of these paradigms is to be applied at the level of business and enterprise solutions, the majority of current solutions do not address this level, owing to the challenges of attaining such self-adaptation. In complex enterprises, various events, such as the failure of a server, the malfunction of an application or the inconsistency of data items, may happen. In many cases, these events are caused by a change or an incident in a resource that belongs to another layer of the information technology architecture. This makes the detection of complex events very difficult. In this paper, a mechanism is proposed for detecting the main source of failures and deficiencies at any position in the information technology architectural layers and for recognising appropriate alternative solutions. This mechanism uses a pheromone deposition approach inspired by Ant Colony Optimisation (ACO). The proposed mechanism is expected to facilitate the coordination and detection of events between resources that are not digitally represented. For the sake of evaluation, two case studies are considered that investigate inter-layer adaptation scenarios. The results show that, with the proposed mechanism, the adaptation process can be completed in less time and with more scalability than with classic approaches. This enhancement is due to the possibility of directly recognising an affected dependency path.
    Keywords: autonomic computing; ant colony pattern; interoperability; complex enterprise systems; coordination.

  • Evaluation prediction techniques to achieve optimal biomedical analysis   Order a copy of this article
    by Samaher Al_Janabi, Muhammed Abaid Mahdi 
    Abstract: Prediction techniques in data mining have been widely used to support optimised future decision-making in many different fields, including healthcare and medical diagnosis (HMD). Obtaining valuable information is a significant task of the prediction process: data are analysed and summarised from various perspectives. This work not only presents and explores the previous related work on prediction techniques in the HMD sector, but also analyses their main techniques. These techniques include Chi-squared Automatic Interaction Detection (CHAID), Exchange Chi-squared Automatic Interaction Detection (ECHAID), Random Forest Regression and Classification (RFRC), Multivariate Adaptive Regression Splines (MARS), and Boosted Tree Classifiers and Regression (BTCR). This paper presents the general properties, a summary, and the advantages and disadvantages of each. Most importantly, the analysis depends on the parameters that have been used for building a prediction model with each technique. In addition, the techniques are classified according to their main and secondary parameters, and the presence and absence of parameters are compared in order to identify how those parameters are shared among the techniques. The main and optional steps of the prediction procedure are therefore comparatively analysed and presented in this paper. As a result, the techniques with no randomness and with a mathematical basis are the most powerful and fastest compared with the others. This work will allow the HMD sector to take a better look at its future decisions.
    Keywords: biomedical analysis; data mining; prediction techniques; healthcare problem; parameters.
    DOI: 10.1504/IJGUC.2019.10020511

  • Facial expression recognition using geometric features and modified hidden Markov model   Order a copy of this article
    by Mayur Rahul, Narendra Kohli, Rashi Agarwal, Sanju Mishra 
    Abstract: This work proposes a geometric feature-based descriptor for efficient Facial Expression Recognition (FER) that can be used for better human-computer interaction. Although much research has focused on descriptor-based FER, different problems remain to be solved regarding noise, recognition rate, processing time and error rates. The JAFFE dataset helps to make an FER system more reliable and efficient, as its pixels are distributed uniformly. The FER system depends on the feature extraction technique. The proposed system introduces novel geometric features to extract robust features from the images, and a layered Hidden Markov Model (HMM) as a classifier. This HMM consists of two layers: the bottom layer represents the atomic expressions and the upper layer represents combinations of atomic expressions. The features extracted in the feature extraction step are used to train the different facial expressions. Finally, the trained HMM is used to recognise seven facial expressions: anger, disgust, fear, joy, sadness, surprise and neutral. The proposed framework is compared with existing systems, achieving a recognition rate of 84.7% against 85% for the others. It is also tested in terms of recognition rate, processing time and error rate, and found to be superior to the other existing systems.
    Keywords: geometric features; hidden Markov model; Baum-Welch method; Viterbi method; forward method; state sequences; probability; human-computer interaction.
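The Viterbi method named in the keywords is the decoding step of such an HMM classifier: given a sequence of per-frame feature symbols, it recovers the most likely hidden state sequence. The sketch below uses a toy two-state model (a hypothetical neutral/smile pair with made-up probabilities); the real system's states and emission model would come from the geometric features.

```python
# Minimal Viterbi decoder over a toy expression HMM (probabilities assumed).

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden state sequence for obs."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor state for s at time t
            prev, p = max(((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                          key=lambda rp: rp[1])
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # Trace back from the most probable final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ("neutral", "smile")
start_p = {"neutral": 0.7, "smile": 0.3}
trans_p = {"neutral": {"neutral": 0.8, "smile": 0.2},
           "smile":   {"neutral": 0.3, "smile": 0.7}}
emit_p = {"neutral": {"flat": 0.9, "raised": 0.1},
          "smile":   {"flat": 0.2, "raised": 0.8}}
```

In the layered architecture, a decoder like this runs in the bottom layer over atomic expressions, and the upper layer treats the decoded atomic sequence as its own observation stream.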

Special Issue on: Current Trends in Ambient Intelligence-Enabled Internet of Things and Web of Things Interface Vehicular Systems

  • Hybrid energy-efficient and QoS-aware algorithm for intelligent transportation system in internet of things   Order a copy of this article
    by N.N. Srinidhi, G.P. Sunitha, S. Raghavendra, S.M. Dilip Kumar, Victor Chang 
    Abstract: The Internet of Things (IoT) consists of a large number of energy-constrained devices that are configured to improve the operational efficiency of several industrial applications. It is essential to reduce the energy use of every device deployed in the IoT network without compromising the quality of service (QoS) for intelligent transportation systems. Here, the difficulty of balancing QoS provisioning against energy efficiency for the intelligent transportation system is considered. To achieve this objective, a multi-objective optimisation problem is devised with the aim of estimating the outage performance of the clustering process and the network lifetime. Subsequently, a Hybrid Energy-Efficient and QoS-Aware (HEEQA) algorithm is proposed that combines quantum particle swarm optimisation (QPSO) with an improved non-dominated sorting genetic algorithm (NSGA) to achieve energy balance among the devices; the MAC layer parameters are then tuned to reduce the energy consumption of the devices further. NSGA is applied to solve the multi-objective optimisation problem, and the QPSO algorithm is used to find the optimal cooperative nodes and cluster heads in the clusters. The simulation results show that the HEEQA algorithm attains a better balance between energy efficiency and QoS provisioning in the clustering process by minimising energy consumption, delay and transmission overhead and maximising network lifetime, throughput and delivery ratio, and is thus well suited to intelligent transportation applications.
    Keywords: energy efficiency; intelligent transportation system; IoT; network lifetime; QoS.

  • Analysing control plane scalability issue of software-defined wide area network using simulated annealing technique   Order a copy of this article
    by Kshira Sahoo, Somula Ramasubbareddy, B. Balamurugan, B. Vikram Deep 
    Abstract: In Software Defined Networks (SDN), the decoupling of the control logic from the data plane enables vendor-independent policies and programmability, and provides numerous other advantages. However, since its inception, SDN has been the subject of a wide range of criticism, mainly related to the scalability of the control plane. To address these limitations, recent architectures have supported the deployment of multiple SDN controllers. Using multiple controllers in the network gives rise to the controller placement problem (CPP). The placement problem is a major issue for wide area networks because significant strategies need to be considered while placing the controllers. Most placement strategies focus on propagation latency, because it is a critical factor in real networks. In this paper, the placement problem is formulated as an optimisation problem and the Simulated Annealing (SA) technique is used to analyse it. This technique is a probabilistic single-solution-based search method inspired by the annealing process of metallurgy. Further, we investigate the behaviour of SA with four different neighbouring-solution techniques. The evaluation of the algorithms was carried out on the TataNld topology and implemented using the MATLAB simulator.
    Keywords: software-defined networks; scalability; controller placement problem; simulated annealing.
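One common SA formulation of the CPP can be sketched as follows: choose k controller locations so that the worst node-to-controller propagation latency is minimised. The objective, the single swap neighbourhood and the toy distance matrix in the test are assumptions for illustration; the paper evaluates four neighbouring-solution techniques on the TataNld topology.

```python
import math
import random

# SA sketch for controller placement (assumed objective and neighbourhood).

def max_latency(placement, dist):
    """Worst-case latency: each node attaches to its nearest controller."""
    return max(min(dist[v][c] for c in placement) for v in range(len(dist)))

def anneal(dist, k, temp=10.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    cur = rng.sample(range(n), k)
    cur_cost = max_latency(cur, dist)
    best, best_cost = list(cur), cur_cost
    for _ in range(steps):
        # Neighbouring solution: swap one controller for a non-controller
        cand = list(cur)
        cand[rng.randrange(k)] = rng.choice(
            [v for v in range(n) if v not in cur])
        cost = max_latency(cand, dist)
        # Accept downhill moves always, uphill with Boltzmann probability
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
        temp *= cooling
    return best, best_cost
```

Swapping the objective (e.g. average instead of worst-case latency) or the move rule changes only `max_latency` and the candidate-generation line, which is why SA adapts easily to the four neighbourhood variants the paper studies.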

  • Energy-aware multipath routing protocol for Internet of Things using network coding techniques   Order a copy of this article
    by S. Sankar, P. Srinivasan, Somula Ramasubbareddy, B. Balamurugan 
    Abstract: Energy conservation is a significant challenge in the Internet of Things (IoT), as it connects resource-constrained devices. Routing plays a vital role in transferring data packets from the source to the destination. In Low Power and Lossy Networks (LLN), the existing routing protocols use a single routing metric, a composite routing metric or an opportunistic routing technique to select the parent for data transfer. However, packet loss occurs owing to the bottleneck at nodes near the sink and to data traffic during transfer. In this paper, we propose an energy-aware multipath routing protocol (EAM-RPL) to prolong the network lifetime. The multipath model establishes multiple paths from the source node to the sink. In EAM-RPL, the source node applies randomised linear network coding to encode the data packets and transmits them to the next level of cluster nodes. The intermediate nodes receive the encoded data packets and forward them to the next cluster of nodes. Finally, the sink node receives the data packets and decodes the original data packets sent by the source node. The simulation is conducted using the COOJA network simulator and the effectiveness of EAM-RPL is compared with that of the RPL protocol. The simulation results show that the proposed EAM-RPL improves the packet delivery ratio by 3-5% and prolongs the network lifetime by 5-10%.
    Keywords: Internet of Things; network coding; IPv6 routing protocol; low power and lossy networks; multipath routing.