Forthcoming articles

 


International Journal of Grid and Utility Computing

 

These articles have been peer-reviewed and accepted for publication in IJGUC, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

 

Forthcoming articles may be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

 


 


 


 


 

International Journal of Grid and Utility Computing (73 papers in press)

 

Regular Issues

 

  • Enriching folksonomy for online videos
    by Hiroki Sakaji, Masaki Kohana, Akio Kobayashi, Hiroyuki Sakai 
    Abstract: We propose a method that enriches folksonomy by using user comments on online videos. Folksonomy is a process in which users tag videos so that the videos can be searched for easily. On some video-sharing websites, such as Nico Nico Douga, users can post both tags and comments; however, no more than 12 tags may be posted per video, so important tags that could be posted are sometimes missing. We present a method for acquiring some of these missing tags by choosing new tags that score well under a scoring method we developed. The method is based on information theory and a novel algorithm that estimates new tags by using distributed databases we constructed.
    Keywords: text mining; distributed database; information extraction.
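    The information-theoretic scoring the abstract alludes to can be illustrated with a minimal sketch; the self-information formula and the `tag_scores` helper below are illustrative assumptions, not the authors' actual method:

```python
import math
from collections import Counter

def tag_scores(comment_docs, candidate_terms):
    # Hypothetical scorer: rank candidate tags by their self-information
    # -log p(term) over the comment collection, so rarer (more informative)
    # terms score higher. Add-one smoothing avoids log(0).
    n = len(comment_docs)
    df = Counter(t for doc in comment_docs for t in set(doc))
    return {t: -math.log((df[t] + 1) / (n + 1)) for t in candidate_terms}
```

    A term appearing in almost every comment scores near zero, while a rare term scores high and becomes a candidate for one of the missing tags.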

  • A web platform for oral exam of programming class
    by Masaki Kohana, Shusuke Okamoto 
    Abstract: We developed a system to support oral exams in a programming class. Our programming class has a problem with student waiting time, which we assume can be reduced if a teacher can check a student's source code and program output smoothly. A student uploads C++ source code and registers on a waiting list. The system compiles the code to HTML and JavaScript files using Emscripten, so the compiled program can run in a web browser. A teacher can check the order of students for the oral exam and, at the same time, see each student's source code and output. Our system provides a waiting list for the oral exam to keep the process fair, and it prevents the teacher from overlooking invalid code, helping the teacher grade students correctly.
    Keywords: Oral Exam; Programming; Runtime Environment.

  • A novel test case generation method based on program structure diagram
    by Mingcheng Qu, XiangHu Wu, YongChao Tao, GuanNan Wang, ZiYu Dong 
    Abstract: At present, embedded software testing suffers from lag, a lack of visualisation and low efficiency, and depends on the test design skills of individual testers, so neither the quality of the test cases nor the quality of the testing can be guaranteed. In this paper, a software program structure diagram model is established and verified, and test points are planned manually. Finally, we fill in the contents of the test items, generate the corresponding set of test cases according to the algorithm, and save them in a database for management. This method can improve the reliability and efficiency of tests, ensures visual tracking and management of test cases, and strongly assists the planning and generation of test cases.
    Keywords: program structure diagram; test item planning; test case generation.

  • A dynamic cloud service selection model based on trust and service level agreements in cloud computing
    by Yubiao Wang, Junhao Wen, Quanwang Wu, Lei Guo, Bamei Tao 
    Abstract: For high quality and trusted service selection problems, we propose a dynamic cloud services selection model (DCSSM). Cloud service resources are divided into different service levels by Service-Level Agreement Management (SLAM), and each SLAM manages some cloud service registration information. In order to make the final trust evaluation values more practical, the model computes a comprehensive trust value, which consists of direct trust and indirect trust. First, combined weights consist of subjective weight and objective weight; an analytic hierarchy process method based on rough sets is used to calculate the subjective weight. The direct trust also considers transaction time and transaction amount, yielding an accurate direct trust value. Secondly, indirect trust considers the similarity of user trust evaluations, and it comprises the indirect trust of friends and the indirect trust of strangers. Finally, when a transaction is completed, a dynamic update of direct trust is performed. The model is simulated using CloudSim and compared with three other methods; the experimental results show that the DCSSM performs better than the other three methods.
    Keywords: dynamic cloud service; trust; service-level agreement; selection model; combining weights.

  • Research on regression test method based on multiple UML graphic models
    by Mingcheng Qu, Xianghu Wu, Yongchao Tao, Guannan Wang, Ziyu Dong 
    Abstract: Most of the existing graph-based regression testing schemes aim at a single given UML graph and are not flexible in regression testing. This paper proposes a regression testing method that is universal across a variety of UML graphical model modifications. The parts of the UML model structure that must be retested are determined by domain-of-influence analysis of the effect of modifications on the graphical model and on the range of generated test cases; finally, test cases are automatically regenerated. The method is shown to achieve a high logical coverage rate. Because it fully considers all kinds of dependency, it does not limit the types of modification applied to the UML graphics, and it offers greater openness and comprehensiveness.
    Keywords: regression testing; multiple UML graphical models; domain analysis.

  • Involving users in energy conservation: a case study in scientific clouds
    by David Guyon, Anne-Cécile Orgerie, Christine Morin, Deb Agarwal 
    Abstract: Services offered by cloud computing are convenient to users for reasons such as their ease of use, flexibility, and financial model. Yet the data centres used for their execution are known to consume massive amounts of energy. The growing resource utilisation that has followed the success of the cloud highlights the importance of reducing its energy consumption. This paper investigates a way to reduce the footprint of HPC cloud users by varying the size of the virtual resources they request. We analyse the influence of concurrent applications with different resource sizes on the system's energy consumption. Simulation results show that larger resources consume more energy despite completing applications faster. Although smaller resources offer energy savings, reducing the size too much is not always favourable in terms of energy. High energy savings depend on the distribution of user profiles.
    Keywords: cloud computing; green computing; HPC applications; energy savings; users' involvement.

  • Distributed and multi-core version of k-means algorithm
    by Ilias Savvas, Dimitrios Tselios, Georgia Garani 
    Abstract: Nowadays, huge quantities of data are generated by billions of machines and devices. Numerous methods have been employed to make use of this valuable resource, some of them altered versions of well-established algorithms. One of the most seminal methods for mining data sources is clustering, and $k$-means is a key algorithm that clusters data according to a set of attributes. However, its main shortcoming is its high computational complexity, which makes $k$-means very inefficient on big datasets. Although $k$-means is in very wide use, a functional distributed variant that combines the multi-core power of contemporary machines has not yet been established. In this work, a three-phase distributed/multi-core version of $k$-means and an analysis of its results are presented. The experimental results are in line with the theoretical outcomes and demonstrate the correctness, efficiency and scalability of the proposed technique.
    Keywords: parallel algorithm; clustering; multi-core; distributed; $k$-means; OpenMP; MPI.
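    The core map-reduce idea behind distributed $k$-means, in which each worker returns only per-cluster coordinate sums and counts, can be sketched as follows; this is a generic single-process illustration, not the paper's three-phase MPI/OpenMP implementation:

```python
import numpy as np

def partial_sums(chunk, centroids):
    # Assign each point in this worker's chunk to its nearest centroid and
    # return per-cluster coordinate sums and counts -- the only data that
    # would need to cross the network in a distributed setting.
    d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    k = len(centroids)
    sums = np.zeros_like(centroids)
    counts = np.zeros(k, dtype=int)
    for j in range(k):
        mask = labels == j
        sums[j] = chunk[mask].sum(0)
        counts[j] = mask.sum()
    return sums, counts

def distributed_kmeans(chunks, centroids, iters=10):
    # Each iteration: workers compute local sums/counts (here sequentially
    # for clarity, one loop body per worker/core); a reducer merges them
    # and recomputes the global centroids.
    for _ in range(iters):
        sums = np.zeros_like(centroids)
        counts = np.zeros(len(centroids), dtype=int)
        for chunk in chunks:
            s, c = partial_sums(chunk, centroids)
            sums += s
            counts += c
        nz = counts > 0
        centroids[nz] = sums[nz] / counts[nz, None]
    return centroids
```

    Because only sums and counts are exchanged, communication cost per iteration is proportional to the number of clusters, not the number of points.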

  • Logic programming as a service in multi-agent systems for the Internet of Things
    by Roberta Calegari, Enrico Denti, Stefano Mariani, Andrea Omicini 
    Abstract: The widespread diffusion of low-cost computing devices, along with improvements of cloud computing platforms, is paving the way towards a whole new set of opportunities for Internet of Things (IoT) applications and services. Varying degrees of intelligence are required for supporting adaptation and self-management: yet, they should be provided in a lightweight, easy to use and customise, highly-interoperable way. In this paper we explore Logic Programming as a Service (LPaaS) as a novel and promising re-interpretation of distributed logic programming in the IoT era. After introducing the reference context and motivating scenarios of LPaaS as an effective enabling technology for intelligent IoT, we define the LPaaS general architecture, and discuss two different prototype implementations - as a web service and as an agent in a multi-agent system (MAS), both built on top of the tuProlog system, which provides the required interoperability and customisation. We finally showcase the LPaaS potential through two case studies, designed as simple examples of the motivating scenarios.
    Keywords: IoT; logic programming; multi-agent systems; pervasive computing; LPaaS; artificial intelligence; interoperability.

  • Cognitive workload management on globally interoperable network of clouds
    by Giovanni Morana, Rao Mikkilineni, Surendra Keshan 
    Abstract: A new computing paradigm using distributed intelligent managed elements (DIME) and DIME network architecture (DNA) is used to demonstrate a globally interoperable public and private cloud network deploying cloud-agnostic workloads. The workloads are cognitive and capable of autonomously adjusting their structure to maintain the desired quality of service. DNA is designed to provide a control architecture for workload self-management of non-functional requirements, addressing rapid fluctuations in either workload demand or available resources. Using DNA, a transaction-intensive three-tier workload is migrated from a physical server to a virtual machine hosted in a public cloud without interrupting the service transactions. After migration, cloud-agnostic inter-cloud and intra-cloud auto-scaling, auto-failover and live migration are demonstrated, again without disrupting the user experience or losing transactions.
    Keywords: cloud computing; datacentre; manageability; DIME; DIME network architecture; cloud agnostic; cloud native.

  • Towards autonomous creation of service chains on cloud markets
    by Benedikt Pittl, Irfan Ul-Haq, Werner Mach, Erich Schikuta 
    Abstract: Today, cloud services such as virtual machines are traded directly at fixed prices between consumers and providers on platforms such as Amazon EC2. The recent development of Amazon's EC2 spot market shows that dynamic cloud markets are gaining popularity. Hence, autonomous multi-round bilateral negotiations, also known as bazaar negotiations, are a promising approach for trading cloud services on future cloud markets, and they play a vital role in composing service chains. Based on a formal description of such service chains, we derive different negotiation types, implement them in a simulation environment, and evaluate our approach by executing different market scenarios. To this end, we developed three negotiation strategies for cloud resellers. Our simulation results show that cloud resellers, as well as their negotiation strategies, have a significant impact on the resource allocation of cloud markets: very high as well as very low markups reduce a reseller's profit.
    Keywords: cloud computing; cloud marketplace; IaaS; bazaar negotiation; SLA negotiation; cloud service chain; cloud reseller; multi-round negotiation; cloud economics.

  • Cache replication for information centric networks through programmable networks
    by Erick B. Nascimento, Edward David Moreno, Douglas D. J. De Macedo 
    Abstract: Software Defined Networking (SDN) is a new approach that decouples control from the data transmission function and is directly programmable. In parallel, Information Centric Networking (ICN) promotes the use of information itself through network caching and multiparty communication. Owing to their programmable characteristics, these designs aim to add flexibility, solve traffic problems, and transfer content through a scalable network structure with simple management. The premise of SDN that complements ICN, besides decoupling, is the flexibility of network configuration to reduce segment overhead caused by the retransmission of duplicate files over the same segment. Based on this, an architecture is designed to provide reliable content that can be replicated in the network. The proposed ICN architecture stores information in a logical volume for later access and can connect to remote controllers to store files reliably in cloud environments.
    Keywords: software defined networking; information centric network; programmability; flexibility; management; storage; controller.

  • Winning the war on terror: using social networking tools and GTD to analyse the regularity of terrorism activities
    by Xuan Guo, Fei Xu, Zhi-ting Xiao, Hong-guo Yuan, Xiaoyuan Yang 
    Abstract: In order to grasp the temporal and spatial characteristics and activity patterns of terrorist attacks in China, and so devise effective counter-terrorism strategies, two different intelligence sources were analysed by means of social network analysis and mathematical statistics. First, using the social network analysis tool ORA, we build a terrorist-activity meta-network from text information, extract the four categories of persons, places, organisations and time, and analyse the characteristics of the network's key nodes; the meta-network is then decomposed into four binary subnets (person-organisation, person-location, organisation-location and organisation-time) to analyse the temporal and spatial characteristics of terrorist activities. Next, the GTD dataset is used to analyse the characteristics of China's terrorist attacks from 1989 to 2015, and the geospatial and temporal distributions of terrorist events are summarised. Combined with data visualisation, the earlier results of the social network analysis of open-source text are verified and compared. Finally, the paper puts forward some suggestions on counter-terrorism prevention strategy in China.
    Keywords: social network analysis; GTD; meta-network; ORA; counter-terrorism; terrorism activities.

  • Model-based deployment of secure multi-cloud applications
    by Valentina Casola, Alessandra De Benedictis, Massimiliano Rak, Umberto Villano, Erkuden Rios, Angel Rego, Giancarlo Capone 
    Abstract: The wide diffusion of cloud services, offering functionalities in different application domains and addressing different computing and storage needs, opens up the possibility of building multi-cloud applications that rely upon heterogeneous services offered by multiple cloud service providers (CSPs). This flexibility not only enables efficient usage of existing resources, but also makes it possible, in some cases, to cope with specific security and performance requirements. On the downside, resorting to multiple CSPs requires a huge amount of time and effort for application development. The MUSA framework enables a DevOps approach to developing multi-cloud applications with the desired Security Service Level Agreements (SLAs). This paper describes the MUSA Deployer models and tools, which aim at decoupling multi-cloud application modelling and development from application deployment and cloud service provisioning. With the MUSA tools, application designers and developers can easily express and evaluate security requirements and subsequently deploy the application automatically, by acquiring cloud services and by installing and configuring software components on them.
    Keywords: cloud security; multi-cloud deployment; automated deployment.

  • Improving the MXFT scheduling algorithm for a cloud computing context
    by Paul Moggridge, Na Helian, Yi Sun, Mariana Lilley, Vito Veneziano, Martin Eaves 
    Abstract: In this paper, the Max-min Fast Track (MXFT) scheduling algorithm is improved and compared against a selection of popular algorithms. The improved versions of MXFT are called Min-min Max-min Fast Track (MMMXFT) and Clustering Min-min Max-min Fast Track (CMMMXFT); the key difference is using min-min for the fast track. Experimentation revealed that, despite min-min's characteristic of prioritising small tasks at the expense of overall makespan, the overall makespan was not adversely affected, and the benefits of prioritising small tasks were identified in MMMXFT. Experiments were conducted using a simulator, with the exception of one real-world experiment, which identified challenges faced by algorithms that rely on accurate execution-time prediction.
    Keywords: cloud computing; scheduling algorithms; max-min.
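    The min-min heuristic that MMMXFT adopts for its fast track can be sketched in its textbook form; the task-length/machine-speed model below is an illustrative assumption, not the paper's simulator:

```python
def min_min(task_lengths, machine_speeds):
    # Classic min-min: repeatedly pick the task whose earliest possible
    # completion time over all machines is smallest, assign it there,
    # and advance that machine's ready time.
    ready = dict(enumerate(task_lengths))
    finish = [0.0] * len(machine_speeds)      # per-machine ready times
    schedule = {}
    while ready:
        best = None
        for t, length in ready.items():
            for m, speed in enumerate(machine_speeds):
                ct = finish[m] + length / speed   # completion time of t on m
                if best is None or ct < best[0]:
                    best = (ct, t, m)
        ct, t, m = best
        finish[m] = ct
        schedule[t] = m
        del ready[t]
    return schedule, max(finish)   # assignment and makespan
```

    On tasks [4, 2, 8] with machine speeds [1, 2], every task lands on the fast machine (makespan 7), illustrating min-min's tendency to chase the locally fastest completion, which MMMXFT confines to the fast track for small tasks.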

  • Novel algorithms, emergent approaches and applications for distributed computing
    by Ilias Savvas, Douglas Dyllon Jeronimo De Macedo 
    Abstract: The track on the Convergence of Distributed Clouds, Grids and their Management (CDCGM) started in 2009 to discuss the evolution of cloud computing with respect to infrastructure providers, who began creating next-generation, service-friendly hardware, and service developers, who began embedding business service intelligence in their computing infrastructure. In recent years, the state of the art in cloud computing architecture has evolved towards greater complexity, piling new management layers on the many that already exist. Moreover, in the last ten years the problem of scaling and managing distributed applications in the cloud has taken on a new dimension, especially regarding tolerance of workload variation and proactive scaling of available computing resource pools, with particular emphasis on big data management and recent technologies. This article presents significant extensions of interesting papers presented at CDCGM 2017. These papers describe advances in current distributed and cloud computing practice, covering modern techniques of parallel computing, cognitive workload management for cloud computing, emergent cloud XaaS, information service networks and the IoT.
    Keywords: cloud computing; grid computing; SLA; data science.

  • An intelligent water drops based approach for workflow scheduling with balanced resource utilisation in cloud computing
    by Mala Kalra, Sarbjeet Singh 
    Abstract: The problem of finding optimal solutions for scheduling scientific workflows in cloud environment has been thoroughly investigated using various nature-inspired algorithms. These solutions minimise the execution time of workflows, but they may result in severe load imbalance among Virtual Machines (VMs) in cloud data centres. Cloud vendors desire the proper utilisation of all the VMs in the data centres to have efficient performance of the overall system. Thus load balancing of VMs becomes an important aspect while scheduling tasks in a cloud environment. In this paper, we propose an approach based on the Intelligent Water Drops (IWD) algorithm to minimise the execution time of workflows while balancing the resource utilisation of VMs in the cloud computing environment. The proposed approach is compared with a variety of well-known heuristic and meta-heuristic techniques using three real-time scientific workflows, and experimental results show that the proposed algorithm performs better than these existing techniques in terms of makespan and load balancing.
    Keywords: workflow scheduling; intelligent water drops algorithm; cloud environment; evolutionary computation; directed acyclic graphs; load balancing; balanced resource utilisation; optimisation technique.

  • The energy consumption laxity based algorithm to perform computation processes in virtual machine environments
    by Tomoya Enokido, Dilawaer Duolikun, Makoto Takizawa 
    Abstract: In information systems, server cluster systems equipped with virtual machines are widely used to realise scalable, high-performance computing systems such as cloud computing systems. In order to satisfy application requirements, such as a deadline constraint for each application process, the processing loads of the virtual machines in a server cluster have to be balanced with one another. In addition to achieving the performance objectives, the total electric energy consumed by a server cluster to perform application processes has to be reduced, as discussed in green computing. In this paper, the energy consumption laxity based (ECLB) algorithm is proposed to allocate computation-type application processes to virtual machines in a server cluster so that both the total electric energy of the server cluster and the response time of each process can be reduced. We evaluate the ECLB algorithm, in terms of the total electric energy of a server cluster and the response time of each process, against the basic round-robin (RR) algorithm. Evaluation results show that both the average total electric energy of the server cluster and the average response time of each process are lower under the ECLB algorithm than under the RR algorithm.
    Keywords: green computing; virtual machines; energy-efficient server cluster systems; power consumption models; energy-efficient load-balancing algorithms.
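    As a rough illustration of energy-aware placement under a linear power model (a common assumption in this line of work), the following sketch is hypothetical and is not the ECLB algorithm itself, which is based on energy-consumption laxity:

```python
def allocate(process_load, vms):
    # Greedy energy-aware placement: put the process on the virtual
    # machine whose host would see the smallest increase in power draw,
    # assuming a linear model P = p_idle + (p_max - p_idle) * utilisation.
    def added_power(vm):
        u0 = vm['load'] / vm['capacity']
        u1 = min(1.0, (vm['load'] + process_load) / vm['capacity'])
        return (vm['p_max'] - vm['p_idle']) * (u1 - u0)
    best = min(vms, key=added_power)
    best['load'] += process_load   # commit the placement
    return best['name']
```

    A load-balancing objective alone would compare utilisations; weighting by each host's power slope is what makes the choice energy-aware.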

  • A new bimatrix game model with fuzzy payoffs in credibility space
    by Cunlin Li, Ming Li 
    Abstract: Uncertainty theory, based on expert evaluation and non-additive measures, is introduced to explore bimatrix games with uncertain payoffs. The uncertainty space based on the axioms of uncertain measures is presented, some basic characteristics of uncertain events are described, and the expected values of uncertain variables in this space are given. A new model of the bimatrix game with uncertain payoffs is established and its equivalent strategy is given. We then develop an expected model of uncertain bimatrix games and define the uncertain equilibrium strategy of such games. By using the expected values of the uncertain variables, we transform the model into a linear programme, and the expected equilibrium strategy of the uncertain bimatrix game is identified by solving linear equations.
    Keywords: bimatrix game; uncertain measure; expected Nash equilibrium strategy.
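    In standard notation (not taken from the paper), the expected-equilibrium condition obtained after replacing the uncertain payoffs by their expected values reads:

```latex
% Bimatrix game with uncertain payoff matrices \tilde{A}=(\tilde{a}_{ij})
% and \tilde{B}=(\tilde{b}_{ij}): a mixed-strategy pair (x^*, y^*) is an
% expected Nash equilibrium if
x^{*\top} E[\tilde{A}]\, y^* \;\ge\; x^{\top} E[\tilde{A}]\, y^*
  \quad \text{for all mixed strategies } x, \qquad
x^{*\top} E[\tilde{B}]\, y^* \;\ge\; x^{*\top} E[\tilde{B}]\, y
  \quad \text{for all mixed strategies } y.
```

    Once the expected matrices $E[\tilde{A}]$ and $E[\tilde{B}]$ are fixed numbers, this is an ordinary bimatrix game, which is why the model reduces to linear programming.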

  • Data analysis of CSI 800 industry index by using factor analysis model
    by Chunfen Xu 
    Abstract: This paper studies the linkage among industries based on the CSI 800 industry index, which provides a mass of complicated data for industry research. Factor analysis, a useful data analysis tool, allows researchers to investigate concepts that are not easily measured directly, by collapsing a large number of variables into a few interpretable underlying factors. Firstly, data on ten industries in the period from September 2009 to March 2017 is collected from the CSI 800 Index and correlation analyses are conducted. Secondly, this paper establishes an appropriate evaluation system and then uses factor analysis for dimension reduction. Finally, some characteristics and trends in various industries are obtained.
    Keywords: CSI 800 index; correlation analysis; factor analysis; dimension reduction.
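    The extraction step of factor analysis can be sketched with the principal-component method, which reads loadings off the eigendecomposition of the correlation matrix; this generic sketch is not the paper's exact procedure:

```python
import numpy as np

def factor_loadings(X, n_factors):
    # Principal-component extraction: standardise the data, take the
    # correlation matrix, and scale its top eigenvectors by the square
    # roots of their eigenvalues. Each loading is then the correlation
    # between a variable and a factor.
    Z = (X - X.mean(0)) / X.std(0)
    R = np.corrcoef(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(R)          # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_factors]
    return vecs[:, order] * np.sqrt(vals[order])
```

    Variables driven by the same underlying factor come out with nearly identical loading rows, which is how the factors summarise industry co-movement.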

  • A real-time matching algorithm using sparse matrix
    by Aomei Li, Wanji Jiang, Po Ma, Jiahui Guo, Dehui Dai 
    Abstract: Aiming at the shortcomings of traditional image feature matching algorithms, which are computationally expensive and time-consuming, this paper presents a real-time feature matching algorithm. First, the algorithm constructs sparse matrices with the Laplace operator and applies Laplace weighting. Feature points are then detected with the FAST feature point detector, the SURF algorithm is used to assign an orientation and descriptor to each feature for rotation invariance, and a Gaussian pyramid provides scale invariance. Second, candidate matches are extracted by brute-force matching and purified using Hamming distance and a symmetry test. Finally, the RANSAC algorithm is used to obtain the optimal matrix, and an affine invariance check is applied to the matching result. The algorithm is compared with classical feature point matching algorithms, showing high real-time performance while preserving matching precision.
    Keywords: sparse matrices; Laplace weighted; FAST; SURF; symmetry method; affine invariance check.
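    The purification step (Hamming distance plus symmetry test) can be sketched generically; the unpacked 0/1 descriptors below are a simplification of real packed binary descriptors:

```python
import numpy as np

def hamming(a, b):
    # Pairwise Hamming distances between two sets of binary descriptors
    # (rows are descriptors, entries are 0/1 bits).
    return (a[:, None, :] != b[None, :, :]).sum(-1)

def symmetric_matches(desc1, desc2, max_dist):
    # Keep a pair (i, j) only if i's nearest neighbour is j AND j's
    # nearest neighbour is i (the symmetry test), and their Hamming
    # distance is below the threshold.
    d = hamming(desc1, desc2)
    nn12 = d.argmin(1)   # best match in image 2 for each image-1 feature
    nn21 = d.argmin(0)   # best match in image 1 for each image-2 feature
    return [(i, int(j)) for i, j in enumerate(nn12)
            if nn21[j] == i and d[i, j] <= max_dist]
```

    The symmetry test discards one-sided matches cheaply before the more expensive RANSAC stage sees them.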

  • How do checkpoint mechanisms and power infrastructure failures impact on cloud applications?
    by Guto Leoni Santos, Demis Gomes, Djamel Sadok, Judith Kelner, Elisson Rocha, Patricia Takako Endo 
    Abstract: With the growth of cloud computing usage by commercial companies, providers of this service are looking for ways to improve and estimate the quality of their services. Failures in the power subsystem represent a major risk of cloud data centre unavailability at the physical level. At the same time, software-level mechanisms (such as application checkpointing) can be used to maintain application consistency after downtime and to improve availability. However, understanding how failures at the physical level impact application availability, and how software-level mechanisms can improve data centre availability, is a challenge. This paper analyses the impact of power subsystem failures on cloud application availability, as well as the impact of checkpoint mechanisms that recover the system from software-level failures. To that end, we propose a set of stochastic models to represent the cloud power subsystem, the cloud application, and the checkpoint-based recovery mechanisms. To evaluate data centre performance, we also model request arrivals and processing times as a queue, and feed this model with data acquired from experiments on a real testbed. To determine which components of the power infrastructure most affect data centre availability, we perform a sensitivity analysis. The results of the stationary analysis show that the choice of checkpoint mechanism does not significantly affect the observed metrics; on the other hand, improving the power infrastructure yields performance and availability gains.
    Keywords: cloud data centre; checkpoint mechanisms; availability; performance; stochastic models.

  • A review of intrusion detection approaches in cloud security systems
    by Satyapal Singh, Mohan Kubendiran, Arun Kumar Sangaiah 
    Abstract: Cloud computing is a technology that allows the delivery of services, storage, network, computing power, etc., over the internet. The on-demand and ubiquitous nature of this technology makes it easy to use and widely available. However, for this very reason, cloud services, platforms and infrastructure are targets for attackers. The most common attack attempts to take control of one or more virtual machine instances running in the cloud. Since cloud and networking technologies go hand in hand, keeping such malicious attempts at bay is essential. Intrusion detection systems are software systems that can detect potential intrusions within or outside a secure cloud environment. This paper presents a study of different intrusion detection systems that have been proposed to mitigate or, in the best case, eliminate the possible threats posed by such intrusions.
    Keywords: cloud computing; virtualisation; intrusion detection; networking; cyber attacks; Blockchain.

  • A novel web image retrieval method: bagging weighted hashing based on local structure information
    by Li Huanyu 
    Abstract: Hashing is widely used in approximate nearest neighbour (ANN) search problems, especially in web image retrieval. An excellent hashing algorithm helps users to search and retrieve web images more conveniently, quickly and accurately. In order to overcome several deficiencies of ITQ in image retrieval, we apply ensemble learning to the image retrieval problem. An elastic ensemble framework is proposed to guide the hashing design, along with three important principles: high precision, high diversity, and optimal weight prediction. Based on this, we design a novel hashing method called BWLH. In BWLH, first, the local structure information of the original data is extracted to construct local structure data, improving the similarity-preserving ability of the hash bits. Second, a weighted matrix is used to balance the variance of different bits. Third, bagging is exploited to expand diversity across hash tables. Extensive experiments show that BWLH handles the image retrieval problem effectively and performs better than several state-of-the-art methods at the same hash code length on the CIFAR-10 and LabelMe datasets. Finally, search-by-image, a web-based use case of the proposed hashing method, is described in detail to show how BWLH can be used in a web-based environment.
    Keywords: web image retrieval; hashing; ensemble learning; local structure information; weighted.

  • A fog computing model for pervasive connected healthcare in smart environments
    by Philip Moore, Hai Van Pham 
    Abstract: Healthcare provision faces many challenges resulting from the growing demand for health services, advances in medical technologies, and the availability of increasingly complex treatment options. The challenges are exacerbated by the increasing complexity of the medical conditions experienced by an ageing demographic and the related demand for social care provision. Addressing these challenges requires effective patient management, which may be achieved using autonomic health monitoring systems; however, such monitoring has been limited to `smart-homes'. `Smart-homes' may exist within `smart-cities', and here we consider both concepts as they relate to the healthcare domain. We propose extending the `smart-home' to a wider `smart-environment' which conflates `smart-homes' with the `smart-city' (in an interconnected environment) based on the fog computing paradigm. We introduce our Fog Computing Model, which incorporates fog and cloud-based computing for low-latency systems that enable comprehensive `real-time' patient monitoring with situational awareness, pervasive consciousness, and related data analytic solutions. Illustrative scenarios predicated on the monitoring of patients with dementia are presented, along with a `real-world' example of a proposed `smart-environment'. Context-awareness and decision support are considered with a proposed implementation strategy and a `real-world' case study in the healthcare domain. While the proposed fog model and implementation strategy are predicated on the healthcare domain, we argue that the proposed Fog Computing Model will generalise to other medical conditions and domains of interest.
    Keywords: fog computing; connected health; e-hospital; smart-health; smart environments; context-awareness; situational awareness; pervasive systems; decision-support systems.

  • An integrated incentive and trust-based optimal path identification in ad hoc on-demand multipath distance vector routing for MANET   Order a copy of this article
    by Abrar Omar Alkhamisi, Seyed M. Buhari, George Tsaramirsis, Mohammed Basheri 
    Abstract: A Mobile Ad hoc Network (MANET) can exist and work well only when the mobile nodes behave cooperatively in packet routing. To reduce the hazards from malicious nodes and enhance the security of the network, this paper extends the Ad hoc On-Demand Multipath Distance Vector (AOMDV) routing protocol into an Integrated Incentive and Trust-based optimal path identification in AOMDV (IIT-AOMDV) for MANETs. To improve the security and reliability of packet forwarding over multiple routes in the presence of potentially malicious nodes, the proposed IIT-AOMDV routing protocol integrates an Intrusion Detection System (IDS) with a Bayesian Network (BN) based trust and payment model. The IDS uses the empirical first- and second-hand trust information of the BN, and it underpins the cuckoo search algorithm to map the QoS and trust values into a single fitness metric, tuned according to the presence of malicious nodes in the network. Moreover, the payment system stimulates the nodes to cooperate effectively in routing and improves the routing performance. Finally, the simulation results show that IIT-AOMDV improves the detection accuracy and throughput by 20% and 16.6%, respectively, over the existing AOMDV integrated with an IDS (AID).
    Keywords: mobile ad hoc network; intrusion detection system; trust; attack; optimal path identification; isolation.

  • Performance analysis of data fragmentation techniques on a cloud server   Order a copy of this article
    by Nelson Santos, Salvatore Lentini, Enrico Grosso, Bogdan Ghita, Giovanni Masala 
    Abstract: The advances in virtualisation and distributed computing have made the cloud paradigm very popular among users and providers. It allows companies to save costs on infrastructure and maintenance and to focus on the development of products. However, this fast-growing paradigm has brought some concerns from users, such as the integrity and security of the data, particularly in environments where users rely entirely on providers to secure their data. This paper explores different techniques to fragment data on the cloud and prevent direct unauthorised access to the data. It compares their performance on a cloud instance, where the total time to perform the operation, including the upload, download and reconstruction of the data, is considered. Results from this experiment indicate that fragmentation algorithms perform better than encryption. Moreover, combining encryption with fragmentation increases security, at the cost of some performance.
    Keywords: cloud security; data fragmentation; data security; privacy in cloud computing.
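As a rough illustration of the fragmentation idea the paper evaluates, the sketch below interleaves bytes round-robin across fragments so that no single fragment holds a contiguous run of the original data. This byte-interleaving scheme is our own simplification, not necessarily the algorithm tested in the paper.

```python
def fragment(data: bytes, n: int) -> list[bytes]:
    """Scatter bytes round-robin across n fragments; fragment i holds
    bytes i, i+n, i+2n, ... of the original data."""
    return [data[i::n] for i in range(n)]

def reassemble(fragments: list[bytes]) -> bytes:
    """Invert fragment(): write each fragment back into its stride."""
    n = len(fragments)
    out = bytearray(sum(len(f) for f in fragments))
    for i, frag in enumerate(fragments):
        out[i::n] = frag
    return bytes(out)
```

In the paper's combined setting, each fragment could additionally be encrypted before upload, trading performance for security.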

  • Exploiting big data processing platforms for efficient privacy preserving matching   Order a copy of this article
    by Alexandros Karakasidis, Georgia Koloniari 
    Abstract: A characteristic of the digitally connected world we have created is that each of us generates a significant amount of data daily. While potentially useful, these data may not be directly interconnected as required before processing. Additionally, most of the time, such data include sensitive personal information, giving rise to privacy preservation issues. To this end, privacy preserving record linkage techniques have been developed that link or interconnect corresponding data while preserving their privacy and revealing only the actually matching records. The linking process is usually performed on large corpora of data. It would be beneficial, in terms of processing cost, to be able to outsource this kind of computation to cloud infrastructures, exploiting the power of modern parallel computing engines. In this paper, we propose an approach based on the use of phonetic codes for privacy preserving string matching that exploits the powerful Apache Spark processing engine. We present a parallel algorithm tailored for Spark, which evolved from a corresponding sequential algorithm for privacy preserving matching. Our experimental results show the significant time benefits the parallel approach achieves over its sequential counterpart.
    Keywords: record linkage; parallel processing; privacy; string matching.
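A minimal sketch of phonetic-code matching of the kind the paper parallelises on Spark: records are blocked by their Soundex code, so only pairs sharing a code are ever compared and the raw strings never need to be exchanged. The grouping step below mimics a Spark `groupByKey` in plain Python and is our illustration, not the authors' algorithm.

```python
# Digit map for American Soundex: similar-sounding consonants share a digit.
CODES = {c: d for d, cs in {
    "1": "bfpv", "2": "cgjkqsxz", "3": "dt",
    "4": "l", "5": "mn", "6": "r"}.items() for c in cs}

def soundex(name: str) -> str:
    """American Soundex: first letter plus up to three consonant digits."""
    name = name.lower()
    first = name[0].upper()
    digits = []
    prev = CODES.get(name[0], "")
    for c in name[1:]:
        d = CODES.get(c, "")
        if c in "hw":          # h/w are skipped without resetting prev
            continue
        if d and d != prev:    # vowels reset prev; duplicates collapse
            digits.append(d)
        prev = d
    return (first + "".join(digits) + "000")[:4]

def phonetic_matches(names_a, names_b):
    """Blocking by phonetic code, as one would with a Spark groupByKey:
    only pairs sharing a Soundex code are candidate matches."""
    buckets = {}
    for n in names_b:
        buckets.setdefault(soundex(n), []).append(n)
    return [(a, b) for a in names_a for b in buckets.get(soundex(a), [])]
```

In the privacy-preserving setting each party would ship only the codes (or their encodings) to the matching site, never the cleartext names.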

  • Large scale data processing software and performance instabilities within HEP grid environments   Order a copy of this article
    by Olga Datskova, Wedong Shi 
    Abstract: The scale of the large processing tasks run within scientific grid and cloud environments has introduced a need for stability guarantees from geographically spanning resources, to ensure that failures are detected and handled pre-emptively. Performance inefficiencies within stacked service environments are a challenge to detect, as failures may stem from multiple causes and often require expert intervention. Reliability guarantees for such systems become paramount as recovery costs from failures reach a prohibitive scale. While individual services implement fault-tolerance and recovery procedures, the behaviour of interacting failures within tightly-coupled systems is not well defined. Online reporting and classification of performance fluctuations can aid experts, central services and users in targeting service areas where optimisations can be introduced. This paper describes an approach for modelling performance states for production tasks running within the ALICE grid. We first provide an overview of the ALICE data and software workflow, focusing on the computational profile of production jobs. A data centre event state is then developed, based on data centre job, computing, storage and user behaviour. Using cross-site analysis, we then train models to classify service domain states. This work addresses the question of analysing failures within the context of operational instabilities occurring within production grid environments running large-scale data processing tasks. The results show that operational issues can be detected and described according to the principal service layers involved. This can guide users, central services and data centre experts to take action in advance of service failure effects.
    Keywords: HEP; grid computing; large scale data processing; performance modelling; failure; fault-tolerance; software development.

  • The analysis of man at the end attack behaviour in software defined network   Order a copy of this article
    by Abeer Eldewahi, Alzubair Hassan, Khalid ElBadawi, Bazara Barry 
    Abstract: Software defined network (SDN) is an emerging technology that decouples the control plane from the data plane in its network architecture. This architecture exposes new threats that are absent in the traditional IP network. The man at the end (MATE) attack is one of the serious attacks against SDN controllers. The MATE attacker carries out his/her malicious activities by exploiting the nature of the messages between the controller and the switches, which consist of requests and replies. This paper proposes a new detection method for the MATE attack. We also use the spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege (STRIDE) model in the classification of a four-dimensional model to determine which attacks can be considered MATE. Furthermore, we determine the behaviour of the MATE attacker in SDN after control has been taken from the controller, to help in the detection and prevention of the MATE attack.
    Keywords: software defined network; MATE attack behaviour; four-dimensional model; STRIDE model.

  • Detection and mitigation of collusive interest flooding attack on content centric networking   Order a copy of this article
    by Tetsuya Shigeyasu, Ayaka Sonoda 
    Abstract: With the development of Information and Communications Technology (ICT), the widespread deployment of consumer devices such as notebook PCs, smartphones and other information devices has made it easy for users to access the internet. Users with these devices use services such as e-mail and Social Network Services (SNS). Named Data Networking (NDN), the most popular such network architecture, has been proposed to realise the concept of Content Centric Networking (CCN). However, it has also been reported that NDN is vulnerable to the Collusive Interest Flooding Attack (CIFA). In this paper, we propose a novel distributed algorithm for detecting CIFA and preserving the availability of NDN. The results of computer simulations confirm that our proposal can detect and mitigate the effects of CIFA effectively.
    Keywords: named data networking; content centric data acquisition; collusive interest flooding attack; malicious prediction.

  • A new overlay P2P network for efficient routing in group communication with regular topologies   Order a copy of this article
    by Abdelhalim Hacini, Mourad Amad 
    Abstract: This paper presents a new overlay P2P network that provides an efficient and optimised lookup process. The lookup process of the proposed solution reduces the number of overlay hops, and consequently the latency of content lookup, between any pair of nodes. The overlay network is constructed on top of the physical network without any centralised control and with a two-level hierarchy. The architecture is based on regular topologies, namely pancake graphs and skip graphs. The focus of all the topology construction schemes is to reduce the cost of the lookup process (number of hops and delay) and consequently improve the search performance of P2P applications deployed on the overlay network. Performance evaluations of our proposed scheme show that the results obtained are globally satisfactory.
    Keywords: P2P networking; pancake graphs; skip graphs; routing optimisation.
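The prefix-reversal structure of the pancake graphs used in such overlays can be sketched as follows. This is an illustration of the topology only, not the paper's construction: each node is a permutation, its neighbours are all prefix reversals, and the BFS hop count stands in for the overlay lookup cost between two nodes.

```python
from collections import deque

def pancake_neighbours(node):
    """Neighbours of a vertex of the pancake graph: every prefix
    reversal of the permutation (flip the top k elements, k >= 2),
    so a vertex over n symbols has exactly n - 1 neighbours."""
    return [node[:k][::-1] + node[k:] for k in range(2, len(node) + 1)]

def hops(src, dst):
    """Overlay hop count between two nodes: BFS over prefix reversals."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        v, d = queue.popleft()
        if v == dst:
            return d
        for w in pancake_neighbours(v):
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))
```

The small diameter of pancake graphs relative to their size is what keeps the number of overlay hops, and hence lookup latency, low.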

  • A smart networking and computing-aware orchestrator to enhance QoS on cloud-based multimedia services   Order a copy of this article
    by Rodrigo Moreira, Flavio Silva, Pedro Frosi Rosa, Rui Aguiar 
    Abstract: Rich-media applications deployed in the cloud drive the use of the internet by people and organisations around the world. Networking and computing resource management has become an important requirement for achieving high user QoS. The advent of software-defined networking and network function virtualisation brings new possibilities for addressing carrier environment challenges, making QoS enhancement possible. The literature does not offer a smart and flexible solution that brings scalability with a holistic view of networking and computing resources while taking into account different ways to enhance QoS. In this work, we present a smart orchestrator capable of interacting with the network, computing resources and applications hosted on a cloud. By providing support for different machine learning algorithms, our solution provides better QoS through improvements in aspects such as network resilience, bandwidth allocation based on real-time traffic patterns, and an end-to-end QoS mechanism for event-driven scenarios. The solution interacts in an agnostic way with different applications, cloud operating systems, and the network. As a separate control plane entity, the orchestrator is capable of operating across different domains. The solution orchestrates applications, virtual functions, and cloud resources, providing elastic, QoS-enhancing network control. Our experimental evaluation in a large-scale testbed shows the orchestrator's capability to provide a smart jitter decrease using AI techniques.
    Keywords: software-defined networking; network function virtualisation; QoS; machine learning; cloud computing.

  • An efficient content sharing scheme using file splitting and differences between versions in hybrid peer-to-peer networks   Order a copy of this article
    by Toshinobu Hayashi, Shinji Sugawara 
    Abstract: This paper proposes an efficient content sharing strategy using file splitting and differences between versions in hybrid Peer-to-Peer (P2P) networks. In this strategy, when a user requests a content item and the cost of obtaining the requested version is high, he/she can get it from the network by retrieving another version of the content item together with the difference from the requested version. This way of content sharing can be expected to achieve effective and flexible operation. Furthermore, efficient use of peers' storage capacity is achieved by splitting each replica of a content item into several small blocks and storing them separately on multiple peers.
    Keywords: content sharing; file splitting; difference of versions; hybrid peer-to-peer.

  • Hardware support for thread synchronisation in an experimental manycore system   Order a copy of this article
    by Alessandro Cilardo, Mirko Gagliardi, Daniele Passaretti 
    Abstract: This paper deals with the problem of thread synchronisation in manycore systems. In particular, it considers the open-source GPU-like architecture developed within the MANGO H2020 project. The thread synchronisation hardware relies on a distributed master and on a lightweight control unit to be deployed within the core. It does not rely on memory access for exchanging synchronisation information since it uses hardware-level messages. The solution supports multiple barriers for different application kernels possibly being executed simultaneously. The results for different NoC sizes provide indications about the reduced synchronisation times and the area overheads incurred by our solution.
    Keywords: networks on chip; synchronisation; manycore systems.

  • Identifying journalistically relevant social media texts using human and automatic methodologies   Order a copy of this article
    by Nuno Guimaraes, Filipe Miranda, Alvaro Figueira 
    Abstract: Social networks have provided the means for constant connectivity and fast information dissemination. In addition, real-time posting has allowed a new form of citizen journalism, where users report events from a witness perspective. Therefore, information propagates through the network at a faster pace than traditional media can report it. However, relevant information is a small percentage of all the content shared. Our goal is to develop and evaluate models that can automatically detect journalistic relevance. To do so, we need solid and reliable ground-truth data with a significantly large number of annotated posts, so that the models can learn to detect relevance across its whole spectrum. In this article, we present and confront two different methodologies: an automatic and a human approach. Results on a test dataset labelled by experts show that the models trained with the automatic methodology tend to perform better than the ones trained on human-annotated data.
    Keywords: relevance detection; machine learning; text mining; crowdsourcing task.

  • Dijkstra algorithm based ray tracing for tunnel-Like structures   Order a copy of this article
    by Kazunori Uchida 
    Abstract: This paper deals with ray tracing in closed spaces, such as tunnels or underground passages, using a newly developed simulation method based on the Dijkstra algorithm (DA). The essence of this method is to modify the proximity-node matrix obtained by DA with three procedures: path selection, path linearisation and line-of-sight (LOS) checking. The proposed method can be applied to ray tracing in complicated structures ranging from open spaces, such as random rough surfaces (RRS) or urban areas, to closed spaces such as tunnels or underground passages. In the case of a closed space, however, a more detailed treatment is required than for an open space, since, especially at grazing angles of incidence, we must take account of the effects of the floor, ceiling and side walls not only locally but also globally. In this paper we propose an effective LOS-check procedure to resolve this difficult situation. Numerical examples are shown for traced rays as well as total link-cost distributions in sinusoidal and cross-type tunnels.
    Keywords: Dijkstra algorithm; discrete ray tracing; LOS check; propagation in closed space.
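The shortest-path building block underlying this kind of ray tracing can be sketched with a standard Dijkstra implementation over a weighted node graph. This is a generic sketch of the DA step only; the paper's path-selection, path-linearisation and LOS-check procedures operating on the proximity-node matrix are not reproduced here.

```python
import heapq

def dijkstra(adj, src):
    """Minimum link-cost from src to every reachable node.
    adj maps node -> [(neighbour, cost), ...]."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def path(prev, src, dst):
    """Recover the ray path (node sequence) from the predecessor map."""
    p = [dst]
    while p[-1] != src:
        p.append(prev[p[-1]])
    return p[::-1]
```

In the paper's setting the resulting node sequence is then straightened (path linearisation) and validated by the LOS check against walls, floor and ceiling.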

  • Fog computing with original data reference function   Order a copy of this article
    by Tsukasa Kudo 
    Abstract: In recent years, as large amounts of data are transferred to cloud servers owing to the evolution of the Internet of Things, problems such as network bandwidth restrictions and sensor feedback control delays have arisen. To address these limitations, fog computing has been proposed, in which the primary processing of sensor data is performed at a fog node and only the results are transferred to the cloud server. However, in this method, when the original sensor data are needed for analysis on the cloud server, those data are missing. To solve this problem, this paper proposes a data model in which the original sensor data are stored at the fog node in a distributed database. Furthermore, the performance of this data model is evaluated, showing that references to the original data from the cloud server can be executed efficiently, particularly when multiple fog nodes are installed.
    Keywords: Internet of Things; fog computing; edge computing; distributed database; NoSQL database; MongoDB; data model.

  • Don't lose the point, check it: is your cloud application using the right strategy?   Order a copy of this article
    by Demis Gomes, Glauco Gonçalves, Patricia Endo, Moises Rodrigues, Judith Kelner, Djamel Sadok, Calin Curescu 
    Abstract: Users pay to run their applications in a cloud infrastructure, and in return they expect high availability and minimal data loss in case of failure. From the cloud provider's perspective, any hardware or software failure must be detected and recovered from as quickly as possible to maintain users' trust and avoid financial losses. From the users' perspective, failures must be transparent and must not impact their running applications. To be able to recover a failed application, cloud providers need to perform checkpoints, periodically saving application data that can be restored during a failover process. Currently, a checkpoint service can be implemented in many different ways, each with its own features and different performance results. The main question to be answered is therefore: what is the best checkpoint strategy according to the users' requirements? In this paper, we performed experiments with different checkpoint service strategies to understand how they are affected by the computing resources. We also provide a discussion of the relation between service availability and the checkpoint service.
    Keywords: checkpoint; failover; performance evaluation; SAF standard.
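The trade-off studied here can be modelled minimally: checkpointing more often costs more per update, but bounds how many updates are lost at failover. The sketch below is our simplification with an illustrative `interval` parameter; real checkpoint services, such as those following the SAF standard the paper references, are considerably more involved.

```python
import copy

class Checkpointer:
    """Synchronous checkpoint every `interval` updates; after a failure,
    the service restarts from the last snapshot, losing at most
    interval - 1 updates (the recovery-point objective)."""

    def __init__(self, interval):
        self.interval = interval
        self.snapshot = None
        self.updates_since = 0

    def update(self, state):
        self.updates_since += 1
        if self.updates_since >= self.interval:
            self.snapshot = copy.deepcopy(state)   # the expensive step
            self.updates_since = 0

    def recover(self):
        """Simulated failover: return the last persisted state."""
        return copy.deepcopy(self.snapshot)
```

Shrinking `interval` reduces data loss but increases the frequency of the expensive snapshot step, which is exactly the kind of resource sensitivity the paper measures.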

  • Implementation of a high presence immersive traditional crafting system with remote collaborative work support   Order a copy of this article
    by Tomoyuki Ishida, Yangzhicheng Lu, Akihiro Miyakawa, Kaoru Sugita, Yoshitaka Shibata 
    Abstract: A high presence immersive traditional crafting system was developed to provide users, who interact with the system through head-mounted displays, with a highly realistic traditional crafting presentation experience that allows moving functions, such as free walk-through and teleportation. Users can also interactively operate traditional craft objects in space. In addition, the system supports collaborative work in a virtual space shared by remote users. To evaluate the effectiveness of this system, a questionnaire survey was administered to 124 subjects, who provided overwhelmingly positive responses regarding all functions. However, there is still room for improvement in the operability and relevancy of the system.
    Keywords: collaborative virtual environment; head-mounted display; Japanese traditional crafts; interior simulation.

  • A configurable and executable model of Spark Streaming on Apache YARN   Order a copy of this article
    by Jia-Chun Lin, Ming-Chang Lee, Ingrid Chieh Yu, Einar Broch Johnsen 
    Abstract: Streams of data are produced today at an unprecedented scale. Efficient and stable processing of these streams requires a careful interplay between the parameters of the streaming application and of the underlying stream processing framework. Today, finding these parameters happens by trial and error on the complex, deployed framework. This paper shows that high-level models can help to determine these parameters by predicting and comparing the performance of streaming applications running on stream processing frameworks with different configurations. To demonstrate this approach, this paper considers Spark Streaming, a widely used framework to leverage data streams on the fly and provide real-time stream processing. Technically, we develop a configurable and executable model to simulate both the streaming applications and the underlying Spark stream processing framework. Furthermore, we model the deployment of Spark Streaming on Apache YARN, which is a popular open-source distributed software framework for big data processing. We show that the developed model provides a satisfactory accuracy for predicting performance by means of empirical validation.
    Keywords: modelling; simulation; Spark Streaming; Apache YARN; batch processing; stream processing; ABS.

  • Models for hyper-converged cloud computing infrastructure planning   Order a copy of this article
    by Carlos Melo, Jamilson Dantas, Jean Araujo, Paulo Maciel, Rubens Matos, Danilo Oliveira, Iure Fé 
    Abstract: The data centre concept has evolved, mainly owing to the need to reduce the expense of the physical space required to store, provide and maintain large computational infrastructures. The software-defined data centre (SDDC) is a result of this evolution. Through the SDDC, any service can be hosted by virtualising more reliable and easier-to-maintain hardware resources. Nowadays, many services and resources can be provided in a single rack, or even a single machine, with availability similar to that of silo-based environments at comparable deployment costs. One of the ways to apply the SDDC concept to a data centre is through hyper-convergence. Among the main contributions of this paper are the behavioural models developed for the availability and capacity-oriented availability evaluation of silo-based, converged and hyper-converged cloud computing infrastructures. The obtained results may help stakeholders to choose between converged and hyper-converged environments, which offer similar availability, with the latter having lower deployment costs.
    Keywords: hyper-convergence; dependability models; dynamical reliability block diagrams; SDDC; DRBD; virtualisation; capacity-oriented availability; deployment cost; redundancy; cloud computing; OpenStack.

  • Architecture for diversity in the implementation of dependable and secure services using the state machine replication approach   Order a copy of this article
    by Caio Costa, Eduardo Alchieri 
    Abstract: The dependability and security properties of a system can be impaired by a system failure or by an opponent that exploits its vulnerabilities, respectively. An alternative to mitigate this risk is the implementation of fault- and intrusion-tolerant systems, in which the system properties are ensured even if some of its components fail (e.g., because of a software bug or a failure in the runtime environment) or are compromised by a successful attack. State Machine Replication (SMR) is widely used to implement these systems. In SMR, servers are replicated and client requests are deterministically executed in the same order by all replicas, so that the system behaviour remains correct even if some of them are compromised, since the correct replicas mask the misbehaviour of the faulty ones. Unfortunately, the proposed solutions for SMR do not consider diversity in the implementation, and all replicas execute the same software. Consequently, the same attack or software bug could compromise the whole system. To circumvent this problem, this work proposes an architecture that allows diversity in the implementation of dependable and secure services using the SMR approach. The goal is not to implement different versions of an SMR library for different programming languages, which demands many resources and is very expensive. Instead, the proposed architecture uses an underlying SMR library and provides the means to implement and execute service replicas (the application code) in different programming languages. The main problems addressed by the proposed architecture are twofold: (1) communication among different languages; and (2) data representation. The proposed architecture was integrated into the SMR library BFT-SMaRt, and a set of experiments showed its practical feasibility.
    Keywords: diversity; security; dependability; state machine replication.

  • Target exploration by Nomadic Levy walk on unit disk graphs   Order a copy of this article
    by Kouichirou Sugihara, Naohiro Hayashibara 
    Abstract: Random walks play an important role in computer science, covering a wide range of topics in theory and practice, including networking, distributed systems, and optimisation. The Levy walk is a family of random walks in which the length of each walk is drawn from a power-law distribution. There have been many recent reports on Levy walks in the context of target detection in swarm robotics, the analysis of human walking patterns, and the modelling of animal foraging behaviour. According to these results, it is known to be an efficient method for searching a two-dimensional plane. However, most of these works assume a continuous plane. In this paper, we propose a variant of the Homesick Levy walk, called the Nomadic Levy walk, and analyse the behaviour of the algorithm with respect to the cover ratio on unit disk graphs. We also compare the Nomadic Levy walk and the Homesick Levy walk on the target search problem. Our simulation results indicate that the proposed algorithm is significantly more efficient for sparse target detection on unit disk graphs than the Homesick Levy walk, and it also improves the cover ratio. Moreover, we analyse the impact of the movement of the sink (home position) on the efficiency of target exploration.
    Keywords: random walk; Levy walk; target search; unit disk graphs; DTN; autonomic computing; bio-inspired algorithms.
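A sketch of the walk family discussed here, on a continuous plane for simplicity rather than on a unit disk graph. The homesick probability and home-relocation interval below are our own illustrative choices, not parameters from the paper; the defining ingredients are the power-law step lengths and, for the nomadic variant, the drifting home position.

```python
import math
import random

def levy_step(alpha=1.5, d_min=1.0):
    """Step length with a power-law tail, P(D > d) ~ d^(-alpha),
    via inverse-transform sampling of a Pareto distribution."""
    u = 1.0 - random.random()          # u in (0, 1], avoids division by zero
    return d_min * u ** (-1.0 / alpha)

def nomadic_levy_walk(steps, homesick=0.3, move_home_every=50):
    """With probability `homesick` the agent jumps back to its home
    position; unlike the Homesick Levy walk, the home itself is
    periodically relocated to the agent's current position."""
    home = (0.0, 0.0)
    x, y = home
    trace = [(x, y)]
    for t in range(1, steps + 1):
        if random.random() < homesick:
            x, y = home                        # return home
        else:
            d = levy_step()
            theta = random.uniform(0, 2 * math.pi)
            x, y = x + d * math.cos(theta), y + d * math.sin(theta)
        if t % move_home_every == 0:
            home = (x, y)                      # the nomadic part
        trace.append((x, y))
    return trace
```

Moving the home position lets the walk escape the neighbourhood of the original home, which is what improves coverage over the purely homesick variant.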

  • An identity-based cryptographic scheme for cloud storage applications   Order a copy of this article
    by Manel Medhioub, Mohamed Hamdi 
    Abstract: The use of remote storage systems, notably cloud storage-based services, is attracting growing interest. In fact, one of the factors that led to the popularity of cloud computing is the availability of storage resources at reduced cost. However, when outsourcing data to a third party, security issues become critical concerns, especially confidentiality, integrity, authentication, anonymity and resiliency. Addressing this challenge, this work provides a new approach to ensure authentication in cloud storage applications. ID-Based Cryptosystems (IBC) have many advantages over certificate-based systems, such as simplified key management. This paper proposes an original ID-based authentication approach in which the cloud tenant is assigned the IBC Private Key Generator (PKG) function. Consequently, it can issue public elements for its users and keep the resulting IBC secrets confidential. Moreover, in our scheme, a public key infrastructure is still used to establish trust relationships between the PKGs.
    Keywords: cloud storage; authentication; IBC; identity-based cryptography; security; Dropbox.
    DOI: 10.1504/IJGUC.2019.10018608
     
  • COBRA-HPA: a block generating tool to perform hybrid program analysis   Order a copy of this article
    by Thomas Huybrechts, Yorick De Bock, Haoxuan Li, Peter Hellinckx 
    Abstract: The Worst-Case Execution Time (WCET) of a task is an important value in real-time systems. This metric is used by the scheduler in order to schedule all tasks before their deadlines. However, the code and the hardware architecture have a significant impact on the execution time and thus on the WCET. Therefore, different analysis methodologies exist to determine the WCET, each with its own advantages and disadvantages. In this paper, a hybrid approach is proposed which combines the strengths of two common analysis techniques. The two-layer hybrid model splits the code of tasks into so-called basic blocks. The WCET can be determined by performing execution time measurements on each block and statically combining those results. The COBRA-HPA framework presented in this paper was developed to facilitate the creation of hybrid block models and to automate the measurement and analysis process. Additionally, a detailed discussion of the implementation and performance of the framework is given. In conclusion, the results of the COBRA-HPA framework show a significant reduction in analysis effort while keeping sound WCET predictions for the hybrid method compared with the static and measurement-based approaches.
    Keywords: WCET; worst-case execution time; hybrid analysis methodology; code behaviour framework; COBRA; basic block generator.
    DOI: 10.1504/IJGUC.2019.10018610
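The two-layer hybrid idea, measuring each basic block and statically combining the results, can be sketched as follows. This is an illustration under the simplifying assumption of an acyclic control-flow graph with hypothetical block names; COBRA-HPA's actual block model and combination rules are richer.

```python
from functools import lru_cache

def block_wcet(measurements):
    """Measurement layer: take the observed maximum per basic block
    as its execution-time bound."""
    return {block: max(times) for block, times in measurements.items()}

def path_wcet(cfg, wcet, entry):
    """Static layer: the WCET estimate is the heaviest path through the
    control-flow graph, found by memoised longest-path recursion.
    cfg maps each block to its successor blocks (assumed acyclic)."""

    @lru_cache(maxsize=None)
    def worst(block):
        succs = cfg.get(block, [])
        return wcet[block] + (max(map(worst, succs)) if succs else 0)

    return worst(entry)
```

For example, with a diamond-shaped CFG the estimate follows the slower branch, regardless of which branch any single end-to-end measurement happened to take.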
     
  • The big data mining forecasting model based on combination of improved manifold learning and deep learning   Order a copy of this article
    by Xiurong Chen, Yixiang Tian 
    Abstract: In this paper, we combine Local Linear Embedding (LLE) with Continuous Deep Belief Networks (CDBN) as the input of an RBF network, constructing a mixed-feature RBF model. However, LLE depends heavily on the local neighbourhood, which is not easy to determine, so we propose a new method, Kernel Entropy Linear Embedding (KELE), which uses Kernel Entropy Component Analysis (KECA) to transform the non-linear problem into a linear one. CDBN has difficulty in determining the network structure and lacks supervision, so we address both issues by using the kernel entropy information obtained from KECA; we call the resulting method KECDBN. In the empirical part, we use foreign exchange rate time series to examine the effects of the improved methods. The results show that KELE and KECDBN are more effective in reducing dimensionality and extracting features, respectively, and also improve the prediction accuracy of the mixed-feature RBF model.
    Keywords: LLE; local linear embedding; CDBN; continuous deep belief network; KECA; kernel entropy component analysis; KELE; kernel entropy linear embedding; KECDBN; kernel entropy continuous deep belief network.
    DOI: 10.1504/IJGUC.2019.10018611
     
  • Impact of software architecture on execution time: a power window TACLeBench case study   Order a copy of this article
    by Haoxuan Li, Paul De Meulenaere, Siegfried Mercelis, Peter Hellinckx 
    Abstract: Timing analysis is used to extract the timing properties of a system. Various timing analysis techniques and tools have been developed over the past decades. However, changes in hardware platform and software architecture introduced new challenges in timing analysis techniques. In our research, we aim to develop a hybrid approach to provide safe and precise timing analysis results. In this approach, we will divide the original code into smaller code blocks, and then construct a timing model based on the information acquired by measuring the execution time of every individual block. This process can introduce changes in the software architecture. In this paper, we use a multi-component benchmark to investigate the impact of software architecture on the timing behaviour of a system.
    Keywords: WCET; timing analysis; hybrid timing analysis; power window; embedded systems; TACLEBench; COBRA block generator.
    DOI: 10.1504/IJGUC.2019.10018612
     
  • Accountability management for multi-tenant cloud services   Order a copy of this article
    by Fatma Masmoudi, Mohamed Sellami, Monia Loulou, Ahmed Hadj Kacem 
    Abstract: The widespread adoption of multi-tenancy in the Software as a Service delivery model triggers several data protection issues that could decrease tenants' trust. In this context, accountability can be used to strengthen the trust of tenants in the cloud by providing reassurance that personal data hosted in the cloud are processed according to their requirements. In this paper, we propose an approach for the accountability management of multi-tenant cloud services that allows compliance checking of service behaviour against defined accountability requirements based on monitoring rules, detection of accountability violations otherwise, and post-violation analysis based on evidence. A tool suite is developed and integrated into a middleware to implement our proposal. Finally, the experiments we have carried out show the efficiency of our approach against several criteria.
    Keywords: cloud computing; accountability; multi-tenancy; monitoring; accountability violation; analysis; evidence; multi-tenant evidence.
    DOI: 10.1504/IJGUC.2019.10018613
     
  • A Big Data approach for multi-experiment data management   Order a copy of this article
    by Silvio Pardi, Guido Russo 
    Abstract: Data sharing among similar experiments is made difficult by the use of ad hoc directory structures, data and metadata naming schemes, and a large variety of access protocols. The Big Data paradigm provides the context to overcome these heterogeneity problems. In this work, we present a study of a Global Storage Ecosystem designed to manage large and distributed datasets in the context of physics experiments. The proposed environment is based on the HTTP/WebDAV protocols together with modern data-searching technologies, according to the Big Data paradigm. The main goal is to aggregate multiple storage areas and to simplify data retrieval, using Elasticsearch and the Apache Lucene library. This platform offers physicists an effective instrument to simplify multi-experiment data analysis without knowing a priori the directory format or the data itself. As a proof of concept, we realised a prototype on the ReCaS Supercomputing infrastructure in Napoli.
    Keywords: Big Data; data federation; physics experiments; HTTP storage; computing model.
    DOI: 10.1504/IJGUC.2019.10018614
     
  • A WLAN triage testbed based on fuzzy logic and its performance evaluation for different number of clients and throughput parameter   Order a copy of this article
    by Kosuke Ozera, Takaaki Inaba, Kevin Bylykbashi, Shinji Sakamoto, Makoto Ikeda, Leonard Barolli 
    Abstract: Many devices communicate over Wireless Local Area Networks (WLANs). The IEEE 802.11e standard for WLANs is an important extension of the IEEE 802.11 standard focusing on Quality of Service (QoS) that works with any PHY implementation. The IEEE 802.11e standard introduces Enhanced Distributed Channel Access (EDCA) and HCF Controlled Channel Access (HCCA). Both schemes are useful for QoS provisioning to support delay-sensitive voice and video applications. EDCA uses the contention window to differentiate high-priority and low-priority services. However, it does not consider the priority of users. In this paper, in order to deal with this problem, we propose a Fuzzy-based Admission Control System (FACS). We implemented a triage testbed using FACS and carried out an experiment. The experimental results show that the number of connected clients increases during the Avoid phase, but does not change during the Monitoring phase. The experimental results also show that the implemented testbed performs better than a conventional WLAN.
    Keywords: WLAN triage; congestion control; fuzzy logic; admission control.
    DOI: 10.1504/IJGUC.2019.10018615
     
  • E-XY: an entropy based XY routing algorithm   Order a copy of this article
    by Akash Punhani, Pardeep Kumar, Nitin 
    Abstract: In recent years, the Network on Chip (NoC) has been used to handle communication between cores or processing elements. The major drawback of the existing XY routing algorithm for mesh topologies is its inability to handle high traffic loads. In this paper, an E-XY (Entropy-based XY) routing algorithm is proposed that generates information about adjacent routers based on previously communicated packets. Tests have been carried out on an 8x8 mesh topology simulated in OMNeT++ 4.4.1 using HNOCS. Different types of traffic have been considered, including uniform, bit complement, neighbour and tornado. The proposed algorithm is compared with other routing algorithms, including XY, IX/Y and Odd Even. Results demonstrate that the proposed algorithm is comparable to the XY routing algorithm up to a load factor of 0.8 and performs better than the XY, IX/Y and Odd Even routing algorithms as the load increases.
    Keywords: routing algorithm; adaptive; parallel communication; router architecture; maximum entropy model.
    DOI: 10.1504/IJGUC.2019.10018616
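    As a hedged illustration of the entropy idea behind routing decisions of this kind (a sketch for exposition, not the authors' E-XY implementation — the helper names and the decision rule are assumptions), a router can score each candidate output direction by the Shannon entropy of its recently observed traffic and prefer the less congested one:

```python
import math
from collections import Counter

def shannon_entropy(history):
    """Shannon entropy (bits) of a list of recently observed flow identifiers."""
    counts = Counter(history)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def choose_port(history_x, history_y):
    """Prefer the direction whose recent traffic is more uniform (higher
    entropy), i.e. less dominated by a single flow and so less congested."""
    return 'X' if shannon_entropy(history_x) >= shannon_entropy(history_y) else 'Y'
```

Here the per-direction histories stand in for the information the router accumulates about previously communicated packets.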
     
  • OFQuality: a quality of service management module for software-defined networking   Order a copy of this article
    by Felipe Volpato, Madalena Pereira Da Silva, Mario Antonio Ribeiro Dantas 
    Abstract: The exponential growth of online devices has been causing difficulties for network management and maintenance. At the same time, applications are getting richer in terms of content and quality, thus requiring more and more network guarantees. To overcome this issue, new network approaches, such as Software-Defined Networking (SDN), have emerged. OpenFlow, one of the most widely used protocols for SDN, is not sufficient on its own to provide QoS based on queue prioritisation. In this paper, we propose the architecture of a controller module that implements the Open vSwitch Database Management Protocol (OVSDB) in order to provide QoS management with queue prioritisation. Our module differs from others because it features mechanisms to test and facilitate user configuration. Our experiments showed that the module behaved as expected, introducing little delay when managing switch elements, therefore making it a useful tool for QoS management in SDN.
    Keywords: SDN; software-defined networking; OpenFlow; QoS; quality of service; OVS; open vSwitch; OVSDB; open vSwitch database; management plane.
    DOI: 10.1504/IJGUC.2019.10018617
     

Special Issue on: Recent Developments in Parallel, Distributed and Grid Computing for Big Data

  • GPU accelerated video super-resolution using transformed spatio-temporal exemplars   Order a copy of this article
    by Chaitanya Pavan Tanay Kondapalli, Srikanth Khanna, Chandrasekaran Venkatachalam, Pallav Kumar Baruah, Kartheek Diwakar Pingali, Sai Hareesh Anamandra 
    Abstract: Super-resolution (SR) is the method of obtaining a high-resolution (HR) image or image sequence from one or more low-resolution (LR) images of a scene. Super-resolution has been an active area of research in recent years owing to its applications in defence, satellite imaging, video surveillance and medical diagnostics. In a broad sense, SR techniques can be classified into external and internal database-driven approaches. The training phase of the first approach is computationally intensive, as it learns the LR-HR patch relationships from huge datasets, while the test procedure is relatively fast. In the second approach, the super-resolved image is directly constructed from the available LR image, eliminating the need for any learning phase, but the testing phase is computationally intensive. Recently, Huang et al. (2015) proposed a transformed self-exemplar internal database technique which takes advantage of the fractal nature of an image by expanding the patch search space using geometric variations. This method fails if there is no patch redundancy within and across image scales, and also if it fails to detect the vanishing points (VP) used to determine the perspective transformation between the LR image and its subsampled form. In this paper, we expand the patch search space by taking advantage of the temporal dimension of the image frames in the scene video, and also use an efficient VP detection technique by Lezama et al. (2014). We are thereby able to successfully super-resolve even the failure cases of Huang et al. (2015) and achieve an overall improvement in PSNR. We also focused on reducing the computation time by exploiting the embarrassingly parallel nature of the algorithm. We achieved a speedup of 6x on multi-core, up to 11x on GPU, and around 16x on a hybrid multi-core and GPU platform by parallelising the proposed algorithm. Using our hybrid implementation, we achieved a 32x super-resolution factor in limited time. We also demonstrate superior results for the proposed method compared with current state-of-the-art SR methods.
    Keywords: super-resolution; self-exemplar; perspective geometry; temporal dimension; vanishing point; GPU; multicore.

  • Energy-efficient fuzzy-based approach for dynamic virtual machine consolidation   Order a copy of this article
    by Anita Choudhary, Mahesh Chandra Govil, Girdhari Singh, Lalit K. Awasthi, Emmanuel S. Pilli 
    Abstract: In a cloud environment, overload leads to performance degradation and Service Level Agreement (SLA) violation, while underload results in inefficient use of resources and needless energy consumption. Dynamic Virtual Machine (VM) consolidation is considered an effective solution to deal with both overload and underload. However, dynamic VM consolidation is not a trivial solution, as it can itself lead to violation of the negotiated SLA owing to the runtime overheads of VM migration. Further, dynamic VM consolidation approaches need to answer several questions: (i) when to migrate a VM? (ii) which VM should be migrated? and (iii) where should the selected VM be migrated to? In this work, efforts are made to develop a comprehensive approach that achieves better solutions to these problems. In the proposed approach, forecasting methods for host overload detection are explored; a fuzzy logic based VM selection approach that enhances the performance of the VM selection strategy is developed; and a VM placement algorithm based on destination CPU utilisation is also developed. The performance evaluation of the proposed approaches is carried out in the CloudSim toolkit using the PlanetLab dataset. The simulation results exhibit significant improvements in the number of VM migrations, energy consumption and SLA violations.
    Keywords: cloud computing; virtual machines; dynamic virtual machine consolidation; exponential smoothing; fuzzy logic.
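    The forecasting-based overload detection mentioned in the abstract can be illustrated with simple exponential smoothing. This is a minimal sketch under assumed parameter names (`alpha`, `threshold`), not the paper's actual detector:

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: one-step-ahead forecast of a
    CPU-utilisation series (values in [0, 1])."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def host_overloaded(series, threshold=0.8, alpha=0.5):
    """Flag the host as overloaded when the forecast utilisation
    exceeds the threshold, triggering VM selection and migration."""
    return ses_forecast(series, alpha) > threshold
```

A detector of this shape answers the "when to migrate?" question; the "which VM?" and "where?" steps would follow separately.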

  • A distributed framework for cyber-physical cloud systems in collaborative engineering   Order a copy of this article
    by Stanislao Grazioso, Mateusz Gospodarczyk, Mario Selvaggio, Giuseppe Di Gironimo 
    Abstract: Distributed cyber-physical systems play a significant role in enhancing group decision-making processes, as in collaborative engineering. In this work, we develop a distributed framework to allow the use of collaborative approaches in group decision-making problems. We use the fuzzy analytic hierarchy process, a multiple criteria decision-making method, as the algorithm for the selection process. The architecture of the framework makes use of open-source utilities. The information components of the distributed framework act in response to the feedback provided by humans. Cloud infrastructures are used for data storage and remote computation. The motivation behind this work is to make possible the implementation of group decision-making in real scenarios. Two illustrative examples show the feasibility of the approach in different application fields. The main outcome is the achievement of a time reduction for the selection and evaluation process.
    Keywords: distributed systems; cyber-physical systems; web services; group decision making; fuzzy AHP; product design and development.

Special Issue on: MPP2017 and WAMCA2017 Advancements in High-level Parallel Programming Models for Edge/Fog/In-situ Computing

  • HPSM: a programming framework to exploit multi-CPU and multi-GPU systems simultaneously   Order a copy of this article
    by Joao Vicente Ferreira Lima, Daniel Di Domenico 
    Abstract: This paper presents a high-level C++ framework called HPSM to exploit multi-CPU and multi-GPU systems. HPSM enables the execution of parallel loops and reductions simultaneously over CPUs and GPUs using three parallel backends: Serial, OpenMP and StarPU. We analysed HPSM development effort with an AXPY program through two standard metrics (NCLOC and ES). In addition, we evaluated performance and energy with three parallel benchmarks: N-Body, Hotspot and CFD solver. HPSM reduced code effort by up to 56.9% compared with the StarPU C interface, although it resulted in 2.5x more lines of code than OpenMP. The CPU-GPU combination attained speedups with Hotspot of up to 92.7x on an x86-based system with four GPUs and up to 108.2x on an IBM POWER8+ system with two GPUs. On both systems, the addition of GPUs improved energy efficiency.
    Keywords: high performance computing; CPU-GPU systems; parallel programming models; high-level framework; parallel loops.

  • An efficient pathfinding system in FPGA for edge/fog computing   Order a copy of this article
    by Alexandre Nery, Alexandre Sena, Leandro Guedes 
    Abstract: Pathfinding algorithms are at the heart of several classes of applications, such as network appliances (routing), GPS navigation and autonomous cars, which are all part of recent trends in artificial intelligence and the Internet of Things (IoT). Moreover, advances in semiconductor miniaturisation technologies have enabled the design of efficient Systems-on-Chip (SoC) devices with demanding performance requirements and energy consumption constraints. Such advanced systems often include Field Programmable Gate Arrays (FPGAs) to allow the design of customised co-processors/accelerators that yield lower power consumption and higher performance, as can be found today in various well-known cloud computing services: Amazon AWS, Baidu, etc. However, the number of embedded systems with processing capabilities and internet access has led to a substantial increase in network traffic towards such cloud service systems. Therefore, this work aims at designing and evaluating an efficient pathfinding accelerator system equipped with an FPGA co-processor based on Dijkstra's shortest path algorithm. The system aims to mitigate the network traffic problem with an efficient accelerator for the pathfinding problem placed at the edge of the network. The system is designed using the Xilinx High-Level Synthesis (HLS) compiler and is implemented in the programmable logic of a Xilinx Zynq FPGA, embedded with an ARM microprocessor that is not only in charge of controlling the co-processor, but also of lightweight TCP/IP network communication on top of the FreeRTOS operating system. Extensive performance, circuit-area and energy consumption results show that the co-processor can find the shortest path about 2.5 times faster than the system's ARM microprocessor, in a simulation test case based on tourist locations in the city of Rio de Janeiro, acquired from the OpenStreetMap database.
    Keywords: pathfinding; FPGA accelerator; high-level synthesis; fog computing; edge computing.
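    For reference, the algorithm the co-processor accelerates is the classic heap-based Dijkstra shortest-path search; the sketch below is a plain software version over an adjacency list (an illustration of the algorithm, not the authors' HLS code):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-path algorithm over an adjacency-list graph
    {node: [(neighbour, weight), ...]}; returns distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

An FPGA implementation would typically replace the heap with a hardware priority structure, but the relaxation logic is the same.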

  • A network coding protocol for wireless sensor fog computing   Order a copy of this article
    by Bruno Marques, Igor Machado Coelho, Alexandre Sena, Maria Clicia Castro 
    Abstract: A communication protocol for fog computing should be efficient, lightweight and customisable, enabling the computing results on the edges to be passed easily through the network. In this work we focus on a communication protocol for fog nodes in a network composed of wireless sensors, which are autonomous and spatially distributed for monitoring physical or environmental conditions. Problems with data congestion and limited physical resources are common in these networks. For the optimisation of data flow, it is important to apply techniques that reduce the transmitted data. We propose a network coding technique and study its efficiency for data transmission protocols. The experiments were performed through a wireless sensors programming framework with TinyOS operating system, NesC language and TOSSIM simulator. Experimental results demonstrate a better performance when the network coding technique is applied to the data communication protocol, for different scenarios. Further results demonstrate the network coding efficiency for fog computing, with around 80% of the total messages exchanged compared with a traditional protocol.
    Keywords: fog computing; wireless sensor; network coding.
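    The core benefit of network coding for reducing transmitted messages can be sketched with the simplest XOR scheme (a generic illustration of the technique; the paper's protocol details are not reproduced here):

```python
def xor_packets(a, b):
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# A relay that must deliver p1 to one neighbour and p2 to another can
# broadcast the single coded packet p1 XOR p2 instead of two packets;
# each neighbour recovers the missing packet from the one it already holds.
p1, p2 = b'\x10\x20\x30', b'\x0f\x01\x02'
coded = xor_packets(p1, p2)
recovered_p2 = xor_packets(coded, p1)
```

Saving one transmission per coded pair is what drives the reported reduction in total messages exchanged.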

  • An optimised dataflow engine for GPGPU stream processing   Order a copy of this article
    by Marcos Rocha, Felipe Franca, Alexandre Nery, Leandro Guedes 
    Abstract: Stream processing applications have high-demanding performance requirements that are hard to tackle using traditional parallel models on modern many-core architectures, such as GPUs. On the other hand, recent dataflow computing models can naturally expose and facilitate parallelism exploitation for a wide class of applications. Thus, instead of following program order, different operations can run in parallel as soon as their input operands become available. This work presents an extension to an existing dataflow library for Java. The library extension implements high-level constructs with multiple command queues to enable the overlapping of memory operations and kernel executions on GPUs. Experimental results show that significant speedup can be achieved for a subset of well-known stream processing applications: volume ray-casting, path-tracing and the Sobel filter. Moreover, new contributions with respect to concurrency analysis and the stream processing parallel model in dataflow are presented.
    Keywords: dataflow; heterogeneous systems; high-performance computing.

  • A dataflow runtime environment and static scheduler for edge, fog and in-situ computing   Order a copy of this article
    by Caio B. G. Carvalho, Victor Da Cruz Ferreira, Felipe Maia Galvão França, Cristiana Barbosa Bentes, Gabriele Mencagli, Tiago Assumpção De Oliveira Alves, Alexandre Da Costa Sena, Leandro Augusto Justen Marzulo 
    Abstract: In the dataflow computation model, instructions or tasks are executed according to data dependencies, instead of following program order, thus allowing natural parallelism exploitation. A wide variety of dataflow-based solutions, in different flavours and abstraction levels (from processors to runtime libraries), have been proposed as interesting alternatives for harnessing the potential of modern computing systems. Sucuri is a dataflow library for Python that allows users to specify their application as a dependency graph and execute it transparently at clusters of multicores, while taking care of scheduling issues. Recent trends in fog and in-situ computing assume that storage and network devices will be equipped with processing elements that usually have lower power consumption and performance. An important decision on such systems is whether to move data to traditional processors (paying the communication costs), or perform computation where data is sitting, using a potentially slower processor. Hence, runtime environments that deal with that trade-off are extremely necessary. This work presents a study on different factors that should be considered when running dataflow applications in edge/fog/in-situ environments. We use Sucuri to manage the execution in a small system with a regular PC and a Parallella board, emulating a smart storage (edge/fog/in-situ device). Experiments performed with a set of benchmarks show how data transfer size, network latency and packet loss rates affect execution time when outsourcing computation to the smart storage. Then, a static scheduling solution is presented, allowing Sucuri to avoid outsourcing when there would be no performance gains.
    Keywords: dataflow computing; edge computing; fog computing; scheduling techniques; smart storage.

Special Issue on: Emergent Peer-to-Peer Network Technologies for Ubiquitous and Wireless Networks

  • An improved energy efficient multi-hop ACO-based intelligent routing protocol for MANET   Order a copy of this article
    by Jeyalaxmi Perumaal, Saravanan R 
    Abstract: A Mobile Ad-hoc Network (MANET) consists of a group of mobile nodes that communicate without any supporting centralised infrastructure. Routing in a MANET is difficult because of its dynamic features, such as high mobility, constrained bandwidth and link failures due to energy loss. The objective of the proposed work is to implement an intelligent routing protocol. Selection of the best hops is mandatory to provide good throughput in the network; therefore, Ant Colony Optimisation (ACO) based intelligent routing is proposed. Selecting the best intermediate hops with the ACO technique greatly reduces network delay and link failures by validating the coordinator nodes. The best coordinator nodes are selected as intermediate hops on the intelligent routing path. The performance is evaluated using the NS2 simulator, and the metrics considered for evaluation are the delivery and loss rates of sent data, throughput, network lifetime, delay and energy consumption.
    Keywords: ant colony optimisation; intelligent routing protocol; best co-ordinator nodes; MANET.

  • Analysis of spectrum handoff schemes for cognitive radio networks considering secondary user mobility   Order a copy of this article
    by K.S. Preetha, S. Kalaivani 
    Abstract: There has been a gigantic spike in the usage and development of wireless devices since wireless technology came into existence. This has contributed to a very serious problem of spectrum unavailability, or spectrum scarcity. The solution to this problem comes in the form of cognitive radio networks, where secondary users (SUs), also known as unlicensed users, make use of the spectrum in an opportunistic manner. An SU uses the spectrum in such a manner that the primary or licensed user (PU) does not face interference above a threshold level of tolerance. Whenever a PU comes back to reclaim its licensed channel, the SU using it needs to perform a spectrum handoff (SHO) to another channel that is free of PUs. This behaviour is termed spectrum mobility. Spectrum mobility can be achieved by means of SHO. Initially, the SUs continuously sense the channels to identify an idle channel. Errors in channel sensing are possible. Detection theory is applied to analyse spectrum sensing errors with the receiver operating characteristic (ROC), considering false alarm probability, miss detection probability and detection probability. In this paper, we meticulously investigate and analyse the probability of spectrum handoff (PSHO), and hence the performance of spectrum mobility, with Lognormal-3 and Hyper-Erlang distribution models, considering SU call duration and the residual availability time of spectrum holes as measurement metrics designed for tele-traffic analysis.
    Keywords: cognitive radio networks; detection probability; probability of a miss; SNR; false alarm probability; primary users; secondary users.

  • Link survivability rate-based clustering for QoS maximisation in VANET   Order a copy of this article
    by D. Kalaivani, P.V.S.S.R. Chandra Mouli Chandra Mouli 
    Abstract: The clustering technique is used in VANETs to manage and stabilise topology information. The major requirements of this technique are data transfer through the group of nodes without disconnection, node coordination, minimised interference between nodes, and reduction of the hidden terminal problem. Data communication among the nodes in a cluster is coordinated by a cluster head (CH). The major issues in clustering approaches are improper definition of the cluster structure and maintenance of the cluster structure in a dynamic network. To overcome these issues, a link- and weight-based clustering approach is developed, along with a distributed dispatching information table (DDIT) that reuses significant information to avoid data transmission failure. In this paper, the clustering algorithm is designed on the basis of the relative velocity of two same-directional vehicles, forming a cluster from a number of nodes in a VANET. The CH is then selected based on the link survival rate of the vehicle to deliver emergency messages to the different vehicles in the cluster, along with the data packet information stored in the DDIT table for fault prediction. Finally, an efficient medium access control (MAC) protocol is used to prioritise emergency messages and avoid spectrum shortage in the cluster. The comparative analysis of the proposed link-based CH selection with DDIT (LCHS-DDIT) against existing methods, such as clustering-based cognitive MAC (CCMAC), multichannel CR ad-hoc network (MCRAN) and dedicated short-range communication (DSRC), proves the effectiveness of LCHS-DDIT regarding throughput, packet delivery ratio and routing control overhead with minimum transmission delay.
    Keywords: vehicular ad-hoc networks; link survival rate; control channel; service channel; medium access control; roadside unit; on-board unit.

Special Issue on: Emerging Scalable Edge Computing Architectures and Intelligent Algorithms for Cloud-of-Things and Edge-of-Things

  • A survey on fog computing and its research challenges   Order a copy of this article
    by Jose Dos Santos Machado, Edward David Moreno, Admilson De Ribamar Lima Ribeiro 
    Abstract: This paper reviews fog computing, a new paradigm of distributed computing, and presents its concept, characteristics and areas of application. It provides a literature review on the problems of its implementation and analyses its research challenges, such as security issues, operational issues and standardisation. We show and discuss that many questions still need to be researched in academia before its implementation becomes a reality, but it is clear that its adoption is inevitable for the internet of the future.
    Keywords: fog computing; edge computing; cloud computing; IoT; distributed computing; cloud integration to IoT.

  • Hybrid coherent encryption scheme for multimedia big data management using cryptographic encryption methods   Order a copy of this article
    by Stephen Dass, J. Prabhu 
    Abstract: In today's world of technology, data plays an imperative role in many different technical areas. Ensuring data confidentiality, integrity and security over the internet, across different media and applications, is a challenging task. Data generated by multimedia and IoT devices is another huge source of big data on the internet. When sensitive and confidential data are accessed by attackers, serious countermeasures for security and privacy are required. Data encryption is the mechanism to forestall this issue. Many encryption techniques are used for multimedia and IoT data, but when massive data volumes are involved there are greater computational challenges. This paper designs and proposes a new coherent encryption algorithm that addresses the issues of IoT and multimedia big data. The proposed system achieves a strong cryptographic effect without requiring much memory and allows easy performance analysis. The proposed system also handles huge data with the help of a GPU to make data processing more efficient. The proposed algorithm is compared with other symmetric cryptographic algorithms, such as AES, DES, 3-DES, RC6 and MARS, based on architecture, flexibility, scalability and security level, as well as on computational running time and throughput for both the encryption and decryption processes. The avalanche effect of the proposed system is calculated to be 54.2%. The proposed framework secures multimedia against real-time attacks better than the existing systems.
    Keywords: big data; symmetric key encryption; analysis; security; GPU; IoT; multimedia big data.
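    The avalanche effect quoted in the abstract (54.2%) is the fraction of ciphertext bits that flip when a single input bit changes. A hedged sketch of how such a figure is measured, using SHA-256 purely as a stand-in for the proposed cipher:

```python
import hashlib

def bit_diff_percent(a, b):
    """Percentage of differing bits between two equal-length byte strings."""
    total = len(a) * 8
    diff = sum(bin(x ^ y).count('1') for x, y in zip(a, b))
    return 100.0 * diff / total

def avalanche(msg):
    """Flip one input bit and measure how many output bits change.
    SHA-256 stands in here for the cipher under test; an ideal
    primitive scores close to 50%."""
    flipped = bytes([msg[0] ^ 0x01]) + msg[1:]
    return bit_diff_percent(hashlib.sha256(msg).digest(),
                            hashlib.sha256(flipped).digest())
```

In practice the measurement is averaged over many messages and bit positions rather than a single flip.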

  • A study on data deduplication schemes in cloud storage   Order a copy of this article
    by Priyan Malarvizhi Kumar, Usha Devi G, Shakila Basheer, Parthasarathy P 
    Abstract: Digital data is growing at an immense rate day by day, and finding efficient storage and security mechanisms is a challenge. Cloud storage has already gained popularity because of the huge data storage capacity that cloud service providers make available to users through storage servers. When many users upload data to the cloud, there can be a great deal of redundant data, which wastes storage space and transmission bandwidth. To guarantee efficient storage, handling this redundant data is very important, and this is done through deduplication. The major challenge for deduplication is that most users upload data in encrypted form for privacy and security. There are many existing mechanisms for deduplication, some of which handle encrypted data as well. The purpose of this paper is to survey the existing deduplication mechanisms in cloud storage and to analyse the methodologies used by each of them.
    Keywords: deduplication; convergent encryption; cloud storage.
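    Convergent encryption, named in the keywords, is the standard trick that reconciles encryption with deduplication: the key is derived from the plaintext itself, so identical files always encrypt to identical ciphertexts. The sketch below illustrates the idea with a toy SHA-256 counter-mode keystream (an assumption for self-containment, not any surveyed scheme's actual cipher):

```python
import hashlib

def _keystream(key, n):
    """Toy keystream: SHA-256 of key || counter, truncated to n bytes."""
    out = b''
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:n]

def convergent_encrypt(plaintext):
    """Convergent encryption: key = H(plaintext), so equal plaintexts
    yield equal ciphertexts and the server can deduplicate them
    without seeing the content."""
    key = hashlib.sha256(plaintext).digest()
    cipher = bytes(p ^ k
                   for p, k in zip(plaintext, _keystream(key, len(plaintext))))
    return key, cipher
```

The server deduplicates by comparing ciphertexts (or their hashes); only users who already know the plaintext can derive the key.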

Special Issue on: INCOS 2018 Applied Soft Computing for Optimisation and Parallel Applications

  • An enhanced jaya algorithm for solving nurse scheduling problem   Order a copy of this article
    by Ahmed Ali, Walaa El-Ashmawi 
    Abstract: The Nurse Scheduling Problem (NSP) is one of the main optimisation problems that requires an efficient assignment of a number of nurses to a number of shifts in order to cover a hospital's planning-horizon demands. NSP is an NP-hard problem subject to a set of hard and soft constraints. Such problems can be efficiently solved by optimisation algorithms, such as meta-heuristics. In this paper, we enhance one of the most recent meta-heuristic algorithms, called Jaya, for solving the NSP. The enhanced algorithm is called EJNSP (Enhanced Jaya for Nurse Scheduling Problem). EJNSP focuses on maximising the nurses' preferences regarding shift requests and minimising under- and over-staffing. EJNSP has two main strategies. First, it randomly generates an initial effective schedule that satisfies a set of constraints. Second, it uses swap operators to satisfy the set of soft constraints and achieve an effective schedule. A set of experiments has been applied to benchmark datasets with different numbers of nurses and shifts. The experimental results demonstrate that the EJNSP algorithm achieves effective results for solving the NSP, minimising under- and over-staffing and satisfying the nurses' preferences.
    Keywords: nurse scheduling problem; meta-heuristic algorithms; jaya optimisation algorithm.
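    For context, the basic (continuous) Jaya update that EJNSP builds on moves every candidate towards the best solution and away from the worst, with no algorithm-specific control parameters. The sketch below shows plain Jaya on a toy objective; it is an illustration of the base algorithm, not EJNSP's discrete swap operators:

```python
import random

def jaya(objective, bounds, pop_size=20, iters=200, seed=1):
    """Basic Jaya: x' = x + r1*(best - |x|) - r2*(worst - |x|),
    accepting a candidate only if it improves the objective."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        scores = [objective(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for i, x in enumerate(pop):
            cand = [min(max(xj + rng.random() * (bj - abs(xj))
                               - rng.random() * (wj - abs(xj)), lo), hi)
                    for xj, bj, wj, (lo, hi) in zip(x, best, worst, bounds)]
            if objective(cand) < scores[i]:
                pop[i] = cand  # greedy acceptance
    return min(pop, key=objective)
```

For the NSP, EJNSP replaces this continuous move with constraint-aware schedule generation and swap operators over nurse-shift assignments.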

  • An adaptive technique for cost reduction in cloud data centre environment   Order a copy of this article
    by Hesham Elmasry, Ayman Khedr, Mona Nasr 
    Abstract: The growing interest in cloud computing has increased the energy consumption of data centres, which has become a critical issue. High energy consumption not only translates into high operational cost and a reduced profit margin for cloud providers, but also leads to high carbon emissions, which are not environmentally friendly. Therefore, we need energy-saving solutions to minimise the negative impact of cloud computing. To this end, we propose an Energy Saving Load Balancing (ESLB) technique that can save energy in the cloud server while maintaining the Service Level Agreement (SLA), which covers quality of service and resource usage between the cloud service provider and cloud customers. This paper presents the ESLB technique to enhance response time and resource usage, and to reduce both energy consumption and carbon dioxide emissions, mitigating the negative impact of cloud computing on the environment.
    Keywords: data centre; energy efficiency; green cloud computing; load balancing; quality of service.
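    A common building block behind energy-aware load balancing of this kind is workload consolidation: packing virtual machines onto as few hosts as possible so that idle hosts can be switched to a low-power state. The sketch below uses first-fit-decreasing bin packing with hypothetical load figures; it illustrates the general idea only and is not the ESLB technique from the paper.

```python
def consolidate(vm_loads, capacity=100):
    """First-fit-decreasing placement: assign each VM (largest first) to the
    first host with enough remaining capacity; open a new host otherwise."""
    free = []          # remaining capacity of each active host
    placement = {}     # VM load -> host index (loads assumed unique here)
    for load in sorted(vm_loads, reverse=True):
        for i, f in enumerate(free):
            if load <= f:
                free[i] -= load
                placement[load] = i
                break
        else:
            free.append(capacity - load)
            placement[load] = len(free) - 1
    return placement, len(free)

loads = [55, 40, 35, 30, 20, 10]   # hypothetical CPU demands (% of one host)
placement, active_hosts = consolidate(loads)
print(active_hosts)   # 2 -- the 190% total demand fits on two 100% hosts
```

    Every host not in the returned placement can be powered down, which is where the energy saving comes from; an SLA-aware variant would also cap per-host utilisation below 100% to protect response times.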

  • Novel mobile palmprint databases for biometric authentication   Order a copy of this article
    by Mahdieh Izadpanahkakhk, Seyyed Mohammad Razavi, Mehran Taghipour-Gorjikolaie, Seyyed Hamid Zahiri, Aurelio Uncini 
    Abstract: Mobile palmprint biometric authentication has attracted much attention as an interesting analytics tool for representing discriminative features. Despite advances in this technology, some challenges remain, including the lack of sufficient data and of templates invariant to rotation, illumination and translation. In this paper, we provide two mobile palmprint databases and address the aforementioned challenges via deep convolutional neural networks. To the best of our knowledge, this is the first study in which mobile palmprint images were acquired in special views and then evaluated via deep learning training algorithms. To evaluate our mobile palmprint images, several well-known convolutional neural networks are applied to the verification task. Using these architectures, the best results achieved are a classification cost of 0.118 in the training phase and a classification accuracy of 0.925 in the test phase, obtained in the 1-to-1 matching procedure.
    Keywords: training algorithms; biometric authentication; palmprint verification; mobile devices; deep learning; convolutional neural network; feature extraction.

  • IoT-based intensive care secure framework for patient monitoring and tracking   Order a copy of this article
    by Lamia Omran, Kadry Ezzat, Alaa Bayoumi, Ashraf Darwich, Aboul Hassanien 
    Abstract: This paper aims to design a prototype of a real-time patient control system. The proposed framework is used to measure the patient's physical parameters, such as body temperature, heart rate and ECG, monitored with the assistance of sensors. The collected data are sent to the cloud, then to the nurse station, the specialist and the patient's tablet or the web application. In this framework, the patient's health is checked consistently and the data obtained through the networks are transmitted. If any irregularity is noticed in the patient's signs, it is sent to nurses and doctors for suggestions to help the patient. The system is implemented using an Arduino advanced controller, and simulation results are obtained. The smart intensive care unit (SICU) provides a new way of monitoring patients' health in order to improve healthcare systems and patients' care and safety. The cloud system is provided by a group of micro-services hosted on many servers that simulate a small cloud system. The patients' data are secured in this framework by using an OAuth server to authenticate the users and the sensors and to generate tokens. This system can assist doctors and nurses in achieving their missions and improving the healthcare system.
    Keywords: intensive care unit; internet of things; patient health monitoring; cloud; Arduino; security.
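    The alerting step described in the abstract — flag any vital sign that leaves its safe range and notify staff — can be sketched in a few lines. The sensor names and threshold values below are hypothetical placeholders, not figures from the paper.

```python
# hypothetical safe ranges for two monitored vital signs
SAFE_RANGE = {
    "temperature_c": (36.0, 38.0),
    "heart_rate_bpm": (50, 110),
}

def check_vitals(reading):
    """Return an alert string for every vital sign outside its safe range;
    an empty list means all readings are normal."""
    alerts = []
    for name, value in reading.items():
        low, high = SAFE_RANGE[name]
        if not (low <= value <= high):
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

print(check_vitals({"temperature_c": 39.2, "heart_rate_bpm": 72}))
# one alert: the temperature reading is out of range
```

    In the full framework, each returned alert would be pushed through the cloud micro-services to the nurse station rather than printed.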

  • An autonomic mechanism based on ant colony pattern for detecting the source of incidents in complex enterprise systems   Order a copy of this article
    by Kamaleddin Yaghoobirafi, Eslam Nazemi 
    Abstract: The variability and complexity of modern information systems have motivated many research studies on autonomic computing and self-adaptation paradigms. Although the final goal of these paradigms was to be applied at the level of business and enterprise solutions, this has not been addressed by the majority of current solutions owing to the challenges of attaining this level of self-adaptation. In complex enterprises, various events may happen, such as the failure of a server, the malfunction of an application or the inconsistency of data items. In many cases, these events are caused by a change or an incident in a resource that belongs to another layer of the information technology architecture. This makes the detection of complex events very difficult. In this paper, a mechanism is proposed for detecting the main source of failures and deficiencies at any position in the information technology architectural layers and recognising appropriate alternative solutions. This mechanism uses a pheromone deposition approach inspired by Ant Colony Optimisation (ACO). It is expected that the proposed mechanism facilitates the coordination and detection of events between resources that are not digitally represented. For the sake of evaluation, two case studies are considered that investigate inter-layer adaptation scenarios. The results show that, with the proposed mechanism, the adaptation process can be done in less time and with more scalability than with classic approaches. This enhancement is due to the possibility of directly recognising an affected dependency path.
    Keywords: autonomic computing; ant colony pattern; interoperability; complex enterprise systems; coordination.
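    The pheromone-deposition idea can be illustrated on a small dependency graph: ants walk from symptomatic resources along "depends on" edges, reinforcing already-marked paths, and the resource accumulating the most pheromone is flagged as the likely source. The graph, walk length and ant count below are hypothetical, and this is a toy reconstruction of the general ACO pattern, not the paper's mechanism.

```python
import random
from collections import defaultdict

random.seed(1)

# hypothetical dependency graph: edge A -> B means "A depends on B",
# so a failure in B can surface as a symptom at A
DEPENDS_ON = {
    "web_app": ["app_server"],
    "app_server": ["database", "auth_service"],
    "report_job": ["database"],
    "auth_service": ["database"],
    "database": [],
}

def ant_walk(symptom, pheromone, steps=5):
    """One ant walks from the symptomatic resource along dependency
    edges, depositing pheromone on every resource it visits."""
    node = symptom
    for _ in range(steps):
        pheromone[node] += 1.0
        deps = DEPENDS_ON.get(node, [])
        if not deps:
            break
        # prefer edges that already carry pheromone (colony reinforcement)
        weights = [1.0 + pheromone[d] for d in deps]
        node = random.choices(deps, weights=weights)[0]

def locate_source(symptoms, ants=200):
    """Release many ants from randomly chosen symptoms and return the
    resource with the highest accumulated pheromone."""
    pheromone = defaultdict(float)
    for _ in range(ants):
        ant_walk(random.choice(symptoms), pheromone)
    return max(pheromone, key=pheromone.get)

# two independent symptoms point at a shared upstream resource
result = locate_source(["web_app", "report_job"])
print(result)   # "database" -- the common dependency of both symptoms
```

    Because every walk from either symptom eventually reaches the shared dependency, pheromone concentrates there, which mirrors how the paper's mechanism singles out the affected dependency path.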

  • Evaluation prediction techniques to achieve optimal biomedical analysis   Order a copy of this article
    by Samaher Al_Janabi, Muhammed Abaid Mahdi 
    Abstract: Prediction techniques in data mining have been widely used to support optimising future decision-making in many different fields, including healthcare and medical diagnoses (HMD). Obtaining valuable information is a significant task of the prediction process; by this means, data are analysed and summarised from various perspectives. This work not only presents and explores previous related work in the field of prediction techniques in the HMD sector, but also analyses their main techniques. These techniques include Chi-squared Automatic Interaction Detection (CHAID), Exchange Chi-squared Automatic Interaction Detection (ECHAID), Random Forest Regression and Classification (RFRC), Multivariate Adaptive Regression Splines (MARS), and Boosted Tree Classifiers and Regression (BTCR). This paper presents the general properties, a summary, and the advantages and disadvantages of each one. Most importantly, the analysis depends on the parameters that have been used to build a prediction model for each technique. In addition, those techniques are classified according to their main and secondary parameters. Furthermore, the presence and absence of parameters are compared in order to identify how those parameters are shared among the techniques. The main and optional steps of the prediction procedure are therefore comparatively analysed and presented in this paper. As a result, the techniques with no randomness and with a mathematical basis are found to be the most powerful and fastest compared with the others. This work will give the HMD sector a better way to look at its future decisions.
    Keywords: biomedical analysis; data mining; prediction techniques; healthcare problem; parameters.

  • Facial expression recognition using geometric features and modified hidden Markov model   Order a copy of this article
    by Mayur Rahul, Narendra Kohli, Rashi Agarwal, Sanju Mishra 
    Abstract: This work proposes a geometric feature-based descriptor for efficient Facial Expression Recognition (FER) that can be used for better human-computer interaction. Although much research has focused on descriptor-based FER, several problems remain to be solved regarding noise, recognition rate, time and error rates. The JAFFE dataset helps to make an FER system more reliable and efficient, as its pixels are distributed uniformly. The FER system depends on the feature extraction technique. The proposed system introduces novel geometric features to extract robust features from the images, and a layered Hidden Markov Model (HMM) as a classifier. This new HMM consists of two layers: the bottom layer represents atomic expressions and the upper layer represents combinations of atomic expressions. The features obtained in the feature extraction step are then used to train the different facial expressions. Finally, the trained HMM is used to recognise seven facial expressions, i.e. anger, disgust, fear, joy, sadness, surprise and neutral. The proposed framework is compared with existing systems, where it proves its superiority with a recognition rate of 84.7% against 85% for the others. Our proposed framework is also tested in terms of recognition rate, processing time and error rate, and is found to be superior to the other existing systems.
    Keywords: geometric features; hidden Markov model; Baum-Welch method; Viterbi method; forward method; state sequences; probability; human-computer interaction.
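    Geometric features of the kind the abstract describes are typically distances between facial landmarks, normalised so that the descriptor is invariant to translation and scale. The landmark coordinates and the four distances chosen below are hypothetical illustrations, not the paper's actual feature set.

```python
import math

# hypothetical 2-D facial landmarks (x, y), e.g. from a landmark detector
LANDMARKS = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "left_brow": (30.0, 30.0),
    "right_brow": (70.0, 30.0),
    "mouth_left": (38.0, 75.0),
    "mouth_right": (62.0, 75.0),
    "mouth_top": (50.0, 70.0),
    "mouth_bottom": (50.0, 82.0),
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geometric_features(pts):
    """Distances normalised by the inter-ocular distance, which removes
    dependence on face position and (uniform) scale in the image."""
    iod = dist(pts["left_eye"], pts["right_eye"])       # inter-ocular distance
    return [
        dist(pts["left_brow"], pts["left_eye"]) / iod,    # brow raise (left)
        dist(pts["right_brow"], pts["right_eye"]) / iod,  # brow raise (right)
        dist(pts["mouth_left"], pts["mouth_right"]) / iod,  # mouth width
        dist(pts["mouth_top"], pts["mouth_bottom"]) / iod,  # mouth opening
    ]

print(geometric_features(LANDMARKS))   # [0.25, 0.25, 0.6, 0.3]
```

    A sequence of such per-frame feature vectors is what a layered HMM classifier would consume: the bottom layer decodes atomic expression states from the vectors, and the upper layer decodes full expressions from the atomic-state sequence.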