Forthcoming articles

International Journal of Cloud Computing (IJCC)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Cloud Computing (110 papers in press)

Regular Issues

  • Optimization of Automatic Web Services Composition Using Genetic Algorithm   Order a copy of this article
    by Mirsaeid Hosseini Shirvani 
    Abstract: In recent years, as organisations have expanded, service-oriented architecture has become known as an effective tool for creating applications; hence, the need to use web services in organisations to reduce costs is felt more than ever. The purpose of web service composition is to determine a proper combination of services when a user request cannot be met by a single web service. In this paper, a genetic-based algorithm is proposed for composing cloud services that ensures multiple clouds work together efficiently. The proposed method also provides an overview of the weaknesses of other available methods in terms of the computational complexity of automatically selecting web services, and makes it possible to fulfil web service composition demands in a more optimal way. It is worth noting that the simulation results show the superiority of the proposed method over the other methods analysed in the paper.
    Keywords: Web Service; Web Services Composition; Service-Oriented Architecture; Quality of Service.
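As a rough illustration of the kind of genetic search this abstract describes (not the authors' implementation), the sketch below evolves a composition plan that picks one candidate service per task so that an aggregate QoS score is maximised. The QoS values, GA parameters and the elitism choice are all invented for the example:

```python
import random

def genetic_composition(qos, pop_size=30, generations=60, p_mut=0.1, seed=42):
    """Pick one candidate service per task to maximise total QoS.

    qos[t][c] is the (illustrative) QoS score of candidate c for task t."""
    rng = random.Random(seed)
    n_tasks = len(qos)

    def fitness(plan):
        return sum(qos[t][c] for t, c in enumerate(plan))

    def random_plan():
        return [rng.randrange(len(qos[t])) for t in range(n_tasks)]

    pop = [random_plan() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best]                     # elitism: keep the best plan so far
        while len(nxt) < pop_size:
            # tournament selection of two parents
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_tasks) if n_tasks > 1 else 0
            child = a[:cut] + b[cut:]    # one-point crossover
            if rng.random() < p_mut:     # point mutation
                t = rng.randrange(n_tasks)
                child[t] = rng.randrange(len(qos[t]))
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best
```

With three tasks and two candidates each, the search space is tiny and the GA converges to the highest-scoring combination almost immediately; the point of the sketch is only the encoding (one gene per task) and the fitness function.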

  • A Secure and efficient multi cloud-based data storage and retrieval using hash-based verifiable secret sharing scheme   Order a copy of this article
    by Majid Farhadi, Hamideh Bypour, Seyyed Erfan Asadi 
    Abstract: As the number of smart devices rises, the need for fast and easy access to data, as well as for sharing more information, is felt increasingly. Cloud computing is a computational approach that shares configurable resources such as networks, servers, storage space, applications and services over the Internet, and allows users to access services without expertise in, or control of, the technology infrastructure. The confidentiality, integrity and availability of data, and reducing the computational cost and communication between the data owner (user) and cloud service providers (CSPs), are essential concerns in cloud computing. In this paper, we propose a new scheme for secure cloud data storage based on a verifiable secret sharing scheme with public verifiability to protect data integrity. In the new scheme, the validity of secret shares can be publicly verified without leaking the privacy of the shares in the verification phase. Moreover, the verification phase does not depend on any computational assumptions. Furthermore, the proposed scheme can not only detect cheating but also identify the cheaters. It is worth noting that the proposed scheme is more efficient than other secret sharing-based cloud data storage schemes, since heavy and complex computation is not required.
    Keywords: Cloud computing; cloud data storage; verifiable secret sharing scheme; public verifiability; hash function.
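The abstract does not specify the scheme beyond "hash-based verifiable secret sharing". As a toy illustration of the ingredients only, the sketch below combines Shamir secret sharing over a prime field with published SHA-256 commitments to each share, so anyone can check a share without learning the others. (Hashing raw shares like this is only safe for high-entropy secrets; the paper's actual construction will differ.)

```python
import hashlib
import random

P = 2**127 - 1  # a Mersenne prime, used as a toy finite field

def split(secret, n, k, seed=1):
    """Shamir (k, n) sharing plus a public hash commitment per share."""
    rng = random.Random(seed)
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(k - 1)]
    shares = [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
              for x in range(1, n + 1)]
    commitments = [hashlib.sha256(repr(s).encode()).hexdigest() for s in shares]
    return shares, commitments

def verify(share, commitment):
    """Public verification: recompute the hash and compare."""
    return hashlib.sha256(repr(share).encode()).hexdigest() == commitment

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P); needs any k shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

A tampered share fails verification, which is the cheating-detection property the abstract mentions; identifying *which* share was tampered with follows because each share has its own commitment.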

  • Stream of Traffic Balance in Active Cloud Infrastructure Service Virtual Machines Using Ant Colony   Order a copy of this article
    by Ankita Taneja, Hari Singh, Suresh Chand Gupta 
    Abstract: Cloud load balancing is the process of distributing computing resources and workloads over a cloud computing infrastructure. It allows an enterprise to manage workloads through appropriate resource allocation in the cloud. Various load balancing techniques in cloud computing are reviewed, and the work presented in this paper thoroughly analyses and compares two well-known algorithms in MATLAB: the Ant Colony Optimization (ACO) algorithm and the Genetic Algorithm (GA). The objective is to produce an optimal solution for cost and execution time by balancing the workload. Experimental observations show that ACO-based load balancing incurs lower cost and execution time than GA for a constant workload over a fixed number of cloud machines. However, execution time follows a different trend when the workload increases and more machines are utilised to handle it: it rises more sharply with ACO than with GA.
    Keywords: ACO; ant colony optimization; GA; genetic algorithm; load balancing; cloud computing; pheromone matrix; pheromone table; IAAS; infrastructure as a service.
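As a rough sketch of what ACO-based load balancing looks like (the paper's exact pheromone rules and MATLAB setup are not reproduced here), the example below lets ants assign tasks to VMs with probabilities biased by a pheromone matrix and a load-aware heuristic, then evaporates pheromone and reinforces the best assignment found. All parameter values are invented:

```python
import random

def aco_balance(task_sizes, vm_speeds, n_ants=20, iters=40, rho=0.5, seed=7):
    """Assign tasks to VMs to minimise makespan (latest VM finish time)."""
    rng = random.Random(seed)
    T, V = len(task_sizes), len(vm_speeds)
    tau = [[1.0] * V for _ in range(T)]   # pheromone matrix tau[task][vm]

    def makespan(assign):
        load = [0.0] * V
        for t, v in enumerate(assign):
            load[v] += task_sizes[t] / vm_speeds[v]
        return max(load)

    best, best_ms = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            load = [0.0] * V
            assign = []
            for t in range(T):
                # desirability = pheromone x heuristic (favour light, fast VMs)
                w = [tau[t][v] * vm_speeds[v] / (1.0 + load[v]) for v in range(V)]
                v = rng.choices(range(V), weights=w)[0]
                load[v] += task_sizes[t] / vm_speeds[v]
                assign.append(v)
            ms = makespan(assign)
            if ms < best_ms:
                best, best_ms = assign, ms
        # evaporate all pheromone, then reinforce the best-so-far assignment
        for t in range(T):
            for v in range(V):
                tau[t][v] *= (1 - rho)
            tau[t][best[t]] += 1.0 / best_ms
    return best, best_ms
```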

  • Memory constraint Parallelized resource allocation and optimal scheduling using Oppositional GWO for handling big data in cloud environment   Order a copy of this article
    by Chetana Tukkoji, Seetharam Keshav Rao 
    Abstract: In cloud computing, task scheduling is one of the most challenging problems, especially when deadlines and cost are considered. The key issue of task scheduling is to reach an optimal allocation of users' tasks, in order to optimise scheduling performance and reduce unreasonable task allocation in clouds. Besides, in terms of memory space and time complexity, processing a huge number of tasks with a sequential algorithm results in greater computational cost. Therefore, in this paper we develop an efficient memory-constrained parallelised resource allocation and optimal scheduling method that applies Oppositional GWO to resolve the scheduling problem for big data in a cloud environment. The suggested approach applies the MapReduce framework to perform scheduling in parallel over distributed systems. The MapReduce framework consists of two main processes: the task prioritisation stage (with a Fuzzy C-means clustering method based on memory constraints) in the Map phase, and optimal scheduling (using the Oppositional Grey Wolf Optimization algorithm) in the Reduce phase. Here, the scheduling is optimised to reduce makespan and cost and to raise system utilisation.
    Keywords: Oppositional Grey Wolf Optimization algorithm; Fuzzy C-means Clustering; MapReduce; Task Prioritization; Virtual Machine Allocation; Apache Spark Distributed file System (SDFS).

  • An efficient load balancing scheduling strategy for cloud computing based on hybrid approach   Order a copy of this article
    by Mohammad Oqail Ahmad, Rafiqul Zaman Khan 
    Abstract: Cloud computing is a promising paradigm that is widely used in both academia and industry. Meeting the dynamic demand for resources by users is one of the prime goals of the task scheduling process in cloud computing. Task scheduling is an NP-hard problem responsible for allocating tasks to VMs and maximising their utilisation while minimising the total task execution time. In this paper, the authors propose a load balancing scheduling strategy: a hybridisation of the resource-aware load balancing (RALB) method with the PSO technique inspired by honeybee behaviour, named PSO-RALB. This strategy optimises the results and performs scheduling based on a resource-aware load balancing scheme. The foraging behaviour of the honeybee optimisation algorithm is utilised to balance load across VMs, and resource awareness is used to manage the resources. Computational results show that the proposed scheme minimises makespan time, total processing time, total processing cost and the degree of imbalance when compared with the existing standard PSO and PSO-based load balancing (PSO-LB) algorithms.
    Keywords: Cloud computing; Load balancing; Honey bee foraging; Particle Swarm Optimization; PSO-RALB Algorithm; Degree of imbalance.
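A minimal sketch of how PSO can drive task-to-VM scheduling. This is plain continuous PSO decoded onto a toy makespan objective, not the authors' PSO-RALB with honeybee-based balancing; the encoding, inertia and acceleration constants are all invented for the example:

```python
import random

def pso_schedule(task_sizes, n_vms, n_particles=25, iters=60, seed=3):
    """Each particle holds one real value per task; int(abs(v)) % n_vms
    maps that value to a VM. Fitness is the resulting makespan."""
    rng = random.Random(seed)
    T = len(task_sizes)

    def decode(pos):
        return [int(abs(v)) % n_vms for v in pos]

    def makespan(pos):
        load = [0.0] * n_vms
        for t, v in enumerate(decode(pos)):
            load[v] += task_sizes[t]
        return max(load)

    pos = [[rng.uniform(0, n_vms) for _ in range(T)] for _ in range(n_particles)]
    vel = [[0.0] * T for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=makespan)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(T):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if makespan(pos[i]) < makespan(pbest[i]):
                pbest[i] = pos[i][:]
            if makespan(pbest[i]) < makespan(gbest):
                gbest = pbest[i][:]
    return decode(gbest), makespan(gbest)
```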

  • End-to-End SLA Management in Federated Clouds   Order a copy of this article
    by Asma Al Falasi, Mohamed Adel Serhani, Younes Hamdouch 
    Abstract: Cloud services have always promised to be available, flexible, and speedy. However, in some circumstances (e.g. drastic changes in application requirements) a Cloud provider might fail to deliver on such promises to their distinctly demanding customers. Cloud providers have a constrained geographical presence and are willing to invest in infrastructure only when it is profitable to them. Cloud federation is a concept that collectively combines segregated Cloud services to create an extended pool of resources, enabling Clouds to competently deliver their promised level of service. This paper studies the governing aspects of Cloud federation through collaborative networking. We propose a network of federated Clouds, CloudLend, that creates a platform for Cloud providers to collaborate and for customers to expand their service selections. We also define and specify a service level agreement (SLA) management model to govern and administer the relationships established between different Cloud services in CloudLend. We define a multi-level SLA specification model to describe QoS terms, a game theory-based automated SLA negotiation model, and an adaptive agent-based SLA monitoring model. Formal verification shows that our proposed framework assures customers of optimised guarantees for their QoS requirements, and supports Cloud providers in making informed resource utilisation decisions. Additionally, simulation results demonstrate the effectiveness of our SLA management model.
    Keywords: Cloud Computing; Federated Clouds; SLA Management; Game Theory; QoS Requirements.

  • A Cloud Data Collection Platform for Canine Behavioral Prediction using Objective Sensor Data   Order a copy of this article
    by Zachary Cleghern, Marc Foster, Sean Mealin, Evan Williams, Timothy Holder, Alper Bozkurt, David Roberts 
    Abstract: Training successful guide dogs is time and resource intensive, requiring copious professional and volunteer labor. Even among the best programs, dogs are released with attrition rates commonly at 50%. Increasing success rates enables non-profits to meet growing demand for dogs and optimize resources. Selecting dogs for training is a crucial task; guide dog schools can benefit from both better selection accuracy and earlier prediction. We present a system aimed at improving analysis and selection of which dogs merit investment of resources, using custom sensing hardware and a cloud-hosted data processing platform. To improve behavioral analysis at early stages, we present results using objective data acquired in puppy behavioral tests and the current status of an IoT-enabled "Smart Collar" system to gather data from puppies while being raised by volunteers prior to training. Our goal is to identify both puppies at risk and environmental influences on success as guide dogs.
    Keywords: Cloud Computing; Canine Behavior; Behavioral Prediction; Sensor Data; Internet-of-Things; Machine Learning; Wearable Computing; Guide Dogs.

  • Evaluation and Selection of Cloud deployment models using Fuzzy Combinative Distance-Based Assessment   Order a copy of this article
    by Nandini Kashyap, Rakesh Garg 
    Abstract: Cloud computing (CC) is an innovative technology that is completely transforming the way individuals collect, share and access their data files. Although CC technology provides many benefits, such as elasticity, resource pooling and on-demand services, various issues and challenges arise in its successful implementation. The evaluation and selection of cloud deployment models (CDMs) is one of the key challenges faced by cloud practitioners. The present study addresses the CDM evaluation and selection problem in the education sector by modelling it as a multi-criteria decision making (MCDM) problem. To solve this selection problem, a hybrid MCDM approach, namely Fuzzy Combinative Distance-based Assessment (Fuzzy-CODAS), is proposed. The proposed approach calculates a desirability index value for each alternative based on its Euclidean and Hamming distances from the negative ideal solution. Finally, the alternatives are ranked by their desirability index values: the alternative with the maximum desirability index is placed in the top position, and the alternative with the minimum value in the last.
    Keywords: Cloud Computing; Cloud deployment models; Multi-criteria decision making; Fuzzy- Combinative distance based assessment; Academic Organization.
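To make the desirability-index idea concrete, here is a crisp (non-fuzzy) CODAS-style sketch: alternatives are max-normalised, weighted, and scored by their Euclidean plus ψ-weighted Hamming (taxicab) distance from the negative ideal solution. Treating all criteria as benefit criteria and the ψ value are simplifying assumptions; the paper's fuzzy formulation is richer:

```python
def codas_rank(matrix, weights, psi=0.02):
    """Rank alternatives (rows) by distance from the negative ideal solution.

    matrix[i][j]: performance of alternative i on (benefit) criterion j.
    Returns (ranking of row indices, best first; raw scores)."""
    cols = list(zip(*matrix))
    # linear max-normalisation per criterion, then apply criterion weights
    norm = [[row[j] / max(cols[j]) for j in range(len(row))] for row in matrix]
    weighted = [[w * v for w, v in zip(weights, row)] for row in norm]
    neg_ideal = [min(col) for col in zip(*weighted)]  # worst value per criterion
    scores = []
    for row in weighted:
        e = sum((v - n) ** 2 for v, n in zip(row, neg_ideal)) ** 0.5
        h = sum(abs(v - n) for v, n in zip(row, neg_ideal))
        scores.append(e + psi * h)  # larger = farther from NIS = more desirable
    ranking = sorted(range(len(matrix)), key=lambda i: -scores[i])
    return ranking, scores
```

An alternative that dominates on every criterion always ranks first, since it is farthest from the negative ideal on both distance measures.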

  • Performance evaluation & Reliability analysis of predictive hardware failure models in Cloud platform using ReliaCloud-NS   Order a copy of this article
    by Rohit Sharma 
    Abstract: Cloud computing systems are now established as a promising platform for coordinating large numbers of heterogeneous tasks, and aim to deliver highly reliable cloud computing services. It is necessary to consider the reliability of cloud services and the timely prediction of failing hardware in cloud data centres, to ensure correct identification of the overall time required to resume service after a failure. In this paper the reliability of two recently introduced predictive hardware failure models is analysed: the first model is based on two open data sources, Self-Monitoring and Reporting Technology (SMART) and Windows performance counters; the second is based on FailureSim, a neural network-based system for predicting hardware failures in data centres. The analysis is performed over two carefully designed test cloud simulations of 144 VMs and 236 VMs. The results are thoroughly compared and analysed with the help of ReliaCloud-NS, which allows researchers to design a CCS and compute its reliability.
    Keywords: Cloud Computing System (CCS); Virtual Machines (VM); Monte Carlo Simulation (MCS); Neural Networks; Annual Failure Rate (AFR); Self-Monitoring And Reporting Technology (SMART).

  • Efficient Multi-Level Cloud based Agriculture Storage Management System   Order a copy of this article
    by Kuldeep Sambrekar, Vijay Rajpurohit 
    Abstract: Good agricultural productivity aids a country's Gross Domestic Product (GDP) growth. Guaranteeing food security across the globe poses huge challenges due to global warming, which results in unpredictable weather, and due to shrinking natural resources. As a result, data analytics (DA) and the Internet of Things (IoT) have been employed by various agencies, alongside remote sensing forecasting and GIS technology, to build efficient agriculture management systems. Cloud computing platforms have been adopted for storing and accessing these data remotely. However, this incurs a cost overhead for storing and accessing large amounts of data. Multi-cloud platforms have been adopted, but these models are not efficient, as they incur latency and do not provide fault tolerance guarantees. To overcome these research challenges, this work presents the Efficient Multi-Level Cloud-based Agriculture Storage Management System (EMLC-ASMS). The outcome shows that EMLC-ASMS attains significant performance gains over the existing model in terms of computation cost and latency.
    Keywords: cloud based agricultural storage management;multi-level cloud storage;cloud storage optimization;multi-cloud storage;efficient hierarchical cloud based storage mechanism.

  • Towards an Efficient and Secure Computation over Outsourced Encrypted Data using Distributed Hash Table   Order a copy of this article
    by Raziqa Masood, Nitin Pandey, Q.P. Rana 
    Abstract: On-demand access to outsourced data from anywhere has led data owners to store their data on cloud servers instead of standalone devices. Security, privacy, and availability of data are still the major concerns that need to be addressed. A quick way for users to overcome these issues is to encrypt their data with their own keys before uploading it to the cloud. However, computing over encrypted data remains highly inefficient and impractical. In this paper, we propose efficient and secure data outsourcing with a distribution of servers using a distributed hash table mechanism. It helps to compute over data from multiple owners encrypted under different keys, without leaking privacy. We observe that our proposed solution has lower computation and communication costs than other existing mechanisms, while being free from a single point of failure.
    Keywords: Distributed Hash Table; Data Outsourcing; Peer-Proxy Re-encryption; Privacy; Security.

  • A Correlation based Investigation of VM Consolidation for Cloud Computing   Order a copy of this article
    by Nagma Khattar, Jaiteg Singh, Jagpreet Sidhu 
    Abstract: Virtual machine consolidation is of utmost importance in maintaining energy-efficient cloud data centres. A tremendous amount of work is reported in the literature for the various phases of virtual machine consolidation (host underload detection, host overload detection, virtual machine selection and virtual machine placement). Benchmark algorithms proposed by pioneering researchers serve as a base for developing other optimised algorithms, so it is essential to understand the behaviour of these algorithms for VM consolidation. There is a lack of analysis of these base techniques, which can otherwise lead to more computationally intensive and multidimensional solutions. The need of the hour is to investigate the behaviour of these algorithms under various tunings, parameters and workloads. This paper addresses that gap in the literature and analyses the characteristics of these algorithms in depth under various scenarios (workloads, parameters) to find their behavioural patterns. The analysis also helps in identifying the strength of relationships and correlation among parameters. A future research strategy targeting VM consolidation in cloud computing is also proposed.
    Keywords: VM consolidation; host underload detection; host overload detection; virtual machine selection; virtual machine placement; cloud computing.

  • A Case Study On Major Cloud Platforms Digital Forensics Readiness - Are We There Yet?   Order a copy of this article
    by AMEER PICHAN, Mihai Lazarescu, Sie Teng Soh 
    Abstract: Digital forensics is a post-crime activity, carried out to identify the culprit responsible for a crime. Forensic activity requires crime evidence, which is typically found in logs that record events. Therefore, logs detailing user activities are a valuable and critical source of information for digital forensics in the cloud computing environment. Cloud service providers (CSPs) usually provide logging services which record activities and events at varying levels of detail. In this work, we present a detailed and methodical study of the logging services provided by three major CSPs, i.e., Amazon Web Services, Microsoft Azure and Google Cloud Platform, to elicit their forensic compliance. Our work aims to measure the forensic readiness of the three cloud platforms using their prime log services. More specifically, this paper (i) proposes a systematic approach that specifies cloud forensic requirements; (ii) uses a generic case study of a crime incident to evaluate digital forensic readiness, showing how to extract crime evidence from the logs and validate it against a set of forensic requirements; and (iii) identifies and quantifies the gaps where the CSPs fail to satisfy these requirements.
    Keywords: Cloud Computing; Cloud forensics; Cloud log; Evidence; Forensic artifacts; Digital investigation; Digital forensics.

  • Ontology Building for Patient Bioinformatics of the Smart Card Domain: Implementation Using Owl   Order a copy of this article
    by Waseem Alromima, Ahmed Alahmadi 
    Abstract: Smart cards play a very important part in facilitating the bioinformatics information process. The current problem is integrating information, such as structuring similar information for analysis and services. Therefore, patient information services need to be modelled and re-engineered in the area of e-governmental information sharing and processing, to deliver information appropriately according to the citizen and the situation. Semantic web technology-based ontology offers a promising solution to these engineering problems. The main purpose of this study is to provide each patient with a smart card that holds all their bioinformatics information for their entire life, based on the proposed domain ontology, to be recognised and used by all organisations related to e-health. Another aim is the automatic processing of important medical documents by related organisations, such as pharmacies, hospitals and clinics. The smart card can draw up the history of the patient in terms of illnesses and/or treatments, thus facilitating the future management of his or her medical file. The proposed ontology for e-health information makes it easy to introduce new bioinformatics information for patient services without disturbing the structure of the former ontology. The ontology was created with the knowledge-based editor tool Protégé.
    Keywords: Semantic Web; Domain ontology; Services; owl; Citizens; e-health; Patient.

  • Machine Learning Classifiers with Preprocessing Techniques for Rumor Detection on Social Media: An Empirical Study   Order a copy of this article
    by Mohammed Al-Sarem, Muna Al-Harby, Essa Abdullah Hezzam 
    Abstract: The rapid increase in the popularity of social media has helped users easily post and share information with others. However, due to the uncontrolled nature of social media platforms such as Twitter and Facebook, it has become easy to post fake news and misleading information. The task of detecting such content is known as rumor detection. This task requires data analytics tools due to the massive amount of shared content and the rapid speed at which it is generated. In this work, the authors study the impact of different text preprocessing techniques on the performance of classifiers when performing rumor detection. The experiments were performed on a dataset of tweets on emerging breaking news stories covering several events in the Saudi political context (EBNS-SPC). The results show that preprocessing techniques have a significant impact on increasing the performance of machine learning methods such as support vector machine (SVM), Multinomial Naïve Bayes and k-nearest neighbour.
    Keywords: Rumor Detection; Saudi Arabian News; Multinomial Naïve Bayes; Support Vector Machine; K-nearest Neighbor; Twitter Analysis.

  • A Survey of High School Students' Usage of Smartphone in Moroccan Rural Areas   Order a copy of this article
    by Mourade Azrour, Jamal Mabrouki, Azidine Guezzaz, Yousef Farhaoui 
    Abstract: In the last decade, the number of students owning smartphones has risen very rapidly. This subject has therefore become attractive to researchers, who are trying to explain the impact of smartphone use in the school environment. The aim of our study is to offer statistical data about smartphone use in neglected Moroccan rural areas. Using an interview method, we collected the different ways students use their smartphones. We found that smartphones are most used for the social web, the internet and music; by contrast, the phone is used for educational purposes by only a small number of students.
    Keywords: Smartphone; Moroccan student; social web; new technology; education; school.

  • Autonomic Scalability Control for Cloud Workloads with Bayesian Network   Order a copy of this article
    by Sanjay T. Singh, Mahendra Tiwari, Hari Mohan Singh 
    Abstract: Cloud computing facilitates on-demand access to infrastructure, platform and software for clients over the Internet. Clients run applications on the cloud, supported by virtual machines (VMs) running on top of physical machines (PMs). Workload traffic coming to the cloud varies over time. To meet these changing workload demands, the VMs must be scaled up and down automatically, to ensure that Service Level Agreement (SLA) parameters are not violated and clients' Quality of Service (QoS) is maintained. This auto-scaling can be realised with a machine learning (ML) technique, the Bayesian Network (BN). In this article we propose a framework for autonomic scalability of cloud resources. The framework takes a cloud data centre management dataset, selects the SLA parameters of choice, discretises the data of the selected features, learns structural dependencies between these features and draws a BN, learns its parameters, validates the model, and then makes decisions to either scale up or scale down on the basis of the predictive and diagnostic capabilities of the BN model. The results achieved after evaluation are logical and consistent, as prediction of workload demands and diagnosis of under-utilisation of cloud resources are done with sufficient accuracy.
    Keywords: Cloud Computing; Scalability; Bayesian Network; Cloud Workload.
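The predictive and diagnostic inference the abstract refers to can be illustrated with a minimal two-node network (Workload → SLA violation). The probability tables and the 0.3 threshold below are invented for the example, not taken from the article:

```python
def predict_violation(cpt_violation, workload):
    """Predictive inference: P(SLA violation | observed workload level)."""
    return cpt_violation[workload]

def diagnose_workload(prior, cpt_violation):
    """Diagnostic inference by Bayes' rule: P(workload | SLA violation)."""
    joint = {w: prior[w] * cpt_violation[w] for w in prior}
    z = sum(joint.values())  # P(violation), the normalising constant
    return {w: p / z for w, p in joint.items()}

def scaling_decision(cpt_violation, workload, threshold=0.3):
    """Scale up when the predicted violation probability exceeds a threshold."""
    if predict_violation(cpt_violation, workload) > threshold:
        return "scale-up"
    return "scale-down"
```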

  • A Discovery and Selection Protocol for Decentralized Cloudlet Systems   Order a copy of this article
    by Dilay Parmar, Padmaja Joshi, Udai Pratap Rao, A. Sathish Kumar, Ashwin Nivangune 
    Abstract: Cloudlets help overcome the latency issue of clouds when offloading computing tasks in mobile cloud computing. Communication protocols are an important part of the implementation of cloudlet-based systems for the mobile cloudlet-cloud environment. In this work, an approach for communication between entities in decentralised cloudlet-based systems is proposed. For that purpose, a cloudlet discovery protocol, used for discovering cloudlets in the Wi-Fi vicinity of mobile devices, is proposed. A selection algorithm for choosing a suitable cloudlet from the available discovered cloudlets is also proposed. Our selection algorithm uses infrastructure-specific criteria for the selection decision, which makes it more generic to use.
    Keywords: Cloudlet; Mobile Cloud Computing; Discovery; Selection.

  • Major Drivers for the Rising Dominance of the Hyperscalers in the Infrastructure as a Service Market Segment   Order a copy of this article
    by Sebastian Floerecke, Christoph Ertl, Alexander Herzfeldt 
    Abstract: The rapidly growing worldwide market for Infrastructure as a Service (IaaS) is increasingly dominated by four hyperscalers: Alibaba, Amazon Web Services (AWS), Google and Microsoft. On the flip side, the market share and number of small and medium-sized regional IaaS providers have been declining steadily over the past years. Astonishingly, this fight for market share has been largely neglected by the research community so far. Against this background, the goal of this study is to identify the major drivers of this market development. To this end, 18 exploratory expert interviews were conducted with high-ranking employees of various successful regional IaaS providers in Germany. The results indicate that the central driver is the significantly lower price of the hyperscalers' offerings. Beyond that, eight additional important drivers have been identified, such as market presence, innovative strength, the amount of financial and human resources, and the high, global availability as well as the high user experience of the IaaS services. This study sheds light on the IaaS market and opens up and supports future in-depth investigations of this duel. Regional IaaS providers can use these insights to unravel the IaaS market conditions in general and to better understand the decisive strengths of the hyperscalers in particular. Based on this knowledge, regional IaaS providers can develop strategies and business models for counteracting, or at least decelerating, the hyperscalers' growing dominance.
    Keywords: Cloud Computing; Infrastructure as a Service (IaaS); Business Models; Hyperscalers; Regional IaaS Providers; Exploratory Expert Interviews; Theory for Explaining.

  • Modeling of a cloud platform via M/M1+M2/1 queues of a Jackson network   Order a copy of this article
    by Sivasamy Ramasamy, Paranjothi N 
    Abstract: The modeling of a cloud platform that can provide the best QoS (Quality of Service) by minimising the average response times of its clients is investigated via an open Jackson network. Compact expressions for the input and output parameters and measures of the proposed model are presented. The design of the model involves the performance measures of M/M1+M2/1 queues with a K-policy. This new cloud system is able to control virtual machines dynamically and to implement its operations to promote effectiveness in most commercial applications.
    Keywords: Cloud computing; Open Jackson network; M/M/1 queue; Response time and Quality of Service.
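The response-time analysis rests on standard open Jackson network results. The toy sketch below (plain M/M/1 nodes, not the paper's M/M1+M2/1 queues with a K-policy) solves the traffic equations by fixed-point iteration and then applies the M/M/1 mean response-time formula W_i = 1 / (mu_i - lambda_i):

```python
def jackson_response_times(arrival, mus, routing):
    """Mean response time at each M/M/1 node of an open Jackson network.

    arrival[i]: external Poisson arrival rate into node i
    mus[i]:     service rate of node i
    routing[i][j]: probability a job leaving node i goes next to node j
    (leftover probability is the job leaving the network)."""
    n = len(mus)
    lam = arrival[:]
    for _ in range(10000):  # fixed-point iteration on the traffic equations
        new = [arrival[j] + sum(lam[i] * routing[i][j] for i in range(n))
               for j in range(n)]
        if all(abs(a - b) < 1e-12 for a, b in zip(lam, new)):
            break
        lam = new
    if any(l >= m for l, m in zip(lam, mus)):
        raise ValueError("unstable network: utilisation >= 1 at some node")
    # Jackson's theorem: each node behaves as an independent M/M/1 queue
    return [1.0 / (m - l) for l, m in zip(lam, mus)]
```

For a two-node tandem (all jobs from node 0 proceed to node 1), external arrivals at rate 2 and service rates 5 and 4 give effective arrival rate 2 at both nodes, hence response times 1/3 and 1/2.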

  • Towards P2P Dynamic-Hash-Table based Public Auditing for Cloud Data Security and Integrity   Order a copy of this article
    by Raziqa Masood, Nitin Pandey, Q.P. Rana 
    Abstract: Cloud storage is the most demanded feature of cloud computing, providing outsourced data on demand for both organisations and individuals. However, users face a dilemma in trusting cloud service providers (CSPs) over whether privacy is preserved, integrity is maintained and security is guaranteed for the outsourced data. It is therefore necessary to develop an efficient auditing technique to provide confidence in the data present in cloud storage. This article proposes a peer-to-peer (P2P) public auditing scheme that audits outsourced data using a dynamic hash table (DHT) to strengthen users' trust and confidence in, and the availability of, the outsourced data. Each DHT maintains information about the outsourced data, which helps the auditors provide safety and integrity while auditing. Moreover, the auditors are organised into a structured P2P network to accelerate auditing and ensure auditor availability; thus, the proposed scheme avoids a single point of failure. The computation and communication costs of our proposed protocol are compared with existing methods using the pairing-based cryptography (PBC) library, and it is found to be an effective solution for public auditing of outsourced data.
    Keywords: Cloud Computing; Dynamic Hash Table; Outsourced Data Storage; Peer-to-Peer; Privacy; Public Auditing.
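The abstract does not give the overlay details. As one common way to structure such a P2P auditor network, the sketch below uses a consistent-hash ring: each data block is audited by the first auditor clockwise from its hash, so removing a failed auditor remaps only that auditor's blocks, avoiding a single point of failure. The `AuditorRing` name and the whole layout are invented for illustration:

```python
import hashlib
from bisect import bisect_right

def _h(key):
    """Hash a string key onto the ring as a big integer."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class AuditorRing:
    """Consistent-hash ring mapping outsourced data blocks to auditors."""

    def __init__(self, auditors):
        self.ring = sorted((_h(a), a) for a in auditors)

    def auditor_for(self, block_id):
        """First auditor clockwise from the block's position on the ring."""
        keys = [k for k, _ in self.ring]
        i = bisect_right(keys, _h(block_id)) % len(self.ring)
        return self.ring[i][1]

    def remove(self, auditor):
        """Drop a failed auditor; only its blocks get a new auditor."""
        self.ring = [(k, a) for k, a in self.ring if a != auditor]
```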

  • Impact of Internet of Things in Social and Agricultural Domains in rural Sector: A Case Study   Order a copy of this article
    by Zdzislaw Polkowski, Sambit Kumar Mishra, Brojo Kishore Mishra, Samarjeet Borah, Anita Mohanty 
    Abstract: The Internet of Things (IoT) connects various types of things/objects to the internet with the help of information-perception devices, so that they can exchange information. It enables future innovations through smart objects that are connected globally and capable of sensing, communicating and sharing information for different applications. Data is one of the most valuable aspects of the Internet of Things: IoT data has specific characteristics that are modernising and improving relational database management technologies, and it now has a big impact on our day-to-day activities. For instance, IoT solutions can monitor public transport, with sensors used to analyse and prioritise maintenance issues. As the number of devices increases, the amount of data generated also becomes very large; the main aim is then to organise this data to build a new future of computing, taking every smart object into a globally connected network. Similarly, applying IoT in agriculture and food addresses the main challenges of existing applications and technologies. Organising food and ensuring its security is a major challenge, due to the growing world population and the growing welfare of emerging economies. These challenges can be faced through better sensing and monitoring of production via the Internet of Things, contributing to the use of farm resources, crop development and food processing, along with a clear understanding of specific farming conditions. This paper also aims to provide and implement adaptive, efficient remote and logistic operations via actuators. The approach is based on service-oriented architecture and component technology, which helps to realise dynamic semantic integration.
    Keywords: Sensor; Actuator; Internet; Logistic operations; Semantic integration.

  • EARA-PSOCA: An Energy-Aware Resource Allocation and Particle Swarm Optimization (PSO) Based Cryptographic Algorithm in E-Health Care Cloud Environment   Order a copy of this article
    by Palani Subramanian, Rameshbabu K 
    Abstract: In cloud platforms, the execution of scientific workflows consumes a large amount of energy, so VMs have to be deployed in an energy-efficient manner. The energy consumption of cloud platforms has attracted wide attention throughout the world: cooling systems, lighting, network peripherals, monitors, consoles, processor cooling fans and running servers all consume large amounts of power in a cloud data centre. To face these issues, an energy-aware resource allocation method, termed EnReal, is proposed in this work: virtual machines are deployed dynamically for the execution of scientific workflows, with a focus on current e-health standards and solutions. In general e-health systems, client platform security, an important factor, is not addressed by this method. Therefore, to establish privacy in e-health infrastructures, a new particle swarm optimization (PSO) with cryptography-based security algorithm (PSOCA) is proposed in this work. It creates a controlled environment for easy management of data privacy as well as security. For experimentation, the CloudSim framework is adopted as the cloud simulation environment, and the proposed method is evaluated using parameters such as energy consumption and resource utilization.
    Keywords: Cloud security; cloud computing; resource allocation; cryptography; Energy-aware method.

  • An Application of Taguchi L16 Method for optimization of Load balancing Process Parameters in Cloud Computing   Order a copy of this article
    by Shahbaz Afzal, G. Kavitha, Amir Ahmad Dar 
    Abstract: Cloud computing has emerged as a large-scale distributed computing platform to maintain and deliver data, information, applications, web services, IT infrastructure and other cloud services to a global scale of users over the internet. With its feature of global concurrent user access to finite resources, task scheduling is an essential process in cloud computing for assigning cloud user tasks to cloud resources. Given the varying nature of user tasks, task scheduling and resource allocation mappings are not by themselves sufficient to keep the overall cloud system functioning in a balanced state, and task scheduling in the absence of proper load balancing techniques results in workload-imbalanced machines with overloaded, under-loaded or idle resources. This has negative consequences for the deliverable quality of service and for profit. Hence, prior to designing a load balancing algorithm, it is essential to determine the input parameters that most affect the output (response) variable, in order to prevent load imbalances among cloud computing machines. The study investigates the impact of the input parameters, namely growth rate, magnitude of a cloud task with respect to CPU or memory, initial population of tasks, and sampling interval, on the population of tasks N(t), with the help of the Taguchi design of experiments. The Taguchi L16 method is used for the experimental setup, and two statistical techniques, analysis of means (ANOM) and analysis of variance (ANOVA), are used for performance analysis. ANOM is used to identify which input parameters have a significant effect on N(t), and it also provides the best optimal combination of input variables for which the virtual machine is stable. ANOVA is used to measure the percentage contribution of each input parameter to the response variable. From the experimental results, it is concluded that N0 has the most significant impact on N(t), with a percentage contribution of 37%. The whole setup was executed in the Minitab18 statistical software toolbox.
    Keywords: Cloud computing; load balancing; scheduling; virtual machines; control parameters; Taguchi method; ANOM; ANOVA; optimal combination.
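As an illustration of the ANOVA step the abstract describes, the percentage contribution of a factor is its between-level sum of squares divided by the total sum of squares. The sketch below computes this for a made-up set of experiment runs; the data and level coding are hypothetical, not taken from the paper:

```python
def ss_total(responses):
    """Total sum of squares of the responses about the grand mean."""
    mean = sum(responses) / len(responses)
    return sum((y - mean) ** 2 for y in responses)

def ss_factor(responses, levels):
    """Sum of squares attributable to one factor.

    `levels[i]` is the level of that factor in experimental run i.
    """
    grand_mean = sum(responses) / len(responses)
    ss = 0.0
    for lv in set(levels):
        group = [y for y, l in zip(responses, levels) if l == lv]
        group_mean = sum(group) / len(group)
        ss += len(group) * (group_mean - grand_mean) ** 2
    return ss

def percent_contribution(responses, levels):
    """ANOVA percentage contribution of the factor to the response variable."""
    return 100.0 * ss_factor(responses, levels) / ss_total(responses)
```

A factor whose levels fully explain the variation contributes 100%; one whose level means coincide with the grand mean contributes 0%.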

  • FSACE: Finite State Automata based client-side Encryption for Secure data Deduplication in Cloud Computing   Order a copy of this article
    by Basappa Kodada, Demian Antony D'Mello 
    Abstract: Nowadays, digital data generated by different media sources are growing vastly, in an unstructured manner. Maintaining and managing this high volume of data is so demanding that it guides clients to make use of cloud storage services. In reality, meeting security expectations for these data in the cloud, together with duplicate entries, increases the communication and computation overhead. The data deduplication technique is widely used to reduce the overhead on the cloud service provider, and several approaches have been proposed by researchers to address its issues. Convergent encryption (CE) and its flavours are widely used in secure data deduplication to reduce network bandwidth usage, storage usage and storage cost and to improve storage efficiency, but the CE algorithm is vulnerable to dictionary-based brute-force attacks and to threats from inner and outer adversaries. In this paper, we propose finite state automata (FSA) based client-side encryption to accomplish secure data deduplication that provides confidentiality and integrity for users' data. The FSACE protocol achieves data access control by using a proof-of-ownership (PoW) challenge given to the data owner. The security analysis indicates that the FSACE protocol is secure enough to protect data from inner and outer adversaries. We also present a performance evaluation on the obtained results, which shows a considerable decrease in communication and computation overhead and an increase in storage efficiency.
    Keywords: Security;Encryption;Cryptography;Deduplication;Secure Deduplication;Proof of ownership;Data Security;Cloud Data Security.
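For background, the convergent encryption (CE) baseline that FSACE aims to improve can be sketched as follows: the key is derived from the plaintext itself, so identical files always produce identical ciphertexts and the provider can deduplicate them without seeing the plaintext. The XOR keystream below is a toy construction for illustration only, not production cryptography and not the paper's FSA-based scheme:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Deterministic keystream derived from the key (illustrative, not production crypto)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def convergent_encrypt(plaintext: bytes):
    """Convergent encryption: the key is the hash of the plaintext itself,
    so duplicate files yield identical ciphertexts, enabling deduplication."""
    key = hashlib.sha256(plaintext).digest()
    cipher = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, len(plaintext))))
    return key, cipher

def convergent_decrypt(key: bytes, cipher: bytes) -> bytes:
    """XOR with the same keystream recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(cipher, _keystream(key, len(cipher))))
```

The determinism that enables deduplication is exactly what exposes CE to the dictionary-style brute-force attacks the abstract mentions: anyone who can guess a plaintext can recompute its ciphertext.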

  • Predictive Data Center Selection Scheme for Response Time Optimization in Cloud Computing   Order a copy of this article
    by Deepak Kapgate 
    Abstract: The quality of cloud computing services is evaluated based on various performance metrics, of which response time is the most important. Nearly all cloud users demand that their applications' response time be as low as possible, so to minimize overall system response time we propose a request-response-time prediction based data centre (DC) selection algorithm in this work. The proposed DC selection algorithm uses the results of an optimization function for DC selection formulated on M/M/m queuing theory, as the present cloud scenario roughly obeys the M/M/m queuing model. In a cloud environment, DC selection algorithms are assessed based on their performance in practice, rather than on how they are supposed to be used. Hence, the described DC selection algorithm is evaluated, with various forecasting models, for minimum user application response time and response-time prediction accuracy over various job arrival rates, real parallel workload types and forecasting-model training-set lengths. Finally, the performance of the proposed DC selection algorithm with the optimal forecasting model is compared with other DC selection algorithms on various cloud configurations, considering a generic cloud environment.
    Keywords: Cloud Computing; Response Time Optimization; Time Series Forecasting; M/M/m Queuing Model.
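The M/M/m model the abstract relies on has a standard closed form: with arrival rate λ, per-server service rate μ and m servers, the mean response time is 1/μ plus the Erlang-C waiting probability divided by (mμ − λ). A sketch using the textbook formulas (not the paper's full optimization function):

```python
from math import factorial

def erlang_c(m: int, a: float) -> float:
    """Probability that an arriving job must wait in an M/M/m queue (a = lambda/mu)."""
    rho = a / m                                  # per-server utilisation, must be < 1
    top = (a ** m / factorial(m)) / (1 - rho)
    bottom = sum(a ** k / factorial(k) for k in range(m)) + top
    return top / bottom

def mean_response_time(lam: float, mu: float, m: int) -> float:
    """Mean response time (waiting + service) of an M/M/m data centre."""
    assert lam < m * mu, "queue must be stable"
    a = lam / mu
    return 1.0 / mu + erlang_c(m, a) / (m * mu - lam)
```

For m = 1 this collapses to the familiar M/M/1 result 1/(μ − λ), a handy sanity check.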

  • 3S - Hierarchical Cluster-Based Energy-Efficient Data Aggregation Protocol in Wireless Sensor Network   Order a copy of this article
    by Arun Agarwal, Amita Dev 
    Abstract: Energy utilization is one of the most common challenges in wireless sensor networks, as frequent communication between the sensor nodes results in a huge energy drain. A key challenge is therefore to schedule the sensor network's data transmission activities so as to reduce energy utilization. To overcome this challenge, we propose a 3S hierarchical cluster-based energy-efficient data aggregation protocol for wireless sensor networks, where 3S stands for smart sensing, selective cluster head nomination, and data aggregation schedule, which together reduce energy consumption and prolong network lifetime. The proposed technique has three phases. In the first phase, smart data sensing is applied to the predefined network model, which uses an unbounded buffer to store data values and avoids many transmissions, reducing energy dissipation. In the second phase, sensor nodes are categorized based on their available energy and their relative received signal strength indicator level; high-value nodes are treated as extended nodes and become the candidates for cluster head selection. In the third phase, an advanced aggregation schedule is introduced, which alternately asks sensor nodes to send their stored buffers based on their node identification value, categorized as even or odd. Simulation analysis and results show that the proposed algorithm can effectively enhance the network lifetime and reduce energy consumption while maintaining acceptable packet delivery ratios and delay values.
    Keywords: Aggregation; Data Management; Energy Efficiency; Residual Energy; Scheduling; Smart Sensing.
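The alternating even/odd schedule of the third phase can be illustrated with a one-line rule, assuming integer node IDs (a simplification of the protocol, for illustration only):

```python
def transmitters(node_ids, round_no):
    """Nodes scheduled to flush their stored buffers this round: even-ID nodes
    in even rounds, odd-ID nodes in odd rounds (alternating schedule)."""
    return [n for n in node_ids if n % 2 == round_no % 2]
```

Over any two consecutive rounds every node transmits exactly once, halving the number of simultaneous transmissions per round.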

  • Implementation of Hybrid Adaptive System for SLA Violation Prediction in Cloud Computing   Order a copy of this article
    by Archana Pandita, Prabhat Kumar Upadhyay, Nisheeth Joshi 
    Abstract: The cloud has to commit to service level agreements (SLA), which ensure a specific level of performance and set a penalty if an SLA is violated by the provider. These days, managing and applying penalties has become an essential and critical issue for cloud computing. In this research, an adaptive neuro-fuzzy inference system (ANFIS) is used to develop a proactive fault prediction model, designed by utilizing the power of machine learning and tested on datasets to highlight accurate models for fault prediction in the cloud environment. The suggested algorithm has achieved an accuracy of 99.3% in detecting violations. The performance of the proposed model has been compared with Bayesian regularization and scaled conjugate gradient methods, and the results show that the proposed scheme is more effective at predicting system violations.
    Keywords: Adaptive Neuro-Fuzzy Inference System; Cloud computing; Cloud Service; Machine Learning; Service Level Agreement; Violation; Quality of Service; Prediction.

  • A Systematic Literature Review of Cloud Computing Cybersecurity   Order a copy of this article
    by Hanane Bennasar, Mohamed Essaaidi, Ahmed BENDAHMANE, Jalel BENOTHMAN 
    Abstract: Cloud computing is a large-scale distributed computing system which initially emerged from financial systems. Security is usually listed as the number one concern for cloud computing adoption: cloud security issues persistently rank above cloud reliability, network issues, availability and worries about cloud financial profit. This paper proposes a systematic literature review which aims to provide an up-to-date and comprehensive overview of cyber-security issues in cloud computing. The review analyzes peer-reviewed research papers published and indexed by Science Direct, Springer, Google Scholar, Web of Science, IEEE Xplore, etc., retrieved with the following search terms: cloud computing issues, cloud computing cyber-security, threats to cloud computing, cloud computing risks, cloud computing solutions, and cloud computing recommendations. The review is conducted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, which is used to create a systematic, accurate, and reliable overview of the literature. This yielded 1134 papers, of which 87 ultimately met the inclusion criteria and were reviewed. From the systematic literature review, it was possible to identify and give an overview of the main state-of-the-art cloud computing cyber-security threats and challenges.
    Keywords: Cloud Computing; Threats; Cyber-Security; Systematic Literature Review.

Special Issue on: ICBDSDE'19 Cloud Computing for Smart Digital Environment

  • Versioning Schemas of JSON-based Conventional and Temporal Big Data through High-level Operations in the TJSchema Framework   Order a copy of this article
    by Zouhaier Brahmia, Safa Brahmia, Fabio Grandi, Rafik Bouaziz 
    Abstract: τJSchema is a framework for managing time-varying JSON-based big data, in temporal JSON NoSQL databases, through the use of a temporal JSON schema. The latter ties together a conventional JSON schema, which is a standard JSON Schema file, and its corresponding temporal logical and temporal physical characteristics, which are stored in a temporal characteristics document. The conventional JSON schema and the temporal characteristics could evolve over time to satisfy new requirements of the NoSQL database administrator (NSDBA) or to comply with changes in the modelled reality. Accordingly, the corresponding temporal JSON schema also evolves over time. In our previous work (Brahmia et al., 2017, 2018b, 2019a), we proposed low-level operations for changing such schema components. However, these operations are not NSDBA-friendly, as they are too primitive. In this paper, we deal with operations that help NSDBAs maintain these schema components in a more user-friendly and compact way. In fact, we propose three sets of high-level operations for changing the temporal JSON schema, the conventional JSON schema, and the temporal characteristics. These high-level operations are based on our previously proposed low-level operations. They are also consistency-preserving and more helpful than the low-level ones. To improve the readability of their definitions, we have divided these new operations into two classes: basic high-level operations, which cannot be defined through other basic high-level operations, and complex ones.
    Keywords: Big Data; NoSQL; JSON; JSON Schema; TJSchema; Conventional JSON schema; Temporal JSON schema; Temporal logical characteristic; Temporal physical characteristic; Schema change operation; Schema versioning; temporal databases.
    DOI: 10.1504/IJCC.2021.10030585
     
  • Versioning Temporal Characteristics of JSON-based Big Data via the τJSchema Framework   Order a copy of this article
    by Safa Brahmia, Zouhaier Brahmia, Fabio Grandi, Rafik Bouaziz 
    Abstract: Several modern applications that exploit big data (e.g., internet of things and smart cities) require the analysis of a complete history of the changes performed on these data, which may also include modifications to their schemas (or structures). Although schema versioning has long been advocated as the best solution to cope with this issue, there are currently no technical solutions, provided by existing big data management systems (especially NoSQL DBMSs), for handling temporal evolution and versioning aspects of big data. In (Brahmia et al., 2016), for a disciplined and systematic approach to the temporal management of JSON-based big data in NoSQL databases, we proposed the use of a framework named τJSchema (temporal JSON Schema). It allows the definition and validation of temporal JSON documents that conform to a temporal JSON schema. A τJSchema schema is composed of a conventional (i.e., non-temporal) JSON schema annotated with a set of temporal logical and temporal physical characteristics. Moreover, since these two components could evolve over time to respond to new application requirements, we extended τJSchema, in (Brahmia et al., 2017), to support versioning of conventional JSON schemas. In this work, we complete the picture by extending our framework to also support versioning of temporal logical and physical characteristics. In fact, we propose a technique for temporal characteristics versioning, and provide a complete set of low-level change operations for the maintenance of these characteristics; for each operation, we define its arguments and its operational semantics. Thus, with the proposed extension, τJSchema will provide full support for temporal versioning of JSON-based big data at both the instance and schema levels.
    Keywords: Big Data; NoSQL; JSON; JSON Schema; τJSchema; Conventional JSON schema; Temporal JSON schema; Temporal logical characteristic; Temporal physical characteristic; Schema change; Schema versioning.
    DOI: 10.1504/IJCC.2021.10030586
     
  • Peer-to-peer Storage Engine for Schemaless Immutable Data   Order a copy of this article
    by Jose Ghislain Quenum, Alexander Brown Shipena 
    Abstract: In this paper, we present TaYo, a peer-to-peer storage engine explicitly designed for immutable data. We argue that although most storage engines cater for immutability, they generally introduce many more functions, rendering the engines complex and sometimes inefficient. Besides, most storage engines rely on the underlying file system(s) to manage the data on the actual storage medium. Here we advocate for, and demonstrate, a storage engine designed exclusively for immutable data that bypasses the file system during storage. TaYo follows a content-addressable storage (CAS) approach, using Cuckoo hashing to generate a hash of the content that then serves as its identity. In TaYo, we trimmed the I/O operations down to the two basic operations of a storage engine: read and write. To write data to TaYo, we split it into eight (8) chunks, record the structure in a separate index and assign the chunks to worker processes that write concurrently. Each chunk is replicated twice (three (3) copies in total). When the write operation completes, the identifier is returned to the client application. To read data from TaYo, the client provides the identifier, which the index uses to locate the chunks. For each chunk, only one replica is asked to supply the chunk. Thereafter, all chunks are assembled and the data transferred back to the client. TaYo uses a semi-active replication technique, a blend of active and passive replication, while storing the data. It uses a consensus protocol built on top of Raft to guarantee consistency among the replicas.
    Keywords: Storage Systems; Storage Engines; Data Management; Peer-to-Peer; Immutable Data; Content-Addressable Storage.
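TaYo's write and read paths, as described, can be sketched roughly as follows. Python dictionaries stand in for the index and the storage medium, and SHA-256 stands in for the Cuckoo-hashing-based content identity; all names and the in-memory layout are illustrative assumptions, not TaYo's actual implementation:

```python
import hashlib

N_CHUNKS, N_REPLICAS = 8, 3   # eight chunks, three copies of each (as in the abstract)

def write(data: bytes, store: dict, index: dict) -> str:
    """Split data into 8 chunks, keep 3 replicas of each, record the layout in
    the index, and return the content-derived identifier."""
    object_id = hashlib.sha256(data).hexdigest()   # content-addressable identity
    size = -(-len(data) // N_CHUNKS)               # ceiling division
    chunks = [data[i * size:(i + 1) * size] for i in range(N_CHUNKS)]
    index[object_id] = []
    for i, chunk in enumerate(chunks):
        chunk_id = f"{object_id}:{i}"
        store[chunk_id] = [chunk] * N_REPLICAS     # replica list stands in for peers
        index[object_id].append(chunk_id)
    return object_id

def read(object_id: str, store: dict, index: dict) -> bytes:
    """Pull one replica per chunk and reassemble in index order."""
    return b"".join(store[cid][0] for cid in index[object_id])
```

In the real engine the replicas live on different peers and a Raft-based protocol keeps them consistent; here the replica list simply makes the one-replica-per-chunk read visible.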

  • User Arrival Rate dependent Profit Maximization of Web Application deployment on Cloud   Order a copy of this article
    by N. Neelima, B. Basaveswarrao, K. Gangadhara Rao, K. Chandan 
    Abstract: The user-arrival-rate-dependent profit maximization is derived for a cloud service provider (CSP) when web applications are deployed on the cloud for various numbers of VM instances operated using a multi-server queueing model. For any web application deployed on the cloud, obtaining maximum profit requires proper management of auto scaling and proper choice of user charge according to dynamic changes in user arrival rates. This paper finds the profit maximization with respect to the user arrival rate, because many CSPs use built-in tools for auto scaling where there is no mechanism to influence user arrival rates. In view of the dynamic changes in user behaviour, interests, necessities and the attractiveness of the charges, the user request rate is independent of configurable parameters such as buffer size, number of VMs and VM speed. There is thus a need to investigate the influence of the user arrival rate on profit maximization for the CSP. To reach this objective, a finite multi-server profit queueing model is adopted, and the maximum profit is derived through partial derivatives combined with a bisection search method. Then a sensitivity analysis of user charges, based on the optimal user arrival rate for profit maximization, is carried out. Finally, a supporting numerical illustration is presented.
    Keywords: Cloud Computing; User arrival rate; Profit Optimization; User charge; SLA.
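The "partial derivative with bisection search" step can be illustrated on a toy concave profit curve: bisect on the sign of the derivative until the maximizer is bracketed tightly. The profit function in the test is invented for illustration, not the paper's queueing-based revenue model:

```python
def maximize_by_bisection(f, lo, hi, tol=1e-8):
    """Locate the maximiser of a concave function on [lo, hi] by bisection on
    the sign of its numerical derivative (derivative > 0 means the maximiser
    lies to the right of the midpoint)."""
    h = 1e-6                                     # step for the central difference
    d = lambda x: (f(x + h) - f(x - h)) / (2 * h)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if d(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Concavity guarantees the derivative changes sign exactly once, so each bisection step halves the bracket around the optimum.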

  • An Improved Pricing Algorithm for Infrastructure as a Service Clouds   Order a copy of this article
    by Seyyed-Mohammad Javadi-Moghaddam, Asieh Andarzgoo, Mohsen Saberi 
    Abstract: Marketing in cloud systems enables users to trade and share resources. For the sale of services, client applications and service providers negotiate to form a service level agreement. Offering prices during service negotiation is a challenging problem. A federal cloud is an efficient approach, of recent interest, to better balance risk sharing between service providers and customers. This work presents a new algorithm to increase service provider revenue and reduce user costs simultaneously. Auctioning the remaining time of occupied resources, together with interactions between federal clouds, increases the clouds' profits and the number of successful requests, and reduces users' costs. The simulation results confirm the expectations of the proposed approach.
    Keywords: Federal cloud; Pricing model; Service quality; Service level agreement.

  • Analysing Knowledge in Social Big Data   Order a copy of this article
    by Lejdel Brahim 
    Abstract: Big data has become an important issue for a large number of research areas such as data mining, machine learning, computational intelligence, the semantic web, and social networks. The combination of big data technologies and traditional machine learning algorithms has generated new and interesting challenges in areas such as social media and social networks. These new challenges focus mainly on problems such as data processing, data storage, data representation and data visualization. In this paper, we present a new approach that can extract entities and their relationships from social big data, allowing the inference of new meaningful knowledge. It is a hybrid approach combining multi-agent systems and the K-means algorithm.
    Keywords: K-means; Multi-Agent Systems; Big data; data mining; social networks.
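The K-means half of the hybrid approach is the classical Lloyd's algorithm; a minimal sketch follows, with a deterministic farthest-point initialisation added for reproducibility (our choice for the sketch, not necessarily the paper's):

```python
def kmeans(points, k, iters=20):
    """Lloyd's K-means on a list of equal-length tuples of floats.
    Returns (centroids, clusters)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    # Deterministic farthest-point initialisation: start from the first point,
    # then repeatedly add the point farthest from all chosen centroids.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(d2(p, c) for c in centroids)))

    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: d2(p, centroids[i]))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters
```

On well-separated data the assignment stabilises after a few iterations; the multi-agent layer of the paper would distribute this work across agents.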

  • Developing a Smart Learning Environment for the Implementation of an Adaptive Connectivist MOOC Platform   Order a copy of this article
    by Soumaya EL EMRANI, Ali EL MERZOUQI, Mohamed KHALDI 
    Abstract: A pedagogical object can refer to any pedagogical component that can be used in the learning process: a text, an image, a video, a web page, etc. Personalizing the pedagogical content can be considered crucial, which calls for collaboration agreements between pedagogical content specialists in order to achieve collaborative development or pedagogical content reuse. E-learning standards and specifications provide the solution, with possibilities for reuse, interoperability and customization. Since the main goal of our research is to provide an adaptive cMOOC, the pedagogical content must be adjusted to each learner profile. An adaptive learning design therefore has to present different learning strategies based on a data analytics process that includes previous and current experiences, learning styles and the learner profile. As part of our implementation, this structural design can be realized by using machine learning algorithms alongside the IMS standard and related specifications.
    Keywords: Pedagogical Object; MOOC; cMOOC; Adaptive cMOOC; Machine Learning; Intelligent Platform; Pedagogical Content; IMS.

  • An Effective Cooperative Aligner to Resolve Multiple Sequence Alignment Problem   Order a copy of this article
    by Lamiche Chaabane 
    Abstract: In this research work, we propose a new cooperative aligner based on metaheuristics to find an approximate solution to the multiple sequence alignment (MSA) problem. The developed approach, named HPSOSA, applies in the first stage particle swarm optimization (PSO) with a crossover operator as the move mechanism for each particle. In the second stage, simulated annealing is incorporated to improve the worst solutions in the population and to help HPSOSA escape from local-optimum alignments. Simulation results on BAliBASE benchmarks demonstrate the capability of the proposed method to obtain better results for the MSA problem than those produced by some works in the same field in the literature.
    Keywords: Cooperative aligner; multiple sequence alignment; SA; PSO; BAliBASE benchmarks.
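The PSO-then-SA structure of HPSOSA can be illustrated on a toy numeric optimisation problem (a sphere function rather than an alignment); the parameters and the SA-perturbs-the-worst-particle rule below are our simplification of the idea, not the paper's aligner:

```python
import math, random

def hpsosa(f, dim, iters=100, n=20, seed=1):
    """Toy PSO/SA hybrid minimising f over R^dim: PSO moves the swarm, then a
    simulated-annealing step perturbs the worst particle, accepting uphill
    moves with a probability that cools over time."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]  # positions
    V = [[0.0] * dim for _ in range(n)]                               # velocities
    P = [x[:] for x in X]                                             # personal bests
    g = min(P, key=f)[:]                                              # global best
    T = 1.0                                                           # SA temperature
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
        # SA step: perturb the worst particle; accept worse moves with prob e^(-delta/T)
        w = max(range(n), key=lambda i: f(X[i]))
        cand = [x + rng.gauss(0, T) for x in X[w]]
        if f(cand) < f(X[w]) or rng.random() < math.exp(-(f(cand) - f(X[w])) / T):
            X[w] = cand
        T *= 0.95                                 # cooling schedule
    return g, f(g)
```

For alignments, a particle would encode a gapped alignment and the PSO move would be the paper's crossover operator; the accept-worse SA step is what provides the escape from local optima.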

  • Data openness for efficient E-governance in the age of Big data   Order a copy of this article
    by Safae Sossi Alaoui, Yousef Farhaoui, Brahim Aksasse 
    Abstract: The data revolution of recent years has led governments around the world to realize the benefits of communicating and opening the data held in their information and communication technologies (ICT) on behalf of their citizens. Indeed, data openness is vitally important for governments, the research community and businesses, especially in the era of big data, which is characterized by the increase in the volume of structured and unstructured data, the speed at which data is generated and collected, and the variety of data sources; these are known as the three Vs. Big data has therefore changed the way governments manage and support policies towards their digital data, tending to make it more open and accessible. The open data movement has been adopted by several countries thanks to its multiple benefits in different domains for uncovering hidden patterns and improving e-governance effectiveness in terms of cost, productivity and innovation. Using machine learning algorithms, this paper demonstrates that governments applying open policies are the same as those that score highly on the human development index. To fulfil the paper's objectives, the statistical tool IBM SPSS Statistics is used for the entire analytical process.
    Keywords: Open data; E-governance; Big data; regression algorithms.
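The regression step can be illustrated with a closed-form ordinary least-squares fit of the kind SPSS performs, relating a hypothetical openness score to an HDI-like response (the data in the test are invented for illustration):

```python
def ols(xs, ys):
    """Ordinary least-squares fit y = a + b*x, returned as (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

A positive, significant slope b would be the quantitative form of the paper's claim that open-policy countries also score highly on the human development index.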

  • Cloud Computing Services, Models and Simulation Tools   Order a copy of this article
    by Saad-Eddine CHAFI, Younes Balboul, Said Mazer, Mohammed Fattah, Moulhime El Bekkali, Benaissa BERNOUSSI 
    Abstract: Cloud computing is an internet-based platform that renders various computing services such as hardware, software and other computer-related services remotely. As the adoption and deployment of cloud computing grow, it is critical to evaluate the performance of cloud environments. Cloud simulators are required for testing cloud systems, to decrease complexity and separate quality concerns. Several cloud simulators have been developed specifically for the performance evaluation of cloud computing environments. We present a comparative analysis of some of these simulators based on varied parameters. The objective is to offer insights into each simulator, covering its features and functionality, and to give researchers guidelines for choosing a suitable tool.
    Keywords: Cloud computing; CloudSim; CloudAnalyst; GreenCloud; CloudReports; iCanCloud;.

  • Semantic integration of Moroccan Cultural Heritage using CIDOC CRM: case of Drâa-Tafilalet   Order a copy of this article
    by FOUAD NAFIS, Badraddine AGHOUTANE, Ali YAHYAOUY 
    Abstract: This paper presents the approach adopted and the results obtained as part of a project aiming at publishing the data of the Moroccan Cultural Heritage (CH) of the Drâa-Tafilalet region.
    Keywords: Cultural Heritage; Ontology; Preservation; Drâa-Tafilalet; CIDOC CRM; Semantic; RDF.

  • Efficient skin cancer diagnosis based on Deep learning approach using lesions skeleton   Order a copy of this article
    by Filali Youssef, Sabri My Abdelouahed, Aarab Abdellah 
    Abstract: Skin cancer is one of the most threatening cancers all over the world. Early detection of skin cancer could help dermatologists save patients' lives; to that end, computer-aided diagnosis is used for early evaluation of this kind of cancer. Skeletons of the lesions are an effective representation, making it possible to properly describe the shape and size of lesions and thus to classify them effectively as melanoma or non-melanoma. The idea proposed in this paper is therefore to use the lesion skeletons as the deep learning input instead of the original images. Experimentation shows that this idea can both increase the classification rate, in comparison with recent approaches from the literature, and reduce the number of layers needed to create the deep network. The accuracy of the proposed approach on the well-known ISIC challenge and dermoscopy datasets is 95%, showing the effectiveness of our system.
    Keywords: Skin cancer; melanoma; deep learning; convolutional neural network (CNN); skeleton.

  • A Deadline Based Elastic Approach for Balanced Task Scheduling in Computing Cloud Environment   Order a copy of this article
    by K. Jairam Naik 
    Abstract: Cloud is a pay-as-you-go service environment where parallelized virtual resources are provisioned to users based on the quality-of-service requirements of their tasks. In such an environment, enough virtual resources must be assigned to execute user tasks within a given deadline, and effective management of the load among resources is a challenging task. Efficient load management helps to reduce the makespan time, increase the task execution rate and optimize resource utilization. Several approaches are currently available for task allocation and workload balancing among the virtual machines (VMs) in the cloud, but most concentrate on statically increasing the number of virtual machines when required and distributing the load to them randomly. This causes inefficient resource utilization and makes some tasks miss their deadlines. Most existing works have not considered an emerging feature like elasticity for dynamic provisioning or deprovisioning of VMs while allocating the workload among cloud resources. Managing a variable workload on cloud resources is essential when the number of user tasks requesting cloud resources, or the number of resources available, varies dynamically. Hence, there is a need for elasticity-based scheduling and workload balancing in the cloud. The proposed elasticity-based load balancing approach (DL_ELBalTSch) considers the percentage of VM resources overloaded or underloaded at that moment as a supporting threshold and decides either to raise or cut the number of VMs. This approach is competent enough to execute tasks successfully on a variable number of resources instead of failing to meet established deadlines. The proposed approach diminishes the makespan time of user tasks and improves the successful execution ratio compared to other approaches. Extensive simulations performed on the Java-based simulation toolkit CloudSim show a higher task execution ratio, lower makespan time and lower task execution cost compared with existing approaches.
    Keywords: Computing cloud; Scheduling; Workload balancing; Deadline; Virtual Machines; Resources; Elastic Scaling; Utilization; Makespan Time; Provisioning; Deprovisioning; Execution ratio.
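The threshold-driven elastic decision the abstract describes might look like the following rule of thumb; the 50% trigger and the per-VM load thresholds are invented for illustration, not the paper's calibrated values:

```python
def scale_decision(loads, upper=0.8, lower=0.2):
    """Elastic provisioning rule: the fraction of overloaded or underloaded
    VMs (loads in [0, 1]) drives scale-out or scale-in."""
    n = len(loads)
    over = sum(l > upper for l in loads) / n     # fraction of overloaded VMs
    under = sum(l < lower for l in loads) / n    # fraction of underloaded VMs
    if over > 0.5:
        return "provision"                       # add a VM
    if under > 0.5:
        return "deprovision"                     # remove a VM
    return "steady"
```

The scheduler would then place deadline-constrained tasks on the adjusted VM pool; the point of the rule is that capacity follows the observed imbalance rather than a static VM count.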

  • Tourism Recommender Systems: An overview   Order a copy of this article
    by Khalid AL FARARNI, Badraddine AGHOUTANE, Ali YAHYAOUY, Jamal RIFFI, Abdelouahed SABRI 
    Abstract: The amount of information available on the World Wide Web and its number of users have increased considerably over the past decade. All this information can be particularly useful for users who are planning to visit an unknown destination. Information on travel destinations and their associated resources, such as hotels, restaurants, museums or events, is commonly sought by tourists in order to plan a trip. However, the list of possibilities offered by web search engines (or even specialized tourist sites) can be overwhelming, and evaluating this long list of options to choose the one that best suits their needs is very complex and time-consuming for tourists. Computer techniques have been developed to facilitate this search as well as the extraction of relevant information; the ones we focus on in this article are recommender systems. The purpose of this paper is thus to provide a detailed and up-to-date review of the most commonly used profiling techniques and recommendation approaches in the field of tourism, with an emphasis on content-based and collaborative approaches.
    Keywords: Tourism Recommender Systems; Collaborative Filtering; Content-Based Filtering; Hybrid Recommender System; User/Item Profiling.

  • A Multidimensional-Multilayered Anomaly Detection in RFID-Sensor Integrated Internet of Things Network   Order a copy of this article
    by Adarsh Kumar 
    Abstract: Outlier detection in a single dimension is not enough to protect the network from known and unknown attacks; multiple steps must be applied at multiple stages to combat them. There are various approaches to protect the network from malicious activities, and fighting them from a multi-layer perspective, parallel to the network layering models, is an ongoing process. This work takes a multi-layered and multi-dimensional approach to protect a hierarchical mobile ad hoc network (MANET) from known and unknown attacks. The proposed multi-layered approach consists of ultralight, light and heavy computational-overhead-based outlier detection approaches to combat attacks. These approaches apply both threshold-based and learning-based mechanisms against attacks. Further, hardware constraints are used to classify the schemes as ultralight, light or heavy. Simulation results show that varying the number of nodes from 10 to 5000 varies cluster formation from 5 to 53 clusters, with an error rate of 0.4%.
    Keywords: Anomaly detection; active and passive attacks; QoS; performance; clustering; machine learning; threshold measurement; optimization.

  • Use of Internet of Things for Monitoring and Evaluating Water Quality: A Comparative Study   Order a copy of this article
    by Jamal Mabrouki, Mourade AZROUR, Souad El Hajjaji 
    Abstract: Over the past decade, water resources have faced several challenges, including pollution and drought. Monitoring this vital resource has therefore become important. Meanwhile, the Internet of Things (IoT) has evolved significantly and is being adopted in various fields to improve human life. In this paper, we present the results of our comparative study, which reviews and compares various proposed systems for monitoring water quality using Internet of Things technologies.
    Keywords: IoT; Internet of Things; water; monitoring; wireless network.

  • Analysis and simulation of a Reverse Osmosis unit for producing drinking water in Morocco   Order a copy of this article
    by Maria BENBOUZID, Jamal Mabrouki, Mahmoud HAFSI, Souad El Hajjaji 
    Abstract: Seawater and brackish water desalination has become an imperative solution for providing drinking water in Morocco, as in similar countries facing water scarcity. Reverse osmosis is now a well-developed technology and currently dominates the desalination market. However, the problem with reverse osmosis, and membrane filtration in general, is membrane fouling due to the accumulation of matter on the membrane surface. Several parameters can be monitored to indicate membrane fouling, such as the flow rate, the pressure drop and the permeate conductivity. In this work, desalination is applied to salty surface water located in the Middle Atlas of Morocco, characterized by a chloride content of 295 mg/L and a quality that varies with the seasons. Surface water composition depends on atmospheric deposition and rock-water interaction; indeed, monitoring of river water quality confirmed that the characteristics of the raw water are not stable under changing weather conditions. The recorded measurements clearly show that the minimum conductivity occurs during the winter season, due to dilution by rainfall, and that conductivity increases during the summer season, due to evaporation of the water. This seasonal variation in water quality motivated several simulations of the design of the reverse osmosis system with two different water qualities: the first quality is characterized by a conductivity of 1230
    Keywords: Modelling; Seasonal changes; Simulation; Reverse osmosis; Water treatment; Water salinity.

Special Issue on: CUDC - 2019 Emerging Research Trends in Engineering, Science and Technology

  • A Survey of Multi-signature Schemes for XML Documents   Order a copy of this article
    by JATIN ARORA 
    Abstract: eXtensible Markup Language (XML) is a widely used data exchange format for data transmission over the web in various fields. However, due to security risks, the privacy of XML documents cannot be ensured, and sharing data has become a major challenge. Web vulnerabilities threaten the confidentiality and integrity of XML data. A set of security mechanisms, such as encryption, decryption, digital signatures and validation, is applied to XML documents. Signing a document is the commonly used method for ensuring authentication, non-repudiation and integrity, with a single signer responsible for signing a document. Nowadays, however, the responsibility of signing a document is often shared among multiple signers instead of a single signer, which has increased the popularity of multi-signature schemes. A document may therefore carry multiple signatures, and the integrity of the entire document depends on them. This gives rise to the need to study various multi-signature schemes. In this paper, developments in multi-signature schemes, as well as their application areas, are discussed.
    Keywords: XML; SSL/TLS; Multi-signature; XML Vulnerabilities.

  • Software Defined Networking: A Crucial Approach for Cloud Computing Adoption   Order a copy of this article
    by Sumit Badotra, Surya Narayan Panda 
    Abstract: The most important convenience contributed by the cloud is that it delivers an infrastructure framework and various services rapidly: instead of ordering, installing and then configuring many servers, you can request a particular number of virtual machines (VMs). With the networking approach traditionally used in the cloud, the network becomes a hurdle to scalability, and it is therefore one of the most complex and time-consuming parts of deploying an application. By introducing the Software Defined Networking (SDN) approach, the network infrastructure and its services can be configured through a well-defined Application Programming Interface (API), and the manageability of the cloud network is enhanced along with its scalability; the collaboration of cloud and SDN is therefore one of the hottest topics nowadays. This study explains the importance of SDN in the cloud. To limit the hurdles in cloud infrastructure, especially in large data centres, a detailed study of its importance, architecture and advantages is presented. A newly emerged simulation tool (CloudSimSDN) is also illustrated, with a detailed explanation of how the experiments are executed.
    Keywords: cloud computing; data centers; software defined networking; data plane; control plane; application programming interface.

  • Performance Comparison of Various Techniques for Automatic Licence Plate Recognition Systems   Order a copy of this article
    by Nitin Sharma, Pawan Kumar Dahiya, Baldev Raj Marwah 
    Abstract: Automatic licence plate recognition systems are direly needed nowadays for various applications such as toll collection, parking, identification of stolen cars, incident management, electronic payment services, electronic customs clearance of commercial vehicles, automatic security roadside inspection, in-car security monitoring, emergency notification and personal security. An automatic licence plate recognition system performs three important processing steps on the input image: extraction, segmentation and recognition. A number of algorithms have been developed for these steps over the last few years, resulting in significant improvements in licence plate recognition. The aim of this study is to survey the existing techniques for licence plate recognition. In this paper, a number of existing techniques for automatic licence plate recognition are presented and their benefits and limitations are discussed. Further, the paper foresees the future scope of the automatic licence plate recognition field.
    Keywords: Automatic Licence Plate Recognition System (ALPR); Neural Network (NN); Optical Character Recognition (OCR); Support Vector Machine (SVM).

  • Comparative Analysis of different Polynomial Interpolations for implementing Key Management techniques in MANETs   Order a copy of this article
    by Chetna Monga, K.R. Ramkumar, Shaily Jain 
    Abstract: A backbreaking issue in Mobile Ad hoc NETworks (MANETs) is ensuring security, which is difficult due to their dynamic nature and the unavailability of centralized infrastructure. Because of the distributed nature of the network, trading off complexity has so far been a natural remedy for ensuring security. To secure MANETs, we inspect two polynomial interpolation approaches: Lagrange's interpolation and curve fitting. The key shares are disseminated among a predefined fraction of nodes called Security Association Members (SAMs). To facilitate certificate management in a versatile manner, an identity (ID)-based method with a polynomial interpolation approach is used. A new node has to fit the parameters set by these SAMs in order to acquire the required number of key shares. As the key shares are transferred through error-free and error-prone channels, assumptions are made accordingly. The analysis shows the superiority of curve fitting over Lagrange's approach, as the complexity of generating the polynomial in Lagrange's approach is higher than in curve fitting. The results reveal the high accuracy of the curve-fitting approach, along with lower memory and time consumption, for each order of polynomial.
    Keywords: MANETs; Key Management; Polynomial Interpolation; Lagrange Interpolation; Curve Fitting; Security; Accuracy; Memory consumption; Secret Key; Node-ID.
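As a point of reference for the first of the two interpolation approaches compared above, the textbook Lagrange formula (not the authors' implementation) can be written in a few lines:

```python
def lagrange_interpolate(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points`
    (a list of (x_i, y_i) pairs with distinct x_i) at position x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                # Lagrange basis factor: 1 at x_i, 0 at every other x_j
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```

In threshold key-management schemes, evaluating the polynomial through k collected shares at x = 0 recovers the shared secret, which is why the cost of constructing this polynomial matters.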

Special Issue on: Machine Learning and Artificial Intelligence for Computing and Networking in the Internet of Things

  • Translation of Code Mixed Language to Monolingual Languages using Rule Based Approach   Order a copy of this article
    by Shree Harsh, T.V. Prasad, G. Ramakrishna 
    Abstract: Computational Linguistics is an evolving area in Artificial Intelligence. The demand of language translation has significantly increased due to cross- lingual communication and information exchange. Bilingual code switching is habitually observed in bilingual community. Nowadays, much research is being done in machine translation (MT) from Indian languages to foreign languages, generally to English and vice versa. The core component of MT system is identification and translation of morphological inflections and PoS word ordering with respect to language structure. Indian languages are morphologically richer than English language and have multiple inflections during translation into English Language. This paper focuses on the analysis and translation of code mixed language, i.e., Hinglish into pure Hindi and pure English languages. The experiments based on the algorithms in the paper are able to translate code mixed sentences to pure Hindi with a maximum success rate of 91% and to pure English with a maximum success rate of 84%.
    Keywords: Code Mixing; Hinglish; Pure Language Translation; Hybrid Morphology.

  • Enhancing the operations for Integrity check on Virtual Instance Forensic logs using Cuckoo Filter Trees   Order a copy of this article
    by Gayatri S P, Saurabh Shah, K.H. Wandra 
    Abstract: Logs play a vital role in the forensic domain. Logs are congregated by cloud service providers, or by third parties with the help of the cloud service provider, and they can hold evidence of crimes committed using the resources of the cloud service provider. A cloud user who is an oppugner can hire virtual instances, launch an attack or commit a crime, delete all the contents and close the instances; in such a case, logs play a major role in tracing the oppugner. Storing such logs in a centralized system is a major drawback, as they can easily be tampered with by oppugners with the help of employees of the service provider (termed malicious insiders) or by the forensic investigator during the investigation process. Logs are tampered with to defend an oppugner who can bribe the malicious insider or the forensic investigator. To handle such issues, the authors recommend techniques that aid in validating the logs against tampering. The authors have developed algorithms using cuckoo filters, which were introduced in the recent past. The cuckoo filter trees help establish the integrity of logs before the court of law, making trials legally sound and prosecuting the oppugner fairly.
    Keywords: cloud forensic; cuckoo filter; oppugner; log integrity; concealment; cuckoo filter tree; forensic log; Forensics Braced Cloud; virtual instance.
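To make the underlying data structure concrete, here is a minimal cuckoo-filter sketch (a generic textbook construction, not the authors' cuckoo filter trees): each item stores a short fingerprint in one of two candidate buckets, and membership of a log entry can later be checked without storing the entry itself.

```python
import hashlib
import random

class MiniCuckooFilter:
    """Minimal, illustrative cuckoo filter for set-membership checks."""
    def __init__(self, num_buckets=64, bucket_size=4, max_kicks=50):
        self.buckets = [[] for _ in range(num_buckets)]
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks

    def _fingerprint(self, item):
        # short fingerprint: first 4 hex chars of SHA-256
        return hashlib.sha256(item.encode()).hexdigest()[:4]

    def _indices(self, item, fp):
        i1 = hash(item) % self.num_buckets
        # partial-key cuckoo hashing: the alternate bucket is derived
        # from the current bucket and the fingerprint alone
        i2 = (i1 ^ int(fp, 16)) % self.num_buckets
        return i1, i2

    def insert(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indices(item, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        # both buckets full: evict a resident fingerprint and relocate it
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):
            fp, self.buckets[i][0] = self.buckets[i][0], fp
            i = (i ^ int(fp, 16)) % self.num_buckets
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False  # filter considered full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indices(item, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]
```

Unlike Bloom filters, cuckoo filters also support deletion (removing one copy of the fingerprint), which is one reason they suit append-and-verify log integrity schemes.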

  • A BIOMETRIC BASED SECURE, ENERGY EFFICIENT, LIGHTWEIGHT AUTHENTICATION PROTOCOL FOR WIRELESS BODY AREA NETWORKS   Order a copy of this article
    by T. Santhi Vandana, S. Venkateshwarlu 
    Abstract: Wireless body area networks (WBANs) are an important category of wireless networks for remote healthcare monitoring using wearable computing devices. In WBANs, it must be guaranteed that the privacy of users is not exposed to unauthorized entities while sending and receiving data; hence, a proficient and secure authentication protocol is highly essential. Conventional security protocols are restrictive, and some cannot protect the privacy of users. In this work, a secure, energy-efficient, lightweight authentication protocol is proposed for securing WBANs that performs authentication and session key establishment for privacy-preserving data aggregation. The proposed protocol provides confidentiality, integrity, availability and authenticity of data. The security analysis makes clear that the proposed protocol provides stronger protection than the majority of available schemes over insecure channels.
    Keywords: Wireless body area networks; authentication protocol; biometrics; electrocardiogram; security protocols; wireless sensor networks.

  • ICU Medical Alarm System using IOT   Order a copy of this article
    by Fahd Alharbi 
    Abstract: Monitoring in Intensive Care Units (ICUs) is an essential task for patient health and safety. Monitoring systems provide physicians and nurses with the ability to intervene when there is a deterioration in a patient's condition. The ICU monitoring system uses audio alarms to alert staff to critical conditions of the patient or to a medical device failure. Unfortunately, there are cases of failure to respond to medical alarms that endanger patient safety and result in death. The main reasons for the lack of response to alarms are alarm fatigue and alarm masking. In this paper, these issues are investigated, and we propose a monitoring system using the Internet of Things (IoT) to continually report ICU medical alarms to doctors, nurses and family.
    Keywords: ICU; safety; audio alarm; alarm masking; alarm fatigue; IOT.

  • An Integrated Principal Component and Reduced Multivariate Data Analysis Technique for detecting DDoS attacks in Big data federated clouds   Order a copy of this article
    by Sengathir Janakiraman 
    Abstract: The rapid development and wide application of cloud computing for big data applications necessitates handling massive data distributed among diversely located data centre clouds. An efficient detection scheme that differentiates legitimate cloud traffic from illegitimate traffic therefore becomes indispensable. In this paper, an Integrated Principal Component and Reduced Multivariate Data Analysis (PCA-RMD) technique is proposed for detecting DDoS attacks in big data federated clouds. The proposed PCA-RMD first reduces the dimension of the feature characteristics extracted from the big data traffic information by minimizing the principal components based on correlation. The correlation method is then utilized for discriminating traffic based on Enhanced and Adaptive Multivariate Correlation Analysis (EAMCA) and Enhanced Mahalanobis Distance (EMD). The proposed PCA-RMD technique is superior in classification accuracy, memory consumption and CPU cost compared with the baseline approaches used for investigation.
    Keywords: Big data Federated Clouds; DDoS attacks; Multivariate Data Analysis; Principal Component Analysis; Enhanced Mahalanobis Distance.
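The Mahalanobis-distance step that underlies EMD-style traffic discrimination can be illustrated for two traffic features as follows; this is the generic squared Mahalanobis distance with a hand-inverted 2x2 covariance matrix, not the paper's enhanced variant:

```python
def mahalanobis_2d(x, mean, cov):
    """Squared Mahalanobis distance of a 2-feature traffic record `x`
    from the profile `mean`, with 2x2 covariance [[a, b], [b, c]]."""
    a, b = cov[0]
    _, c = cov[1]
    det = a * c - b * b
    # closed-form inverse of a 2x2 matrix
    inv = [[c / det, -b / det], [-b / det, a / det]]
    d0, d1 = x[0] - mean[0], x[1] - mean[1]
    return (d0 * (inv[0][0] * d0 + inv[0][1] * d1)
            + d1 * (inv[1][0] * d0 + inv[1][1] * d1))
```

A traffic record would then be flagged as attack traffic when this distance from the legitimate-traffic profile exceeds a learned threshold.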

  • Smart Scheduling on Cloud for Traffic Signal to Emergency Vehicle Using IoT   Order a copy of this article
    by J. MANNAR MANNAN, Karthick Myilvahanan J, Mohemmed Yousuf R, Sindhanai Selvan K, Parameswaran T 
    Abstract: Emergency transportation in larger cities needs special attention: a single instance of negligence can cause severe traffic deadlocks, and providing a dedicated lane for emergency vehicles in larger cities is not feasible. The existing semi-automated traffic control system cannot handle emergency transportation in metropolitan cities. To address this limitation, an Internet of Things (IoT) based adaptive traffic signal control system is proposed. In the proposed system, a GPS-enabled ambulance position indicator controls the traffic signals dynamically, based on the position of the ambulance and the traffic density, using IoT devices. RFID deployed at the roadside close to the traffic signal is used to measure the length of the vehicle queue along the predetermined ambulance path. The signals on the predetermined path are turned green dynamically for the ambulance via the IoT-controlled traffic signal, based on the ambulance's GPS location. This dynamic scheduling, which lets the ambulance pass smoothly and without delay, is achieved by switching the traffic signal timing from a fixed duration to a variable duration until the vehicle passes the signal. Simulation results show that the proposed approach performs better than other existing methods and is well suited to traffic management in smart cities during emergency vehicle transportation.
    Keywords: IoT; Internet; Emergency Transport System; Smart City; RFID; GPS; Automation.
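The fixed-to-variable phase switch described above can be sketched as a single decision rule; the drain rate, approach speed and function name are illustrative assumptions, not values from the paper:

```python
def signal_phase(ambulance_distance_m, queue_length_m, normal_phase="fixed"):
    """Decide the phase of a signal on the ambulance's route: switch to
    an extended green when the GPS-reported ambulance is close enough
    that the RFID-measured queue needs the remaining time to drain."""
    DRAIN_RATE = 2.0        # metres of queue cleared per second of green
    APPROACH_SPEED = 10.0   # assumed ambulance speed, m/s
    time_to_arrive = ambulance_distance_m / APPROACH_SPEED
    time_to_drain = queue_length_m / DRAIN_RATE
    if time_to_drain >= time_to_arrive:
        return "green-extended"   # start clearing the queue now
    return normal_phase           # stay on the fixed-time cycle
```

The controller would re-run this rule on every GPS update, so the green phase stays variable only until the ambulance passes the signal.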

  • Hybrid Privacy Preserving Clustering for Big Data while Ensuring Security   Order a copy of this article
    by Pushpavathi T P, Murthy PVR 
    Abstract: Big data is a recent technology trend. Data clustering is a fundamental technique in knowledge discovery and data engineering, and clustering data with different algorithms is extensively used in applications such as soft computing, image processing, mobile communications and medicine; the data must be analysed while ensuring proper security, and consistency of clustering is a main problem in big data applications. The traditional possibilistic c-means (PCM) clustering algorithm is used in image analysis and on high-dimensional data. PCM uses constrained membership functions, but its drawback is that it tends to converge to coincident clusters. To resolve this issue, High-Order PCM (HOPCM), which uses a tensor-based relational data model, is adopted and further modified into a privacy-preserving HOPCM algorithm that protects privacy by applying the Brakerski-Gentry-Vaikuntanathan (BGV) encryption scheme to the data. The proposed hybrid privacy-preserving c-means (PPCM) method shows better results: it successfully clusters a large amount of information using cloud computing without disclosing private information.
    Keywords: Big Data Cluster; Cloud Computing; PPCM; HOPCM; DHOPCM; BGV.

Special Issue on: Advances in Security and Privacy for Cloud Computing

  • A Novel redundancy Technique to enhance the security of Cloud Computing   Order a copy of this article
    by Syed Ismail 
    Abstract: Cloud computing is an emerging technology that offers computing, storage and software as a service to IT organizations and individuals. Users of the cloud can access its applications from anywhere and at any time. Security is considered a critical issue in the cloud environment: to protect cloud resources from external threats, data leakage and various attacks, security controls and technological safeguards should be provided at the cloud's data centres. In addition to integrity and availability, the cloud should also possess reliability, which enables users to stop worrying about the availability and security of data stored in the cloud without risking data loss. This paper proposes a novel approach known as the Multi-Cloud Database (MCDB), which uses multiple Cloud Service Providers (CSPs) instead of a single CSP. Shamir's secret sharing algorithm and a sequential Triple Modular Redundancy (TMR) technique are implemented to improve reliability and offer enhanced security for the MCDB. The proposed model is compared with one single-cloud model (SPORC) and four multi-cloud models (DepSky, HAIL, RACS and MCDB without TMR) in terms of reliability, integrity, confidentiality, availability and security. The maximum reliability, integrity, confidentiality, availability and security values obtained for the proposed model were 100%, 99%, 99%, 97% and 99%, respectively.
    Keywords: Cloud Computing; Reliability; Security; Multi-cloud Database; Shamir's secret sharing algorithm; Triple Modular Redundancy.
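Shamir's secret sharing, the first building block named above, can be sketched as a textbook (k, n) scheme over a prime field; the field size and the function names here are illustrative, not the paper's implementation:

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all share arithmetic is mod PRIME

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it.
    The secret is the constant term of a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # divide by `den` via Fermat's little theorem (modular inverse)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

In an MCDB-style setting, each CSP would hold one share, so no single provider can read the stored data, while any k providers together can recover it.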

  • PRESERVING PERSONAL HEALTH RECORDS SECURITY AND PRIVACY USING C-R3D ALGORITHM AND MULTIMODAL BIOMETRIC AUTHENTICATION   Order a copy of this article
    by Meena Settu, Gayathri V 
    Abstract: Data security and privacy remain among the most significant concerns in cloud computing. The secrecy of Personal Health Records (PHRs) and Personally Identifiable Information (PII) is the main issue when commercial cloud servers are used by healthcare associations to store patients' health records, since patient information may be handled by numerous institutions, for example government and private hospitals and clinics, general practitioners and testing laboratories. In recent years, numerous intrusions on healthcare information have intensified the requirement for tight security for healthcare data. Additionally, security specialists state that many vulnerabilities exist in Health and Humanities Service Systems Data (HHSSD); if they are not mitigated, they pose an immense risk and potential threats to the HHSSD. Security solutions must therefore be expedient and simple to deploy, providing high-level safety without compromising network performance, and it is essential to add a critical layer of security to protect patients' sensitive information. This paper proposes novel data encryption for the healthcare cloud by applying the C-R3D (Combined RSA and Triple DES) algorithm to encrypt every patient's personal health record file before moving it into the cloud, which ensures data confidentiality. In addition, multimodal biometric authentication is applied, integrating fingerprint and iris authentication with a username and password, which ensures the privacy of patients' sensitive information stored in the healthcare cloud. The experimental outcomes demonstrate the effectiveness of the proposed framework.
    Keywords: Data Security; Personal Health Records; Health and Humanities Service Systems Data (HHSSD); Combined RSA and Triple DES; Multimodal Biometric Authentication.

  • Intrusion Detection and Prevention of DDoS attacks in Cloud Computing Environment: A Review on Issues and Current Methods   Order a copy of this article
    by Kiruthika Devi, Subbulakshmi T 
    Abstract: Cloud computing has emerged as the most successful service model for the IT/ITES community due to the various long-term incentives offered in terms of reduced cost, availability, reliability and improved QoS for cloud users. Most applications have already migrated to centralized data centres in the cloud. Due to the growing needs of the business model, more small and medium enterprises rely on the cloud, because only a small investment in infrastructure and hardware/software is required. The most alarming cyber-attack in the cloud, which interrupts the availability of cloud services, is the Distributed Denial of Service (DDoS) attack. In this paper, various existing Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) and their positioning in the cloud are investigated, and the essence of the current techniques in the literature is described in detail. A comprehensive review of the latest IDS/IPS solutions and their capabilities to detect and prevent intrusions in the cloud is presented, and the comparison of methodologies provides researchers with the security issues and challenges exposed in the cloud computing environment. The significance of designing a secure framework for the cloud is also emphasized for achieving improved security.
    Keywords: Cloud computing; DDoS; IDS; IPS; security.

  • Legal Issues in Consumer Privacy Protection in the Cloud Computing Environment: Analytic Study in GDPR, and USA Legislations   Order a copy of this article
    by Alaeldin Alkhasawneh 
    Abstract: Cloud computing services are considered among the most important services provided to companies due to the various benefits they confer. However, data privacy is a big concern for users, and the laws covering it contain many contradictions and require improvement. This paper discusses the laws governing privacy issues in the cloud, highlights missing components that could be added to the laws considered, and proposes amendments which may help create a better consumer experience, improved service and increased protection of personal data. At the end of the paper, a set of recommendations is proposed for governments and private companies which would increase the responsibility held by cloud computing service providers in case of failure to protect personal data from privacy invasion.
    Keywords: Consumer; Privacy; Cloud Computing; GDPR.

Special Issue on: ICAIIS-2019 Smart Intelligent Computing and Communication Systems

  • A comparative study on various preprocessing techniques and deep learning algorithms for text classification   Order a copy of this article
    by Bhuvaneshwari Petchimuthu, NagarajaRao A 
    Abstract: Preprocessing is the primary technique employed in sentiment analysis, and selecting suitable methods can increase classifier accuracy. It reduces the complexity innate in raw data, which makes the classifier learn faster and more precisely. Despite its importance, preprocessing for polarity detection has not attracted much attention in the deep learning literature. In this paper, 13 popularly used preprocessing techniques are therefore evaluated on three online user review datasets from different domains. To evaluate the impact of each preprocessing technique, four deep neural networks are utilized: an auto-encoder, a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM). Experimental results in this study show that using appropriate preprocessing techniques can improve classification success. In addition, the BiLSTM model performs better than the remaining neural networks.
    Keywords: sentiment analysis; deep learning; auto-encoder; convolutional neural network; long short-term memory; bidirectional LSTM.
    DOI: 10.1504/IJCC.2022.10031639
     
  • Tangles in IOTA to make Crypto currency Transactions Free and Secure   Order a copy of this article
    by Prabakaran Natarajan 
    Abstract: The introduction of the block-chain made a revolutionary change in cryptocurrency around the world, but it has not delivered on its promises of free and faster transaction confirmation. Serguei Popov proposed using the tangle, a directed acyclic graph that is essentially considered the successor of the block-chain and offers required features such as machine-to-machine micropayments and feeless transactions. It requires a user to approve two previous transactions in the web in order to participate in the network. This essentially eliminates miners and the mining step from the currency exchange and allows participants to carry out their transactions feelessly. Since each participant verifies two previous transactions, this also contributes to the security of the tangle. In this paper, the features of IOTA and the improvements it makes using the tangle are discussed, along with how the tangle contributes to security and enables participants to make feeless transactions.
    Keywords: E-coin; Block-chain; Cryptocurrency; IOTA; Tangles; DLT; Feeless Transaction.
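The two-tip approval rule described above can be sketched as a toy tangle; the DAG is a plain dict and tip selection is uniform random, a deliberate simplification of IOTA's weighted random walk:

```python
import random

def attach_transaction(tangle, new_tx):
    """Attach `new_tx` to a tangle (dict: tx -> list of the two txs it
    approves) by approving two existing tips, per the tangle idea."""
    # a tip is any transaction that no other transaction approves yet
    approved = {t for targets in tangle.values() for t in targets}
    tips = [t for t in tangle if t not in approved] or list(tangle)
    picks = random.sample(tips, 2) if len(tips) >= 2 else tips * 2
    tangle[new_tx] = picks[:2]
    return tangle
```

Because each attachment performs two approvals, the work of validating the network is done by its participants, which is what removes miners and transaction fees.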

  • A Novel Filter for Removing Image Noise and Improving the Quality of Image   Order a copy of this article
    by Prathik A, Anuradha J, Uma K 
    Abstract: This paper proposes a Hybrid Wavelet Double Window Median Filter (HWDWM), constructed by blending a decision-based coupled-window median filter with the Discrete Wavelet Transform (DWT); a review of filters widely used for noise removal is also provided. The proposed filter uses two windows: a row window and a column window. The method takes the noisy image, moves the row window from the first pixel of the noisy image up to the last pixel, then indexes with the column window, and then decomposes the image signal to provide localization. The noisy image is decomposed by the DWT, and the coefficients are transformed into independently distributed variables. The coefficients are then analysed against a threshold, and the image is reconstructed using the inverse wavelet transform after thresholding. Experiments were executed to show the effect of noise-removal filters on soil images. Two metrics are used to measure image quality: peak signal-to-noise ratio (PSNR) and root mean square error (RMSE). Experimental results show the superiority of this filter over other noise-removal filters.
    Keywords: Data mining; Soil Classification; Filters; PSNR; RMSE.
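The two quality metrics cited in the abstract can be computed directly; a minimal version for 8-bit greyscale images flattened to lists of pixel values:

```python
import math

def rmse(original, denoised):
    """Root mean square error between two equal-size images given as
    flat lists of pixel values (lower is better)."""
    n = len(original)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(original, denoised)) / n)

def psnr(original, denoised, max_pixel=255.0):
    """Peak signal-to-noise ratio in dB (higher means better denoising)."""
    e = rmse(original, denoised)
    if e == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_pixel / e)
```

Comparing filters then amounts to denoising the same noisy soil image with each filter and ranking the results by these two numbers.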

  • Implementation of Data Mining to Enhance the Performance of Cloud Computing Environment   Order a copy of this article
    by Annaluri Sreenivasa Rao, Attili Venkata Ramana, Somula Ramasubbareddy 
    Abstract: To deal with large-scale computing events, the advantages of cloud computing are used extensively, whereby machines can process large data in a scalable manner. Most government agencies across the globe use the cloud computing architecture and its applications to obtain desired services and business goals. However, one cannot ignore the challenges of a technology linked with large amounts of data and internet applications (i.e. the cloud). Although cloud computing has many promising advantages involving distributed and grid computing, virtualization, etc., that help the scientific community, it also has limitations. One of the biggest challenges cloud computing faces is the exploitation of its opportunities for security breaches and related issues. In this paper, an extensive mitigation system is proposed to achieve enhanced security and a safer environment when using cloud computing applications. The decision tree model (CHAID algorithm) proves to be a robust technique for classification and decision making, providing high-end security for cloud services. This work shows that standards, controls and policies are very important to the management processes for securing and protecting the data involved at the time of processing or application usage. A good management process also needs to assess and examine the risks involved in cloud computing while protecting the system in use, and the data involved, from various security issues or exploits.
    Keywords: Cloud computing; security; Data mining; Multilayer perceptron; decision tree (C4.5); Partial Tree.

  • Analysis of Breast Cancer Prediction and Visualization using Machine Learning Models   Order a copy of this article
    by Magesh G, Swarnalatha P 
    Abstract: Breast cancer is one of the most commonly occurring malignancies in women, with millions of new cases diagnosed and over 400,000 deaths annually worldwide. Our dataset has 30 real-valued attributes as features, computed from the Fine Needle Aspirate (FNA) test: the input values are extracted from the digitalized image of an FNA of a breast mass. Many algorithms are used in prediction systems; we choose the best algorithms based on precision, accuracy and error rate, and compare effective ways of applying the algorithms and classifying the data. A performance comparison is conducted between different machine learning algorithms on the breast cancer dataset, and data visualization and descriptive statistics are presented. SVM with all features achieves 95% precision, recall and F1-score; after tuning the SVM parameters, accuracy improves to 97%.
    Keywords: Breast Cancer; Machine Learning; Decision Tree; Classification; SVM; Prediction.
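
    The precision, recall, F1 and accuracy figures quoted above all derive from the four cells of a binary confusion matrix. A minimal, library-free sketch of those metrics (illustrative only; any standard toolkit computes the same quantities):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual positives
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

    Comparing classifiers on all four numbers, as the paper does, guards against a model that scores well on accuracy alone for an imbalanced dataset.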

  • An Optimal Selection of Virtual Machine for E-Healthcare Services in Cloud Data Centers   Order a copy of this article
    by PRATHAP R, MOHANASUNDARAM R 
    Abstract: In recent times, cloud computing plays a huge role in the processing of healthcare services. Electronic healthcare services are used to improve healthcare performance in the cloud. Selecting and placing virtual machines for healthcare services plays an important role and is one of the challenges in the cloud: large data centers are used to process medical requests, and careful placement maximises resource utilisation and reduces the execution time of medical requests in the cloud data center. Multiple techniques exist to solve such optimisation issues for cloud resources. In this paper, a hybrid request factor-based multi-objective grey wolf optimization (RMOGWO) algorithm is proposed to handle healthcare requests in cloud data centers efficiently. The proposed algorithm was tested and compared with well-known benchmark algorithms for VM utilisation in cloud data centers. The efficiency of the electronic healthcare services system increases cloud utilisation, and the hybrid algorithm supports a high level of interaction with users. It is a superior model that improves resource utilisation for healthcare services in the cloud.
    Keywords: Cloud Computing; Healthcare services; Virtualization; Multi-Objective Grey Wolf Optimization.

  • A Study on Automated Toll Collection: Towards the Utilization of RFID based System   Order a copy of this article
    by Naresh Kannan, Ranjan Goyal, Dhruv Goel 
    Abstract: Toll collection is becoming a major problem on highways, leading to long waiting queues: toll gates installed on highways result in increased waiting time and fuel usage. In this paper, a study of an Automated Toll Collection System based on Radio Frequency Identification (RFID) is presented, which provides fast identification of vehicles and fast toll collection. Using this system, identification can be done merely by slowing the vehicle as it passes through the toll plaza; the RFID reader scans the RFID tag or card and deducts the amount from it. The research analyses the system by proposing a mechanism and implementing an example scenario that considers the random arrival of different vehicles. The technique is also compared with existing mechanisms such as number plate recognition and bar-code-based passes, and the comparison shows the case for utilising RFID for toll collection.
    Keywords: Automated toll collection; Radio Frequency Identification (RFID); RFID reader; RFID tag; Micro-controller.

  • A Microcontroller based System for Patient and Elderly Community Assistance   Order a copy of this article
    by Asmita Chotani, Naresh Kannan 
    Abstract: Generally, the elderly community is bedridden due to age and health issues, so there is a need for a system that can aid this group. In this paper, a microcontroller-based system model is proposed. The model assists patients and the elderly community by letting them satisfy their needs by informing the attenders/wards through a handheld device: depending upon the frequency generated by a key press, a particular need from a set of predefined needs is triggered and announced as audio to the facilitator from a device mounted within their proximity.
    Keywords: DTMF; Mobile Phone; microcontroller; Arduino; patient; assistance.

  • The Big Data in Healthcare Industry Made Simple to Save People Life   Order a copy of this article
    by Vijay Anand R, Iyapparaja M 
    Abstract: In healthcare systems, big data plays an important role: data analysis is used to predict disease outcomes, prevent the effects of additional disorders or diseases, reduce mortality and save the cost of medical treatment. In many countries, big data plays a main role in generating the information used to diagnose diseases and plan treatment. Several initiatives have been put in place to share patients' medical records and make their histories available to the general public, private hospitals and clinics. However, there are many challenges in applying big data in healthcare, especially in relation to privacy, protection, standards, authority, integration of data, storage of data, classification of data and combining the technologies. It is imperative that these challenges be overcome before big data can be implemented effectively in healthcare.
    Keywords: Big data; Healthcare; Bayesian Network; Patients.

Special Issue on: Impact of Machine Learning in the Cloud Computing Revolution

  • Word Sense Disambiguation using Optimization Techniques   Order a copy of this article
    by Rajini Selvaraj, Vasuki A 
    Abstract: In the field of Computational Linguistics, Word Sense Disambiguation (WSD) is a problem of high significance that helps us find the correct sense of a word or a sequence of words in a given context. Word sense disambiguation is treated here as a combinatorial optimization problem in which the aim is to discover the set of senses that improves the semantic relatedness among the target words. Nature-inspired algorithms are helpful for finding optimal solutions in reduced time; they make use of a collection of agents that interact with the surrounding environment in a coordinated manner. In this article, two such algorithms, namely the Cuckoo Search and Firefly algorithms, have been used to solve this problem, and their performance has been compared with the D-Bees algorithm based on Bee Colony optimization. They have been evaluated using the standard SemEval 2016 Task 11 dataset for complex word identification. Experimental results show that the Firefly algorithm performs best.
    Keywords: Word sense disambiguation; Cuckoo search; optimization; firefly; Bees algorithm; unsupervised.

  • Multi cloud based Secure Privacy Preservation of Hospital Data in Cloud Computing   Order a copy of this article
    by KanagaSubaRaja S, Sathya Arunachalam, Karthikeyan S, Janani T 
    Abstract: The growth of cloud computing has led to abundant privacy concerns. An organization or user sends all its information to the cloud service provider, so the security of that data is a concern. Data privacy and security issues can be solved by establishing clear policies that enable authorized data access and security. User authentication is the primary basis for access control, so by using a cryptographic encryption mechanism such as key-policy attribute-based encryption we can provide strong authentication and ensure that data can be viewed only by those who are entitled to access it. Next, an integrity mechanism that is never compromised, the SHA-256 hash, is used to ensure that data is not modified in transit. These hashes are concatenated to form a top hash by structuring them in a Merkle hash tree, and they are used by the erasure code to find data lost during crashes. For efficiency and to detect data loss, third-party auditors are installed to check and report any changes in any of the cloud storages. Data recovery is performed by retrieving the data from another cloud that holds a replica of the data.
    Keywords: Multi cloud; Key policy attribute based encryption; MHT; erasure code; third party auditor.
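
    The integrity construction described above (SHA-256 leaf hashes concatenated upward to a top hash) is a Merkle hash tree. A minimal sketch of computing the top hash follows; the leaf ordering and the odd-level duplication rule are conventions assumed here, not taken from the paper:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Compute the top hash of a Merkle tree over data blocks.

    Odd levels duplicate the last node -- one common convention,
    assumed here rather than specified by the paper.
    """
    level = [sha256(b) for b in blocks]          # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last hash if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) # concatenate pairwise
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

    An auditor holding only the top hash can detect any modification: changing one block changes its leaf hash and every hash on the path to the root.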

  • Effective Data Management and Real Time Analytics in Internet of Things   Order a copy of this article
    by Jeba N, Rathi S 
    Abstract: Integrating various embedded devices and systems into our socio-economic living environment enables the Internet of Things (IoT) for smart cities. The underlying IoT infrastructure of smart, connected cities generates an enormous amount of heterogeneous data, either big in nature or arriving as fast, real-time data streams, that can be leveraged for the safe and efficient living of the inhabitants. Real-time analytics extracts useful information from this voluminous data, provides information to users for decision making and supports feedback mechanisms. In this paper, the effective management of heterogeneous data and real-time analytics on that data are studied. Data management deals with collecting and storing useful information to reduce manual tasks; data management techniques should therefore be consistent and interoperable, and should ensure reusability and integrity. We explain the various architectures that can be used to deploy IoT networks and the various streaming techniques for real-time analytics.
    Keywords: Real time analytics; data management; heterogeneous data; IoT.

  • An Efficient Document Clustering Using Hybridized Harmony Search K-Means Algorithm with Multi view Point   Order a copy of this article
    by Siamala Devi, S. Anto , Siddique Ibrahim S P 
    Abstract: Document clustering is a much-needed process in the data mining field, where large numbers of documents produced under different methodologies are scattered. Meaningful information can be extracted from a group of documents by grouping them effectively, and various previous research concentrates on clustering real-world documents. In previous work, document clustering was done using the Hybridized Harmony Search K-Means (HHKM) algorithm, in which clustering is performed by the K-means algorithm and the cluster centroids are found optimally using the harmony search algorithm. Initially, a hybridization of K-Means and Harmony Search based on concept-based, kernel and weighted-feature-based clustering (CKW HHKM) is adopted to cluster the documents. The problem residing in this method is poor accuracy, with unrelated documents grouped together. To overcome this problem, the Multi view Point HHKM (MP HHKM) approach is introduced, in which clustering can be done accurately: multi-view-point analysis is performed based on the similarity measurement. Exploratory tests were conducted on the Newsgroup and TREC datasets, from which it is clear that the proposed MP HHKM technique outperforms the existing technique with better accuracy.
    Keywords: Clustering; Harmony Search; Multi view Point; Optimal.

  • DNA Coding and RDH Scheme Hybrid Encryption Algorithm Using SVM   Order a copy of this article
    by SHIMA RAMESH MANIYATH, Thanikaiselvan V 
    Abstract: As communication technology has advanced rapidly in recent times, the need for confidential data communication has also arisen. Here, a computationally feasible encryption/decryption algorithm is proposed to secure data using DNA sequences. The principal objective of the DNA algorithm is to reduce the encryption time for big images. In this algorithm, natural DNA sequences are used as the main keys. The image in which secret data is hidden using the Reversible Data Hiding (RDH) technique is encrypted twice before transmission. RDH is an information security technology that is extremely helpful in telemedicine, where authentication is necessary for images captured by robots; it can be used to authenticate the data or the owner of the data. The technique also enables us to embed Electronic Patient Record (EPR) data into a medical image before transmission, to be recovered later on the receiving side. In the proposed scheme, the images are divided block-wise before encryption. Machine learning helps us to design a Support Vector Machine (SVM), from which a classification scheme is obtained to group encrypted and original images separately and to recover the original image from the encrypted image.
    Keywords: Reversible Data Hiding; DNA; Image Encryption; Support Vector Machine; Feature Extraction.
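
    DNA coding schemes conventionally map each 2-bit group of a byte to one of the four bases. A toy round-trip sketch follows; the A/C/G/T assignment and the byte ordering are assumptions for illustration, not the paper's actual coding rule (which additionally keys the cipher with natural DNA sequences):

```python
BASES = "ACGT"  # hypothetical 2-bit -> base assignment: 00 01 10 11

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases, most significant pair first."""
    return "".join(BASES[(b >> s) & 0b11]
                   for b in data for s in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Invert the encoding: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | BASES.index(base)  # shift in two bits per base
        out.append(b)
    return bytes(out)
```

    Because the mapping is a bijection on 2-bit groups, the encoding is lossless, which is the property a DNA-based cipher layer relies on.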

  • Computation of Testing Approach in Cloud Mobility Service   Order a copy of this article
    by Yuvaraj D, Bazeer Ahamed B, Manikandan V 
    Abstract: At present, a software product is a fundamental component in running many stakeholders' activities. For example, enterprises, for the most part, use cloud services to execute their significant business functionality. However, this functionality can be suspended by just a few interacting input parameters. Such constraints pose a testing challenge to cover the various modes of failure, particularly in assuring cloud applications. One path is to devise a technique that covers input parameter values based on a combinatorial testing approach. This method incorporates every possible combination of test inputs to detect bugs in the System Under Test (SUT). The paper explains how combinatorial covering arrays are used to create broadly comprehensive tests by modelling the features of the tested services using the FeatureIDE module in Eclipse IDE. We then build an input domain model to represent the coverage of the existing mobility service running on the NEMo Mobility cloud platform. Using this model, covering arrays are applied to generate t-way test cases using the IPOG algorithm, as implemented in CiTLab. Finally, the JUnit testing framework uses test stubs to validate the test methods for the generated test cases on the specified service (SUT).
    Keywords: Combinatorial Testing; Input Domain Model; Software Testing; CiTLAB; Cloud Mobility Service.
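
    The t-way idea can be illustrated with t = 2 (pairwise) coverage. The sketch below is a toy greedy covering-array builder, not the IPOG algorithm used in CiTLab; it shows why a covering array needs far fewer tests than the full Cartesian product of parameter values:

```python
from itertools import combinations, product

def pairs_of(test):
    """All 2-way interactions (position, value) exercised by one test."""
    return {(pi, pj) for pi, pj in combinations(enumerate(test), 2)}

def all_pairs(params):
    """Every 2-way interaction the parameter model demands."""
    req = set()
    for (i, vi), (j, vj) in combinations(enumerate(params), 2):
        req |= {((i, a), (j, b)) for a in vi for b in vj}
    return req

def greedy_pairwise(params):
    """Greedily pick tests until every pair is covered (t = 2)."""
    suite, covered, required = [], set(), all_pairs(params)
    while covered != required:
        # pick the candidate test that covers the most uncovered pairs
        best = max(product(*params),
                   key=lambda t: len(pairs_of(t) - covered))
        suite.append(best)
        covered |= pairs_of(best)
    return suite
```

    For three two-valued parameters the full product has 8 tests, while pairwise coverage needs only a handful; IPOG achieves the same goal far more efficiently for large models.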

  • Protection of Mental healthcare documents using sensitivity based Encryption   Order a copy of this article
    by Kalaiselvi Shanmughasundaram, Vanitha Veerasamy, Sumathi V P 
    Abstract: Data security breaches and medical identity theft are growing concerns in the current scenario, and adopting IT services provided under cloud-based technologies further increases the security threats. Several cryptographic techniques exist to protect data, and selecting the appropriate technique increases security while reducing processing cost. The proposed method analyses textual medical documents for the sensitiveness of their content and determines the appropriate cryptographic technique to adopt. As security remains the top concern for cloud adoption, the proposed sensitivity-based encryption improves security and encryption efficiency at a significant level. Experimentation reveals that encryption time complexity is reduced by about 4%.
    Keywords: Encryption; Efficiency; AES; Sensitivity data; Cloud computing.
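
    The abstract does not detail the sensitivity analysis itself; one plausible shape for it is a term-frequency score that drives the choice of cipher strength. Everything below, the term list, the thresholds and the cipher tiers, is hypothetical, intended only to illustrate the idea of matching cryptographic cost to content sensitivity:

```python
# Hypothetical sensitive-term lexicon -- the paper's actual analysis of
# mental-healthcare text is not specified in the abstract.
SENSITIVE_TERMS = {"diagnosis", "medication", "suicidal", "therapy", "ssn"}

def sensitivity_score(document: str) -> float:
    """Fraction of words that are sensitive terms."""
    words = document.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,;:") in SENSITIVE_TERMS)
    return hits / len(words)

def choose_cipher(document: str) -> str:
    """Map sensitivity to a cipher configuration (illustrative policy)."""
    score = sensitivity_score(document)
    if score >= 0.10:
        return "AES-256"      # highly sensitive: strongest setting
    if score > 0.0:
        return "AES-128"      # some sensitive content
    return "none"             # nothing sensitive: skip costly encryption
```

    The reported ~4% reduction in encryption time would come from the last two branches: documents with little or no sensitive content avoid the strongest (and slowest) cipher configuration.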

  • Using Augmented Reality to Support Children With Dyslexia   Order a copy of this article
    by Majed Aborokbah 
    Abstract: This paper presents the use of an interactive augmented reality interface to assist and support children with dyslexia, one of the most common learning disabilities in the world. Dyslexia is a literacy-based learning difficulty that mainly affects reading, writing, speaking, short-term memory and spelling. Many people, perhaps as many as 15-20% of the population as a whole, have some of the symptoms of dyslexia. This paper introduces case studies with different Arabic-language learning scenarios designed on Human Computer Interaction (HCI) principles, so that meaningful virtual information is presented to dyslexic children in an interactive and compelling way. Smartphones are considered potentially valuable learning tools due to their portability, accessibility and pervasiveness. The blending of technology and education is growing rapidly and becoming ever more popular, and Augmented Reality (AR) is a recent example of a technology that has been brought into the educational field. This work aims to integrate mobile technology and AR methods to improve dyslexic children's (DC) academic performance, concentration and short-term memory. The design process includes the following steps: identify the research problem and determine the requirements for overcoming dyslexia problems; carefully collect data from different sources; and use the collected data to construct the target product based on a prototyping methodology. As an outcome, it will contribute to improving the learning and basic skills of children with dyslexia.
    Keywords: learning disabilities; learning tools; augmented reality.

  • MOBILITY OF SINK BASED DATA COLLECTION PROTOCOL (MSDCP) FOR ENERGY BALANCING IN WSN   Order a copy of this article
    by LALITHA THAMBIDURAI, SaravanaKumar R 
    Abstract: A sensor node is the significant part of a wireless sensor network. Sensor nodes have various roles in a network, including sensing, data storage, data processing and routing. A cluster is an organisational element of a wireless sensor network; the demanding environment of such networks makes it essential to break them down into clusters to simplify responsibilities such as communication. Cluster heads, the leaders of the clusters, have a greater data rate than the other cluster members and are frequently needed to coordinate activities within the cluster. These activities include, but are not limited to, data aggregation and maintaining the membership of a cluster. The base station sits at the top level of the organised wireless sensor network and creates the communication link between the sensor network and the end user. The data in a sensor network can be used for an enormous variety of applications; a typical application makes use of network data over the internet through a personal digital assistant or desktop computer. This paper contributes a mobility-based reactive protocol named Mobility of Sink based Data Collection Protocol (MSDCP). In this protocol, sensors with high energy and the maximum quantity of information are picked as cluster heads, which gather data from the common nodes in their clusters. The data is stored until the mobile sink comes within the transmission area of a cluster head and requests the gathered data; once the request is received, the cluster head forwards the data to the mobile sink.
    Keywords: WSN; MSDCP; Transmission Area; Cluster Head.
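
    The cluster-head rule, picking sensors with high energy and a large amount of buffered data, and the sink-in-range trigger can be sketched as below. The product scoring and the Euclidean range check are illustrative assumptions; MSDCP's exact criteria are described only qualitatively in the abstract:

```python
def select_cluster_heads(nodes, k):
    """Pick the k nodes with the highest (energy x buffered-data) score.

    `nodes` maps node id -> (residual_energy, buffered_data_units).
    The product scoring rule is an assumption for illustration.
    """
    ranked = sorted(nodes, key=lambda n: nodes[n][0] * nodes[n][1],
                    reverse=True)
    return ranked[:k]

def heads_in_range(heads, positions, sink, radius):
    """Cluster heads whose distance to the mobile sink is within `radius`.

    Only these heads receive the sink's request and forward their data.
    """
    return [h for h in heads
            if ((positions[h][0] - sink[0]) ** 2
                + (positions[h][1] - sink[1]) ** 2) ** 0.5 <= radius]
```

    A head outside the radius simply keeps buffering, which is the reactive, store-until-requested behaviour the protocol describes.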

  • Fuzzy-C means Segmentation of Lymphocytes for the Identification of the Differential Counting of WBC   Order a copy of this article
    by Duraiswamy Umamaheswari 
    Abstract: In the domain of histology, measuring the population of White Blood Cells (WBC) in blood smears helps to recognize destructive diseases. Standard tests performed in hematopathological laboratories by human experts on blood samples from precarious cases such as leukemia are time-consuming, less accurate, and depend entirely upon the expertise of the technicians. In order to gain faster analysis and accurate partitioning at cell clumps, an algorithm is proposed in this paper that automatically counts the lymphocytes present in peripheral blood smear images containing Acute Lymphoblastic Leukemia (ALL). It performs lymphocyte segmentation by Fuzzy C-Means clustering (FCM). Afterwards, neighbouring and touching cells in clumps are individuated by the Watershed Transform (WT), and morphological operators are applied to bring the cells into an appropriate format for feature extraction. The extracted features are thresholded to eliminate regions other than lymphocytes. The algorithm achieves 98.52% accuracy in counting lymphocytes over 80 blood smear image samples of the ALL-IDB1 dataset.
    Keywords: Fuzzy c-means; medical image processing; morphology; segmentation; watershed; WBC count; leukemia.
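
    The FCM step can be sketched with the standard fuzzy c-means updates (fuzzifier m, alternating membership and centre updates). This is the generic clustering algorithm only; the paper couples it with watershed splitting of clumps and morphological post-processing:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on points X (n x d). Returns centres, U.

    Standard FCM updates; a sketch of the clustering step only.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                          # fuzzified memberships
        centres = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)            # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)           # standard membership update
    return centres, U
```

    For segmentation, X would hold per-pixel colour or intensity features, and the membership matrix U gives the soft assignment that is later hardened and split by the watershed step.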

  • A New Venture to Image Encryption using Combined Chaotic System and Integer Wavelet Transforms   Order a copy of this article
    by Subashanthini S, Pounambal Muthukumar 
    Abstract: In this digital era, securing multimedia information is receiving its due concern apart from securing textual data. Securing images by utilising the integer wavelet transform is the chief interest of the proposed work. This research explores the use of reversible Integer Wavelet Transforms (IWT) for designing a robust image encryption algorithm, and helps to seal the gap between image encryption and the existing robust IWTs. Ten different IWTs, namely Haar, 5/3, 2/6, 9/7-M, 2/10, 5/11-C, 5/11-A, 6/14, 13/7-T and 13/7-C, are used for the analysis. Four keys used for image scrambling and image diffusion are generated with the help of the proposed combined chaotic system. Image scrambling is performed only on the approximation coefficients to achieve full image scrambling, and bit XOR is used for image diffusion. The proposed method achieves an NPCR of 99.6246%, a UACI of 33.5829, an entropy of 7.997 and very low correlation values. Simulation results prove that image encryption techniques can be designed with various integer wavelet transforms.
    Keywords: IWT; Chaotic map; Image encryption; Bit XOR encryption; Image scrambling; Entropy.
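
    The three headline metrics, NPCR, UACI and entropy, have standard definitions for 8-bit images and are easy to sketch (these are the usual textbook formulas, not code from the paper):

```python
import numpy as np

def npcr(c1, c2):
    """Number of Pixel Change Rate between two cipher images, in %."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    """Unified Average Changing Intensity for 8-bit images, in %."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

def entropy(img):
    """Shannon entropy, in bits, of an 8-bit image's histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                     # ignore empty bins (0 log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

    A strong cipher is expected to push NPCR toward ~99.6%, UACI toward ~33.46% and entropy toward 8 bits, which is what the reported figures approach.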

  • Programming and Epic Based Digital Storytelling Using Scratch   Order a copy of this article
    by Yamunathangam D 
    Abstract: Storytelling is a powerful tool for imparting traditional and cultural values to children, but the traditional storytelling practised by our ancestors has declined. Digital storytelling has emerged as its successor, and this modern method follows strategies similar to classical storytelling. Digital storytelling has begun its evolution in the teaching and learning process and has emerged as an excellent tool for engaging teachers and their students: middle school students use various digital storytelling environments to learn a programming language. In this paper, an Epic Based Digital Storytelling (EBDS) pedagogy that uses Scratch to teach a programming language is discussed, and the various aspects of using EBDS in education are presented.
    Keywords: Epic Based Digital Storytelling; pedagogy; Scratch; team based learning; Programming.

Special Issue on: Cloud Computing Issues and Future Directions

  • Cloud Resource Management using Adaptive Firefly Algorithm and Artificial Neural Network   Order a copy of this article
    by S.K. Manigandan, Manjula S., Nagaraju V., Tapas Bapu B R, D. Ramya 
    Abstract: There has been a steady rise in the popularity of the cloud computing paradigm over recent years. Cloud computing can be characterised in basic terms as a platform for providing distributed computing resources. The computing resources may include storage, bandwidth, memory space, processing elements, and so on. These resources are rented to clients using the pay-per-use model. The demand for resources is not static: resources can be requested on demand thanks to the growing internet facility. Several factors are critical for the success of a cloud framework, including availability, reliability and scalability, and these metrics differ depending on the perspective: cloud clients want minimal response time and cost, while the cloud provider focuses on achieving efficient allocation of cloud resources and minimising maintenance costs. Resource management is the practice of provisioning and managing cloud resources efficiently; it also provides the techniques to provision resources, schedule jobs and balance loads. This work provides a resource management technique for efficient provisioning of resources and scheduling of jobs for static and dynamic cloudlet requests.
    Keywords: Cloud computing; Cloudlet; Adaptive Firefly Algorithm; Artificial Neural Network.
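
    The adaptive firefly variant and its ANN coupling are not specified in the abstract, but the base metaheuristic is standard: dimmer fireflies move toward brighter ones with an attractiveness that decays with distance, plus a cooling random walk. A generic minimisation sketch (not the paper's adaptive algorithm):

```python
import numpy as np

def firefly_minimise(f, dim, n=15, iters=60, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=1):
    """Basic firefly algorithm: dimmer fireflies move toward brighter ones."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n, dim))
    light = np.array([f(p) for p in pos])      # lower cost = brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:        # j is brighter: move i to j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness
                    pos[i] += (beta * (pos[j] - pos[i])
                               + alpha * rng.uniform(-0.5, 0.5, dim))
                    light[i] = f(pos[i])
        alpha *= 0.97                          # cool the random walk
    best = int(np.argmin(light))
    return pos[best], float(light[best])
```

    In a scheduling setting, each firefly would encode a candidate VM/cloudlet mapping and f would score makespan or cost; the sphere function below is only a smoke test.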

  • AN INTELLIGENT CLOUD ASSISTANCE FOR HEALTHCARE SECTORS   Order a copy of this article
    by Jayanthiladevi A, Aithal P.S., Krishna Prasad K, Nandha Kumar K.G., Manivel Kandasamy 
    Abstract: E-healthcare systems have been seen as dramatically facilitating healthcare monitoring, earlier intervention, disease modelling and evidence-based medical consultancy using medical image feature extraction and text mining. Due to the resource constraints of wearable devices, it is essential to outsource the frequently generated healthcare information of individuals to cloud systems. Regrettably, handing both computation and storage to an untrusted entity raises a series of privacy and security crises. The prevailing approaches concentrate significantly on finely tuned privacy-preserving statistical medical text analysis and access, which barely considers the dynamic fluctuation of health conditions or the analysis of medical images. In this work, an effective and secure privacy-preserving dynamic medical modelling scheme is proposed as a healthcare service with a cloud-based e-healthcare solution, offering automated functions to locate nearby hospitals for proper treatment during disorder conditions. Predictive analysis over individual health information is performed using an adaptive bagging model to alert physicians and nurses during a health disorder. In emergency conditions, alerts to nearby healthcare centres are triggered and appropriate treatment is provided based on the individual's historical information, obtained from the unique ID produced by the healthcare service. Simulation is carried out in the MATLAB environment, and the performance analysis shows the proposed model to be superior to prevailing approaches.
    Keywords: Healthcare; cloud computing; Healthcare services; cloud assistance; alert triggering.

  • Cloud Resources Allocation for Critical IaaS Services in Multi-Cloud Environment   Order a copy of this article
    by Driss Riane, Ahmed Ettalbi 
    Abstract: In this paper, we propose new algorithms to allocate cloud resources for a composition of IaaS services in a multi-cloud environment. First, we use the Gomory-Hu transformation to identify the critical components of the user request. Second, the proposed algorithms use computing and networking costs to select the most suitable clouds to host the critical components. Simulation results are presented to evaluate the performance of our algorithms.
    Keywords: Cloud Computing; Interoperability; Cloud Networking; IaaS Composition; Optimization; Multi-Cloud Computing.
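
    The Gomory-Hu step is beyond a short snippet, but the second step, choosing a host cloud per critical component from computing and networking costs, can be illustrated. The greedy placement order and the additive cost model below are assumptions for illustration, not the authors' exact algorithm:

```python
def select_clouds(components, compute_cost, net_cost, placement=None):
    """Greedily host each critical component on the cheapest cloud.

    compute_cost[cloud][comp] is the hosting cost; net_cost[c1][c2] is the
    pairwise networking cost between clouds. Components are placed in the
    given order, each charged its compute cost plus the network cost to
    the clouds already chosen.
    """
    placement = dict(placement or {})
    for comp in components:
        def total(cloud):
            return (compute_cost[cloud][comp]
                    + sum(net_cost[cloud][c] for c in placement.values()))
        placement[comp] = min(compute_cost, key=total)
    return placement
```

    The trade-off the abstract describes is visible here: a cloud that is cheap for one component may still lose if its network cost to already-placed components is high.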

  • A MODEL-DRIVEN DEVELOPMENT PATTERN FOR MOBILE SERVICES IN CLOUD OF THINGS   Order a copy of this article
    by Prince Mary, S.A.I. SREE NARLA, P. PRASANNA LAKSHMI 
    Abstract: Cloud computing has seen recent advances in storing and processing information on servers away from the end user. Numerous users use the services offered by the cloud for various applications: some use cloud platforms to build their own applications, while others use entire applications running on the cloud. The enormous amount of information being processed in the cloud environment needs to be secured in all aspects, as any data leak could lead to serious problems. To overcome this challenge, a mobile-driven model for securing information in the cloud using a secured key is proposed. The key is shared only with authenticated users, never with unauthorized ones, and the services are available only when the user is verified with the unique password generated for them. Various parameters were evaluated for the proposed model, and it was observed to perform better than traditional models, with minimal computation time and highly secure key generation.
    Keywords: Cloud; Security; Authentication; End User; Services; Storage; Privacy.

  • AUTHENTICATION FOR CLOUD COMPUTING SYSTEM THROUGH SMARTCARD   Order a copy of this article
    by Albert Mayan John 
    Abstract: Cloud computing is a platform with wide consequences for information technology systems. Expenditure can be reduced significantly with the pay-per-use method, and society is shifting from personal software and hardware systems towards this platform, which offers high flexibility. The main threat to cloud computing is security. Most cloud providers secure their systems against external attackers by using secured connections and firewalls. But as soon as data is sent to an outside party, information secrecy becomes the main problem: illegal users accessing the resources of a cloud server can steal the data of legal users, and legal users may access an illegal server. To protect users' privacy, authorized cloud users need to verify the cloud server when they access its service resources, and the cloud server needs to check users' login requests to ensure the users are legal. In this system, we propose a virtual smart card ID produced with clutter techniques, which is used to prevent fake cloud servers and also to protect cloud data from hackers.
    Keywords: Cloud computing; Virtual smartcard ID; Clutter techniques.

  • Semi Convergent Matrix Based Neural Predictive Classifier for Big Data Analytics and Prediction in Cloud Services   Order a copy of this article
    by Rajasekar R, Srinivasan K 
    Abstract: Big data analytics is a technique for gathering, organizing and analyzing huge sets of records to discover patterns or useful information. Recently, many research works have been designed for big data analytics of climate data. To overcome the remaining challenges, a Semi Convergent Matrix based Neural Predictive Classifier (SCM-NPC) is proposed that provides productive big data computation and data sharing in cloud services. Initially, the SCM-NPC technique constructs a semi convergent matrix over distributed big data to improve the search accuracy for user-requested information. Next, it uses the neural predictive classifier to improve the prediction rate for climate data on cloud big data. Finally, it applies a MapReduce function to the neural classes, providing efficient predictive analytics of climate conditions on cloud big data. The proposed SCM-NPC system is evaluated on parameters such as prediction rate, computation time and classification accuracy using Amazon EC2 cloud big data sets. The results show that the SCM-NPC technique increases the prediction rate for climate conditions on big data and also reduces the computation time compared with state-of-the-art works.
    Keywords: Big Data; Cloud Services; Semi Convergent Matrix; climate data conditions; Neural Predictive Classifier; MapReduce function.
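    The MapReduce stage of such a pipeline can be sketched in miniature: a map phase that emits (condition, 1) pairs from climate records and a reduce phase that aggregates them. The record fields and condition labels below are invented for the example; the paper's classifier and matrix construction are not reproduced.

```python
from collections import defaultdict

# Map phase: tag each climate record with a condition label (the rule here
# is a stand-in for the neural predictive classifier's output).
def map_phase(records):
    for rec in records:
        condition = "rainy" if rec["humidity"] > 80 else "dry"
        yield (condition, 1)

# Reduce phase: aggregate the per-record pairs into per-condition counts.
def reduce_phase(pairs):
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

records = [{"humidity": 85}, {"humidity": 60}, {"humidity": 90}]
counts = reduce_phase(map_phase(records))
assert counts == {"rainy": 2, "dry": 1}
```

    In a real deployment the map and reduce functions run on separate cluster nodes, which is what makes the analytics scale to cloud-sized data.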

  • Implementation of Multicloud Strategies for Healthcare Organizations to Avoid Cloud Sprawl   Order a copy of this article
    by Mohamed Uvaze Ahamed Ayoobkhan, ThamaraiSelvi R, Jayanthiladevi A 
    Abstract: Healthcare organisations are being overwhelmed by data, devices, apps and disjointed, multiple cloud services. A well-managed multicloud can provide a unified cloud model that offers greater control and scalability at reduced cost. Healthcare multicloud is becoming an appealing path for organisations to manage the explosion of digital healthcare information from digital health, the Internet of Things, connected devices and healthcare applications. As more healthcare organisations embrace cloud computing, they are increasingly turning to a mix of public, private and hybrid cloud services and infrastructure.
    Keywords: Multicloud; Healthcare; HybridCloud; Cloud Environment.

  • Decentralized Erasure Code for Hadoop Distributed Cloud File Systems   Order a copy of this article
    by Mohana Prasad, Kiriti S., V.T.Sudharshan Reddy 
    Abstract: The Hadoop distributed file system (HDFS) was developed for the data-oriented model, in which data archival also covers the removal of inactive data. Data archival in HDFS faces security issues, such as primary data being deleted or moved elsewhere. To address these issues, we have developed a unique architecture for securing data movement in cloud-based HDFS that also predicts inactive data for future use. The administrator manually performs three types of activities, namely configuration, integration and recovery, to detect malicious behaviour in the distributed system; if any unwanted data enters, the system is configured to run security-level programs. Cloud-based HDFS gives users convenient data access, and we have chosen this area to reduce both attacks and inconvenience. Unwanted intermediate activities such as data stealing, data moving, data altering and data interception can spoil any network. To reduce them, we propose security measures in the system design, achieving good security levels and communication speed in comparison with existing systems.
    Keywords: HDFS; Security; Admin; Integration; Performance; Data Communication.

  • ALLOCATION OF CONFERENCE HALL BOOKING IN ANDROID APPLICATION USING CLOUD   Order a copy of this article
    by Mohana Prasad, A. Sai Eswar, A. Vijaya Manideep 
    Abstract: Every educational institution needs meeting rooms to conduct various events. Typically there is only one conference hall per institution, whether a school or a university, and many departments must share this single hall for their events. Hence there is always a possibility of the hall being reserved by two or more departments on the same day. A timing conflict becomes known to the departments only when the day of the event arrives, by which time it is too late and little time is left for alternative arrangements. The system is developed as an Android application, since many people today use Android. An efficient and easy-to-use application is therefore needed to reserve the hall in advance and make the information accessible to others, so that they can check the hall's status before booking. The process is based on storing the data in the cloud, and users receive notifications by SMS or email when they book the hall.
    Keywords: Hall Booking; Notification; Application.
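    The core of the system described above is a conflict check against cloud-stored bookings. A hypothetical minimal sketch (department names, dates and the one-booking-per-day rule are all assumptions for illustration):

```python
from datetime import date

# In the real system this dictionary would live in cloud storage shared by
# all departments; here it is an in-memory stand-in.
bookings = {}   # date -> department holding the hall

def book(day: date, department: str) -> bool:
    """Reserve the hall for a day, refusing if it is already taken."""
    if day in bookings:
        return False          # conflict: hall already reserved that day
    bookings[day] = department
    return True               # booking stored; SMS/email notification sent here

assert book(date(2020, 3, 14), "CSE")
assert not book(date(2020, 3, 14), "ECE")   # second request the same day fails
assert bookings[date(2020, 3, 14)] == "CSE"
```

    Because the check happens at booking time rather than on the day of the event, the second department learns of the conflict immediately and can arrange an alternative.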

  • Data Set Identification for prediction of Heart Diseases   Order a copy of this article
    by Palguna Kumar B., T.P. Latchoumi 
    Abstract: Over the generations, many techniques have been devised to predict or identify cardiovascular heart disease in advance. Datasets extracted from the UC Irvine (UCI) machine learning repository play a major role in predicting this disease. The extracted clinical datasets are huge, and not all of them are useful for the prediction of heart disease. Techniques have been devised over the decades to overcome this issue, but most are not accurate in making clinical decisions because they do not take a proper dataset as input. This paper focuses on preprocessing the required dataset for predicting heart disease accurately based on clinical decisions: irrelevant data is removed, and patterns that cause heart disease are identified. Finally, the selected datasets are analysed against the UCI repository, which is useful in designing a model that provides accurate results in predicting heart disease.
    Keywords: Data Mining; Genetic Algorithms; Data Preprocessing; Feature selection; Knowledge Discovery Database.
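    The preprocessing step described above, keeping only relevant attributes and dropping incomplete records, can be sketched as follows. The field names are illustrative stand-ins, not the actual UCI heart disease schema.

```python
# Hedged sketch of clinical-data preprocessing: keep only attributes the
# prediction model needs, and discard records with missing values.
RELEVANT = {"age", "chol", "trestbps", "target"}

def preprocess(rows):
    cleaned = []
    for row in rows:
        kept = {k: v for k, v in row.items() if k in RELEVANT}
        # keep only complete records that carry every relevant attribute
        if kept.keys() == RELEVANT and all(v is not None for v in kept.values()):
            cleaned.append(kept)
    return cleaned

raw = [
    {"age": 54, "chol": 239, "trestbps": 130, "target": 1, "id": 7},
    {"age": 61, "chol": None, "trestbps": 140, "target": 0, "id": 8},
]
clean = preprocess(raw)
assert len(clean) == 1 and "id" not in clean[0]
```

    Feature selection (e.g. via the genetic algorithms listed in the keywords) would then operate on this cleaned table rather than the raw extract.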

  • Determining the Effectiveness of Drugs on a Mutating Strain of Tuberculosis Bacteria by using Tuberculosis Datasets under a Secure Cloud based Data Management   Order a copy of this article
    by Rishin Haldar, Swathi Jamjala Narayanan 
    Abstract: Drug-resistant tuberculosis (TB) poses an alarmingly high risk of mortality due to the complex mutations that the bacterial genes undergo in response to anti-tuberculosis drugs. In order to study the gene regions where mutations have occurred in response to a specific drug, association mining, a machine learning technique, was first applied to established datasets to group the individual gene-drug pairs and their corresponding reported mutations. Secondly, a simple yet novel effectiveness factor is proposed that evaluates a gene-drug pair by incorporating both the frequency and the distribution of the mutations in a specific bacterial gene. The proposed factor was generated for a single gene against both single and multiple drugs. As the datasets provide mutations for a specific TB strain, H37Rv, the proposed factor helped in ranking the effectiveness of the anti-tuberculosis drugs for H37Rv. The proposed method can also be applied to other TB strains, subject to the availability of datasets. The datasets, as well as the information generated by the proposed study, can be readily stored in a secure cloud storage system for either public or private access and retrieval.
    Keywords: Drug resistant Tuberculosis; Mtb; Association Mining; gene mutation; drug recommendation.
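    The idea of a factor combining mutation frequency and distribution can be sketched as below. This is one plausible reading, purely for illustration; the paper's actual formula is not reproduced here, and the positions and gene length are invented.

```python
# Illustrative effectiveness factor: frequency = reported mutations per base
# of the gene region, distribution = fraction of distinct positions hit.
# Both rise when a drug provokes many, widely spread mutations.
def effectiveness(mutation_positions, gene_length):
    if not mutation_positions:
        return 0.0
    frequency = len(mutation_positions) / gene_length
    distribution = len(set(mutation_positions)) / gene_length
    return frequency * distribution

# Mutations reported for a hypothetical gene-drug pair on a 100 bp region:
# position 10 is hit twice, positions 45 and 71 once each.
score = effectiveness([10, 10, 45, 71], gene_length=100)
assert 0.0 < score < 1.0
```

    Computing such a score for every gene-drug pair in the dataset yields the per-drug ranking the abstract describes for the H37Rv strain.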

  • Support Vector Machine Model for Performance Evaluation of Intrusion Detection in Cloud System   Order a copy of this article
    by Ved Prakash Mishra, Balvinder Shukla, A.Jayanthila Devi 
    Abstract: Intrusion detection and prevention in real time is becoming a challenge in today's fast-moving digital world, as data and log details grow every minute. In this manuscript, a support vector machine (SVM) model is proposed and implemented that is efficient, quick and able to handle large datasets. The basic idea of the proposed model is derived from the finite Newton method for classification problems. Experimental and comparative studies of the proposed SVM against existing classification algorithms and related studies assess the efficiency of the proposed SVM classification algorithm.
    Keywords: Support Vector Machine; Intrusion Detection; Cloud System.
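    The classification step can be illustrated with a minimal linear SVM trained by hinge-loss sub-gradient descent. Note this is a generic SVM sketch, not the paper's finite-Newton formulation, and the two toy features (e.g. normalised connection count and byte volume, label +1 = intrusion) are assumptions.

```python
# Minimal linear SVM via hinge-loss sub-gradient descent on 2-feature data.
def train_svm(data, labels, lam=0.01, lr=0.1, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:                      # inside margin: hinge gradient
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:                               # outside margin: only shrink w
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

# Toy, linearly separable traffic samples: benign (-1) vs intrusive (+1).
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [-1, -1, 1, 1]
w, b = train_svm(X, y)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1
assert [predict(x) for x in X] == y
```

    A production intrusion detector would train on many more features extracted from logs and packet headers, but the decision rule, the sign of w·x + b, is the same.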

  • Application of Data Mining on Cloud for Lung Cancer Prognosis   Order a copy of this article
    by Juliet Rajan 
    Abstract: Data is growing exponentially with the growing population. Today we build models to predict several diseases, and cancer is one of the major diseases that requires a model to predict it at an early stage. Most of the time, cancer is diagnosed at a later stage, i.e. Stage 4. Diagnosing cancer at Stage 1 can increase the survival rate of the patient by 85%, hence the goal of this article is to predict cancer during Stage 1 itself. Another challenge the article addresses is handling a huge amount of cancer data in order to build a model that performs accurate prediction.
    Keywords: Classification; Gradient Descent; Predictive Model; Support Vector machine; Cloud computing; Machine Learning; Precision; Recall; Classification Accuracy.

  • A Novel Approach Towards Tracing the parents of orphanage children and dead bodies in cloud and IoT Based Distributed Environment by integrating DNA databank with Aadhar and FIR databases   Order a copy of this article
    by Ved Prakash Mishra 
    Abstract: An Aadhar card is a unique and authentic identity card in India that is used as valid identity proof for all types of day-to-day transactions, including sales, purchases, opening bank accounts, air, train and bus tickets, and obtaining the benefits of Government of India and state government schemes. An Aadhar card normally includes a person's fingerprints, thumbprints, iris images (left and right eye) and face image; for differently abled persons, such as blind, deaf and physically handicapped persons, it includes the face, fingerprints and thumbprints. The authors believe that it is the basic right of every child and person to know the names of his or her biological parents. Technology is transforming rapidly and growing at tremendous speed, and much advancement has been achieved in identity verification, information technology and management. However, a great deal of progress is still required in tracing the parents of orphaned children and the relatives of unclaimed, decomposed dead bodies. The parents of many orphaned children are alive and desperately searching for them, yet these children are unable to meet their parents. In this research work, the authors propose a novel technique in which the Aadhar database is integrated with the short tandem repeat (STR) part of a DNA database and with first information reports (FIRs) lodged online at different police stations, to trace the parents of orphaned children and unclaimed decomposed dead bodies using cloud computing, Internet of Things (IoT), spiral search and blockchain technologies.
    Keywords: Block-Chain Technologies; Cloud Computing Systems; Internet of Things; Orphanage Children; Short Tandem Repeat Part of DNA Sequence; Spiral Search.

Special Issue on: Cloud Computing and Networking for Intelligent Data Analytics in Smart City

  • Feature selection using evolutionary algorithms: A data-constrained environment case study to predict tax defaulters   Order a copy of this article
    by Chethan Sharma, Manish Agnihotri, Aditya Rathod, Ajitha Shenoy K.B 
    Abstract: Multiple features are encountered while performing analysis on income tax data, owing to its background in finance. In data-constrained environments, feature selection techniques play an essential role in making such data usable by modern machine learning algorithms. In this paper, a novel method is introduced to predict tax defaulters from the given data, using an ensemble of feature-reduction techniques in the first step and feeding the selected features to a proposed neural network. The feature reduction step compares various methods, including the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), to determine the best approach. After feature reduction, the second stage experiments with the architecture of a neural network for appropriate predictions. The proposed case study on PSO and ACO indicates a substantial improvement in tax defaulter prediction when using the feature subset selected by PSO. This work also demonstrates the positive influence of using Linear Discriminant Analysis (LDA) to perform dimensionality reduction of the unselected features in order to preserve the underlying patterns. The best results were achieved using PSO for feature reduction, with an accuracy of 79.2%, a 5.4% improvement over existing works.
    Keywords: Evolutionary Algorithm; Particle Swarm Optimization; Ant Colony Optimization; Linear Discriminant Analysis; Neural networks; Tax defaulter prediction.
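    Binary PSO for feature selection, the approach the case study favours, can be sketched as below. The fitness function here is a stand-in that rewards picking two designated "informative" features and penalises subset size; in the paper, fitness would be the downstream classifier's accuracy on the selected subset.

```python
import math
import random

random.seed(1)
N_FEATURES, INFORMATIVE = 6, {1, 4}

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.1 * sum(mask)       # reward hits, penalise extra features

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_pso(n_particles=10, iters=40):
    swarm = [[random.randint(0, 1) for _ in range(N_FEATURES)]
             for _ in range(n_particles)]
    vel = [[0.0] * N_FEATURES for _ in range(n_particles)]
    pbest = [list(p) for p in swarm]
    gbest = list(max(swarm, key=fitness))
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(N_FEATURES):
                r1, r2 = random.random(), random.random()
                # pull each bit towards the particle's and the swarm's best
                vel[i][d] += 2 * r1 * (pbest[i][d] - p[d]) + 2 * r2 * (gbest[d] - p[d])
                p[d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            if fitness(p) > fitness(pbest[i]):
                pbest[i] = list(p)
            if fitness(p) > fitness(gbest):
                gbest = list(p)
    return gbest

best = binary_pso()
assert len(best) == N_FEATURES and all(bit in (0, 1) for bit in best)
```

    On this toy fitness the swarm typically converges to a mask covering the informative features; with a classifier-accuracy fitness, the same loop yields the feature subset fed to the neural network.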

  • Rule based Translation Surface for Telugu Nouns   Order a copy of this article
    by Prasad Rao P, Phani Kumar S 
    Abstract: This work proposes a noun analyzer program, a full-length translation surface for Telugu nouns. It compiles and analyses nouns of the Telugu language into their roots and their constituent postpositions along with their grammatical properties. Grammatical properties at the word level include determining number, gender, person and other associated features. The method employed in this paper is based on the Suffix-Order Pattern technique, which is used to develop a computational model for the morphological analysis of Telugu nouns. The Suffix-Order Pattern is a new technique in linguistic computation. The current work is an implementation of a word-level translation surface for Telugu: a Java-based application developed for analysing Telugu nouns. The procedure is theoretically justified and has been executed in practice for a variety of Telugu nouns. The present proposal also demonstrates an XML-based repository for the root words of the different categories of Telugu nouns, an optimal organisation of a linguistic database whose performance in the computational environment is highly satisfactory.
    Keywords: Natural language processing; Linguistic computation; Suffix order pattern; Translation surface; Morphology; Machine translation.

Special Issue on: Cloud Computing for Sustainable Intelligent Communications

  • Energy Saving Slot Allocation Based Multicast Routing in Cloud Wireless Mesh Network   Order a copy of this article
    by JeyaKarthic M, SUBALAKSHMI N 
    Abstract: In recent days, cloud computing has gained significant attention among researchers as it offers extensive technical support. Many techniques have been considered for cloud computing, but parameters such as planning, security and timing issues affect its performance. Previous methods focus on work programs based on priority activities, considering various traits to plan proposals for new task planning, workloads and safety planning. Several approaches have been suggested to provide practical guidelines and protocols for delivering client requests to cloud endpoints. This work proposes a method for virtual cloud planning in a single or multiple data centres using simple protocols. A new Load Balanced Amortized Multi-Scheduling (LBAM) algorithm is proposed, which assigns the next cloud task based on the active load on the cloud system. The proposed system calculates a multi-attribute weight for each workplace and introduces an application source that adapts to the application virtualisation environment. For example, there may be many data items, each with access to different resources, but their importance depends on support and usage. The system calculates the cloud data weight based on the allocation of data and its consequence for the processing efficiency of the cloud machine. This works very well in most configurations, as the virtual machines are able to process load equally, quickly and with efficient memory management. A detailed comparative analysis is made with existing methods, namely Service Level Agreement (SLA) based scheduling, the Fast Parallel Grid Generation Algorithm (FPGGA) and the DJ-Scheduling (DJS) algorithm. The results obtained indicate that the LBAM model is superior to the compared methods in several respects.
    Keywords: Cloud Computing; Cloud services; Data center; Distributed system; Parallel processing; Load balancing; Scheduling; Virtual Machine.
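    The core idea, assigning each incoming task to the VM with the lowest weighted active load, can be sketched as follows. The attribute weights (0.6 CPU, 0.4 memory) and the load model are assumptions for illustration; the paper's multi-attribute weighting is more elaborate.

```python
# Illustrative load-aware assignment in the spirit of LBAM: each task goes
# to the VM with the smallest weighted load, and the load is updated.
def pick_vm(vms):
    """Return the id of the VM with the smallest weighted load."""
    def weighted_load(vm):
        return 0.6 * vm["cpu"] + 0.4 * vm["mem"]   # assumed attribute weights
    return min(vms, key=weighted_load)["id"]

def assign(tasks, vms):
    placement = {}
    for task, cost in tasks:
        target = pick_vm(vms)
        placement[task] = target
        for vm in vms:                   # update the chosen VM's active load
            if vm["id"] == target:
                vm["cpu"] += cost
                vm["mem"] += cost / 2
    return placement

vms = [{"id": "vm1", "cpu": 0.1, "mem": 0.1},
       {"id": "vm2", "cpu": 0.7, "mem": 0.5}]
placement = assign([("t1", 0.2), ("t2", 0.2), ("t3", 0.2)], vms)
assert placement["t1"] == "vm1"          # the lightly loaded VM is chosen
```

    Updating the load after every assignment is what balances the queue: once vm1's weighted load overtakes vm2's, subsequent tasks flow to vm2.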

  • Fog Computing based Public E-Service application in Service Oriented Architecture   Order a copy of this article
    by Mohamed M. Abbassy, Waleed M. Ead 
    Abstract: The fog computing framework provides the tools for handling public e-service resources through a real-world testbed in a fog environment. The framework also facilitates communication between equipment, harmonisation of fog tools and devices, deployment of public e-services and the delivery of dynamic resources in the fog landscape. It is also able to respond to various system events, such as device access, device failure and device overload. In the framework evaluation, event management is assessed along with other significant measures, e.g. costs, deployment times and arrival service patterns. As a consequence, the architecture offers the tools needed to deal with the complexities of the fog environment and provides faster public e-service running times and significant cost benefits. The present paper outlines the existing infrastructure and proposes a different model of public e-service infrastructure: a coordinated effort of fog computing with an intelligent communication protocol, incorporating a Service Oriented Architecture (SOA) and ultimately integrating with an agent-based SOA. The proposed framework will be capable of interchanging data by decomposing quality of service (QoS) reliably and methodically, with low latency, less bandwidth and support for heterogeneity in less time.
    Keywords: Fog Computing Framework; Agent-based SOA; intelligent protocol of communication; main functionalities; fog landscape; arrival service patterns; public e-service; fog environment; quality of service (QoS).

  • A Novel Task Assignment Policies using Enhanced Hyper-Heuristic Approach in Cloud   Order a copy of this article
    by Krishnamoorthy N, Venkatachalam K, Manikandan R, Prabu P 
    Abstract: Cloud computing plays a vital role in all fields of today's business. The processor-sharing server farm is one of the most widely used server farm types in the cloud environment. The key challenge for such server farms is to provide an optimal scheduling policy for processing computational jobs in the cloud. Many scheduling policies have been introduced and deployed by existing approaches to build an optimal cloud environment. Heuristic approaches such as meta-heuristics and hyper-heuristics have been the most frequently used scheduling algorithms in past years, but they work well only for limited types of tasks and resources in processor-sharing server farms. In the proposed system, novel task assignment policies are introduced by enhancing the hyper-heuristic approach for the 'low task, high resource' case in the cloud environment. The results of the proposed approach are compared with existing approaches, and its performance is evaluated. As a result, the proposed enhanced hyper-heuristic approach performs well for processor-sharing server farms in the cloud environment.
    Keywords: Server Farms; Hyper-Heuristic; Makespan; Cloud Computing.

  • A New method for human Activity Identification using Convolutional Neural Networks   Order a copy of this article
    by Prakash P S, Balakrishnan S, Venkatachalam K, Saravana Balaji B 
    Abstract: Recent applications in life-logging, body fitness monitoring and health tracking use the mobile sensors embedded in smartphones to identify human activities by gathering data on day-to-day human behaviour. Human activity identification is becoming a major challenge, partly because of the wide variety of human activities and the major variation in how well a given activity can be performed. Separating human activities using handcrafted features is a critical task. This paper proposes novel techniques that automatically extract discriminative dimensions for human activity identification. In particular, we design a technique based on Convolutional Neural Networks (CNNs), which capture the dependency and scale invariance present in the signal. A deep convolutional neural network (DCNN) consists of many neural network layers: two types of layers, convolutional and pooling, are typically alternated; the depth of each filter increases from left to right in the network; and the last stage is typically made up of one or more fully connected layers. Data for three activities, walking, running and remaining still, were collected from smartphone sensors. The x, y and z axis readings were converted into a column vector of magnitude values, which was used as the input for training the CNN. Experimental results show that the CNN-based method achieves 93.67% accuracy, compared with 89.20% for a baseline random forest approach.
    Keywords: Convolutional Neural Network; People Activity Identification; Random Forest.
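    The preprocessing step the abstract describes, collapsing the x, y, z accelerometer axes into a magnitude column vector for the CNN input, can be sketched directly (the sample readings below are invented):

```python
import math

# Combine per-sample (x, y, z) accelerometer readings into a single
# magnitude series, the column vector fed to the CNN.
def magnitude_series(samples):
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

# Two invented samples of a phone held roughly upright while walking
# (gravity, about 9.8 m/s^2, dominates the y axis).
walking = [(0.1, 9.6, 0.3), (0.4, 9.9, 0.1)]
mags = magnitude_series(walking)
assert len(mags) == 2
assert abs(mags[0] - math.sqrt(0.01 + 92.16 + 0.09)) < 1e-9
```

    Using the magnitude rather than the raw axes makes the input largely invariant to how the phone is oriented in the pocket, which is one reason this representation suits activity recognition.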

  • PUF Based on Chip Comparison Technique for Trustworthy Scan Design Data Security against Side Channel Attack   Order a copy of this article
    by Shiny M I, Nirmala Devi M 
    Abstract: In all fields of computational technology, including cloud computing, cryptographic hardware is vulnerable to different types of external threats. Scan-based testing is a well-known tool for testing integrated chips, but at the same time it can help attackers extract secret keys from the chip: the scan hardware acts as a platform for attackers to hack the chip's secret data. Most existing solutions are based on altering the conventional scan structure with more complex designs that focus only on security and violate testability. In this paper, we propose a dynamic reconfiguration architecture using an embedded PUF design, which protects the chip from brute-force attacks and Hamming distance based attacks with an optimal deployment configuration. Additional on-chip comparison and masking are also used to enhance security. The experimental results are evaluated on standard benchmark circuits.
    Keywords: Security; DFT; Physical Unclonable Function (PUF); On-chip Comparison; Hardware as a Service; Automatic Reconfiguration.
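    The on-chip comparison with masking can be illustrated in software: a regenerated PUF response is accepted if its masked Hamming distance from the enrolled response stays below a noise threshold, so the raw response bits never need to leave the chip. Widths, threshold and the noise pattern below are illustrative assumptions, not the paper's parameters.

```python
# Hamming distance between two bit vectors encoded as integers.
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def authenticate(stored, regenerated, mask, threshold=3):
    """Accept if the masked responses differ in at most `threshold` bits
    (PUF responses are slightly noisy across power-ups)."""
    return hamming(stored & mask, regenerated & mask) <= threshold

stored = 0b1011_0110_1100_0101                 # enrolled 16-bit PUF response
noisy  = stored ^ 0b0000_0100_0000_0001        # 2 flipped bits: typical noise
mask   = 0xFFFF                                # mask hides bits when needed

assert authenticate(stored, noisy, mask)                 # genuine chip passes
assert not authenticate(stored, ~stored & 0xFFFF, mask)  # forgery fails
```

    Performing this comparison inside the chip, rather than shipping responses out through the scan chain, is what closes the side channel the abstract describes.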

  • Parallel Progressive Based Inductive Subspace and Fuzzy Based Firefly Algorithm for High Ensemble Data Clustering   Order a copy of this article
    by Karthika Dhanasekaran, Kalaiselvi K 
    Abstract: Recently, attention has focused on the progressive inductive semi-supervised clustering ensemble setting, where the techniques include random hyperspace and Constraint Propagation (CP) methods. With the growth and digitalisation of every domain, huge datasets are being created rapidly, and clustering such datasets is difficult for conventional sequential clustering methods because of the high computation time. Distributed parallel processing methods are consequently useful for meeting the quality and scalability constraints of clustering huge datasets, and several authors have introduced new parallel clustering methods to solve this problem. In this work, the Parallel Incremental Support Semi-Supervised Subspace Clustering Ensemble (PIS3CE) algorithm is proposed, based on MapReduce, an easy yet powerful parallel processing model. Based on micro-clusters and correspondence relations, we introduce a clustering method that can be straightforwardly parallelised in MapReduce and completed in relatively few MapReduce rounds. In the PIS3CE procedure, the centroid choices are made using an Improved Support Vector Machine (ISVM) classifier. Ensemble cluster normalisation is done via a Fuzzy based Firefly Algorithm (FFA), and the normalised cut algorithm is used to group the high-dimensional data. Outcomes demonstrate that the proposed PIS3CE framework performs well on three benchmark examples with high-dimensional vector spaces and enhances the results with high accuracy.
    Keywords: Clustering Incremental Ensemble Member Chosen (IEMC); Improved Support Vector Machine (ISVM); Constraint Propagation (CP); Parallel Incremental Support Semi-Supervised Subspace Clustering Ensemble (PIS3CE); MapReduce; Cluster ensemble; semi-supervised clustering and Data mining.

  • Secured File Transmission in Knowledge Management-Cloud (KM-C)   Order a copy of this article
    by Jayashri Nagasubramaniam, Kalaiselvi K. 
    Abstract: The success of an organisation depends primarily on continuous investment in learning and on acquiring new knowledge that creates new business and enhances performance and techniques. Knowledge management should therefore contribute as much as, or even more than, other activities to an organisation's objectives. As modern methodologies and paradigms emerge, new techniques and efforts are needed to align businesses with them, specifically in the area of knowledge management. Nowadays the most popular and efficient paradigm is cloud computing, because of the low cost, time and effort needed to fulfil software development requirements; it also provides excellent means to collect and redistribute knowledge. This paper discusses the risk factors in utilising cloud computing for knowledge management systems, together with their solutions.
    Keywords: Knowledge Management; Transmission; Risk Factor; Cloud Computing; Business and Technology; Cloud Computing in Knowledge; Cloud Computing Protection; Data and DES Encryption; Organizations' Objectives; Service-oriented Architecture (SOA); Cloud Computing (CC).

  • An Effective Mechanism for Ontology based Cloud Service Discovery and Selection using Aneka Cloud Application Platform   Order a copy of this article
    by Manoranjan Parhi, Binod Kumar Pattanayak 
    Abstract: Nowadays cloud computing is growing exponentially, and the preference for cloud services is increasing day by day because of their cost-saving benefits. These services mostly appear significantly identical in their functionality, differing only in key attributes such as storage, computational power and price. To date there is no uniform specification for defining a service in the cloud domain; in order to specify identical operations and publish services on their websites, different cloud service providers tend to use completely different vocabularies, which makes requesting a cloud service a challenging task. Hence, a reasoning mechanism is essential for service discovery, one that can resolve the resemblances appearing across different services by inferring over a cloud ontology. In this paper, an effective ontology-based mechanism is proposed for discovering and selecting the most relevant cloud services, using a distributed cloud application platform called Aneka PaaS.
    Keywords: Cloud Computing; Cloud Service Discovery and Selection; Quality of Service (QoS); Cloud Ontology; Aneka Platform as a Service (PaaS).
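    The vocabulary problem the abstract raises can be illustrated with a toy matcher: provider terms are first normalised to shared concepts, then offers are filtered against the request. The synonym table below is a flat stand-in for a real cloud ontology's concept hierarchy, and the service names and attributes are invented.

```python
# Map provider-specific terms onto shared ontology concepts.
SYNONYMS = {"storage": "storage", "disk": "storage",
            "ram": "memory", "memory": "memory"}

def normalise(offer):
    return {SYNONYMS.get(k, k): v for k, v in offer.items()}

def discover(request, offers):
    """Return the names of offers meeting every requested attribute."""
    matches = []
    for name, offer in offers:
        attrs = normalise(offer)
        if all(attrs.get(k, 0) >= v for k, v in request.items()):
            matches.append(name)
    return matches

# Two providers describing equivalent attributes with different vocabularies.
offers = [("svcA", {"disk": 100, "ram": 8}),
          ("svcB", {"storage": 50, "memory": 16})]
found = discover({"storage": 80, "memory": 4}, offers)
assert found == ["svcA"]
```

    Without the normalisation step, the request keyed on "storage" would never match svcA's "disk" attribute, which is precisely the mismatch ontology-based discovery resolves.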

  • Data Centric Redundancy Elimination for Network Data Traffic   Order a copy of this article
    by Sandhya Narayanan, Philip Samuel, Mariamma Chacko  
    Abstract: Network traffic on the internet is a challenging issue due to the increase in internet users; internet traffic now increases exponentially every month, and this heavy traffic makes communication between networks difficult. We propose the Hashing Based Network Resilient Distribution (HBNRD) method to detect and eliminate duplicate data chunks in network-layer packets using a big data processing framework. HBNRD detects similar files transferred through the internet and removes redundant data chunks, resulting in fast communication; the proposed model attains fault tolerance through its resilient distributed approach. This data-centric model can dynamically allocate resources and can detect data chunk repetition during computation. Experiments on the Center for Applied Internet Data Analysis (CAIDA) dataset show that HBNRD improves network performance by reducing internet traffic redundancy by 60%. The data-centric network traffic redundancy elimination model is fast, scalable and resilient.
    Keywords: Network Data Traffic; Resilient Distribution; Hashing; Big Data Analytics; Redundancy Elimination.
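    Chunk-level redundancy elimination of the kind described above can be sketched in a few lines: payloads are split into fixed-size chunks, each chunk is fingerprinted with a hash, and chunks already seen are replaced by a reference. The chunk size, payload and use of SHA-1 are illustrative assumptions, not HBNRD's actual parameters.

```python
import hashlib

CHUNK = 8   # illustrative fixed chunk size in bytes

def dedupe(payload: bytes, seen: dict):
    """Split a payload into chunks; send bytes the first time a chunk is
    seen, and a short fingerprint reference on every repeat."""
    out = []
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        fp = hashlib.sha1(chunk).hexdigest()
        if fp in seen:
            out.append(("ref", fp))          # duplicate: reference only
        else:
            seen[fp] = chunk
            out.append(("data", chunk))      # first occurrence: raw bytes
    return out

seen = {}
first = dedupe(b"HELLO..." * 4, seen)        # four identical 8-byte chunks
assert [kind for kind, _ in first] == ["data", "ref", "ref", "ref"]
```

    The receiver keeps the same fingerprint-to-chunk table, so each "ref" entry is expanded back to its bytes on arrival; only one copy of the repeated chunk ever crosses the network.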

  • Minimizing Power Utilization in Cloud Data Centers Using Optimized Virtual Machine Migration and Rack Consolidation.   Order a copy of this article
    by Hemanandhini I.G, Pavithra R, Sugantha Priyadharshini P 
    Abstract: Cloud computing is a disruptive technology used to maintain computational assets in large data centres accessed over the internet. The cloud delivers on-demand applications and computing resources to its users, and a cloud data centre can host various computing applications that execute for anywhere from a few seconds to several hours. As the usage of cloud resources becomes more and more advanced, the need for data centres grows rapidly. These cloud data centres use huge volumes of electricity, which contributes to environmental drawbacks such as carbon emission and global warming. Because the computers deployed in data centres work hard nonstop, they get extremely hot, so cooling systems must be deployed to remove the heat they generate, which also increases maintenance cost. Here, the problem of high power usage in data centres is addressed by virtual machine migration and consolidation. Virtualisation makes it possible to shift a virtual machine (VM) from one server to another available server using VM migration; the VMs are migrated to appropriate servers to decrease the total number of running physical machines. This paper not only reduces the number of running servers to cut the power used in the data centre, but also aims to shut down a considerable number of active racks so that unused routing and cooling equipment can be turned off, reducing data centre power consumption as far as possible and contributing significantly to the environment. Our work uses the Modified Best Fit Decreasing (MBFD), Particle Swarm Optimization (PSO) and Hybrid Server and Rack Consolidation (HSRC) algorithms to consolidate servers and racks.
    Keywords: Cloud Computing; Scheduling; Virtualization; Virtual Machine; Physical Machine; Virtual Machine Migration; VM Consolidation.
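    The consolidation idea can be sketched with plain Best Fit Decreasing bin packing: VMs are sorted by demand, and each is placed on the active host whose remaining capacity fits it most tightly, activating a new host only when necessary. Note that MBFD in the paper additionally weighs the power cost of each placement; the demands and capacity below are invented.

```python
# Best Fit Decreasing placement of VM demands onto unit-capacity hosts.
def best_fit_decreasing(vm_demands, host_capacity):
    hosts = []                                    # remaining capacity per host
    placement = []
    for vm, demand in sorted(vm_demands, key=lambda p: -p[1]):
        candidates = [i for i, free in enumerate(hosts) if free >= demand]
        if candidates:
            # tightest fit: the host left with the least free capacity
            i = min(candidates, key=lambda i: hosts[i] - demand)
        else:
            hosts.append(host_capacity)           # power on one more host
            i = len(hosts) - 1
        hosts[i] -= demand
        placement.append((vm, i))
    return placement, len(hosts)

vms = [("a", 0.6), ("b", 0.5), ("c", 0.4), ("d", 0.3)]
placement, n_hosts = best_fit_decreasing(vms, host_capacity=1.0)
assert n_hosts == 2      # 0.6+0.4 and 0.5+0.3 pack onto just two hosts
```

    Packing the four VMs onto two hosts instead of four lets the remaining servers, and, at rack granularity, their cooling and routing equipment, be switched off, which is the power saving the paper targets.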

  • Enhancing the Job Scheduling Procedure to Develop an Efficient Cloud Environment using Near Optimal Clustering Algorithm   Order a copy of this article
    by Suganya R, NIJU P. JOSEPH, Rajadevi R, Ramamoorthy S 
    Abstract: In this internet era, cloud computing is the major tool for processing the information exchanged on the web. As the cloud is a vital part of the current business world, it should be handled properly and utilised for efficient business processing. There are various problems in cloud computing that both consumers and service providers face in their day-to-day cloud activities, and the job scheduling problem plays a vital role among them. To provide an efficient job scheduling environment, it is necessary to perform efficient resource clustering in the cloud. Traditional cloud environments fail to concentrate on resource clustering; some existing systems have provided solutions to this problem, but not even near-optimal or feasible ones. In this regard, the proposed system concentrates on resource clustering by proposing an efficient algorithm named Identicalness Split-up Periodic Node Size (ISPNS) for the cloud environment. The proposed system is compared with existing systems to justify the performance of the proposed resource clustering algorithm. As a result, it produces a near-optimal solution to the resource clustering problem, which will help provide an efficient job scheduling environment in the cloud in future.
    Keywords: Cloud Computing; Resource Clustering; Identicalness; split up; node size.
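The ISPNS algorithm itself is not specified in the abstract; as a purely hypothetical illustration of the underlying idea of grouping cloud nodes whose sizes are near-identical so a scheduler can dispatch jobs to a matching cluster, one could sketch:

```python
# Illustrative (not the paper's ISPNS): group (name, size) nodes whose
# sizes are near-identical, within a relative tolerance of the cluster's
# first member, so jobs can be scheduled against homogeneous groups.

def cluster_by_size(nodes, tolerance=0.1):
    """Group nodes (name, size) whose sizes differ from the cluster's
    reference size by at most `tolerance` (relative)."""
    clusters = []
    for name, size in sorted(nodes, key=lambda n: n[1]):
        for c in clusters:
            ref = c[0][1]  # size of the cluster's first member
            if abs(size - ref) <= tolerance * ref:
                c.append((name, size))
                break
        else:
            clusters.append([(name, size)])  # start a new cluster
    return clusters

nodes = [("n1", 8), ("n2", 8.5), ("n3", 16), ("n4", 15), ("n5", 32)]
print(cluster_by_size(nodes))
```

Here the 8- and 8.5-unit nodes form one group, the 15- and 16-unit nodes another, and the 32-unit node stands alone; a scheduler can then match job requirements against whole clusters instead of individual nodes.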

Special Issue on: IAIM2019 Advances in Data Science and Computing

  • Complex Event Processing for Effective Poultry Management in Farmlands with Predictive Analytics   Order a copy of this article
    by Imthyaz Sheriff, E. Syed Mohammed, Joshua J, Hema C 
    Abstract: The world is a bundle of interconnected events; the occurrence of one event may influence one or more events it is related to. The study of such real-time events and their interdependence is called complex event management, and its challenge is the ability to capture real-time events, analyse them and take decisions so that the system works in the most desired, or optimal, way. The focus of this study is the real-time happenings in a livestock management environment, applying predictive analytics after analysing the parameters that impact the complex event of effective livestock management. Predictive analytics is a form of data analytics that analyses both historical data and current live-stream data to forecast activities, behaviours and trends. Livestock management is a significant area for deploying predictive analytics because behaviour patterns vary widely and depend on the various complex events happening around the animals. Within livestock management, our main focus is poultry. In this research work we have designed a system to continuously monitor the events happening in a poultry farm. Data is collected through sensors that detect moisture content, light, time and weather conditions, and individual birds are RFID-tagged to help capture movement stream data. A cloud-based event and data management system has been developed, and analysis is carried out on historical data as well as live-stream data. The proposed model employs the K-Means clustering algorithm to cluster the behaviour patterns of the poultry birds, and machine learning algorithms have been used to capture the varied complex events that influence the well-being of the farm birds. The proposed system has been evaluated on a real farm with 846 country chickens, where our prediction algorithms achieved an accuracy of about 78%. The parameters that most strongly impact livestock behaviour have been identified, and our system has been able to predict unusual behaviour patterns in the livestock as well as foresee disease outbreaks among the chickens in the farmhouse. Our future work focuses on the design and development of a complex event processing framework to cater to the effective management of livestock in a farm as a whole.
    Keywords: Predictive Analytics; Complex Event Processing; Data Mining; Data Analytics.
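The abstract names K-Means as the clustering step for behaviour patterns. A minimal, self-contained K-Means sketch on made-up sensor records may clarify how such behaviour groups emerge; the feature vectors (moisture, light level, movement count) are hypothetical stand-ins for the sensor/RFID stream described, not the paper's actual data.

```python
import random

# Minimal K-Means: alternate between assigning points to their nearest
# centre and recomputing each centre as the mean of its assigned points.

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)  # pick k distinct starting centres
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # index of the nearest centre by squared Euclidean distance
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # recompute centres; keep the old centre if a group emptied out
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g
                   else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# (moisture %, light level, hourly movement count) -- illustrative only
records = [(60, 3, 12), (62, 3, 11), (61, 4, 13),
           (85, 1, 2), (88, 1, 3), (86, 2, 2)]
centers, groups = kmeans(records, k=2)
print([len(g) for g in groups])
```

On this toy data the algorithm separates the active, low-moisture records from the sluggish, high-moisture ones; in the paper's setting, such a split is the kind of behaviour-pattern grouping fed into the downstream anomaly and disease-outbreak prediction.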