Forthcoming and Online First Articles

International Journal of Cloud Computing (IJCC)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Cloud Computing (66 papers in press)

Regular Issues

  • Complex Event Processing for Effective Poultry Management in Farmlands with Predictive Analytics   Order a copy of this article
    by Imthyaz Sheriff, E. Syed Mohammed, Joshua J, Hema C 
    Abstract: The world is a bundle of interconnected events: the occurrence of one event may influence the events related to it. The study of such real-time events and their interdependence is called complex event management. Its central challenge is the ability to capture real-time events, analyse them and take decisions so that the system operates in an optimal way. This study considers the real-time happenings in a livestock management environment and applies predictive analytics after analysing the parameters that impact the complex event of effective livestock management. Predictive analytics is a form of data analytics that analyses both historical data and current live-stream data to forecast activities, behaviours and trends. Livestock management is a significant area for deploying predictive analytics, as behaviour patterns vary widely and depend on the various complex events happening around the animals. Within livestock management, our main focus is on poultry. In this research work we have designed a system to continuously monitor the events happening in a poultry farm. Data are collected through sensors that detect moisture content, light, time and weather conditions, and individual birds are RFID-tagged to help capture movement stream data. A cloud-based event and data management system has been developed, and analysis is carried out on historical as well as live-stream data. The proposed model employs the K-Means clustering algorithm to cluster the behaviour patterns of the poultry birds, and machine learning algorithms have been used to capture the varied complex events that influence the well-being of the farm birds. The proposed system has been tested on a real farm with 846 country chickens, where our prediction algorithms achieved an accuracy of about 78%. The parameters that most strongly impact livestock management behaviour have been identified. Our system has been able to predict unusual behaviour patterns in the livestock as well as foresee disease outbreaks amongst the chickens in the farmhouse. Our future work focuses on the design and development of a complex event processing framework that caters to the effective management of livestock in a farm as a whole.
    Keywords: Predictive Analytics; Complex Event Processing; Data Mining; Data Analytics.
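The clustering step named in the abstract can be sketched with a plain K-Means routine. The feature values below (movement events per hour, hours spent near the feeder) are invented for illustration and are not from the study.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns k centroids and a cluster label per point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[c])))
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, labels

# Hypothetical per-bird features: (movement events/hour, hours near feeder)
birds = [(42, 5.1), (40, 4.8), (44, 5.3),   # active birds
         (3, 0.4), (5, 0.2), (2, 0.5)]      # lethargic birds (possible illness)
centroids, labels = kmeans(birds, k=2)
```

With well-separated behaviour groups like these, the two clusters recover the active and lethargic birds regardless of the random initialisation.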

  • Ontology Building for Patient Bioinformatics of the Smart Card Domain: Implementation Using OWL   Order a copy of this article
    by Waseem Alromima, Ahmed Alahmadi 
    Abstract: Smart cards play a very important part in facilitating the bioinformatics information process. The current problem is integrating information, such as structuring similar information for analysis and services. Therefore, patient information services need to be modelled and re-engineered in the area of e-governmental information sharing and processing to deliver information appropriately according to the citizen and situation. Ontology based on semantic web technology offers a promising solution to these engineering problems. The main purpose of this study is to provide each patient with a smart card that holds all their bioinformatics information for their entire life, based on the proposed domain ontology, to be recognised and used in all organisations related to e-health. Another aim is the automatic processing of important medical documents by related organisations such as pharmacies, hospitals and clinics. The smart card can draw up the patient's history of illnesses and/or treatments, thus facilitating the future management of his/her medical file. The proposed ontology for e-health information makes it easy to introduce new bioinformatics information for patient services without disturbing the structure of the former ontology. The ontology was created with the knowledge-based editor tool Protégé.
    Keywords: Semantic Web; Domain ontology; Services; owl; Citizens; e-health; Patient.

  • Machine Learning Classifiers with Preprocessing Techniques for Rumor Detection on Social Media: An Empirical Study   Order a copy of this article
    by Mohammed Al-Sarem, Muna Al-Harby, Essa Abdullah Hezzam 
    Abstract: The rapid increase in the popularity of social media has helped users easily post and share information with others. However, due to the uncontrolled nature of social media platforms such as Twitter and Facebook, it has become easy to post fake news and misleading information. The task of detecting such content is known as rumor detection. This task requires data analytics tools due to the massive amount of shared content and the rapid speed at which it is generated. In this work, the authors study the impact of different text preprocessing techniques on the performance of classifiers when performing rumor detection. The experiments were performed on a dataset of tweets on emerging breaking news stories covering several events in the Saudi political context (EBNS-SPC). The results show that preprocessing techniques have a significant impact on increasing the performance of machine learning methods such as support vector machine (SVM), Multinomial Naïve Bayes and k-nearest neighbour.
    Keywords: Rumor Detection; Saudi Arabian News; Multinomial Naïve Bayes; Support Vector Machine; K-nearest Neighbor; Twitter Analysis.
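The kind of preprocessing pipeline such a study compares can be sketched as below (lowercasing, URL/mention/hashtag removal, punctuation stripping, stop-word removal). The stop-word list and sample tweet are tiny illustrative stand-ins, not the paper's actual resources.

```python
import re
import string

# Illustrative stop-word list; a real study would use a full Arabic/English list.
STOPWORDS = {"the", "is", "a", "an", "of", "on", "and", "to"}

def preprocess(tweet):
    """Typical steps evaluated in such studies: lowercase, strip URLs,
    mentions, hashtags and punctuation, tokenise, then remove stop words."""
    text = tweet.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs
    text = re.sub(r"[@#]\w+", " ", text)        # remove mentions/hashtags
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = text.split()
    return [t for t in tokens if t not in STOPWORDS]

tokens = preprocess("BREAKING: the dam is collapsing!! see http://x.co/1 #rumor @user")
# tokens -> ['breaking', 'dam', 'collapsing', 'see']
```

Each step can be toggled on or off to measure its individual effect on classifier accuracy, which is the experimental design the abstract describes.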

  • A Discovery and Selection Protocol for Decentralized Cloudlet Systems   Order a copy of this article
    by Dilay Parmar, Padmaja Joshi, Udai Pratap Rao, A. Sathish Kumar, Ashwin Nivangune 
    Abstract: Cloudlets help in overcoming the latency issues of clouds in mobile cloud computing when offloading computing tasks. Communication protocols are an important part of the implementation of cloudlet-based systems for the mobile cloudlet-cloud environment. In this work, an approach for communication between entities in a decentralized cloudlet-based system is proposed. For that purpose, a cloudlet discovery protocol for discovering cloudlets in the Wi-Fi vicinity of mobile devices is proposed, along with a selection algorithm for selecting the most suitable cloudlet from the available discovered cloudlets. Our proposed selection algorithm uses infrastructure-specific criteria for the selection decision, which makes the algorithm more generic to use.
    Keywords: Cloudlet; Mobile Cloud Computing; Discovery; Selection.
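The selection idea (filter discovered cloudlets by a hard requirement, then rank the survivors by a weighted score) can be sketched as follows. The record fields, weights and thresholds are hypothetical, since the paper's exact criteria are not given in the abstract.

```python
# Hypothetical cloudlet records as a discovery protocol might return them.
cloudlets = [
    {"id": "c1", "rtt_ms": 12, "free_cpu": 0.30, "free_mem_gb": 2.0},
    {"id": "c2", "rtt_ms": 35, "free_cpu": 0.80, "free_mem_gb": 8.0},
    {"id": "c3", "rtt_ms": 14, "free_cpu": 0.70, "free_mem_gb": 6.0},
]

def select_cloudlet(cloudlets, min_mem_gb=4.0):
    """Filter by a hard resource requirement, then rank the survivors by a
    weighted score that prefers low latency and high spare CPU."""
    eligible = [c for c in cloudlets if c["free_mem_gb"] >= min_mem_gb]
    if not eligible:
        return None   # fall back to the remote cloud
    return max(eligible, key=lambda c: 0.6 * (1.0 / c["rtt_ms"]) + 0.4 * c["free_cpu"])

best = select_cloudlet(cloudlets)
```

Returning `None` when nothing qualifies models the usual fallback of offloading to the distant cloud instead of a cloudlet.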

  • Preserving Personal Health Records Security and Privacy Using C-R3D Algorithm and Multimodal Biometric Authentication   Order a copy of this article
    by Meena Settu, Gayathri V 
    Abstract: Data security and privacy remain among the most significant concerns in cloud computing. The secrecy of Personal Health Records (PHR) and Personally Identifiable Information (PII) is the main issue when commercial cloud servers are utilized by healthcare associations to preserve patients' health records, since patient information may be handled by numerous institutions, for example government and private hospitals and clinics, general practitioners and examination labs. In recent years, numerous intrusions on healthcare information have intensified the requirement for tight security for healthcare data. Additionally, security specialists state that a large number of vulnerabilities exist in Health and Humanities Service Systems Data (HHSSD); if they are not alleviated, they could pose an immense risk and potential threats to the HHSSD. Security solutions must therefore be fast and simple to deploy, providing high-level safety without compromising network performance, and it is essential to maintain a critical layer of security for patients' sensitive information. This paper proposes novel data encryption in the healthcare cloud by applying the C-R3D (Combined RSA and Triple DES) algorithm to encrypt every patient's personal health record file before moving it into the cloud, which ensures data confidentiality. In addition, multimodal biometric authentication has been applied, integrating fingerprint and iris authentication along with username and password, which ensures the privacy of patients' sensitive information stored in the healthcare cloud. The experimental outcomes demonstrate the effectiveness of the proposed framework.
    Keywords: Data Security; Personal Health Records; Health and Humanities Service Systems Data (HHSSD); Combined RSA and Triple DES; Multimodal Biometric Authentication.

  • Major Drivers for the Rising Dominance of the Hyperscalers in the Infrastructure as a Service Market Segment   Order a copy of this article
    by Sebastian Floerecke, Christoph Ertl, Alexander Herzfeldt 
    Abstract: The rapidly growing worldwide market for Infrastructure as a Service (IaaS) is increasingly dominated by four hyperscalers: Alibaba, Amazon Web Services (AWS), Google and Microsoft. On the flip side, the market share and number of small and medium-sized regional IaaS providers have been declining steadily over the past years. Astonishingly, this fight for market share has been largely neglected by the research community so far. Against this background, the goal of this study is to identify the major drivers of this market development. To this end, 18 exploratory expert interviews were conducted with high-ranking employees of various successful regional IaaS providers in Germany. The results indicate that the central driver is the significantly lower price of the hyperscalers' offerings. Beyond that, eight additional important drivers have been identified, such as market presence, innovative strength, the amount of financial and human resources, and the high and global availability as well as high user experience of the IaaS services. This study sheds light on the IaaS market and opens up and supports future in-depth investigations of this duel. Regional IaaS providers can use these insights to unravel the IaaS market conditions in general and to better understand the decisive strengths of the hyperscalers in particular. Based on this knowledge, regional IaaS providers are enabled to develop strategies and business models for counteracting, or at least decelerating, the hyperscalers' growing dominance.
    Keywords: Cloud Computing; Infrastructure as a Service (IaaS); Business Models; Hyperscalers; Regional IaaS Providers; Exploratory Expert Interviews; Theory for Explaining.

  • Intrusion Detection and Prevention of DDoS attacks in Cloud Computing Environment: A Review on Issues and Current Methods   Order a copy of this article
    by Kiruthika Devi, Subbulakshmi T 
    Abstract: Cloud computing has emerged as the most successful service model for the IT/ITES community due to the various long-term incentives offered in terms of reduced cost, availability, reliability and improved QoS for cloud users. Most applications have already migrated to centralised data centres in the cloud. Due to the growing needs of the business model, more small and medium enterprises rely on the cloud, because little investment in infrastructure and hardware/software suffices. The most alarming cyber-attack in the cloud, interrupting the availability of cloud services, is the Distributed Denial of Service (DDoS) attack. In this paper, various existing Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) and their positioning in the cloud are investigated, and the essence of the current techniques in the literature is described in detail. This comprehensive review of the latest IDS/IPS solutions and their capabilities to detect and prevent intrusions in the cloud, together with a comparison of the methodologies, provides researchers with the security issues and challenges exposed in the cloud computing environment. The significance of designing a secure framework for the cloud is also emphasised for achieving improved security.
    Keywords: Cloud computing; DDoS; IDS; IPS; security.

  • Legal Issues in Consumer Privacy Protection in the Cloud Computing Environment: Analytic Study in GDPR, and USA Legislations   Order a copy of this article
    by Alaeldin Alkhasawneh 
    Abstract: Cloud computing services are considered among the most important services provided to companies due to the various benefits they confer. However, data privacy is a big concern for users, and the laws covering it contain many contradictions and require improvement. This paper discusses the laws governing privacy issues in the cloud, highlights missing components that could be added to the laws considered, and proposes amendments which may help create a better consumer experience, improved service and increased protection for personal data. At the end of the paper, a set of recommendations is proposed for governments and private companies that would increase the responsibility held by cloud computing service providers in case of failure to protect personal data from privacy invasion.
    Keywords: Consumer; Privacy; Cloud Computing; GDPR.

  • A Novel Redundancy Technique to Enhance the Security of Cloud Computing   Order a copy of this article
    by Syed Ismail 
    Abstract: Cloud computing is an emerging technology that offers computing, storage, and software as a service to IT organizations and individuals. Users of the cloud can access its applications from anywhere and at any time. Security is considered a critical issue in the cloud environment: to protect cloud resources from external threats, data leakage, and various attacks, security controls and technological safeguards should be provided for the cloud's data centres. In addition to integrity and availability, the cloud should also offer reliability, which enables users to trust the availability and security of the data stored in the cloud without fear of data loss. This paper proposes a novel approach known as the Multi-Cloud Database (MCDB), which uses multiple Cloud Service Providers (CSPs) instead of a single CSP. For this purpose, Shamir's secret sharing algorithm and a sequential Triple Modular Redundancy (TMR) technique are implemented to improve reliability and offer enhanced security to the MCDB. The proposed model is compared with one single-cloud model (SPORC) and four multi-cloud models (DepSky, HAIL, RACS, and MCDB without TMR) in terms of reliability, integrity, confidentiality, availability, and security. The maximum reliability, integrity, confidentiality, availability, and security values obtained for the proposed model were 100%, 99%, 99%, 97%, and 99% respectively.
    Keywords: Cloud Computing; Reliability; Security; Multi-cloud Database; Shamir's secret sharing algorithm; and Triple Modular Redundancy.
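Shamir's secret sharing, one of the two building blocks named in the abstract, can be implemented directly over a prime field. This is a generic textbook sketch of the k-of-n scheme, not the paper's MCDB integration.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough for small integer secrets

def make_shares(secret, k, n, seed=0):
    """Split `secret` into n shares; any k of them reconstruct it."""
    rng = random.Random(seed)
    # random polynomial of degree k-1 with the secret as constant term
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # modular inverse via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```

In a multi-cloud setting each CSP would hold one share, so no single provider (and no coalition smaller than k) can recover the data.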

  • Modeling of a cloud platform via M/M1+M2/1 queues of a Jackson network   Order a copy of this article
    by Sivasamy Ramasamy, Paranjothi N 
    Abstract: Modeling of a cloud platform that can provide the best QoS (Quality of Service) to minimize the average response times of its clients is investigated via an open Jackson network. Compact expressions for the input and output parameters and measures of the proposed model are presented. Designing of the model involves the performance measures of M/M1+M2/1 queues with a K-policy. This new cloud system is able to control virtual machines dynamically and to implement its operations to promote effectiveness in most commercial applications.
    Keywords: Cloud computing; Open Jackson network; M/M/1 queue; Response time and Quality of Service.
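The basic M/M/1 relation underlying such queueing models gives the mean response time as W = 1/(mu - lambda). A one-liner makes the dependence on load concrete; the paper's M/M1+M2/1 stations with a K-policy are more elaborate, and the rates below are invented.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time W = 1 / (mu - lambda) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: lambda must be < mu")
    return 1.0 / (service_rate - arrival_rate)

# e.g. 8 requests/s arriving at a VM that serves 10 requests/s
w = mm1_response_time(8.0, 10.0)   # -> 0.5 seconds on average
```

In an open Jackson network each station behaves like an independent M/M/1 queue at its effective arrival rate, so per-station results like this compose into the network-wide response time.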

  • Towards P2P Dynamic-Hash-Table based Public Auditing for Cloud Data Security and Integrity   Order a copy of this article
    by Raziqa Masood, Nitin Pandey, Q.P. Rana 
    Abstract: Cloud storage is the most demanded feature of cloud computing, providing outsourced data on demand for both organizations and individuals. However, users are in a dilemma about whether to trust cloud service providers (CSPs) to preserve privacy, maintain integrity, and guarantee the security of outsourced data. It is therefore necessary to develop an efficient auditing technique to provide confidence in the data held in cloud storage. This article proposes a peer-to-peer (P2P) public auditing scheme that audits outsourced data using a dynamic hash table (DHT) to strengthen users' trust in, confidence in, and the availability of the outsourced data. Each DHT maintains the information of the outsourced data, which helps the auditors verify safety and integrity while auditing. Moreover, the auditors are organized into a structured P2P overlay to accelerate auditing and improve auditor availability; the proposed scheme thus avoids a single point of failure. The computation and communication costs of our proposed protocol are compared with existing methods using the pairing-based cryptography (PBC) library, and it is found to be an effective solution for public auditing of outsourced data.
    Keywords: Cloud Computing; Dynamic Hash Table; Outsourced Data Storage; Peer-to-Peer; Privacy; Public Auditing.
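The bookkeeping role the hash table plays for the auditor can be illustrated with a toy table of per-block digests. A real public-auditing scheme uses homomorphic authenticators over the PBC library rather than plain digests, so this is only a structural sketch with hypothetical file and block names.

```python
import hashlib

class AuditTable:
    """Toy stand-in for the paper's dynamic hash table: the auditor keeps a
    tag per data block and verifies challenged blocks against it."""

    def __init__(self):
        self.tags = {}   # (file_id, block_index) -> sha256 hex digest

    def record(self, file_id, block_index, block):
        self.tags[(file_id, block_index)] = hashlib.sha256(block).hexdigest()

    def audit(self, file_id, block_index, block_from_csp):
        """Challenge: ask the CSP for a block and compare it to the stored tag."""
        expected = self.tags.get((file_id, block_index))
        return expected == hashlib.sha256(block_from_csp).hexdigest()

auditor = AuditTable()
auditor.record("report.pdf", 0, b"original block contents")
ok = auditor.audit("report.pdf", 0, b"original block contents")        # True
tampered = auditor.audit("report.pdf", 0, b"modified block contents")  # False
```

Replicating this table across a structured P2P overlay of auditors is what removes the single point of failure the abstract mentions.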

  • Impact of Internet of Things in Social and Agricultural Domains in rural Sector: A Case Study   Order a copy of this article
    by Zdzislaw Polkowski, Sambit Kumar Mishra, Brojo Kishore Mishra, Samarjeet Borah, Anita Mohanty 
    Abstract: The internet of things (IoT) connects various types of things/objects to the internet with the help of information perception devices in order to exchange information. It supports future innovations built from globally connected smart objects capable of sensing, communicating and sharing information for different applications. Data are among the most valuable aspects of the internet of things, and IoT data have specific characteristics that are modernizing and improving the technologies associated with relational database management. The IoT now has a big impact on our day-to-day activities; for instance, solutions exist for monitoring public transport, where sensors are used to analyse maintenance issues and prioritise them. As the number of devices increases enormously, the amount of data generated also becomes very large, so the main aim is to organize this data and build a new future of computing by bringing every smart object into a globally connected network. Similarly, applying the IoT to agriculture and food presents major challenges for existing applications and technologies. Organizing food production and food security is particularly challenging due to the growth of the world population and the growing welfare in emerging economies. These challenges can be addressed through better sensing and monitoring of production using the internet of things, contributing to the efficient usage of farm resources, crop development and food processing, along with a clearer understanding of specific farming conditions. This paper also aims to provide and implement adaptive, efficient remote and logistic operations using actuators. The approach is based on service-oriented architecture and component technology, which helps to realize dynamic semantic integration.
    Keywords: Sensor; Actuator; Internet; Logistic operations; Semantic integration.

  • EARA-PSOCA: An Energy-Aware Resource Allocation and Particle Swarm Optimization (PSO) Based Cryptographic Algorithm in E-Health Care Cloud Environment   Order a copy of this article
    by Palani Subramanian, Rameshbabu K 
    Abstract: In cloud platforms, a large amount of energy is consumed by the execution of scientific workflows, so VMs have to be deployed in an energy-efficient manner. The energy consumption of cloud platforms has attracted wide attention throughout the world: cooling systems, lighting, network peripherals, monitors, consoles, processor cooling fans and running servers all consume large amounts of power in a cloud data centre. To face these issues, an energy-aware resource allocation method, termed EnReal, is applied in this work: for the execution of scientific workflows, virtual machines are deployed dynamically, with a focus on current e-health standards and solutions. However, client platform security, an important factor in e-health systems, is not addressed by this method. Therefore, for privacy establishment in e-health domain infrastructures, a new Particle Swarm Optimization (PSO) based cryptographic security algorithm (PSOCA) is proposed in this work. It creates a controlled environment for the easy management of data privacy as well as for security. For experimentation, the CloudSim framework is adopted in a cloud simulation environment, and the proposed method is evaluated using parameters such as energy consumption and resource utilization.
    Keywords: Cloud security; cloud computing; resource allocation; cryptography; Energy-aware method.

  • Feature selection using evolutionary algorithms: A data-constrained environment case study to predict tax defaulters   Order a copy of this article
    by Chethan Sharma, Manish Agnihotri, Aditya Rathod, Ajitha Shenoy K.B 
    Abstract: Multiple features are encountered while performing analysis on income tax data owing to its background in finance. In data-constrained environments, feature selection techniques play an essential role in ensuring this data can be used with modern machine learning algorithms. In this paper, a novel method is introduced to predict tax defaulters from the given data using an ensemble of feature reduction in the first step and feeding those features to a proposed neural network. The feature reduction step compares various methods, including Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), in the performance analysis to determine the best approach. After feature reduction, the second stage deals with experiments on the architecture of a neural network for appropriate predictions. The proposed case study on PSO and ACO indicates a substantial improvement in tax defaulter prediction results when using the feature subset selected by PSO. This research work has also successfully demonstrated the positive influence of using Linear Discriminant Analysis (LDA) to perform dimensionality reduction of the unselected features in order to preserve the underlying patterns. The best results have been achieved using PSO for feature reduction, with an accuracy of 79.2%, a 5.4% improvement over existing works.
    Keywords: Evolutionary Algorithm; Particle Swarm Optimization; Ant Colony Optimization; Linear Discriminant Analysis; Neural networks; Tax defaulter prediction.
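The PSO core used to search for good candidate solutions can be sketched generically. Here it minimises a sphere function as a stand-in for the paper's real objective (the classification error of a feature subset), and all parameter values are conventional defaults, not the study's settings.

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimiser (global-best topology, minimisation)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function as a stand-in for the real objective.
best, best_val = pso(lambda p: sum(x * x for x in p), dim=3)
```

For feature selection a binary variant is usually used, where each dimension encodes whether a feature is kept and the fitness is the validation error of a classifier trained on that subset.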

  • Rule based Translation Surface for Telugu Nouns   Order a copy of this article
    by Prasad Rao P, Phani Kumar S 
    Abstract: This work proposes a noun analyzer program, a full-length translation surface for Telugu nouns. It compiles and analyses Telugu nouns into their roots and constituent postpositions along with their grammatical properties. Grammatical properties at word level include number, gender, person, and other associated features. The method employed in this paper is based on the Suffix-Order Pattern technique, which is used to develop a computational model for the morphological analysis of nouns in Telugu. The Suffix-Order Pattern is a new technique in linguistic computation. The current work is an implementation of a word-level translation surface for Telugu: a Java-based application developed for analyzing Telugu nouns. The procedure is theoretically justified and practically executed for a variety of Telugu nouns. The present proposal also demonstrates an XML-based repository for the root words of the different categories of Telugu nouns, an efficient organization of a linguistic database that performs well in a computational environment.
    Keywords: Natural language processing; Linguistic computation; Suffix order pattern; Translation surface; Morphology; Machine translation.
    DOI: 10.1504/IJCC.2023.10039014
     
  • An Application of the Taguchi L16 Method for Optimization of Load Balancing Process Parameters in Cloud Computing   Order a copy of this article
    by Shahbaz Afzal, G. Kavitha, Amir Ahmad Dar 
    Abstract: Cloud computing has emerged as a large-scale distributed computing platform to maintain and deliver data, information, applications, web services, IT infrastructure and other cloud services to a global base of users over the internet. With global concurrent user access to its finite resources, scheduling of tasks is an essential process in cloud computing for assigning cloud user tasks to cloud resources. Given the varying nature of user tasks, task scheduling and resource allocation mappings are not by themselves sufficient to keep the overall cloud system functional in a balanced state, so task scheduling in the absence of proper load balancing techniques results in workload-imbalanced machines with overloaded, under-loaded or idle resources. This has negative consequences for the deliverable Quality of Service and profit. Hence, prior to designing a load balancing algorithm, it is essential to determine the input parameters that most affect the output/response variable, in order to prevent load imbalances among cloud computing machines. The study investigates the impact of the input parameters, namely growth rate, magnitude of the cloud task with respect to CPU or memory, initial population of tasks, and sampling interval, on the population of tasks N(t) with the help of the Taguchi Design of Experiments. The Taguchi L16 method is used for the experimental setup, and two statistical techniques, Analysis of Means (ANOM) and Analysis of Variance (ANOVA), are used for performance analysis. ANOM is used to identify which input parameters have a significant effect on N(t) and also provides the best optimal combination of input variables for which the virtual machine is stable; ANOVA is used to measure the percentage contribution of each input parameter to the response variable. From the experimental results, it is concluded that N0 has the most significant impact on N(t), with a percentage contribution of 37%. The whole setup was executed in the Minitab18 statistical software toolbox.
    Keywords: Cloud computing; load balancing; scheduling; virtual machines; control parameters; Taguchi method; ANOM; ANOVA; optimal combination.
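ANOM as used here reduces to averaging the response at each level of each factor of the orthogonal array. The small L4 array and response values below are made up to show the mechanics; the paper uses an L16 array with four parameters.

```python
# Toy L4(2^3) orthogonal array: 3 two-level factors, 4 runs, with a
# made-up response column standing in for the measured N(t).
runs = [
    # (A, B, C, response)
    (1, 1, 1, 20.0),
    (1, 2, 2, 24.0),
    (2, 1, 2, 30.0),
    (2, 2, 1, 34.0),
]

def anom(runs, factor_index):
    """Analysis of Means: average response at each level of one factor."""
    levels = {}
    for run in runs:
        levels.setdefault(run[factor_index], []).append(run[-1])
    return {lvl: sum(vals) / len(vals) for lvl, vals in levels.items()}

effect_a = anom(runs, 0)   # {1: 22.0, 2: 32.0} -> factor A shifts the mean by 10
effect_c = anom(runs, 2)   # {1: 27.0, 2: 27.0} -> factor C has no effect here
```

Comparing the spread of the level means across factors is what identifies the dominant parameter; ANOVA then quantifies each factor's percentage contribution.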

  • FSACE: Finite State Automata Based Client-Side Encryption for Secure Data Deduplication in Cloud Computing   Order a copy of this article
    by Basappa Kodada, Demian Antony D'Mello 
    Abstract: Nowadays, digital data generated from different media sources are growing vastly in an unstructured manner. The maintenance and management of this high volume of data is critical, which leads clients to make use of cloud storage services. In reality, the communication and computation overhead of managing these data securely in the cloud increases when duplicate entries are present. Data deduplication techniques are widely used to reduce the overhead on the cloud service provider, and several approaches have been proposed by researchers to address the issues of secure data deduplication. Convergent encryption (CE) and its flavours are widely used in secure data deduplication to reduce network bandwidth usage, storage usage and storage cost and to improve storage efficiency, but the CE algorithm is vulnerable to dictionary-based brute-force attacks and to threats from inner and outer adversaries. In this paper, we propose FSA-based client-side encryption to accomplish secure data deduplication that provides confidentiality and integrity for users' data. The FSACE protocol achieves data access control by means of a Proof of Ownership (PoW) challenge given to the data owner. The security analysis indicates that the FSACE protocol is secure enough to protect data from inner and outer adversaries. We also present a performance evaluation whose results show a considerable decrease in communication and computation overhead and an increase in storage efficiency.
    Keywords: Security;Encryption;Cryptography;Deduplication;Secure Deduplication;Proof of ownership;Data Security;Cloud Data Security.
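Convergent encryption, the baseline the paper improves on, derives the key from the data itself, so identical plaintexts encrypt identically and the server can deduplicate ciphertexts. The XOR keystream below is a toy stand-in for a real block cipher and must not be used as actual cryptography; it only demonstrates the dedup-enabling property (and, implicitly, why CE is open to dictionary attacks on predictable files).

```python
import hashlib

def convergent_encrypt(data):
    """Convergent encryption sketch: key = H(data), so identical plaintexts
    always yield identical ciphertexts. The hash-counter XOR keystream is a
    toy substitute for a proper cipher such as AES."""
    key = hashlib.sha256(data).digest()
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    cipher = bytes(a ^ b for a, b in zip(data, stream))
    return key, cipher

k1, c1 = convergent_encrypt(b"same backup file")
k2, c2 = convergent_encrypt(b"same backup file")
dedupable = (c1 == c2)   # True: the CSP can store a single copy
```

Two users uploading the same file produce byte-identical ciphertexts without ever sharing a key, which is exactly what lets the provider deduplicate encrypted data.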

  • Predictive Data Center Selection Scheme for Response Time Optimization in Cloud Computing   Order a copy of this article
    by Deepak Kapgate 
    Abstract: The quality of cloud computing services is evaluated based on various performance metrics, of which response time is the most important. Nearly all cloud users demand that their applications' response time be as low as possible, so to minimize overall system response time we propose a request-response-time-prediction-based data centre (DC) selection algorithm in this work. The proposed DC selection algorithm uses the results of an optimization function for DC selection formulated on the basis of M/M/m queuing theory, as the present cloud scenario roughly obeys the M/M/m queuing model. In cloud environments, DC selection algorithms are assessed based on their performance in practice rather than on how they are supposed to be used. Hence, the described DC selection algorithm is evaluated, with various forecasting models, for minimum user application response time and response-time prediction accuracy across various job arrival rates, real parallel workload types and forecasting-model training-set lengths. Finally, the performance of the proposed DC selection algorithm with the optimal forecasting model is compared with other DC selection algorithms on various cloud configurations, considering a generic cloud environment.
    Keywords: Cloud Computing; Response Time Optimization; Time Series Forecasting; M/M/m Queuing Model.
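For context, the textbook M/M/m result that such an optimisation function would build on gives the mean response time as service time plus expected queueing delay (this is the standard formula, not necessarily the paper's exact objective):

```latex
E[T] \;=\; \frac{1}{\mu} \;+\; \frac{C(m,\lambda/\mu)}{m\mu-\lambda},
\qquad
C(m,a) \;=\; \frac{\frac{a^{m}}{m!}\cdot\frac{m}{m-a}}
{\sum_{k=0}^{m-1}\frac{a^{k}}{k!}+\frac{a^{m}}{m!}\cdot\frac{m}{m-a}}
```

where λ is the job arrival rate, μ the per-server service rate, m the number of servers, and C(m, a) is the Erlang C probability that an arriving job must queue.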

  • 3S - Hierarchical Cluster-Based Energy-Efficient Data Aggregation Protocol in Wireless Sensor Network   Order a copy of this article
    by Arun Agarwal, Amita Dev 
    Abstract: Energy utilisation is one of the most common challenges in wireless sensor networks, as frequent communication between the sensor nodes results in a huge energy drain. A key challenge is to schedule the sensor network's data transmission activities so as to reduce energy utilisation. To overcome this challenge, we propose a 3S hierarchical cluster-based energy-efficient data aggregation protocol for wireless sensor networks, where 3S stands for smart sensing, selective cluster head nomination, and data aggregation scheduling, which together reduce energy consumption and prolong network lifetime. The proposed technique has three phases. In the first phase, smart data sensing is applied to the predefined network model, using an unbounded buffer to store data values and avoiding many transmissions, thereby reducing energy dissipation. In the second phase, sensor nodes are categorised by their available energy and relative received signal strength indicator (RSSI) level, and high-value nodes are treated as extended nodes that become candidates for cluster head selection. In the third phase, an advanced aggregation schedule is introduced, which alternately asks sensor nodes to send their stored buffers depending on whether their node identification value is even or odd. Simulation analysis and results show that the proposed algorithm can effectively enhance network lifetime and reduce energy consumption while maintaining acceptable packet delivery ratios and delay values.
    Keywords: Aggregation; Data Management; Energy Efficiency; Residual Energy; Scheduling; Smart Sensing.
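The third-phase even/odd schedule can be illustrated with a short sketch (our reading of the abstract, not the authors' code): in each round, only nodes whose ID matches the round's parity flush their buffers, halving the number of concurrent transmissions.

```python
def aggregation_schedule(node_ids, num_rounds):
    """Alternating even/odd transmission schedule: in each round only half
    of the nodes send their stored buffers, so transmissions never all
    collide in the same round."""
    schedule = []
    for r in range(num_rounds):
        parity = r % 2  # round 0 -> even-ID nodes, round 1 -> odd-ID nodes
        schedule.append([n for n in node_ids if n % 2 == parity])
    return schedule

# Six nodes over two rounds: evens transmit first, then odds.
print(aggregation_schedule(range(6), 2))  # [[0, 2, 4], [1, 3, 5]]
```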

  • Implementation of Hybrid Adaptive System for SLA Violation Prediction in Cloud Computing   Order a copy of this article
    by Archana Pandita, Prabhat Kumar Upadhyay, Nisheeth Joshi 
    Abstract: A cloud provider has to commit to service level agreements (SLAs), which guarantee a specific level of performance and set a penalty if the provider violates the SLA. Managing and applying such penalties has become an essential and critical issue for cloud computing. In this research, an adaptive neuro-fuzzy inference system (ANFIS) is used to develop a proactive fault prediction model, designed by utilising machine learning and tested on datasets to identify accurate models for fault prediction in the cloud environment. The suggested algorithm achieves an accuracy of 99.3% in detecting violations. The performance of the proposed model has been compared with Bayesian regularisation and scaled conjugate gradient methods, and the results show that the proposed scheme is more effective at predicting SLA violations.
    Keywords: Adaptive Neuro-Fuzzy Inference System; Cloud computing; Cloud Service; Machine Learning; Service Level Agreement; Violation; Quality of Service; Prediction.
    DOI: 10.1504/IJCC.2022.10035787
     
  • PSO optimized Workflow Scheduling and VM Replacement algorithm using Gaming concept in Cloud Datacenter   Order a copy of this article
    by Narayani Raman, Aisha Banu Wahab 
    Abstract: The principal features of cloud computing are dynamic resource allocation and its pricing model. This paper presents an algorithm that provides resources to users on demand in the cloud Infrastructure-as-a-Service (IaaS) environment. The proposed algorithm optimises workflow scheduling and VM replacement using particle swarm optimisation with a game theory concept (GTPSO-WSP), enhancing system performance on metrics such as cost and makespan. The algorithm has two phases. In the first phase, resources are allocated to the physical server by a static scheduling algorithm. In the second phase, the system applies dynamic reconfiguration based on the GTPSO-WSP algorithm to reduce the cost and makespan of the workflow. In GTPSO-WSP, a multi-start method counters premature convergence of particles. Experimental analysis in the WorkflowSim environment shows improvements in makespan and monetary cost: the observed results indicate a performance improvement of 4% in makespan and 9% in cost when comparing GTPSO-WSP with traditional particle swarm optimisation (PSO) and the cuckoo search algorithm.
    Keywords: Algorithm; Cloud Computing; Game Theory; Makespan; Optimization; Placement; physical server; Resource Allocation; Scheduling; Workflow.
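The PSO core that GTPSO-WSP extends is the canonical velocity/position update: each particle is pulled toward its own best-known position and the swarm's best. The sketch below shows only that base update (parameter values are illustrative; the game-theory and multi-start extensions are omitted):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO iteration: inertia w keeps the current heading,
    c1 pulls toward each particle's personal best (pbest), and c2 pulls
    toward the global best (gbest)."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

In a workflow-scheduling setting, a particle's position encodes a candidate task-to-VM mapping and the fitness function scores its makespan and cost.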

  • A Novel Hybrid Algorithm for Workflow Scheduling in Cloud   Order a copy of this article
    by Isha Agarwal, Swati Gupta, Ravi S. Singh 
    Abstract: Cloud computing is a service that provides its users computing facilities that can be accessed anywhere, at any time, through the internet on a pay-per-use basis. With a huge number of cloud service providers receiving a large number of requests from users around the world, scheduling plays a vital role in assigning those requests to the requested resources. Task scheduling is an NP-hard problem in the cloud environment, and many heuristic and metaheuristic algorithms have been used to obtain an optimised mapping. In this paper, we design a hybrid Jaya-particle swarm optimisation (PSO) algorithm that combines Jaya and PSO to provide better-quality results. Our algorithm is evaluated in terms of execution cost and execution time and achieves better results in comparison with the genetic algorithm (GA), PSO, honey bee, cat swarm optimisation (CSO), ant colony optimisation (ACO) and Jaya.
    Keywords: Task Scheduling; cloud computing; workflow scheduling; Jaya; PSO; execution cost; running time.
    DOI: 10.1504/IJCC.2023.10038837
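Jaya's appeal as a hybridisation partner is that its update rule is parameter-free: every candidate moves toward the current best solution and away from the worst. A minimal sketch of one Jaya iteration (generic, assuming minimisation; not the authors' hybrid):

```python
import random

def jaya_step(population, fitness):
    """One Jaya iteration: x' = x + r1*(best - |x|) - r2*(worst - |x|),
    with fresh random r1, r2 per dimension and no tunable parameters."""
    best = population[min(range(len(population)), key=lambda i: fitness[i])]
    worst = population[max(range(len(population)), key=lambda i: fitness[i])]
    new_pop = []
    for x in population:
        new_pop.append([xi + random.random() * (best[d] - abs(xi))
                           - random.random() * (worst[d] - abs(xi))
                        for d, xi in enumerate(x)])
    return new_pop
```

A hybrid scheduler can alternate this step with PSO's velocity update, using Jaya's best/worst pull to keep the swarm from converging prematurely.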
     
  • Automation of Franchise Based Data Storage, Management and Analysis Using Amazon Web Services(AWS)   Order a copy of this article
    by Shreya Oswal 
    Abstract: The wave of computer automation in business has revolutionised the way companies and employees interact with their customers and each other. Robotic process automation (RPA) not only mimics human actions in complex, high-volume, routine tasks but has also extended the creative problem-solving capabilities and productivity of human beings, delivering superior business results. Amazon Web Services (AWS) has created a dramatic cultural shift in infrastructure provisioning, away from a largely manual process of configuring physical machines and software. This paper proposes using AWS to automate the scheduled uploading of data from different franchises, managing the database, analysing the data and storing it on the supervisor's machine. This removes the redundant daily tasks of uploading data to company servers, analysing it and then downloading it. The analysed data can be used by the company to improve basic operations and address various issues and problems. The system thus offers Infrastructure as a Service (IaaS) by providing virtualised computing resources over the internet.
    Keywords: Amazon Web Services (AWS); AWS S3; AWS Lambda; DynamoDB; Robotic Process Automation (RPA).
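One concrete building block of such a pipeline is a deterministic S3 key layout, so a scheduled job or Lambda can find each franchise's daily upload by prefix alone. The naming convention below is our own illustration, not something mandated by AWS or described in the paper:

```python
from datetime import date

def s3_key_for_upload(franchise_id: str, upload_date: date, filename: str) -> str:
    """Partition daily franchise uploads by franchise and date, so a
    per-day analysis job only has to list one S3 prefix."""
    return f"franchise-data/{franchise_id}/{upload_date:%Y/%m/%d}/{filename}"

# A Lambda triggered on the "franchise-data/" prefix can parse the key
# back into (franchise, date) to route the file to the right analysis.
print(s3_key_for_upload("f42", date(2021, 3, 5), "sales.csv"))
# franchise-data/f42/2021/03/05/sales.csv
```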

  • KBSS: An Efficient Approach of Extracting Text Contents from Lecture Videos - Computational Intelligence Techniques   Order a copy of this article
    by Sreerama Murthy Velaga 
    Abstract: For the last few decades there has been extensive research in image processing and text mining, which have become emerging research areas: an image or a video in the cloud is a major source of data, while text is the most prominent and direct source of information in a lecture video. The usual challenges are converting lecture video frames into a binary conversion matrix, extracting an image-to-text matrix, defining the threshold value, and classification. In this paper, an efficient approach for extracting text content from metadata lecture videos in the cloud is proposed. We build a framework, KBSS, in which frames are converted into a binary matrix; key factors are extracted along with a text matrix; clustering with the proposed similarity measures is applied to reduce the matrix; the text matrix is classified using neural networks; and the proposed similarity measure is checked against the required properties case by case. The objective is to extract text from meta lecture videos in the cloud while improving algorithm performance.
    Keywords: Lecture meta video; computational intelligence techniques; binary matrix; key factors; text and image mining.

  • A CLOUD-BASED IoT SMART WATER DISTRIBUTION FRAMEWORK UTILIZING BIP COMPONENT: JORDAN AS A MODEL   Order a copy of this article
    by Sawsan Alshattnawi, Anas Alsobeh 
    Abstract: Jordan is one of the poorest countries in water resources, with supply estimated to be below the poverty line. Owing to high population growth and development, water supply and demand in Jordan call for a novel water distribution regime. This paper presents a design-based smart water distribution model (SWDM) that integrates several technologies: behaviour-interaction-priority (BIP) components, cloud computing, and the Internet of Things (IoT). BIP is a component design model with three aspects: behaviour, interaction, and priority. IoT is a design for connected system components that collect data from physical devices to deliver actionable insights. This paper proposes a BIP-IoT model to realise the SWDM, providing a dynamic, smart, scalable design implemented over cloud components to cope with the increasing challenges of the water distribution regime in Jordan. The paper analyses the viability of this model and investigates the advantages of the reusable, dynamic automation of the SWDM's architecture. A composition component integrated into the architecture employs intelligent, domain-independent planning to control execution. The paper also presents a high-level prototype cloud-based implementation of the proposed architectural model using smart artificial data analysis algorithms.
    Keywords: BIP Component Model; Cloud Computing; Internet of Things (IoT); Water Distribution Network; Wireless Sensor Network (WSN); Smart Water Distribution Management Model.

  • Data consistency protocol for multicloud systems   Order a copy of this article
    by Olga Kozina, Volodymyr Panchenko, Oleksii Kolomiitsev, Nataliia Stratiienko, Viktoriya Usik, Lyudmila Safoshkina, Yurii Kucherenko 
    Abstract: Using the resources of several cloud service providers (CSPs) to store, serve and access users' data can improve availability and reduce latency. However, managing multicloud systems poses an important challenge, considered in this paper: how to guarantee that requests from any region to geo-distributed replicas of the database will return equivalent, up-to-date data. The existing taxonomy of data consistency models allows the required level of data consistency in cloud systems to be chosen; however, implementing consistency protocols for multicloud systems requires a reasoned choice of middleware architecture and compromises between response time and the other constraints imposed by clients' requirements. We propose a consistency protocol, based on a geo-distributed multicloud middleware architecture, that assigns numbers from a global sequence to order incoming writes.
    Keywords: Data consistency; consistency protocol; consistency model; multi clouds; cloud service providers; multicloud systems; latency; geo-distributed database; response time; middleware architecture.
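The core of a global-sequence approach can be sketched as a sequencer that stamps each incoming write with a monotonically increasing number, so every replica applies writes in the same total order. This toy, single-process version deliberately ignores the geo-distribution that is the paper's actual contribution:

```python
import itertools

class WriteSequencer:
    """Toy global sequencer: the middleware stamps every incoming write
    with the next number in a global sequence. Replicas that apply writes
    in sequence order converge to the same state."""
    def __init__(self):
        self._counter = itertools.count(1)
        self.log = []  # (seq, origin_region, payload), in total order

    def submit(self, region, payload):
        seq = next(self._counter)
        self.log.append((seq, region, payload))
        return seq
```

The hard part a real protocol must solve is exactly what this sketch sidesteps: assigning such numbers without funnelling every write through one process in one region.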

  • A distributed auction-based algorithm for virtual machine placement in multiplayer cloud gaming infrastructures   Order a copy of this article
    by Yassine Boujelben, Hasna Fourati 
    Abstract: Cloud gaming is an emerging service model that essentially mimics the cloud computing model: the intensive computing tasks incurred by the graphical processing of fairly complex game scenes are offloaded to remote cloud servers. While this alleviates the hardware and software requirements on gaming terminals, it poses serious problems of quality of service and quality of experience. Furthermore, as the massively multiplayer gaming model becomes increasingly popular, computing resources are likely to be spread across multiple data centres, and the need for a distributed assignment algorithm becomes paramount. In this paper, we are interested in assigning virtual machines, hosted on rendering servers in a distributed cloud gaming infrastructure, to requests sent by online gamers. We use the auction algorithm, along with several efficient extensions, to solve the virtual machine placement problem, and we propose a completely distributed implementation technique, without any shared memory, for our algorithm, called DVMP.
    Keywords: multiplayer cloud gaming; MCG; virtual machine placement; VMP; matchmaking; distributed auction algorithm; distributed VMP; gaming experience.
    DOI: 10.1504/IJCC.2024.10048138
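The classical auction algorithm that DVMP builds on can be shown in a small, centralised form: each unassigned gamer bids for its best-value VM, raising that VM's price so competitors are pushed elsewhere. This sketch (with an illustrative bid increment `eps`) omits the distributed, shared-memory-free machinery that is the paper's contribution:

```python
def auction_assignment(value, eps=0.01):
    """Bertsekas-style auction for a square assignment problem:
    value[g][v] is gamer g's value for VM v. Each unassigned gamer bids
    for its best net-value VM, raising its price by (best - second + eps);
    the previous holder, if any, is evicted and bids again later."""
    n = len(value)
    prices = [0.0] * n
    owner = [None] * n       # owner[vm] = gamer currently holding it
    assigned = [None] * n    # assigned[gamer] = vm
    while None in assigned:
        g = assigned.index(None)
        net = [value[g][v] - prices[v] for v in range(n)]
        best = max(range(n), key=lambda v: net[v])
        second = max(net[v] for v in range(n) if v != best) if n > 1 else 0.0
        prices[best] += net[best] - second + eps
        if owner[best] is not None:
            assigned[owner[best]] = None  # evict the previous holder
        owner[best] = g
        assigned[g] = best
    return assigned
```

With eps small enough, the auction terminates at an assignment within n·eps of the optimal total value, which is what makes it attractive to decentralise: bids only need local price information.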
     
  • Hybrelastic: A Hybrid Elasticity Strategy with Dynamic Thresholds for Microservice-based Cloud Applications   Order a copy of this article
    by Jose Augusto Accorsi, Rodrigo Da Rosa Righi, Vinicius F. Rodrigues, Cristiano André Costa, Dhananjay Singh 
    Abstract: Microservices-based architectures divide an application’s functionality into small services so that each one can be scaled, managed, deployed, and updated individually. Microservices are increasingly used in application modelling, making applications well suited to resource elasticity. In the literature, solutions employ elasticity to improve application performance; however, most are based only on CPU utilisation metrics and on reactive elasticity. In this context, this article proposes the hybrelastic model, which combines reactive and proactive elasticity with dynamically calculated thresholds for CPU and network metrics. The article presents three contributions in the context of microservices: 1) combination of two elasticity policies; 2) use of more than one elasticity evaluation metric; 3) use of dynamic thresholds to trigger elasticity. Experiments with hybrelastic demonstrate 10.31% higher performance and 20.28% lower cost compared with executions without hybrelastic.
    Keywords: elasticity; reactive elasticity; proactive elasticity; scalability; dynamic thresholds; microservices.
    DOI: 10.1504/IJCC.2024.10048365
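A simple way to picture dynamically calculated thresholds is to derive them from recent load samples rather than fixing them in advance. The mean ± k·stddev rule below is our own assumption for illustration, not hybrelastic's published formula:

```python
def dynamic_thresholds(samples, k=1.5):
    """Derive scale-in/scale-out thresholds from a window of recent
    utilisation samples (values in [0, 1]): lower = mean - k*std,
    upper = mean + k*std, clamped to the valid range. NOTE: illustrative
    rule only -- not the formula from the hybrelastic paper."""
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5
    return max(0.0, mean - k * std), min(1.0, mean + k * std)
```

The effect is that a steady workload narrows the band (scaling reacts to small deviations), while a bursty workload widens it (scaling ignores noise) — the kind of adaptivity a static threshold cannot provide.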
     
  • Efficient multi-level cloud-based agriculture storage management system   Order a copy of this article
    by Kuldeep Sambrekar, Vijay S. Rajpurohit 
    Abstract: Good agricultural productivity aids countries' gross domestic product (GDP) growth, and guaranteeing food security for a huge population across the globe poses huge challenges. As a result, data analytics (DA) and the internet of things (IoT) have been employed by various agencies, using techniques such as remote sensing forecasting and GIS technology, to build efficient agriculture management. The cloud computing platform has been adopted for storing and accessing these data remotely, and existing approaches adopt a multi-cloud platform to provide quality of service (QoS) assurance and service level agreement (SLA) guarantees. However, these models are not efficient, as they incur latency and do not provide fault-tolerance guarantees. To overcome these research challenges, this work presents an efficient multi-level cloud-based agriculture storage management system (EMLC-ASMS). Experiments are conducted on real-time data-intensive and scientific applications. The outcome shows that EMLC-ASMS attains significant performance gains over the existing model in terms of computation cost and latency.
    Keywords: cloud-based agricultural storage management; multi-level cloud storage; cloud storage optimisation; multi-cloud storage; efficient hierarchical cloud-based storage mechanism.
    DOI: 10.1504/IJCC.2022.10048900
     
  • Towards an efficient and secure computation over outsourced encrypted data using distributed hash table   Order a copy of this article
    by Raziqa Masood, Nitin Pandey, Q.P. Rana 
    Abstract: On-demand access to outsourced data from anywhere has turned data owners' minds to storing their data on cloud servers instead of standalone devices. Security, privacy and availability of data remain the major concerns to be addressed. A quick way for users to overcome these issues is to encrypt their data with their own keys before uploading it to the cloud; however, computing over encrypted data remains highly inefficient and impractical. In this paper, we propose efficient and secure data outsourcing across distributed servers using a distributed hash table mechanism. It enables computation over data from multiple owners, encrypted under different keys, without leaking privacy. We observe that our proposed solution has lower computation and communication costs than other existing mechanisms while being free from a single point of failure.
    Keywords: distributed hash table; DHT; data outsourcing; consistency hash function; homomorphic encryption; proxy re-encryption; peer-to-peer overlay; privacy; security.
    DOI: 10.1504/IJCC.2022.10048901
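A distributed hash table locates data by consistent hashing: servers and keys are hashed onto the same ring, and each key belongs to the first server clockwise from its hash. A minimal, generic sketch of that lookup (not the paper's scheme):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring of the kind a DHT uses to spread data
    across servers: adding or removing one server only remaps the keys
    that fall in its arc of the ring."""
    def __init__(self, servers):
        self._ring = sorted((self._h(s), s) for s in servers)
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _h(key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        # Walk clockwise: first server whose hash is >= the key's hash,
        # wrapping around to the start of the ring if necessary.
        i = bisect.bisect_right(self._hashes, self._h(key)) % len(self._ring)
        return self._ring[i][1]
```

The no-single-point-of-failure claim in the abstract follows from this structure: no node holds the full key-to-server map, and lookups only ever consult the ring.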
     
  • A correlation-based investigation of VM consolidation for cloud computing   Order a copy of this article
    by Nagma Khattar, Jaiteg Singh, Jagpreet Sidhu 
    Abstract: Virtual machine consolidation is of utmost importance in maintaining energy-efficient cloud data centres. A tremendous amount of work is reported in the literature for the various phases of virtual machine consolidation (host underload detection, host overload detection, virtual machine selection and virtual machine placement). Benchmark algorithms proposed by pioneering researchers serve as a base for developing other optimised algorithms, so it is essential to understand how these algorithms behave during VM consolidation. Yet there is a lack of analysis of these base techniques, which can otherwise lead to more computationally intensive and multidimensional solutions. Crucially investigating the behaviour of these algorithms under various tunings, parameters and workloads is the need of the hour. This paper addresses this gap in the literature and analyses the characteristics of these algorithms in depth under various scenarios (workloads, parameters) to find their behavioural patterns. The analysis also helps in identifying the strength of relationships and correlations among parameters. A future research strategy targeting VM consolidation in cloud computing is also proposed.
    Keywords: host underload detection; host overload detection; virtual machine selection; VM consolidation; virtual machine placement; cloud computing.
    DOI: 10.1504/IJCC.2022.10048902
     
  • A case study on major cloud platforms digital forensics readiness - are we there yet?   Order a copy of this article
    by Ameer Pichan, Mihai Lazarescu, Sie Teng Soh 
    Abstract: Digital forensics is a post-crime activity, carried out to identify the culprit responsible for a crime. Forensic activity requires crime evidence, which is typically found in logs that record events; therefore, logs detailing user activities are a valuable and critical source of information for digital forensics in the cloud computing environment. Cloud service providers (CSPs) usually provide logging services that record activities and events with varying levels of detail. In this work, we present a detailed and methodical study of the logging services provided by three major CSPs, i.e., Amazon Web Services, Microsoft Azure and Google Cloud Platform, to elicit their forensic compliance. Our work aims to measure the forensic readiness of the three cloud platforms using their prime log services. More specifically, this paper: 1) proposes a systematic approach that specifies the cloud forensic requirements; 2) uses a generic case study of a crime incident to evaluate digital forensic readiness, showing how to extract the crime evidence from the logs and validate it against a set of forensic requirements; 3) identifies and quantifies the gaps that the CSPs fail to satisfy.
    Keywords: cloud computing; cloud forensics; cloud log; evidence; forensic artefacts; digital investigation; digital forensics.
    DOI: 10.1504/IJCC.2022.10048903
     

Special Issue on: Cloud Computing Issues and Future Directions

  • Implementation of Multicloud Strategies for Healthcare Organizations to Avoid Cloud Sprawl   Order a copy of this article
    by Mohamed Uvaze Ahamed Ayoobkhan, ThamaraiSelvi R, Jayanthiladevi A 
    Abstract: Healthcare organizations are being overwhelmed by data, devices, apps and disjointed, multiple cloud services. A well-managed multicloud can provide a unified cloud model that offers greater control and scalability at reduced cost. Healthcare multicloud is becoming an appealing path for organizations to manage the explosion of digital healthcare information from digital health, the Internet of Things, connected devices and healthcare applications. As more healthcare organizations embrace cloud computing, they are increasingly turning to a mix of public, private and hybrid cloud services and infrastructure. In fact, most of the healthcare service organizations
    Keywords: Multicloud; Healthcare; Hybrid Cloud; Cloud Environment.
    DOI: 10.1504/IJCC.2022.10041976
     
  • ALLOCATION OF CONFERENCE HALL BOOKING IN ANDROID APPLICATION USING CLOUD   Order a copy of this article
    by Mohana Prasad, A. Sai Eswar, A. Vijaya Manideep 
    Abstract: In every educational institution there is a constant need for meeting halls in which to conduct various events. Typically there is a single conference hall per institution, whether a school or a university, and many departments have to share it for their events. Hence there is always a possibility of the hall being reserved by two or more departments on the same day. The clash in timing becomes known to the departments only when the day of the event arrives, by which time it is too late and very little time is left for alternative arrangements. The system is developed as an Android application, since many people today use Android. An efficient and easy-to-use application is therefore required to reserve the hall in advance and make the information available to others, so that they can check the status of the hall before booking. The procedure stores its data in the cloud, and users receive notifications by SMS or email when they book the hall.
    Keywords: Hall Booking; Notification; Application.

  • Data Set Identification for prediction of Heart Diseases   Order a copy of this article
    by Palguna Kumar B., T.P. Latchoumi 
    Abstract: Over the years, many techniques have been devised to predict or identify cardiovascular heart disease in advance. Datasets extracted from the UC Irvine (UCI) machine learning repository play a major role in predicting this disease. The extracted clinical datasets are huge, and not all of them are useful for predicting heart disease. Techniques developed over the decades have attempted to overcome this, but most resulting models are not accurate enough for clinical decisions because the proper dataset is not taken as input. This paper focuses on preprocessing the required dataset so that heart disease can be predicted accurately for clinical decisions: irrelevant data are removed, and the patterns that cause heart disease are identified. Finally, the selected datasets are analysed against the UCI repository, which is useful in designing a model that provides accurate results in predicting heart disease.
    Keywords: Data Mining; Genetic Algorithms; Data Preprocessing; Feature selection; Knowledge Discovery Database.

  • Determining the Effectiveness of Drugs on a Mutating Strain of Tuberculosis Bacteria by using Tuberculosis Datasets under a Secure Cloud based Data Management   Order a copy of this article
    by Rishin Haldar, Swathi Jamjala Narayanan 
    Abstract: Drug-resistant tuberculosis (TB) poses an alarmingly high risk of mortality, owing to the complex mutations that the bacterial genes undergo in response to anti-tuberculosis drugs. To study the gene regions where mutations have occurred in response to a specific drug, association mining, a machine learning technique, was first applied to established datasets to group the individual gene-drug pairs and their corresponding reported mutations. Second, a simple yet novel effectiveness factor is proposed that evaluates a gene-drug pair by incorporating both the frequency and the distribution of the mutations in a specific bacterial gene. The proposed factor was generated for a single gene against both single and multiple drugs. As the datasets provide mutations for a specific TB strain, H37Rv, the proposed factor helped in ranking the effectiveness of the anti-tuberculosis drugs for H37Rv. The proposed method can also be applied to any other TB strain, subject to the availability of datasets. The datasets, as well as the information generated by the proposed study, can readily be stored in a secure cloud storage system for either public or private access and retrieval.
    Keywords: Drug resistant Tuberculosis; Mtb; Association Mining; gene mutation; drug recommendation.

  • Support Vector Machine Model for Performance Evaluation of Intrusion Detection in Cloud System   Order a copy of this article
    by Ved Prakash Mishra, Balvinder Shukla, A.Jayanthila Devi 
    Abstract: Intrusion detection and prevention in real time is becoming a challenge in the current fast-moving digital world, with data and log details growing every minute. In this manuscript, a support vector machine (SVM) model is proposed and implemented that is efficient, fast, and capable of handling large datasets. The basic idea of the proposed model is derived from the finite Newton method for classification problems. Experimental and comparative studies of the proposed SVM against existing classification algorithms and related studies assess the efficiency of the proposed SVM classification algorithm.
    Keywords: Support Vector Machine; Intrusion Detection; Cloud System.
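For intuition, a linear SVM can be trained with a few lines of hinge-loss stochastic gradient descent (Pegasos-style). This generic sketch illustrates the classifier family only; the paper's model is based on the finite Newton method, which is not shown here:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style linear SVM: minimise the regularised hinge loss
    lam/2*||w||^2 + mean(max(0, 1 - y*<w,x>)) by SGD. Labels must be +/-1
    (e.g. attack vs. normal traffic in an intrusion detection dataset)."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]  # shrink (regulariser)
            if margin < 1:  # inside the margin: push toward this example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

In an intrusion detection setting, each `x` would be a feature vector extracted from connection logs, and the learned `w` separates malicious from benign traffic.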

  • Application of Data Mining on Cloud for Lung Cancer Prognosis   Order a copy of this article
    by Juliet Rajan 
    Abstract: Data is growing exponentially with the growing population. Today we build models to predict several diseases, and cancer is one of the major diseases requiring a model that can predict it at an early stage. Most of the time, cancer is diagnosed at a late stage, i.e., stage 4, whereas diagnosing cancer at stage 1 can increase the patient's survival rate by 85%. Hence the goal of this article is to predict cancer at stage 1 itself. Another challenge the article addresses is handling the huge amount of cancer data needed to build a model that performs accurate prediction.
    Keywords: Classification; Gradient Descent; Predictive Model; Support Vector machine; Cloud computing; Machine Learning; Precision; Recall; Classification Accuracy.

  • A Novel Approach Towards Tracing the parents of orphanage children and dead bodies in cloud and IoT Based Distributed Environment by integrating DNA databank with Aadhar and FIR databases   Order a copy of this article
    by Ved Prakash Mishra 
    Abstract: An Aadhar card is a unique and authentic identity card in India, used as valid identity proof for all types of day-to-day transactions, including sale and purchase, opening a bank account, air, train and bus tickets, and obtaining the benefits of government of India and state government schemes. An Aadhar card normally includes a person's fingerprints, thumbprints, iris images (left and right eye) and face image; for differently abled persons, such as blind, deaf or physically handicapped persons, it includes the face, fingerprints and thumbprints. The authors believe that it is the basic right of every child or person to know the names of his or her biological parents. Technology is transforming rapidly and growing at tremendous speed, and much has been achieved in identity verification, information technology and management; but considerable progress is still required in tracing the parents of orphaned children and the relatives of unclaimed, decomposed dead bodies. The parents of many orphaned children are alive and desperately searching for their children, yet these children are unable to meet their parents, and current technology is of little help. In this research work, the authors propose a novel technique in which the Aadhar database is integrated with the short tandem repeat (STR) part of a DNA databank and with first information reports (FIRs) lodged online at different police stations, to trace the parents of orphaned children and unclaimed, decomposed dead bodies using cloud computing, Internet of Things (IoT), spiral search and blockchain technologies.
    Keywords: Block-Chain Technologies; Cloud Computing Systems; Internet of Things; Orphanage Children; Short Tandem Repeat Part of DNA Sequence; Spiral Search.

  • Cloud Resource Management using Adaptive Firefly Algorithm and Artificial Neural Network   Order a copy of this article
    by S.K. Manigandan, Manjula S., Nagaraju V., Tapas Bapu B R, D. Ramya 
    Abstract: There has been a steady rise in the popularity of the cloud computing paradigm over recent years. Cloud computing can be defined in simple terms as a platform for providing distributed computing resources, which may include storage, bandwidth, memory space, processing elements and so on. These resources are rented to clients using the pay-per-use model. Demand for resources is not static: they can be requested on demand thanks to the growing reach of the internet. Several factors are critical to the success of a cloud framework, including availability, reliability and scalability, and these metrics differ according to perspective: cloud users want minimal response time and cost, while the cloud provider focuses on allocating cloud resources efficiently and minimising maintenance costs. Resource management is the practice of provisioning and managing cloud resources efficiently; it provides techniques to provision resources, schedule jobs and balance loads. This work provides a resource management technique for efficient provisioning of resources and scheduling of jobs for static and dynamic cloudlet requests.
    Keywords: Cloud computing; Cloudlet; Adaptive Firefly Algorithm; Artificial Neural Network.

  • AN INTELLIGENT CLOUD ASSISTANCE FOR HEALTHCARE SECTORS   Order a copy of this article
    by Jayanthiladevi A, Aithal P.S., Krishna Prasad K, Nandha Kumar K.G., Manivel Kandasamy 
    Abstract: E-healthcare systems have been shown to drastically facilitate healthcare monitoring, earlier intervention, disease modelling and evidence-based medical consultancy using medical image feature extraction and text mining. Owing to the resource constraints of wearable devices, it is essential to outsource the frequently generated healthcare information of individuals to cloud systems. Unfortunately, handing both computation and storage to an untrusted entity raises a series of privacy and security concerns. Prevailing approaches concentrate on finely tuned privacy-preserving statistical medical text analysis and access, and barely consider the dynamic fluctuation of health conditions or the analysis of medical images. In this work, an effectual and secure privacy-preserving dynamic medical modelling scheme is proposed as a healthcare service: a cloud-based e-healthcare solution offering automated functions to find nearby hospitals for proper treatment during disorder conditions. Predictive analysis over individual health information is performed using an adaptive bagging model to alert physicians and nurses during a health disorder. In an emergency, alerts to nearby healthcare centres are triggered and appropriate treatment is provided based on the individual's historical information, obtained from the unique ID produced by the healthcare service. Simulation is carried out in the MATLAB environment, and the performance analysis shows that the proposed model is superior to prevailing approaches.
    Keywords: Healthcare; cloud computing; healthcare services; cloud assistance; alert triggering.

  • Cloud Resources Allocation for Critical IaaS Services in Multi-Cloud Environment   Order a copy of this article
    by Driss Riane, Ahmed Ettalbi 
    Abstract: In this paper, we propose new algorithms to allocate cloud resources for a composition of IaaS services in a multi-cloud environment. First, we use the Gomory-Hu transformation to identify the critical components of the user request. Second, the proposed algorithms use computing and networking costs to select the most suitable clouds to host these critical components. Simulation results are presented to evaluate the performance of our algorithms.
    Keywords: Cloud Computing; Interoperability; Cloud Networking; IaaS Composition; Optimization; Multi-Cloud Computing.
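The cost-driven cloud selection step described in the abstract might look like the following sketch: rank candidate clouds for one critical component by combined computing and networking cost and keep the cheapest. The field names (`cpu`, `traffic_gb`, `cpu_price`, `net_price`) are hypothetical, since the abstract does not publish its cost model.

```python
def select_cloud(component, clouds):
    """Pick the cheapest cloud for one critical component, scoring each
    candidate by its compute cost plus the networking cost for the
    traffic the component exchanges (illustrative cost model)."""
    def cost(cloud):
        return (component["cpu"] * cloud["cpu_price"]
                + component["traffic_gb"] * cloud["net_price"])
    return min(clouds, key=cost)
```

For example, a component needing 4 vCPUs and exchanging 10 GB would be placed on whichever cloud minimises `4 * cpu_price + 10 * net_price`.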

  • AUTHENTICATION FOR CLOUD COMPUTING SYSTEM THROUGH SMARTCARD   Order a copy of this article
    by Albert Mayan John 
    Abstract: Cloud computing is a platform with wide consequences for information technology systems: its pay-per-use model can reduce expenditure significantly, and organisations are shifting from privately owned software and hardware towards this platform for its high flexibility. The main threat to cloud computing is security. Most cloud providers secure their systems against external attackers using secured connections and firewalls, but as soon as data is sent to an outside party, confidentiality becomes the main problem: illegal users who access the cloud server's resources can steal the data of legal users, and legal users may unknowingly access an illegal server. To protect user privacy, authorised cloud users need to verify the cloud server before accessing its services, and the cloud server needs to authenticate users' login requests to ensure they are legitimate. In this system we propose a virtual smart card ID generated with clutter techniques, which is used to detect fake cloud servers and to protect cloud data from hackers.
    Keywords: Cloud computing; Virtual smartcard ID; Clutter techniques.

  • Semi Convergent Matrix Based Neural Predictive Classifier for Big Data Analytics and Prediction in Cloud Services   Order a copy of this article
    by Rajasekar R, Srinivasan K 
    Abstract: Big data analytics is a technique for gathering, organising and analysing huge sets of records to discover patterns or useful information, and many recent research works target large-scale analytics of climate data. To overcome the remaining challenges, the Semi Convergent Matrix based Neural Predictive Classifier (SCM-NPC) technique is proposed, which provides efficient big data computation and data sharing in cloud services. Initially, SCM-NPC constructs a semi convergent matrix over distributed big data to improve the search accuracy for user-requested information. Next, it applies a neural predictive classifier to improve the prediction rate for climate data on cloud big data. Finally, it applies a MapReduce function to the neural classes, yielding efficient predictive analytics of climate conditions on cloud big data. The proposed SCM-NPC system is evaluated on parameters such as prediction rate, computation time and classification accuracy using Amazon EC2 cloud big data sets. The results show that SCM-NPC increases the prediction rate for climate conditions on big data and also reduces computation time compared with state-of-the-art works.
    Keywords: Big Data; Cloud Services; Semi Convergent Matrix; climate data conditions; Neural Predictive Classifier; MapReduce function.

  • Decentralized Erasure Code for Hadoop Distributed Cloud File Systems   Order a copy of this article
    by Mohana Prasad, Kiriti S., V.T.Sudharshan Reddy 
    Abstract: The Hadoop distributed file system (HDFS) was developed for the data-oriented model, where data archival also covers the removal of inactive data. Data archival in HDFS faces security issues, such as main data being deleted or moved elsewhere. To address these issues, we have developed a unique architecture for securing data movement in cloud-based HDFS that also predicts inactive data for future use. The administrator manually performs three types of activity, namely configuration, integration and recovery, to detect malicious behaviour in the distributed system; if any unwanted data enters, the system is configured with security-level programs. Cloud-based HDFS gives users convenient data access, and we chose this area to reduce both attacks and inconvenience. Unwanted intermediate activities such as data stealing, data moving, data altering and illicit data communication can spoil any network; to reduce them, we propose security measures in the system design, achieving better security levels and communication speed than existing systems.
    Keywords: HDFS; Security; Admin; Integration; Performance; Data Communication.

  • Radio Frequency based Periodic Cloud Data Analysis for Smart Farming   Order a copy of this article
    by Deva Brinda, M, KANNADASAN. R, Sivaram M 
    Abstract: Radio frequency based smart farming is revolutionising agriculture. Smart technologies demand different knowledge, skills and labour management among farmers, potentially changing the culture of farming from hands-on, experience-driven management to a data-driven approach. To meet the growing demand for food, farm productivity should be improved by predicting crop performance under diverse environmental conditions. The uptake of smart farming technologies over the past two decades has been most prevalent in the agricultural sector, and farming becomes smarter with automation and radio frequency technology. Radio frequency devices collect and manage data about crop performance, climate change, livestock welfare, resource shortages and atmospheric conditions that frequently influence farm production; through smartphone applications, this data can be retrieved anywhere, ubiquitously aiding live monitoring and end-to-end connectivity among all the parties concerned. With smart sensors and equipment, farmers can increase crop production at reduced cost and time: smart farming with sensors promises to expand efficiency while diminishing production costs and limiting ecological effects. The sensor data is collected and stored in the cloud, and farm analysis is performed by computing over this stored data, so that farmers can carry out farming at the proper time of year. Cloud-based storage is well suited to these computations, and various climatic conditions for farming are analysed from the cloud data.
    Keywords: Agricultural sector; Automation; Radio frequency; Smart farming; Sensor.

Special Issue on: Cloud Computing for Sustainable Intelligent Communications

  • Greedy Based Task Scheduling Algorithm for Minimizing Energy and Power Consumption for Virtual Machines in Cloud Environment   Order a copy of this article
    by Abdul Razaak MP, Gufran Ahmad Ansari 
    Abstract: Research in cloud computing has two significant elements here: virtual machine placement (VMP) and energy efficiency. This paper proposes applying evolutionary computing to VMP, which reduces the number of active physical servers; it also helps save energy by scheduling underutilised servers. A task scheduling process is recommended that minimises the energy consumed by active servers, with the task scheduler interlinked with the cloud server through VMP. The proposal uses the Minimization Algorithm for Active Physical Servers (MAPS), which controls the active information flowing between the cloud server and VMP. The recommendation is implemented in Java software. Based on the simulation results, the better-performing greedy algorithm is identified, resulting in improved energy efficiency.
    Keywords: Virtual machine placement; energy efficiency; cloud server; active servers.
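A minimal greedy placement in the spirit of minimising active physical servers can be sketched as first-fit decreasing bin packing: sort VMs by demand and open a new server only when no active one has room. This is an illustrative baseline under assumed uniform server capacity, not the paper's MAPS algorithm.

```python
def pack_vms(vm_cpu, server_capacity):
    """Greedy first-fit decreasing: place each VM (sorted by CPU demand,
    largest first) on the first active server with enough free capacity,
    opening a new server only when none fits. Fewer active servers means
    less energy drawn."""
    servers = []     # remaining capacity of each active server
    placement = {}   # vm index -> server index
    for vm, demand in sorted(enumerate(vm_cpu), key=lambda p: -p[1]):
        for s, free in enumerate(servers):
            if free >= demand:
                servers[s] -= demand
                placement[vm] = s
                break
        else:
            servers.append(server_capacity - demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)
```

For instance, VMs with demands [5, 4, 3, 2, 1] fit on two servers of capacity 8, whereas naive one-VM-per-server placement would keep five servers powered.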

  • Data Centric Redundancy Elimination for Network Data Traffic   Order a copy of this article
    by Sandhya Narayanan, Philip Samuel, Mariamma Chacko  
    Abstract: Network traffic on the internet is a challenging issue due to the increase in internet users: nowadays internet traffic grows exponentially every month, and this heavy traffic makes communication between networks difficult. We propose the Hashing Based Network Resilient Distribution (HBNRD) method to detect and eliminate duplicate data chunks in network-layer packets using a big data processing framework. HBNRD detects similar files transferred through the internet and removes redundant data chunks, resulting in fast communication, and the proposed model attains fault tolerance through its resilient distributed approach. This data-centric model can dynamically allocate resources and detect data chunk repetition occurring during computation. Experiments on the Center for Applied Internet Data Analysis (CAIDA) dataset show that HBNRD improves network performance by reducing internet traffic redundancy by 60%. The data-centric network traffic redundancy elimination model is fast, scalable and resilient.
    Keywords: Network Data Traffic; Resilient Distribution; Hashing; Big Data Analytics; Redundancy Elimination.
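The core of hash-based redundancy elimination, flagging chunks whose digest has already been seen, can be sketched as follows. Fixed-size chunking and SHA-256 are assumptions for illustration; the paper does not specify its chunking or hashing scheme, and a distributed deployment would shard the digest set across workers.

```python
import hashlib

def redundant_bytes(stream, chunk_size=4096):
    """Count the bytes in `stream` that repeat an already-seen chunk,
    keyed by SHA-256 digest. In a dedup pipeline each repeated chunk
    could be replaced by its (much shorter) digest."""
    seen, saved = set(), 0
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            saved += len(chunk)   # duplicate chunk: transmission avoidable
        else:
            seen.add(digest)
    return saved
```

Running this over two identical 4 KB chunks followed by a distinct one reports 4096 redundant bytes, i.e. one-third of the traffic is avoidable.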

  • Minimizing Power Utilization in Cloud Data Centers Using Optimized Virtual Machine Migration and Rack Consolidation.   Order a copy of this article
    by Hemanandhini I.G, Pavithra R, Sugantha Priyadharshini P 
    Abstract: Cloud computing is a disruptive technology used to maintain computational assets in large data centers over the internet. The cloud delivers on-demand applications and computing resources to its users, and a cloud data center can host various computing applications that execute from a few seconds to several hours. As the usage of cloud resources becomes more and more advanced, the need for data centers grows rapidly. These data centers consume huge volumes of electricity, contributing to environmental drawbacks such as carbon emission and global warming. Because the computers deployed in data centers work hard nonstop, they get extremely hot; several cooling systems must be deployed to remove the generated heat, which also increases maintenance cost. Here, the problem of high power usage in data centers is addressed by virtual machine migration and consolidation. Virtualisation makes it possible to shift a virtual machine (VM) from one server to another available server using VM migration; VMs are migrated to appropriate servers to decrease the total number of running physical machines. This paper not only reduces the number of currently running servers to cut down the power used in data centers, but also aims to shut down a considerable number of active racks so that unused routing and cooling equipment can be turned off, reducing data center power consumption further and contributing significantly to the environment. Our work uses the Modified Best Fit Decreasing (MBFD) algorithm, Particle Swarm Optimization (PSO) and a Hybrid Server and Rack Consolidation (HSRC) algorithm to consolidate servers and racks.
    Keywords: Cloud Computing; Scheduling; Virtualization; Virtual Machine; Physical Machine; Virtual Machine Migration; VM Consolidation.

  • Enhancing the Job Scheduling Procedure to Develop an Efficient Cloud Environment using Near Optimal Clustering Algorithm   Order a copy of this article
    by Suganya R, NIJU P. JOSEPH, Rajadevi R, Ramamoorthy S 
    Abstract: In this internet era, cloud computing is the major tool for processing the information exchanged on the web. As the cloud is a vital part of the current business world, it should be handled properly and utilised for efficient business processing. There are various problems in cloud computing that both consumers and service providers face in their day-to-day cloud activities, and the job scheduling problem plays a vital role among them. To provide an efficient job scheduling environment, efficient resource clustering in the cloud is necessary. Traditional cloud environments failed to concentrate on resource clustering; some existing systems have provided solutions to this problem, but not even near-optimal or feasible ones. In this regard, the proposed system concentrates on resource clustering by proposing an efficient resource clustering algorithm named Identicalness Split up Periodic Node Size (ISPNS) for the cloud environment. The proposed system is compared with existing systems to justify the performance of the proposed algorithm. As a result, it produces a near-optimal solution for the resource clustering problem, which will help provide an efficient job scheduling environment in the cloud in future.
    Keywords: Cloud Computing; Resource Clustering; Identicalness; split up; node size.
    DOI: 10.1504/IJCC.2023.10033597
     
  • Energy Saving Slot Allocation Based Multicast Routing in Cloud Wireless Mesh Network   Order a copy of this article
    by JeyaKarthic M, SUBALAKSHMI N 
    Abstract: In recent days, cloud computing has gained significant attention among researchers as it offers extensive technical support. Many techniques have been considered for cloud computing, but several parameters affect its performance, such as planning, security and timing issues. Previous methods focus on work programs based on priority activities, considering various traits to plan proposals for task planning, workflows and safety planning, and several approaches have been suggested to provide practical guidelines and protocols for serving client requests at cloud endpoints. This work proposes a method for virtual cloud planning in single or multiple data centers using simple protocols. A new Load Balanced Amortized Multi-Scheduling Algorithm (LBAM) is proposed, which assigns the next cloud task based on the active load on the cloud system. The proposed system calculates a multiple-attribute weight for each workplace and introduces an application source that changes in the application virtualisation environment: there are many data items, each with access to different resources, but their importance depends on support and usage. The system calculates the cloud data weight based on the allocation of data and its consequence for the processing efficiency of the cloud machine. This works very well in most configurations, as virtual machines hold the ability to process load equally and speedily with efficient memory management. A detailed comparative analysis is made with existing methods, namely the Service Level Agreement (SLA), Fast Parallel Grid Generation Algorithm (FPGGA) and DJ-Scheduling (DJS) algorithm. The obtained results indicate that the LBAM model is superior to the compared methods in several aspects.
    Keywords: Cloud Computing; Cloud services; Data center; Distributed system; Parallel processing; Load balancing; Scheduling; Virtual Machine.

  • Fog Computing based Public E-Service application in Service Oriented Architecture   Order a copy of this article
    by Mohamed M. Abbassy, Waleed M. Ead 
    Abstract: The fog computing framework provides tools for handling public e-service resources through a real-world testbed in a fog environment. The framework also facilitates communication between equipment, harmonisation of fog tools and devices, deployment of public e-services and the delivery of dynamic resources in the fog landscape. It is likewise able to respond to various system events, such as device access, device failure and device overload. In the framework evaluation, event management is assessed together with other significant measures, e.g. costs, deployment times and arrival service patterns. As a consequence, the architecture offers the tools needed to deal with the complexities of the fog environment and provides faster public e-service running times and significant cost benefits. The present paper outlines the current infrastructure and proposes a different model of public e-service infrastructure: a coordinated effort of fog computing with an intelligent communication protocol, incorporating a Service Oriented Architecture (SOA) and ultimately integrating with agent-based SOA. The proposed framework will be capable of interchanging data by decomposing quality of service (QoS) reliably and methodically, with low latency, less bandwidth and heterogeneity in less time.
    Keywords: Fog Computing Framework; Agent-based SOA; intelligent protocol of communication; main functionalities; fog landscape; arrival service patterns; public e-service; fog environment; quality of service (QoS).

  • A Novel Task Assignment Policies using Enhanced Hyper-Heuristic Approach in Cloud   Order a copy of this article
    by Krishnamoorthy N, Venkatachalam K, Manikandan R, Prabu P 
    Abstract: Cloud computing plays a vital role in all fields of today's business, and the processor-sharing server farm is one of the most used server farms in the cloud environment. The key challenge for such server farms is to provide an optimal scheduling policy for processing computational jobs in the cloud. Many scheduling policies have been introduced and deployed by existing approaches to build an optimal cloud environment; heuristic algorithms such as meta-heuristic and hyper-heuristic approaches have been the most frequently used scheduling algorithms in past years. These approaches work well only for limited types of tasks and resources in processor-sharing server farms. In the proposed system, novel task assignment policies are introduced by enhancing the hyper-heuristic approach for the low-task, high-resource case in the cloud environment. The results of the proposed approach are compared with existing approaches, and its performance is evaluated. As a result, the proposed enhanced hyper-heuristic approach performs well for processor-sharing server farms in the cloud environment.
    Keywords: Server Farms; Hyper-Heuristic; Makespan; Cloud Computing.
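A selection hyper-heuristic of the general kind this abstract builds on can be sketched as follows: several low-level assignment heuristics compete, and the one that improves the schedule is rewarded and chosen more often. The specific low-level heuristics, the roulette-wheel selection and the reward rule are illustrative assumptions, not the authors' enhanced policy.

```python
import random

def hyper_heuristic(tasks, vms, rounds=50, seed=1):
    """Selection hyper-heuristic for task assignment: each round, pick a
    low-level heuristic in proportion to its score, build a schedule with
    it, and reward it whenever it improves the best makespan seen."""
    rng = random.Random(seed)
    heuristics = {
        "min_load": lambda load: min(range(len(vms)), key=lambda v: load[v]),
        "fastest":  lambda load: max(range(len(vms)), key=lambda v: vms[v]),
        "random":   lambda load: rng.randrange(len(vms)),
    }
    score = {name: 1.0 for name in heuristics}
    best_assign, best_span = None, float("inf")
    for _ in range(rounds):
        # Roulette-wheel choice of a low-level heuristic by score.
        name = rng.choices(list(score), weights=list(score.values()))[0]
        load, assign = [0.0] * len(vms), []
        for t in tasks:
            v = heuristics[name](load)
            load[v] += t / vms[v]   # task length / VM speed
            assign.append(v)
        span = max(load)
        if span < best_span:
            best_assign, best_span = assign, span
            score[name] += 1.0      # reward the improving heuristic
    return best_assign, best_span
```

The hyper-heuristic layer never inspects the problem directly; it only learns which low-level rule has been paying off, which is what distinguishes it from a plain meta-heuristic.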

  • A New method for human Activity Identification using Convolutional Neural Networks   Order a copy of this article
    by Prakash P S, Balakrishnan S, Venkatachalam K, Saravana Balaji B 
    Abstract: Recent applications in the life-logging, body fitness monitoring and health tracking domains use the sensors embedded in smartphones to identify human activities by gathering human day-to-day behaviour. Human activity identification is becoming a major challenge, partly because of the wide variety of human activities and the major variation in how well a given activity can be performed; separating activities by their features is considered the critical task. This paper proposes novel techniques that automatically extract discriminative dimensions for human activity identification. In particular, we design a technique with Convolutional Neural Networks (CNN), used to capture the dependency and scale invariance present in the signal. A deep convolutional neural network (DCNN) consists of many neural network layers, typically alternating two types, convolutional and pooling; the depth of each filter increases from left to right in the network, and the last stage is typically made of one or more fully connected layers. Three human activities, walking, running and remaining still, were collected from smartphone sensors. The x-, y- and z-axis readings were transformed into column vectors of magnitude information, which were then used as input for training the CNN. Experimental results show that the CNN-based method achieves 93.67% accuracy, compared with 89.20% for the baseline random forest approach.
    Keywords: Convolutional Neural Network; People Activity Identification; Random Forest.
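The preprocessing step the abstract describes, turning x, y, z accelerometer readings into magnitude vectors for the CNN, can be sketched as below. The window and step sizes are illustrative assumptions; the paper does not state its segmentation parameters.

```python
import math

def magnitude_windows(samples, window=128, step=64):
    """Convert raw (x, y, z) accelerometer samples into fixed-size,
    half-overlapping windows of per-sample magnitude sqrt(x^2+y^2+z^2),
    the column-vector feature the abstract feeds into the CNN."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return [mags[i:i + window]
            for i in range(0, len(mags) - window + 1, step)]
```

Using the magnitude rather than the raw axes makes the feature invariant to phone orientation, which is one reason such pipelines favour it.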

  • PUF Based on Chip Comparison Technique for Trustworthy Scan Design Data Security against Side Channel Attack   Order a copy of this article
    by Shiny M I, Nirmala Devi M 
    Abstract: In every field of computational technology, including cloud computing, cryptography is vulnerable to different types of external threats. Scan-based testing is a well-known tool for testing integrated chips, but at the same time it can help attackers extract the secret code from the chip: the scan hardware acts as a platform for hacking secret data. Most existing solutions alter the conventional scan structure with a more complex design that focuses only on security and violates testability. In this paper, we propose a dynamic reconfiguration architecture using an embedded PUF design, which protects the chip from brute-force and Hamming-distance-based attacks with an optimal deployment configuration. Additional on-chip comparison and masking are also used to enhance security. The experimental results are evaluated on standard benchmark circuits.
    Keywords: Security; DFT; Physical Unclonable Function (PUF); On-chip Comparison; Hardware as a service; automatic reconfiguration.

  • Parallel Progressive Based Inductive Subspace and Fuzzy Based Firefly Algorithm for High Ensemble Data Clustering   Order a copy of this article
    by Karthika Dhanasekaran, Kalaiselvi K 
    Abstract: Recently, focus has turned to the progressive inductive semi-supervised clustering ensemble setting, where the common techniques are random hyperspace and Constraint Propagation (CP) methods. With the growth and digitalisation of every area, huge datasets are being created quickly, and clustering them is difficult for conventional sequential clustering methods because of the high computation time. Distributed parallel processing methods are consequently useful for meeting the quality and scalability constraints of clustering huge datasets, and several authors have tried to introduce new parallel clustering methods to solve this issue. In this work, the Parallel Incremental Support Semi-Supervised Subspace Clustering Ensemble (PIS3CE) algorithm is proposed, based on MapReduce, an easy yet potent parallel processing method. Depending on micro-clusters and correspondence relations, we introduce a clustering method that can be straightforwardly parallelised in MapReduce and completed in relatively few MapReduce rounds. In the PIS3CE procedure, centroid selection is performed using an Improved Support Vector Machine (ISVM) classifier. Ensemble cluster normalisation is done via a Fuzzy based Firefly Algorithm (FFA), and the normalised cut algorithm is retained to carry out high-dimensional data grouping. Outcomes demonstrate that PIS3CE performs well on three benchmark examples where the vector space is high-dimensional and enhances the results with high accuracy.
    Keywords: Clustering Incremental Ensemble Member Chosen (IEMC); Improved Support Vector Machine (ISVM); Constraint Propagation (CP); Parallel Incremental Support Semi-Supervised Subspace Clustering Ensemble (PIS3CE); MapReduce; Cluster ensemble; semi-supervised clustering and Data mining.

  • Secured File Transmission in Knowledge Management-Cloud (KM-C)   Order a copy of this article
    by Jayashri Nagasubramaniam, Kalaiselvi K. 
    Abstract: The success of an organisation primarily depends on continuous investment in learning and on acquiring new knowledge that creates new business and enhances performance and techniques. Knowledge management should therefore contribute to meeting, or even exceeding, the organisation's objectives. As modern methodologies and paradigms emerge, new techniques and efforts are needed to align businesses with them, specifically in the area of knowledge management. Nowadays, the cloud computing paradigm is popular and efficient because of the low cost, time and effort required to fulfil software development needs; it also provides excellent means to collect and redistribute knowledge. We discuss the risk factors in utilising cloud computing in the field of knowledge management systems, and their solutions.
    Keywords: Knowledge Management; Transmission; Risk Factor; Cloud Computing; Business and Technology; Cloud Computing in Knowledge; Cloud computing protection; Data and DES Encrypt; Organizations Objectives; Service-oriented Architecture (SOA); Cloud Computing (CC).

  • An Effective Mechanism for Ontology based Cloud Service Discovery and Selection using Aneka Cloud Application Platform   Order a copy of this article
    by Manoranjan Parhi, Binod Kumar Pattanayak 
    Abstract: Nowadays cloud computing is growing exponentially, and the preference for cloud services increases day by day due to their cost-saving benefits. These services mostly appear significantly identical in their functionality except for key attributes like storage, computational power and price. Until now there has been no uniform specification for defining a service in the cloud domain: to specify identical operations and publish services on their websites, different cloud service providers tend to use completely different vocabulary, which makes requesting a cloud service a challenging task. Hence, a reasoning mechanism is essential for service discovery that can resolve the resemblance across different services by inferring over a cloud ontology. In this paper, an effective ontology-based mechanism is proposed for discovering and selecting the most relevant cloud services, using the distributed cloud application platform Aneka PaaS.
    Keywords: Cloud Computing; Cloud Service Discovery and Selection; Quality of Service (QoS); Cloud Ontology; Aneka Platform as a Service (PaaS).

  • A Customer Churn Prediction Model in Telecom Industry Using Improvised_XGBOOST   Order a copy of this article
    by Swetha P, Dayananda R B 
    Abstract: The telecom industry has become part of humans' daily routine, and its rapid growth over the last two decades has produced tremendous competition among telecom service providers; further proliferation has made the survival of these providers in the market challenging. To stay stable, service providers should be aware of the features that make customers churn, or that identify those willing to churn. However, the performance of predictive models is highly affected when real-world data is considered, because the datasets are highly imbalanced. In this research work we develop a churn prediction model named Improvised_XGBoost: first, a pre-processing mechanism is developed for the various data; then, taking XGBoost as the base model, three distinctive algorithms are deployed for finding the optimal split of the decision tree, handling the large dataset and avoiding missing values. Furthermore, a feature function is developed for efficient feature handling; the feature function and XGBoost combined produce an efficient model for customer churn prediction. We evaluate the proposed model on two established and popular datasets, i.e. the South Asia GSM and churn-bigml datasets. The proposed model achieves almost absolute accuracy, more than 99%, across various performance metrics such as accuracy, precision, recall and F1-measure.
    Keywords: churn prediction; telecommunication; prediction model; Improvised_XGBoost.
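The "optimal split of the decision tree" component can be illustrated with the standard XGBoost exact greedy split search, which maximises the regularised gain over sorted feature values. This is the textbook formulation (gain = G_L²/(H_L+λ) + G_R²/(H_R+λ) − G²/(H+λ) over gradient/Hessian sums), not necessarily the authors' improvised variant.

```python
def best_split(x, g, h, lam=1.0):
    """Exact greedy split finding on one feature, XGBoost-style.
    x: feature values; g, h: per-sample gradients and Hessians of the
    loss; lam: L2 regularisation. Returns (threshold, gain)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    G, H = sum(g), sum(h)
    gl = hl = 0.0
    best_gain, best_thr = 0.0, None
    for a, b in zip(order, order[1:]):
        gl += g[a]
        hl += h[a]
        if x[a] == x[b]:
            continue  # cannot split between equal feature values
        gain = (gl * gl / (hl + lam)
                + (G - gl) ** 2 / (H - hl + lam)
                - G * G / (H + lam))
        if gain > best_gain:
            best_gain, best_thr = gain, (x[a] + x[b]) / 2.0
    return best_thr, best_gain
```

On residuals [-1, -1, 1, 1] over feature values [1, 2, 3, 4] the search correctly places the threshold at 2.5, between the two classes.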

  • A HYBRID ENCRYPTION FOR SECURE DATA DEDUPLICATION IN CLOUD   Order a copy of this article
    by Silambarasan Elkana Ebinazer 
    Abstract: Cloud Computing (CC) is a cost-effective platform for users to store their data on the internet rather than investing in additional storage devices. Data Deduplication (DD) is a process of eliminating redundant data in the cloud by leaving a single copy. This paper provides a hybrid encryption scheme using a Genetic Algorithm (GA) and Attribute-Based Encryption (ABE) to ensure secure data transmission in the cloud, while supporting data deduplication by using Message Locked Encryption (MLE). From the simulation output, the time requirement is reduced by up to 94% and bandwidth consumption by 43% compared with existing works. The proposed method proves quite effective in reducing storage and bandwidth costs.
    Keywords: Data Deduplication; Genetic Algorithm; Attribute-Based Encryption; Block Level Deduplication; Cloud Computing.
    DOI: 10.1504/IJCC.2023.10042950
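The role of Message Locked Encryption in deduplication, deriving the key from the message itself so that identical plaintexts encrypt to identical ciphertexts the server can deduplicate, can be sketched as follows. This is a toy convergent-encryption illustration built from SHA-256, not a production cipher and not the paper's GA/ABE construction.

```python
import hashlib

def mle_encrypt(plaintext):
    """Message-locked (convergent) encryption sketch: key = H(message),
    keystream = H(key || counter) blocks, ciphertext = plaintext XOR
    keystream. Equal plaintexts yield equal (key, ciphertext) pairs, so
    the storage server can keep a single copy. Illustration only."""
    key = hashlib.sha256(plaintext).digest()
    stream = bytearray()
    counter = 0
    while len(stream) < len(plaintext):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    cipher = bytes(p ^ k for p, k in zip(plaintext, stream))
    return key, cipher
```

Determinism is exactly what ordinary randomised encryption forbids and what deduplication requires; MLE accepts the resulting (well-studied) leakage that equal ciphertexts reveal equal files.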
     
  • Ad-Hoc Networks: New Detection and Prevention Approach to Malicious Attacks using Honeypot   Order a copy of this article
    by Avijit Mondal, Radha Tamal Goswami 
    Abstract: Traditional security systems fail when a novel attack enters the system. For efficient detection and prevention of malware threats, we propose an intrusion prevention framework that invigilates the entire network activity; known attack identification and performance analysis can be carried out through it, and after identification it sends information to the administrative authority concerned. A honeypot can be designed as an intrusion detection system, and in an ad-hoc network this honeypot technology can be used to collect malicious traffic activity. Our strategy is to track and prevent the dissemination of harmful activities in an ad-hoc network utilising honeypot technology.
    Keywords: Malware intrusion; wireless network (WN); Ad-Hoc Networks; New Detection and Prevention; Malicious Attacks; Honeypot.

  • Development of Innovative Cloud-based IoMD Architecture for Elder Population Monitoring   Order a copy of this article
    by Nasr Musaed Almurisi, Srinivasulu Tadisetty 
    Abstract: The rapid evolution of IoT technologies has facilitated the development of smart applications in different sectors. One of the most significant applications of IoT is the Healthcare System (HS), which receives tremendous attention from research academia and industry. Nowadays, the IoT paradigm allows several smart objects to be interconnected with medical devices, giving rise to the concept of the Internet of Medical Things (IoMT). In the past few years, many HS architectures and approaches have been proposed for integrating the internet with medical things; however, the performance of existing architectures decreases as medical data increases, and processing a large amount of data in the cloud requires more effective algorithms and innovative solutions. Therefore, this article proposes a novel cloud-based Internet of Medical Devices (IoMD) architecture to overcome the limitations of conventional HS. The presented framework is based on three main technologies, IoT, cloud computing and artificial intelligence, for the better development of innovative healthcare applications: it integrates IoT with medical devices, and artificial intelligence algorithms are implemented in the cloud to handle the received data. To test and validate the developed framework, we considered some daily activities of elderly people. Since medical data reach the cloud platform, we propose a neural network method called Deep LSTM that runs in the cloud and is capable of processing a large amount of data quickly and efficiently. A real dataset from the UCL Machine Learning Repository is used to train the proposed Deep LSTM for recognising daily activities, and analysis of the results confirms that our method can classify the activities with an accuracy of 98.60%.
    Keywords: IoT; IoMD; Artificial Intelligence; Cloud Computing; LSTM.
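    The abstract names a Deep LSTM but does not specify its architecture or weights; the gating arithmetic that every LSTM variant shares can nonetheless be sketched in a few lines. All parameters below are illustrative scalars, not the authors' trained model:

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step for a scalar input and state (didactic sketch,
    not the paper's Deep LSTM).  W, U, b each hold the four gate
    parameters: input (i), forget (f), output (o), candidate (g)."""
    i = _sigmoid(W["i"] * x + U["i"] * h_prev + b["i"])   # input gate
    f = _sigmoid(W["f"] * x + U["f"] * h_prev + b["f"])   # forget gate
    o = _sigmoid(W["o"] * x + U["o"] * h_prev + b["o"])   # output gate
    g = math.tanh(W["g"] * x + U["g"] * h_prev + b["g"])  # candidate state
    c = f * c_prev + i * g        # new cell state
    h = o * math.tanh(c)          # new hidden state
    return h, c

# run a short sensor reading sequence through the cell
W = {k: 0.5 for k in "ifog"}
U = {k: 0.1 for k in "ifog"}
b = {k: 0.0 for k in "ifog"}
h, c = 0.0, 0.0
for x in [0.2, 0.9, 0.4]:
    h, c = lstm_cell_step(x, h, c, W, U, b)
```

    A "deep" LSTM stacks such cells in layers, feeding each layer's hidden state sequence to the next; a final classifier maps the last hidden state to an activity label.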

  • Distributed multi-cluster dynamic Q-routing for large size traffic grids   Order a copy of this article
    by Imad Lamouik, Ali Yahyaouy, My Abdelouahed SABRI 
    Abstract: It is undeniable that we are witnessing the birth of the next generation of smart vehicles, which are increasingly capable of performing a multitude of driving-related tasks without any human intervention; starting from simple cruise control and lane following to complete autonomy of the vehicle, where the passenger only has to choose a destination and the vehicle's on-board computer takes care of the rest. However, to achieve full autonomy, a fast and reliable routing strategy must exist to ensure optimal path computation. Yet, given the large size of modern cities and the volume of daily traffic, most static and centralised algorithms have serious limitations and will not solve congestion problems. Therefore, in this article, we propose an architecture that exploits the power of machine learning, especially Q-routing, coupled with network clustering techniques, to offer a distributed routing solution in a real-sized network by partitioning the traffic grid into clusters of manageable size. Furthermore, we present simulation results showing that the proposed architecture offers an effective routing solution with significantly faster computation time.
    Keywords: traffic control; congestion; routing; network clustering; spectral clustering; Q-routing.
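    The Q-routing core the abstract builds on is the classic per-node update: when a node forwards a packet toward a destination via a neighbour, it refines its estimated delivery time using the neighbour's own best estimate. A minimal sketch (the clustering layer and the authors' exact parameters are not given in the abstract):

```python
def q_routing_update(Q, node, dest, neighbor, q_delay, s_delay,
                     neighbor_estimates, alpha=0.5):
    """One Q-routing update: node refines its estimated delivery time
    to `dest` via `neighbor`.
      q_delay            - time the packet waited in node's queue
      s_delay            - transmission time to the neighbor
      neighbor_estimates - neighbor's current Q-values toward dest
      alpha              - learning rate"""
    t = min(neighbor_estimates)        # neighbor's best remaining estimate
    target = q_delay + s_delay + t     # observed total-delay sample
    key = (node, dest, neighbor)
    old = Q.get(key, 0.0)
    Q[key] = old + alpha * (target - old)
    return Q[key]

# example: node "x" forwards toward "d" via neighbor "y"
Q = {}
est = q_routing_update(Q, "x", "d", "y", q_delay=1.0, s_delay=2.0,
                       neighbor_estimates=[5.0, 7.0])
```

    In the paper's distributed setting, each cluster would presumably run such updates locally, keeping per-node tables small.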

  • Self-evident Rapid and Scalable Fortification Encryption with Data Access Control in Multiuser Cloud Environments   Order a copy of this article
    by Mohan Nagamunthala, Manjula R 
    Abstract: Increasing internet connectivity is transforming small business owners into large-scale businesses. Cloud storage serves as the platform for transferring data between owners and their clients. This transfer can lead to data redundancy and data loss, so secure data transmission and deduplication in cloud storage are needed. To handle this, cloud storage providers offer encryption techniques. The proposed work applies encryption between cloud owners and clients. A novel auditor module ensures data integrity and security for the cloud owner and multiple cloud clients. The auditor integrity model controls data-upload activity between the clients and the cloud servers and generates proofs of ownership for the cloud owners. The proposed work prevents multiple clients from saving the same data copy on the cloud server, and this elimination improves the utilisation of cloud storage resources. Encryption is performed with hybrid block cipher algorithms such as Triple DES and Blowfish to increase security, and is analysed using Java software.
    Keywords: client; cloud server; owners; encryption; decryption; security; auditor.
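    The dedup-with-ownership-proof flow the abstract describes can be sketched with content fingerprints and a nonce challenge; this is an illustrative skeleton in Python (the paper works in Java with Triple DES/Blowfish, which are omitted here), and `upload`, `fingerprint`, and `ownership_proof` are hypothetical names:

```python
import hashlib
import secrets

def fingerprint(data: bytes) -> str:
    """Content fingerprint used as the deduplication index key."""
    return hashlib.sha256(data).hexdigest()

def ownership_proof(data: bytes, nonce: bytes) -> str:
    """Challenge-response proof of ownership: only a party holding the
    full plaintext can answer the auditor's fresh nonce."""
    return hashlib.sha256(nonce + data).hexdigest()

store = {}  # tag -> stored bytes (stands in for the cloud server)

def upload(data: bytes) -> bool:
    """Returns True if the bytes were stored, False if deduplicated."""
    tag = fingerprint(data)
    if tag in store:
        # duplicate: the auditor challenges the uploader before
        # granting access without storing a second copy
        nonce = secrets.token_bytes(16)
        expected = hashlib.sha256(nonce + store[tag]).hexdigest()
        assert ownership_proof(data, nonce) == expected
        return False
    store[tag] = data
    return True
```

    The encryption layer would sit on top of this: ciphertexts rather than plaintexts are fingerprinted and stored, so the server never sees client data.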

  • Resource-Aware Routing in Opportunistic Networks: Existing Protocols and Open Research Issues   Order a copy of this article
    by Rani, Amita Malik 
    Abstract: Opportunistic networks provide effective communication in challenging scenarios such as interplanetary communication and post-disaster situations. Various routing techniques in opportunistic networks have been devised to attain a high delivery ratio, low latency or low overhead. However, the nodes (devices) in opportunistic networks have limited battery, buffer space and bandwidth, so designing efficient routing protocols for opportunistic networks remains a challenging task. Several research efforts have been made in opportunistic networks, and several review papers exist presenting features of opportunistic networks, applications, mobility models, tools, challenges, taxonomies of routing, energy-efficient routing, buffer management policies and incentive schemes, but a comprehensive review of resource-aware routing appears to be missing. We therefore propose a novel taxonomy for categorising the resource-aware routing protocols proposed in this field. These routing protocols are mainly categorised into energy-aware, buffer-aware, bandwidth-aware, and hybrid routing, which consider the energy of nodes, buffer space, bandwidth, and multiple parameters, respectively, to make forwarding or routing decisions in opportunistic networks. Further, we discuss various existing resource-aware routing protocols with their strengths and weaknesses, leading to open research issues. This survey aims to identify the resources used in opportunistic networks and to help future researchers devise novel resource-aware routing protocols. The paper highlights open research issues in opportunistic networks and summarises all investigated resource-aware routing protocols.
    Keywords: routing; delay tolerant networks; resource-aware routing; opportunistic networks.
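    The survey's hybrid category — routing that weighs energy, buffer and bandwidth together — can be illustrated with a simple utility-threshold forwarding rule. This is a generic sketch of the idea, not any specific protocol from the survey; the weights and threshold are arbitrary:

```python
def should_forward(msg_size, encounter, w_energy=0.4, w_buffer=0.4,
                   w_bw=0.2, threshold=0.5):
    """Hybrid resource-aware forwarding decision: combine the
    encountered node's residual energy, free buffer and available
    bandwidth (each a fraction in [0, 1]) into one utility score and
    forward only when it clears a threshold."""
    # buffer-aware hard constraint: the message must physically fit
    if encounter["free_buffer_bytes"] < msg_size:
        return False
    utility = (w_energy * encounter["residual_energy"]
               + w_buffer * encounter["free_buffer_frac"]
               + w_bw * encounter["bandwidth_frac"])
    return utility >= threshold
```

    Energy-aware, buffer-aware and bandwidth-aware protocols correspond to setting all but one weight to zero; hybrid protocols tune all three.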

  • Hybrid Dense Matching Features for Cloud Based Face Recognition   Order a copy of this article
    by Shreekumar Thottappuram, K. Karunakara 
    Abstract: Cloud computing is a computing service delivered not on a local device but over an internet connection to data-centre infrastructure. Cloud computing also provides scalability, as it can increase the resources needed for larger data-processing tasks. In recent years, face biometrics have played an important role in biometric authentication, where security is the primary concern. The two major difficulties to be addressed in face recognition are illumination and pose variation. This work proposes a cloud computing-based face recognition technique that consolidates four-patch LBP and local landmark features into the acquired feature vector. An SVM is then used to recognise the individual. The feature sets used for training and testing are also made available in the cloud. During the training phase, the kernel parameters of the SVM are optimised with GWO to improve recognition performance. This method yields a maximum recognition accuracy of 99.00% on LFW and 98.0% on YTF.
    Keywords: Local Binary Pattern; feature extraction; Support Vector Machine; Grey Wolf Optimization; cloud computing.
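    The GWO step the abstract mentions — tuning SVM kernel parameters — follows the standard grey wolf optimiser loop: the three best wolves (alpha, beta, delta) pull the pack toward promising regions while a coefficient decays from 2 to 0. Below is a minimal sketch; the quadratic `surrogate` merely stands in for the SVM cross-validation error over (log C, log gamma), since the paper's exact objective is not given in the abstract:

```python
import random

random.seed(0)

def gwo_minimize(f, dim, bounds, wolves=10, iters=40):
    """Minimal grey wolf optimiser (illustrative, not the paper's
    exact GWO-SVM coupling)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        X.sort(key=f)                      # best wolves lead the pack
        alpha, beta, delta = X[0], X[1], X[2]
        a = 2.0 * (1 - t / iters)          # linearly decreasing coefficient
        for i in range(wolves):
            new = []
            for d in range(dim):
                pulls = []
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a
                    C = 2 * random.random()
                    D = abs(C * leader[d] - X[i][d])
                    pulls.append(leader[d] - A * D)
                new.append(min(hi, max(lo, sum(pulls) / 3)))
            X[i] = new
    return min(X, key=f)

# hypothetical stand-in for SVM validation error over (log C, log gamma),
# with a made-up optimum at (1.0, -2.0)
surrogate = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = gwo_minimize(surrogate, dim=2, bounds=(-5.0, 5.0))
```

    In the paper's pipeline, each candidate position would be scored by training the SVM on the cloud-hosted feature set and measuring validation accuracy.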