Title: Decentralised erasure code for Hadoop distributed cloud file systems
Authors: K. Mohana Prasad; S. Kiriti; V.T. Sudharshan Reddy; Albert Mayan John
Addresses: Computer Science Engineering, Sathyabama University, India (all authors)
Abstract: The Hadoop distributed file system (HDFS) was developed for a data-oriented model, in which data archival is used to remove inactive data. Data archival in HDFS faces security issues, such as the primary data being deleted or moved elsewhere. To address these issues, we have developed a unique architecture for securing data movement in cloud-based HDFS; it also predicts inactive data for future use. The admin manually performs three types of activities, namely configuration, integration, and recovery, to detect malicious behaviour in the distributed system. If any unwanted data enters, the system is configured to run security-level programs. This approach can also serve the cloud platform in the future. Cloud-based HDFS gives users convenient data access. We have chosen this area to reduce both attacks and inconvenience. Unwanted intermediate activities include data stealing, data moving, data altering, and data communication; such malicious activities can compromise any network, and to mitigate them we have incorporated security measures into the system design. The proposed system achieves a greater security level and higher communication speed than the existing system.
Keywords: Hadoop distributed file system; HDFS; security; admin; integration; performance; data communication.
International Journal of Cloud Computing, 2022 Vol.11 No.5/6, pp.552 - 559
Received: 11 Aug 2019
Accepted: 28 Mar 2020
Published online: 02 Feb 2023