Title: Optimising data quality of a data warehouse using data purgation process

Authors: Neha Gupta

Addresses: Faculty of Computer Applications, Manav Rachna International Institute of Research and Studies, Faridabad, 121002, India

Abstract: The rapid growth of data collection and storage services has impacted the quality of the collected data. The data purgation process helps maintain and improve data quality when the data is subjected to the extract, transform and load (ETL) methodology. Metadata may contain unnecessary information, which can be classified as dummy values, cryptic values or missing values. The present work improves the EM algorithm with the dot product to handle cryptic data, implements the DBSCAN method with the Gower metric to detect dummy values, applies Ward's algorithm with the Minkowski distance to improve the results for contradicting data, and applies the k-means algorithm with the Euclidean distance metric to handle missing values in a dataset. These distance metrics improve data quality and help provide consistent data for loading into a data warehouse. The proposed algorithms help maintain the accuracy, integrity, consistency and non-redundancy of data in a timely manner.
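As a rough illustration of the last pairing the abstract names, the sketch below shows missing-value imputation via k-means clustering with the Euclidean distance: rows are clustered, and each originally-missing cell is overwritten with the value from its cluster centroid. The helper name kmeans_impute, the parameter choices and the toy data are illustrative assumptions, not the author's implementation.

    # Illustrative sketch only (not the paper's code): impute NaNs by
    # iteratively clustering rows with Euclidean k-means and replacing
    # each missing entry with its assigned cluster's centroid value.
    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_impute(X, n_clusters=3, n_iter=10, random_state=0):
        X = X.astype(float).copy()
        mask = np.isnan(X)
        # Start from column means so every row is complete for clustering.
        col_means = np.nanmean(X, axis=0)
        X[mask] = np.take(col_means, np.where(mask)[1])
        for _ in range(n_iter):
            km = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=random_state).fit(X)
            # Overwrite only the originally-missing cells.
            X[mask] = km.cluster_centers_[km.labels_][mask]
        return X

    # Toy usage: a small numeric table with gaps.
    data = np.array([[1.0, 2.0, np.nan],
                     [1.1, np.nan, 3.1],
                     [8.0, 9.0, 10.0],
                     [np.nan, 9.2, 10.1]])
    print(kmeans_impute(data, n_clusters=2))

Swapping the Euclidean metric in this loop for a precomputed Gower or Minkowski distance matrix and a different clusterer (e.g., scikit-learn's DBSCAN with metric='precomputed') would give analogous sketches for the other pairings the abstract describes.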

Keywords: data warehouse; DW; data quality; DQ; extract, transform and load; ETL; data purgation; DP.

DOI: 10.1504/IJDMMM.2023.129961

International Journal of Data Mining, Modelling and Management, 2023 Vol.15 No.1, pp.102 - 131

Received: 05 Aug 2020
Accepted: 07 Apr 2021

Published online: 04 Apr 2023
