Enhancing Data Quality at ETL Stage of Data Warehousing

Neha Gupta, Sakshi Jolly
Copyright: © 2021 |Pages: 18
DOI: 10.4018/IJDWM.2021010105

Abstract

Data usually comes into data warehouses from multiple sources in different formats and is broadly categorized into three groups (i.e., structured, semi-structured, and unstructured). Various data mining technologies are used to collect, refine, and analyze these data, which in turn raises the problem of data quality management. Data purgation occurs when the data is subjected to the ETL methodology in order to maintain and improve data quality. The data may contain unnecessary information and inappropriate symbols, which can be classified as dummy values, cryptic values, or missing values. The present work improves the expectation-maximization algorithm with the dot product to handle cryptic data, the DBSCAN method with the Gower metric to detect dummy values, Ward's algorithm with the Minkowski distance to improve the results of contradicting data, and the K-means algorithm with the Euclidean distance metric to handle missing values in a dataset. These distance metrics have improved data quality and helped in providing consistent data to be loaded into the data warehouse.
Article Preview

1. Introduction

1.1 Data Quality at ETL in Data Warehouse

Data warehousing is a collection of decision-support technologies focused on helping analysts make quicker and better decisions. A data warehouse is a subject-oriented, integrated, and non-volatile compilation of organized data. Building and using a data warehouse involves the following steps (a minimal pipeline sketch in Python follows the list):

1. Data Extraction – It is used to collect information from various heterogeneous resources.

2. Data Cleaning – It is used to spot and correct the flaws in the collected data.

3. Data Transformation – It converts the information from its legacy representation to the warehouse format.

4. Data Loading – It includes sorting, summarizing, consolidating, checking integrity, and building indices and partitions.

5. Refreshing – It is the method of propagating updates from the data sources to the warehouse.
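To make these steps concrete, the following is a minimal sketch in Python using pandas. The source file names, the column names (order_date, amount), and the cleaning rules are hypothetical assumptions for illustration, not the authors' method:

```python
import pandas as pd

# Hypothetical source files; the names are illustrative only.
SOURCES = ["sales_eu.csv", "sales_us.csv"]

def extract(paths):
    """Extraction: collect records from heterogeneous resources into one frame."""
    return pd.concat([pd.read_csv(p) for p in paths], ignore_index=True)

def clean(df):
    """Cleaning: drop duplicates and normalize common dummy markers to missing."""
    df = df.drop_duplicates()
    return df.replace({"N/A": pd.NA, "-": pd.NA, "": pd.NA})

def transform(df):
    """Transformation: convert legacy representations into the warehouse format."""
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    return df

def load(df, path="warehouse_fact_sales.parquet"):
    """Loading: write the integrated, checked records to warehouse storage."""
    df.to_parquet(path, index=False)  # requires pyarrow or fastparquet

if __name__ == "__main__":
    load(transform(clean(extract(SOURCES))))
```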

In many domains, data warehouse information can be studied and used for multiple purposes. Production strategies can be tuned by repositioning products and managing product portfolios against the year's sales and profit figures. The information is also used for market research, for example by analyzing customers' interests, buying times, and budget cycles. Such analysis plays a vital role in supporting customer relationship management, so that changes can be made according to demand and market conditions.

As discussed above, data is the main fuel for any kind of prediction and, indeed, for any kind of operation. The initial requirement is to understand the data warehouse components and framework, along with the operations that can be performed on the data. The information collected from the various resources may be historical or accumulated information, which together constitutes the data warehouse. There are various security and quality issues with data collected from different sources, since the same type of data may arrive from different repositories in different formats. To understand the concept, the data formats found in data warehouses are explained as follows:

1.1.1. Structured Data

This is typically historical data comparable to database information. Data in databases is stored in the form of rows and columns and can be manipulated with simple queries that target the variables of a particular row and column, so the data can be understood using basic database operations. Most such data is represented as text and other database-native formats.
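As a brief illustration of querying structured data by rows and columns, here is a small, self-contained Python sketch using sqlite3; the table and its values are invented for the example:

```python
import sqlite3

# In-memory database with an illustrative rows-and-columns table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "Asha", 120.0), (2, "Ben", 75.5), (3, "Asha", 42.0)])

# A simple query targeting particular rows and columns.
for row in con.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer"):
    print(row)
```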

1.1.2. Unstructured Data

This type of data usually has multimedia content: images, videos, and audio in different formats. Sometimes the concept of live streaming must also be considered. Live-streaming data can be captured using Apache Spark as the main base, since it provides components that handle live streaming information. Such data can be a major source for big data operations and cloud computing, and can also feed data management.
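A minimal sketch of capturing a live text stream with Apache Spark's Structured Streaming API follows; it assumes PySpark is installed and that some source is emitting lines on localhost port 9999 (e.g., via `nc -lk 9999`), both of which are assumptions for illustration only:

```python
from pyspark.sql import SparkSession

# Minimal Structured Streaming sketch; host and port are placeholders.
spark = SparkSession.builder.appName("live-stream-demo").getOrCreate()

lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Print each micro-batch to the console as it arrives.
query = lines.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```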

1.1.3. Semi – Structured Data

This format typically arises from web applications. Every web application has some data-transfer mechanism and a framework that carries the data from the user to the database. In this scenario, data quality is difficult to control, as different technologies may be used to transfer the information from the client.
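To show what makes such data "semi-structured", here is a short Python sketch parsing a JSON payload of the kind a web client might send; the field names are hypothetical:

```python
import json

# A semi-structured payload as a web client might send it; fields vary by record.
payload = '{"user": "u42", "cart": [{"sku": "A1", "qty": 2}], "coupon": null}'

record = json.loads(payload)

# Nested, optional fields are navigated by key rather than by fixed columns.
for item in record.get("cart", []):
    print(item["sku"], item["qty"])
```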

Decision making and the implementation of predictions rely on valuable information. Such information should satisfy valid quality metrics, and those metrics need to be enforced to maintain high data accuracy.

Extract, Transform, and Load (ETL) is the process that handles data quality on the way into the data warehouse, and data quality can be affected during pre-processing.

The quality of data can be compromised by each of these functions, such as extraction, transformation, cleaning, and loading, and data is affected by several processes depending on its environment. Even after cleaning and filling, there may be residual dirty data, which should be reported; this remaining dirty data can be the reason for failure during the data-cleaning process.
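A small pandas sketch of reporting residual dirty data after a cleaning pass follows; the frame, the dummy marker 999, and the column names are invented for illustration and do not reproduce the authors' method:

```python
import pandas as pd
import numpy as np

# Toy frame after a cleaning pass; some dirty values remain.
df = pd.DataFrame({
    "age":    [34, np.nan, 29, 999],        # 999 acts as a dummy value
    "income": [52000, 61000, np.nan, 48000],
})

# Normalize the known dummy marker to missing, then report what is left.
df = df.replace({"age": {999: np.nan}})
dirty = df[df.isna().any(axis=1)]
print(f"{len(dirty)} residual dirty rows to report:")
print(dirty)
```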
