Data Profiling

What Is Data Profiling?

Data profiling is the process of reviewing source data, understanding its structure, content, and interrelationships, and identifying its potential for data projects.

Data profiling is a crucial part of:

  • Data warehouse and business intelligence (DW/BI) projects—data profiling can uncover data quality issues in data sources and show what needs to be corrected in ETL.
  • Data conversion and migration projects—data profiling can identify data quality issues, which you can handle in the scripts and data integration tools that copy data from source to target. It can also uncover new requirements for the target system.
  • Source system data quality projects—data profiling can highlight data that suffers from serious or numerous quality issues, as well as the source of those issues (e.g. user inputs, errors in interfaces, data corruption).

Data profiling involves the following activities (a code sketch follows this list):

  • Collecting descriptive statistics like min, max, count, and sum.
  • Collecting data types, lengths, and recurring patterns.
  • Tagging data with keywords, descriptions, or categories.
  • Performing data quality assessment, including the risk of performing joins on the data.
  • Discovering metadata and assessing its accuracy.
  • Identifying distributions, key candidates, foreign-key candidates, functional dependencies, and embedded value dependencies, and performing inter-table analysis.
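
As a rough illustration, here is a minimal pandas sketch of a few of these activities: descriptive statistics, data types and string lengths, recurring patterns, and key candidates. The table, column names, and sample values are hypothetical, purely for illustration.

import pandas as pd

# Hypothetical sample data.
df = pd.DataFrame({
    "customer_id": [101, 102, 103, 103],
    "phone": ["555-0101", "555-0102", None, "5550103"],
    "amount": [19.99, 5.00, 42.50, 7.25],
})

# Descriptive statistics: min, max, count, and sum for a numeric column.
print(df["amount"].agg(["min", "max", "count", "sum"]))

# Data types and string lengths.
print(df.dtypes)
print(df["phone"].str.len().describe())

# Recurring patterns: replace digits with '9' to reveal common formats.
patterns = df["phone"].dropna().str.replace(r"\d", "9", regex=True)
print(patterns.value_counts())

# Key candidates: columns whose values are all unique
# (nulls would need separate handling in a real profiler).
print([c for c in df.columns if df[c].is_unique])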

Types of data profiling

There are three main types of data profiling:

Structure discovery

  • Validating that data is consistent and formatted correctly, and performing mathematical checks on the data (e.g. sum, minimum or maximum). Structure discovery helps you understand how well data is structured—for example, what percentage of phone numbers do not have the correct number of digits (see the sketch after this section).

Content discovery

  • Looking into individual records to discover errors—for example, null, blank, or badly formatted values. Content discovery shows which rows contain problems and which systemic issues occur in the data.

Relationship discovery

  • Discovering how parts of the data are interrelated—for example, key relationships between tables or references between cells. Relationship discovery helps you understand which data is in use and avoid breaking dependencies when copying or reusing it.
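
To make the structure-discovery example concrete, here is a minimal sketch that measures what percentage of phone numbers do not have the expected number of digits. It assumes US-style 10-digit numbers; the sample values are hypothetical.

import pandas as pd

# Hypothetical phone numbers; assume a valid number has exactly 10 digits.
phones = pd.Series(["555-010-1234", "5550102", "555 010 9876", None])

# Strip non-digit characters, then compare against the expected length.
digits = phones.dropna().str.replace(r"\D", "", regex=True)
bad = digits[digits.str.len() != 10]

pct_bad = 100 * len(bad) / len(digits)
print(f"{pct_bad:.1f}% of phone numbers do not have the correct number of digits")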

Data profiling and data quality analysis best practices

Basic data profiling techniques (illustrated in the sketch after this list):

  • Distinct count and percent—identifies natural keys and the distinct values in each column, which can help process inserts and updates. Handy for tables without headers.
  • Percent of zero / blank / null values—identifies missing or unknown data. Helps ETL architects set up appropriate default values.
  • Minimum / maximum / average string length—helps select appropriate data types and sizes in the target database. Enables setting column widths just wide enough for the data, to improve performance.
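
As a rough sketch of these three techniques in pandas (the DataFrame and its columns are hypothetical sample data):

import pandas as pd

# Hypothetical sample data.
df = pd.DataFrame({
    "email": ["a@x.com", "b@x.com", None, "a@x.com"],
    "state": ["NY", "", "CA", "NY"],
})

def text_len(col):
    # String lengths, ignoring nulls.
    return col.dropna().astype(str).str.len()

profile = pd.DataFrame({
    # Distinct count and percent per column (natural-key candidates).
    "distinct_count": df.nunique(),
    "distinct_pct": 100 * df.nunique() / len(df),
    # Percent of blank or null values (missing or unknown data).
    "null_or_blank_pct": df.apply(
        lambda col: 100 * (col.isna() | (col.astype(str).str.strip() == "")).mean()),
    # Minimum / maximum / average string length.
    "min_len": df.apply(lambda col: text_len(col).min()),
    "max_len": df.apply(lambda col: text_len(col).max()),
    "avg_len": df.apply(lambda col: text_len(col).mean()),
})
print(profile.round(1))

This sketch only covers text columns; a fuller profiler would also count numeric zeros and vary the checks by data type.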
