Helpful Extract & Load Practices for High-Quality Raw Data: #3/5

ELT is becoming the default choice for data architectures, and yet many best practices focus primarily on the “T”: the transformations.

But the extract and load phase is where data quality is determined for transformation and beyond. As the saying goes, “garbage in, garbage out.”

Robust EL pipelines provide the foundation for delivering accurate, timely, and error-free data.

Here are the top practices used and loved by industry experts that will drive up quality for all your data sets, no matter which tool you use.

Setting the Stage: We need E&L practices because “copying raw data” is more complex than it sounds.

The idea of ELT as a pattern sounds easy: “just copy over the source data first, then run your transformations over the raw data inside your own space.” However, both “copy” and “raw data” come with hidden obstacles.

“Copying” sounds easy. But source data changes, and unless you know what changed, “copying” turns out to be more complicated than you think. Imagine a production table of 150 million “orders” that does come with a “timestamp” but not with a “modified” date. And yes, these exist, all over the place. How do you know that orders got modified, and if so, which ones? For instance, how would you know which orders got “canceled”, an operation that usually just “modifies” the existing data record in place?
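To make that obstacle concrete, here is a minimal sketch (one common workaround, not something the article prescribes) of detecting changed rows by fingerprinting each row on every extract and diffing against the previous run. The column name order_id is an illustrative assumption.

```python
import hashlib
import json

def row_hash(row: dict) -> str:
    """Stable fingerprint of a row's content (key order does not matter)."""
    return hashlib.sha256(json.dumps(row, sort_keys=True, default=str).encode()).hexdigest()

def detect_changes(previous_hashes: dict, current_rows: list) -> tuple:
    """Compare today's extract against the last run's hashes.

    previous_hashes: {order_id: row_hash} saved from the previous run.
    Returns (new_ids, modified_ids); rows that disappeared entirely would
    need a separate check against ids missing from current_rows.
    """
    new_ids, modified_ids = [], []
    for row in current_rows:
        order_id = row["order_id"]          # hypothetical key column
        fingerprint = row_hash(row)
        if order_id not in previous_hashes:
            new_ids.append(order_id)
        elif previous_hashes[order_id] != fingerprint:
            modified_ids.append(order_id)   # e.g. an in-place "canceled" update
    return new_ids, modified_ids
```

Note that at 150 million rows this still means re-reading the whole table on every run, which is exactly why “copying” is harder than it sounds.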

“Raw data” sounds clear. And yet extracting & loading implicitly means that you usually copy between two different technical systems A and B, and you need to adjust the data to match system B. You ingest from REST APIs and put the data into Snowflake, or move it from an Oracle database into Redshift. Every time you change systems, you will need to modify the “raw data” to adhere to the rules of the new system: you need to do type casting, you need to decide whether you want to “flatten” JSONs, and whether you want to add additional metadata to your data.
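As an illustration, here is a minimal sketch, assuming a Python-based EL job, of the kind of value-level adjustment that moving between systems forces on you. Which conversions you actually need depends entirely on the concrete source/target pair; this is an illustrative subset, not a standard mapping.

```python
import json
from datetime import date, datetime
from decimal import Decimal

def adapt_value(value):
    """Convert source-side types into representations a generic SQL target accepts."""
    if isinstance(value, (datetime, date)):
        return value.isoformat()      # ISO-8601 strings load safely almost everywhere
    if isinstance(value, Decimal):
        return str(value)             # avoid silent precision loss on the way in
    if isinstance(value, (dict, list)):
        return json.dumps(value)      # keep nested structures as raw JSON text
    return value

def adapt_row(row: dict) -> dict:
    """Apply the conversions to every field of one record before loading."""
    return {key: adapt_value(value) for key, value in row.items()}
```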

Simply “copying raw data” will bring up new questions every time you add a new data source or data target to your list, even if it is just a new table from the same production database you’ve always been copying from.

These practices will be your guide whenever you take on a new source for ingesting data into your data system.

Tip #3: Don’t flatten during EL, do it one stage later.

A lot of the source systems you ingest from will return arrays, JSON documents, or other nested objects with some kind of hierarchy that you will want to break down for further processing. But not at the ingestion level. Take the raw data and dump it as-is into your data system, then have a separate process do your “flattening”.

A very typical example is JSON objects. Your source might contain a large JSON object that you would like to have processed into individual columns inside your Snowflake database. This practice suggests first loading a raw table with just your metadata columns and one column, “JSON_blob”, containing the JSON object. In a second step you can then process this data into columns, as sketched below.
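A minimal sketch of what shaping a record for such a raw landing table can look like, assuming a Python-based loader. The JSON_blob column mirrors the example above; the metadata columns (_source, _extracted_at) are illustrative assumptions.

```python
import json
from datetime import datetime, timezone

def to_raw_row(api_record: dict, source: str) -> dict:
    """Shape one source record for the raw landing table:
    a few metadata columns plus a single JSON_blob column with the untouched object."""
    return {
        "_source": source,
        "_extracted_at": datetime.now(timezone.utc).isoformat(),
        "JSON_blob": json.dumps(api_record),   # no flattening, no field selection
    }

# Usage: raw_rows = [to_raw_row(r, "orders_api") for r in fetched_records]
# These rows go straight into the landing table; column extraction happens later.
```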

The reason for this is that flattening involves business logic. It involves knowing which properties are “always there”. If you flatten on ingestion, your ingestion process might break because one JSON object is NULL, or because one JSON object doesn’t come with an expected value. It is always easier to take the already ingested data and rerun your flattener than it is to rerun your ingestion and flattening process together.
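Here is a minimal sketch of what that second-stage flattener could look like, assuming the raw landing table from the previous example; the extracted field names (id, status, amount) are hypothetical.

```python
import json

def flatten_order(raw_row: dict) -> dict:
    """Second-stage flattener run over already-landed rows.

    Missing keys or NULL blobs produce NULL columns instead of breaking the load.
    """
    blob = raw_row.get("JSON_blob")
    data = json.loads(blob) if blob else {}   # tolerate NULL / empty blobs
    return {
        "order_id": data.get("id"),           # .get() -> NULL column instead of a crash
        "status": data.get("status"),
        "amount": data.get("amount"),
        "_extracted_at": raw_row.get("_extracted_at"),
    }

# If the mapping turns out to be wrong, fix flatten_order and rerun it over the
# raw table -- the extract itself never has to be repeated.
```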

Additional tip: the same practice leads to avoiding type casting on ingestion (if at all possible). I would recommend doing type casting after ingestion as well.
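A small sketch of what deferred type casting can look like, assuming values were landed as-is; the “amount” field is again a hypothetical example.

```python
def cast_amount(value):
    """Post-load casting: land everything as-is, cast in the later step.

    A value that does not parse becomes NULL (and can be logged) rather than
    failing the extract.
    """
    try:
        return float(value) if value is not None else None
    except (TypeError, ValueError):
        return None   # or route the offending row to a quarantine table
```

Because the raw value is still sitting in the landing table, a wrong cast only means rerunning this step, never re-extracting from the source.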
