Helpful Extract & Load Practices for High-Quality Raw Data: #5/5
ELT is becoming the default choice for data architectures, and yet many best practices focus primarily on the “T”: the transformations.
But the extract and load phase is where data quality is determined for transformation and beyond. As the saying goes, “garbage in, garbage out.”
Robust EL pipelines provide the foundation for delivering accurate, timely, and error-free data.
Here are the top practices used and loved by industry experts that will drive up quality for all your data sets, no matter what tool you use.
Setting the Stage: We need E&L practices because “copying raw data” is more complex than it sounds.
The idea of ELT as a pattern sounds easy: “just copy over the source data first and then run your transformations over the raw data inside your own space.” However, both “copying” and “raw data” come with hidden obstacles.
“Copying” sounds easy. But source data changes, and unless you know what changed, copying turns out to be more complicated than you think. Imagine a production table of 150 million “orders” that comes with a creation “timestamp” but not with a “modified” date. And yes, these tables exist all over the place. How do you know that orders got modified, and if so, which ones? For instance, how would you know which orders got “canceled”, an operation that usually takes place in the same data record and just modifies it in place?
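To make this concrete, here is a minimal sketch of one common workaround: hash each row’s business attributes and compare against the hashes from the previous extract to spot in-place modifications such as cancellations. It assumes pandas DataFrames, and the column names (order_id, status, amount) are hypothetical; it is an illustration, not the only way to detect changes.

```python
import hashlib
import pandas as pd

def row_hash(row: pd.Series) -> str:
    """Hash the business attributes of a row so in-place changes can be detected."""
    payload = "|".join(str(row[col]) for col in ("order_id", "status", "amount"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def find_changed_orders(current: pd.DataFrame, previous: pd.DataFrame) -> pd.DataFrame:
    """Return rows that are new or whose content hash differs from the last extract."""
    current = current.assign(row_hash=current.apply(row_hash, axis=1))
    merged = current.merge(
        previous[["order_id", "row_hash"]],  # hashes saved from the previous run
        on="order_id",
        how="left",
        suffixes=("", "_prev"),
    )
    # A row counts as changed if it is new (no previous hash) or its hash moved,
    # e.g. an order that was "canceled" in place.
    changed = merged[merged["row_hash"] != merged["row_hash_prev"]]
    return changed.drop(columns=["row_hash", "row_hash_prev"])
```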
“Raw data” sounds clear. And yet extracting & loading implicitly means that you usually copy between two different technical systems A and B, and you need to adjust the data to fit system B. You ingest from REST APIs into Snowflake, or from an Oracle database into Redshift. Every time you change systems, you will need to modify the “raw data” to adhere to the rules of the new system: you need to do type casting, decide whether you want to “flatten JSONs”, and decide whether you want to add additional metadata to your data.
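As an illustration, here is a minimal sketch of that adjustment step, assuming pandas and hypothetical column and source names: it flattens a nested JSON payload, casts types explicitly, and attaches load metadata before the data is written to the target.

```python
from datetime import datetime, timezone
import pandas as pd

def prepare_for_target(records: list[dict]) -> pd.DataFrame:
    """Adjust 'raw' API records so they fit the target system's rules."""
    df = pd.json_normalize(records)                    # flatten nested JSON into columns
    df["order_id"] = df["order_id"].astype("int64")    # explicit type casting
    df["created_at"] = pd.to_datetime(df["created_at"], utc=True)
    # Add metadata so downstream transformations know where and when rows were loaded.
    df["_loaded_at"] = datetime.now(timezone.utc)
    df["_source_system"] = "orders_api"
    return df
```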
Simply “copying raw data” will bring up new questions every time you add a new data source or data target to your list, even if it is just a new table from the same production database you’ve always been copying from.
These practices will be your guide whenever you take on a new source for ingesting data into your data system.
Tip #5: Do not transform data on ingestion, not even slightly, unless you have to.
There are good reasons to transform data on the fly during ingestion, but almost every case you can think of also works without it. The two good reasons are legal and security requirements. For everything else, you should first ingest the data and then run a small transformation on the ingested data.
If you do opt for an “on the fly” transformation while ingesting data, make sure it is fail-proof. Try to only add or remove data, not modify it.
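Here is a minimal sketch of what such an additive/subtractive transformation can look like for a security reason, assuming pandas and a hypothetical email column: a hashed column is added and the raw PII column is dropped, while every other value is left untouched.

```python
import hashlib
import pandas as pd

def mask_email_on_ingest(df: pd.DataFrame) -> pd.DataFrame:
    """Additive/subtractive masking: add a hashed column, drop the raw one."""
    masked = df.copy()
    # Add a derived column instead of rewriting the original values in place ...
    masked["email_hash"] = masked["email"].map(
        lambda e: hashlib.sha256(e.encode("utf-8")).hexdigest() if isinstance(e, str) else None
    )
    # ... then subtract the sensitive raw column; everything else stays untouched.
    return masked.drop(columns=["email"])
```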
By default, if you do want to do transformations, it is better to create a mapping table (or ingest one) and do the join there.
You can do so with mechanisms like “dbt seeds”, or by ingesting a Google Sheet maintained by an external contributor.
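For illustration, here is a minimal sketch of the mapping-table approach done after ingestion, using pandas rather than dbt itself; the file paths, the status_mapping.csv seed, and the status_code column are all hypothetical. The point is that the raw data lands untouched and the “transformation” is just a join downstream.

```python
import pandas as pd

# Land the raw data first, untouched.
raw_orders = pd.read_parquet("landing/orders.parquet")

# The mapping table lives next to the transformations, e.g. a dbt seed CSV
# or an exported Google Sheet that a business contributor maintains.
status_map = pd.read_csv("seeds/status_mapping.csv")

# The "transformation" is just a downstream join, so the raw data stays
# reproducible and the mapping stays versioned and easy to correct.
orders_enriched = raw_orders.merge(status_map, on="status_code", how="left")
```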
These practices will keep your pipelines robust and thus serve high-quality, up-to-date data.