The ETL to ELT to EtLT Evolution, and data pipelines

Article 2 of 2

Originally published on our blog.

See the previous article: Data warehouse, data lake, and the features of ETL and ELT


For many years, data warehouses with ETL and data lakes with ELT have evolved in parallel worlds. But recently, ELT processing has become common in organizations that augment their legacy ETL tools with newer, cheaper ELT processes to populate their data warehouse schemas, laying the foundation for the EtLT methodology.

This change can play out over three phases. Can you tell which is in use at your organization today?

Phase 1: Sequence Data Lakes and Data Warehouses

A common approach to harnessing the flexibility and low cost of data lakes is to replace a costly legacy ETL tool with a data lake, and program transformations inside the data lake. However, to reduce the impact on the business, a data warehouse remains in use. This results in a combination of the ETL and ELT patterns:

  • Extract from Sources: This step is the same as the ELT extraction for data lakes described in the previous article, but differs from extraction performed by a traditional ETL tool in that there is no monolithic ETL engine. Shifting between these two patterns usually involves migrating to new tooling to acquire data from sources.
  • Transform in the Data Lake: Transformations in data lakes typically require deep data engineering skills, using languages like Python, PySpark, and SQL, and data engines like Spark (a minimal sketch follows this list).
  • Loading the Data Warehouse: The final result of this processing is then loaded into the data warehouse schema, where analysts and business users recognize their data structures and use them as before. The pattern remains the same even if the warehouse schema is migrated from an on-prem system to a cloud-based data warehouse.
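To make the transform-then-load handoff concrete, here is a minimal PySpark sketch of the Phase 1 pattern: cleanse and aggregate data inside the lake, then load the result into the warehouse schema analysts already know. The paths, table names, and JDBC connection details are illustrative placeholders, not a prescribed setup.

```python
# Minimal PySpark sketch of the Phase 1 pattern: transform in the data lake,
# then load the result into a warehouse schema. Paths, table names, and the
# JDBC connection details below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake_transform_then_load").getOrCreate()

# Extract: read raw files previously landed in the data lake (e.g. cloud storage).
raw_orders = spark.read.parquet("s3://lake/raw/orders/")

# Transform: cleanse and conform the data inside the lake using Spark.
orders = (
    raw_orders
    .dropDuplicates(["order_id"])                     # basic idempotency guard
    .filter(F.col("order_total") >= 0)                # simple business validation
    .withColumn("order_date", F.to_date("order_ts"))  # normalize to a reporting grain
)

daily_revenue = (
    orders.groupBy("order_date")
          .agg(F.sum("order_total").alias("revenue"))
)

# Load: write the conformed result into the warehouse schema analysts already use.
(daily_revenue.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://warehouse.example.com/analytics")  # placeholder
    .option("dbtable", "mart.daily_revenue")
    .option("user", "loader")
    .mode("overwrite")
    .save())
```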

Sequencing Data Lakes and Data Warehouses

This combination of patterns is commonplace in many enterprises today. Unfortunately, it has become a source of complexity and inefficiency across data groups, and of sprawl across infrastructures.

Phase 2: Consolidate ETL and ELT

The costs of cloud data warehouses have dropped to the point where maintaining a separate data lake makes less economic sense. In addition, some cloud data warehouses, such as Snowflake, are expanding their features to match the diverse and flexible data processing methodologies of data lakes.

As a result, transformation processing inside data warehouses is becoming more accessible, and maintaining a separate data lake is no longer needed. This is leading many enterprises to consolidate their data operations onto the newest generation of cloud-based data warehouses.

Consolidating ETL and ELT Inside the Cloud Data Warehouse


Meanwhile, some data lake platforms, such as Databricks, are adding structures that match the ease of use of data warehouses, reducing the benefits of loading data into a separate data warehouse. In some organizations, these changes are causing a tug-of-war over whether to retire data warehouses altogether.

Recognizing this convergence of data lakes and data warehouses, several vendors are now positioning their data platforms as “lakehouses,” all-in-one solutions capable of storing and processing any type of data.

Consolidating ETL and ELT Inside a Lakehouse


As the differences between the two patterns dissolve, a new balkanization of tools has emerged along the value chain instead:

  • Extraction tools like Fivetran that only move data from source systems into the data warehouse or data lake;
  • Transformation tools like dbt and Matillion that help analysts express their transformation logic as easy-to-write SQL, and then run that logic inside the data warehouse or data lake;
  • So-called “reverse ETL” tools that selectively reintegrate data analytics back into applications;
  • Orchestration tools like Airflow that manage the flow across all of these tools (a minimal DAG sketch follows this list).
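As an illustration of how these separate tools get stitched together, here is a minimal Airflow sketch, assuming a recent Airflow 2.x install. The Python callables are placeholders standing in for vendor integrations; a real deployment would trigger a Fivetran sync, a dbt job, and a reverse ETL sync through their respective provider packages rather than plain print statements.

```python
# A minimal Airflow sketch of the balkanized tool chain: one DAG that triggers
# extraction, runs warehouse-native transformations, and pushes results back out.
# The callables below are placeholders for the real vendor integrations.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def trigger_extraction():
    print("trigger managed extraction (e.g. a Fivetran connector sync)")


def run_transformations():
    print("run SQL transformations inside the warehouse (e.g. a dbt job)")


def publish_results():
    print("publish curated tables back to operational apps (reverse ETL)")


with DAG(
    dag_id="elt_across_tools",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=trigger_extraction)
    transform = PythonOperator(task_id="transform", python_callable=run_transformations)
    publish = PythonOperator(task_id="reverse_etl", python_callable=publish_results)

    # The orchestration layer's job is simply to sequence the other tools.
    extract >> transform >> publish
```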

As a result, distinct tooling for ETL and ELT is disappearing, and the concept of loading (the “L” in ETL and ELT) has diminished in importance. Instead, the growing scale of data operations has given rise to a new set of concerns for which the balkanized tools are ill-suited:

  • Metadata Generation: The extraction process acquires data from various sources, but it is also the perfect opportunity to capture rich metadata about it and make it available to validate and drive subsequent transformation logic.
  • Stepwise Transformation: Structuring data transformation in sequential steps provides clarity and control over sophisticated data operations such as business validation, data normalization, and analytics functions. Processed through transparent stepwise transformations, data can reach data warehouse integrity without ever leaving the environment. These steps can implement techniques familiar from traditional data warehouse ETL, including normalization, validation for nonrepudiation, referential integrity enforcement, and idempotency enforcement, but without the cost of monolithic ETL tools. The approach is strengthened by the use of Directed Acyclic Graphs (DAGs) and monitoring that supports timely developer interventions (a minimal sketch follows this list).
  • Data Validation: At any point during extraction and transformation, validation techniques can be applied to maintain data quality and improve confidence when data is accessed by analysts and business users.
  • Loading a Stable Schema: The final transformation steps can simply terminate in tables structured like a classic data warehouse model, where data analysts can run queries just as they did in their previous data warehouse. There may also be a need for selective propagation to other applications, similar to the reverse ETL features discussed before.
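A toy Python sketch of how stepwise transformation with validation might look when run inside the warehouse follows. It assumes a DB-API style connection object, and all table names and SQL are illustrative; a production pipeline would express the same steps as a monitored DAG rather than a simple loop.

```python
# Toy sketch of stepwise transformation with validation inside the warehouse.
# Assumes a DB-API style connection (`conn`); table names and SQL are illustrative.
from dataclasses import dataclass


@dataclass
class Step:
    name: str
    transform_sql: str   # builds the next intermediate or final table
    check_sql: str       # returns a single count of rows violating the rule


STEPS = [
    Step(
        name="normalize_orders",
        transform_sql="""
            CREATE OR REPLACE TABLE stage.orders_normalized AS
            SELECT DISTINCT order_id, customer_id,
                   CAST(order_ts AS DATE) AS order_date, order_total
            FROM raw.orders
        """,
        check_sql="SELECT COUNT(*) FROM stage.orders_normalized WHERE order_total < 0",
    ),
    Step(
        name="load_stable_schema",
        transform_sql="""
            CREATE OR REPLACE TABLE mart.daily_revenue AS
            SELECT order_date, SUM(order_total) AS revenue
            FROM stage.orders_normalized
            GROUP BY order_date
        """,
        check_sql="SELECT COUNT(*) FROM mart.daily_revenue WHERE revenue IS NULL",
    ),
]


def run_pipeline(conn):
    """Run each step in order, validating its output before moving on."""
    cur = conn.cursor()
    for step in STEPS:
        cur.execute(step.transform_sql)
        cur.execute(step.check_sql)
        violations = cur.fetchone()[0]
        if violations:
            raise ValueError(f"{step.name}: {violations} rows failed validation")
        print(f"{step.name}: ok")
    conn.commit()
```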

Phase 3: The EtLT Approach

While ETL has its roots in the rigidity of early data warehouses, ELT was born from the flexibility of data lakes. However, with the immense growth in data generation and the complexities involved in handling it, even ELT is seeing an evolution.

A new approach, EtLT (and even beyond, in its ongoing iterations of EtLTLTLT…), is surfacing in the data landscape. At its core, EtLT encapsulates "Extract, transform, Load, Transform," though we’ve also seen it called "Extract, tweak, Load, Transform." But this is more than just a quirky sequence of letters. It's indicative of a deeper shift where data, once prepared, isn't static but is continuously refined and republished in a data catalog to respond to diverse and evolving needs. This dynamic and ever-evolving nature of the EtLT approach, with its emphasis on continuous refinement, is a pivotal reason why the concept of the data mesh has gained such traction and popularity.

  • The EtL Phase: In this foundational stage, data engineers focus on extracting relevant information from varied and often unstructured sources. Their task is straightforward: take the raw data and transform it into a structured, coherent format. They prioritize timely data delivery and accuracy in this transformation, setting the stage for more in-depth processing.
  • The Indefinite TL Phase: Here, the process dives deeper. Data analysts, SQL developers, and AI engineers take over, translating intricate enterprise business rules into SQL statements. Their primary objectives are to perform in-depth analyses and ensure the resultant data aligns with business requirements and can power AI applications. In this continuous phase, data is always on the move. It's refined, republished, and reutilized, staying ever-relevant to evolving business needs (a conceptual sketch of this split follows the list).
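The division of labor between the two phases can be sketched in a few lines of Python. Everything here is illustrative: the `warehouse` object with its `insert_rows` and `run_sql` methods is a hypothetical stand-in for whatever loading client and SQL engine an organization actually uses.

```python
# Conceptual EtLT sketch. The "EtL" half is owned by data engineers (extract,
# lightly tweak, then load); the ongoing "T" half is owned by analysts as
# warehouse SQL that is re-run as business rules evolve. All names are
# illustrative; `warehouse` is a hypothetical client with insert_rows/run_sql.
import csv
import io


def extract(source_csv: str) -> list[dict]:
    """E: pull raw records from a source system (here, a CSV export)."""
    return list(csv.DictReader(io.StringIO(source_csv)))


def tweak(rows: list[dict]) -> list[dict]:
    """t: light, source-oriented fixes only -- typing, trimming, renaming."""
    return [
        {"order_id": r["id"].strip(), "order_total": float(r["total"] or 0)}
        for r in rows
    ]


def load(warehouse, rows: list[dict]) -> None:
    """L: land the tweaked rows in a raw warehouse table."""
    warehouse.insert_rows("raw.orders", rows)


# T...: business-rule transformations live as SQL inside the warehouse and are
# re-run indefinitely, republishing refined tables for analysts and AI apps.
BUSINESS_TRANSFORMS = [
    "CREATE OR REPLACE TABLE mart.big_orders AS "
    "SELECT * FROM raw.orders WHERE order_total > 100",
]


def transform(warehouse) -> None:
    for sql in BUSINESS_TRANSFORMS:
        warehouse.run_sql(sql)
```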

Harnessing the Power of Data Pipelines

The EtLT approach stands out as the prevailing pattern in today's data space. However, to implement this approach effectively, one needs the right technology in place. As James Densmore highlights in the "Data Pipelines Pocket Reference" (2021), data pipelines are integral to a successful EtLT deployment, to the extent that efficient execution and management of the EtLT model are now synonymous with the concept of data pipelines.

While data pipelines have long been tools in the arsenal of data engineers, they are now re-emerging as the most adaptable and potent data processing architecture.

  • Pattern Consolidation: Data pipelines converge the patterns of both data lakes and data warehouses, while addressing many new concerns that arise from the sheer scale of modern data operations. Data engineers and analysts can apply data pipeline methodologies to deliver the fluidity of data lakes along with the structured discipline of data warehouses.
  • Data Pipeline Platform: Data pipelines open the door to many different data processing objectives, and replace the need for narrow ETL and ELT processes and tools. Data engineers can organize extraction, transformation, and loading into any number of sequential operations, apply powerful new methodologies, and achieve unprecedented control and flexibility to meet emerging data analytics opportunities.
  • Accelerating Transparency and Velocity: With these new powers come new responsibilities, among them the need for unprecedented clarity and transparency into what is happening at every step in every pipeline at any time. Data teams may consider adopting a unified data pipeline platform with automation to unlock these capabilities with operationally tested frameworks, and achieve unprecedented velocity in delivering business insights.

Enterprises have an opportunity to undergo a metamorphosis. By rethinking traditional ETL's role as the mainstay of data handling, they can evolve their data practice to accommodate data pipelines, and power up their use of both data lakes and data warehouses to tackle the complexities of the modern data landscape.
