Medallion Architecture framework within the Microsoft Fabric (Bronze Layer) - Part 1
Generated by https://deepai.org


It's well known that Microsoft Fabric amalgamates various data platforms and solutions into a single, integrated, and efficient experience. Measured against what many call the Modern Data Warehouse architecture, Fabric's capabilities are an impressive achievement. Data ingestion can be accomplished in at least three distinct ways, and the data model served to end users can draw on multiple data sources, all integrated seamlessly. Fabric is also inclusive: if data resides on another cloud provider, we can connect to GCP or AWS and view the data without moving it (using shortcuts). In this post, however, I would like to show how to implement the Medallion Architecture on the Fabric platform easily and intuitively.

Medallion Architecture, a brief explanation

The Medallion Architecture is a design pattern for data storage and pipeline management in Azure Databricks and Delta Lake. The term "Medallion" metaphorically describes the three layers of the architecture:

  1. Bronze Layer (Raw Layer): This is the ingestion layer, where raw data are stored as they are, without any transformation or cleansing. Data from various sources are loaded into this layer, providing a single repository for storage before processing.
  2. Silver Layer (Clean Layer): In this layer, the raw data from the Bronze layer is cleaned, transformed, corrected, and enriched. The data is formatted, inconsistencies are addressed, and transformations are applied to make the data more useful for analysis and business processes.
  3. Gold Layer (Business Layer): The final layer contains highly refined data ready for high-performance reporting, advanced analytics, and decision-making. These reliable and accurate data support critical operations and business intelligence.

The Medallion Architecture is appreciated for its structured and modular approach, which allows for efficient maintenance, facilitates data governance, and enhances data quality and integrity throughout the analytical process.

I won't go into an extensive explanation of the Medallion Architecture; instead, I'll use metaphors to aim for clarity. I've left a link here if you'd like to go into more detail: link.

The relationship between Medallion Architecture and the metals

The hierarchy of bronze, silver, and gold is a tradition dating back to antiquity, commonly used to signify levels of achievement, with gold deemed the most precious, followed by silver and then bronze. In historiography, the 'Metal Ages', such as the Bronze Age and the Iron Age, mark historical periods characterized by the use of tools and weapons crafted from these metals. Bronze was the first of these metals to be widely utilized, succeeded by silver, while gold has always been rare and valued from the beginning. Let's look at it from different perspectives:

  1. Historical Value and Rarity: Historically, gold has been considered more valuable than silver, and silver more valuable than bronze. This valuation is attributed to their rarity, durability, and desirable physical properties such as lustre and malleability.
  2. Use in Currency: Gold and silver have been utilized to mint coins across numerous cultures due to their esteemed value and lasting nature. Bronze, though also used in coinage, was of lesser value.
  3. Cultural and Symbolic Associations: In many societies, gold has been linked to the divine and the sublime owing to its radiant sheen and its association with the colour of the sun. Silver and bronze also have symbolic meanings but are typically regarded as less prestigious than gold.
  4. Awards and Recognition: The use of these metals to denote first, second, and third place dates back to the modern Olympics, which began awarding these medals in the Summer Games of 1904. This tradition further solidified the hierarchy of these metals as standard ranking indicators.

Detail of "Portrait of Adele Bloch-Bauer I"

What is modern data architecture?

The concept of modern data architecture indeed progresses beyond what we've seen in the works of Ralph Kimball and Bill Inmon, but this advancement rests on new technologies that were absent during the era of these two giants. Those who have read 'The Data Warehouse Toolkit' will understand precisely what I mean. If you have not read it: Inmon and Kimball are the key references in data technology, particularly in data warehousing and business intelligence.

Ralph Kimball is known as the pioneer of dimensional data warehousing design. As I mentioned, he authored several influential books, including "The Data Warehouse Toolkit," a foundational guide for data warehousing professionals. Kimball's dimensional model is widely adopted in the industry and is valued for its simplicity and effectiveness in organizing data for analysis.

On the other hand, Bill Inmon is known for advocating the corporate data warehouse approach. He proposed the concept of a centralized corporate data warehouse, where all of an organization's data is integrated into a single location. His approach emphasizes data consistency and quality, aiming to create a single, reliable source for analysis and decision-making.

While Inmon and Kimball have different approaches to data warehousing, both have contributed significantly to the field, providing frameworks and methodologies that have helped shape how organizations handle their data for analysis and business intelligence. Their ideas remain fundamental for professionals and companies seeking to extract valuable insights from their data.

Looking at the Medallion Architecture allows us to recognize much of what Kimball and Inmon laid out in their writings, but current technologies enable us to go further, exponentially expanding data exploration capabilities.

Where does the old meet the new approach? Reexamining the layers available in the Medallion Architecture, let's delve into our comparisons, and please feel free to disagree or add new comparisons here.

Modern data architecture harnesses state-of-the-art data infrastructure to provide scalable value using the latest cloud computing technologies. It aims to facilitate seamless storage, transformation, and large-scale data ingestion. Inspired by advancements in cloud storage and processing, modern data architecture leverages cloud-based services and technologies to deliver significant value in the realm of big data.

Car Dealership model

Imagine you're the owner of a car dealership. At this dealership, we offer more than just car sales; there's also a technical assistance service, an accessories shop, automotive aesthetic treatments, and what matters most—our customers. In our car dealership, we operate an ERP system, but in line with the reality of many businesses, we also have various parallel systems functioning alongside it.

Let's take into account four sources within this model. We have customer data managed in Salesforce, while our core financial processes are streamlined within SAP Business One. In addition, the wizards in our finance department perform unimaginable feats with Excel spreadsheets. Lastly, we utilize a proprietary system dedicated to overseeing vehicle maintenance.

Staging Area or Landing Zone / Bronze Layer

Although each department believes it already has the necessary integrations, the recommended approach in this scenario is to copy raw data from their original sources to a specific location, which will serve as the starting point of our journey. In traditional methodologies, this was referred to as the 'Staging Area,' while in Modern Data Architecture, it is known as the 'Landing Zone.'

Even though the Landing Zone isn't depicted in the Medallion Architecture diagram, creating a designated area for incoming data is expected. The Bronze Layer also serves as an arrival point; your approach will depend on your strategy. Given that the Lakehouse enables us to store a large volume of data at a meagre cost, I recommend setting up a Landing Zone organized by specific folders for each data source if you have many of them.
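As a sketch of that recommendation, a Landing Zone organized by source might use one folder per system, with dated subfolders for each ingestion run. The root path, folder layout, and system names below are assumptions drawn from the dealership example, not a Fabric convention:

```python
from datetime import date
from pathlib import Path

# Hypothetical source systems from the car dealership example
SOURCES = ["salesforce", "sap_business_one", "finance_excel", "maintenance_system"]

def create_landing_zone(root: str, run_date: date) -> list[Path]:
    """Create one folder per source system, partitioned by ingestion date."""
    created = []
    for source in SOURCES:
        folder = Path(root) / "landing" / source / run_date.isoformat()
        folder.mkdir(parents=True, exist_ok=True)
        created.append(folder)
    return created

folders = create_landing_zone("lakehouse_demo", date(2024, 1, 15))
for f in folders:
    print(f)
```

In OneLake you would create the same layout under the Lakehouse's Files area; the date partition makes each ingestion run traceable and repeatable.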

For the Bronze Layer, the strategy should involve selecting only the columns you need from each source without performing any transformations. Thus, in this model, the Bronze Layer will contain exactly what is present in the Landing Zone, albeit with a reduced number of columns – or potentially the same amount, depending on the requirements.
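To illustrate the "columns only, no transformations" rule, here is a minimal local stand-in in Python. The file names and column list are fabricated for the example, and in Fabric you would use a dataflow, pipeline, or notebook rather than raw file I/O:

```python
import csv
from pathlib import Path

def land_to_bronze(landing_file: str, bronze_file: str, keep_columns: list[str]) -> int:
    """Copy rows from the Landing Zone to the Bronze layer,
    keeping only the requested columns and applying no transformations."""
    with open(landing_file, newline="") as src, open(bronze_file, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=keep_columns)
        writer.writeheader()
        count = 0
        for row in reader:
            # Values are written exactly as they arrive -- no cleansing here.
            writer.writerow({col: row[col] for col in keep_columns})
            count += 1
    return count

# Tiny demonstration with a fabricated finance spreadsheet export
Path("landing_finance.csv").write_text(
    "invoice_id,amount,currency,internal_note\n"
    "1001,250.00,EUR,ignore me\n"
    "1002,99.90,EUR,draft\n"
)
rows = land_to_bronze("landing_finance.csv", "bronze_finance.csv",
                      ["invoice_id", "amount", "currency"])
print(rows)  # 2
```

The point is the shape of the operation: column projection is allowed, but any cleansing or enrichment waits for the Silver layer.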

This is the initial step toward setting up your Lakehouse. While we're familiar with this process in traditional architectures, where we'd duplicate these data using ETL (Extract, Transform, Load) tools with an emphasis on the 'E' for extraction, the question arises: how does this work within Microsoft Fabric?

Let's discuss Microsoft Fabric - Bronze Layer.


Applying Medallion Architecture using Microsoft solutions is relatively straightforward. The approach aligns with the methodology's recommendations, and with Microsoft Fabric, you have numerous pathways to deploy this architecture. Below is an image taken from the Microsoft website.

Picture from Microsoft website

Returning to our hypothetical scenario of the car dealership, all our source systems would serve as the data sources in this image. Initially, we would need to load this data into our Bronze Layer following a traditional Data Warehouse approach. With Microsoft Fabric, you can perform this data ingestion and movement using three primary tools: pipelines, dataflows, or notebooks.

Below are two approaches to organizing the artifacts we currently have. In the Microsoft Fabric environment, as shown in Image 1, I created one folder to manage the source systems' data sources and another folder to represent my Bronze layer.


Image 1 - Artifacts organization


In Image 2, you'll see another approach where we've already included the Data sources within the Bronze Layer folder.

Image 2 - Artifacts organization

To be honest, up to this point, the approach you choose matters little as long as it aligns with your organization's Data Governance practices and the standards that must be followed. At this moment, you might be wondering: where's the magic? Well, there's no magic yet, but Microsoft offers an exciting feature called Shortcuts. With them, you can create connections to your data sources and read the data in place, without any data movement... and this is where the magic begins.

Let's not delve too deeply into this functionality here, but we'll certainly be using it, especially in multi-cloud environments or where data connections are needed in distributed settings, such as multinational corporations with operations in multiple countries.

Following the model from Image 1 of our example, I'll read the data from the financial spreadsheet system and make it available in the Bronze layer folder.

As mentioned, you have three ways to perform this data movement or mirroring (pipelines, dataflows, or notebooks). I'll provide examples for all three to evaluate each one. In the end, we'll have three data points in our Bronze layer folder, each representing a method.


Let's discuss when to use Dataflow, Pipeline, or Notebook in Microsoft Fabric.

Dataflow

  • Scenario: Data ingestion, data transformation, data structuring, data profiling.
  • Primary Developer Persona: Data engineer, data integrator, business analyst.
  • Primary Developer Skill Set: ETL, M, SQL.
  • Data Volume: Low to high.
  • Development Interface: Power Query.
  • Sources: Over 150 connectors.
  • Destinations: Lakehouse, Azure SQL Database, Azure Data Explorer, Azure Synapse Analytics.
  • Transformation Complexity: Low to high (over 300 transformation functions).
  • Description: Dataflows provide a low-code, Power Query-based experience for ingesting, cleaning, and preparing your data, with hundreds of built-in transformation functions.

I highly recommend using Dataflow. If you know Power Query and have created a report in Power BI, you are ready to use it; here, you'll put all of that into practice. It's a highly user-friendly and visually appealing tool. Anyone can do something in Dataflow, whether reading a file or writing it to another location. Congratulations, Microsoft.

Pipeline

  • Scenario: Data lake and data warehouse migration, data ingestion, lightweight transformation.
  • Primary Developer Persona: Data engineer, data integrator.
  • Primary Developer Skill Set: ETL, SQL, JSON.
  • Data Volume: Low to high.
  • Development Interface: Wizard, canvas.
  • Sources: Over 30 connectors.
  • Destinations: Over 18 connectors (Lakehouse, Azure SQL Database, Azure Data Explorer, Azure Synapse Analytics).
  • Transformation Complexity: Low (type conversion, column mapping, file merge/split, hierarchical levelling).
  • Description: Copy activity is a low-code and no-code option to move petabytes of data from various sources to lakehouses and warehouses, either ad-hoc or through scheduling.
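As a rough sketch of what a Copy activity definition looks like under the hood, consider the JSON fragment below. This is an illustrative simplification, not a verbatim Fabric pipeline export; the activity, dataset names (`LandingFinance`, `BronzeLakehouse`), and type strings are assumptions for the dealership example:

```json
{
  "name": "CopyFinanceToBronze",
  "type": "Copy",
  "inputs": [ { "referenceName": "LandingFinance", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "BronzeLakehouse", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "DelimitedTextSource" },
    "sink": { "type": "LakehouseTableSink" }
  }
}
```

In practice you rarely hand-author this JSON: the copy wizard generates it for you, which is exactly why pipelines shine for high-volume, low-transformation movement.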

I'm a fan of this option: a significant factor that leads me to choose a pipeline over a dataflow is when I have a low level of data transformation and a high volume of data.

Notebook

  • Scenario: When you need to execute specific parts of code, pass parameters, and have more control over logic.
  • Description: You can call a dataflow from a notebook, but a common pattern is to orchestrate both from a pipeline: run the notebook code as one pipeline activity and the dataflow as another.

This is where the magic unfolds! In this space, the possibilities are limitless: you can manipulate your data in any way you desire using code, whether SQL, Spark, or Python. There have been instances where notebooks were indispensable for data manipulation, and I've executed calls from these notebooks within a Dataflow. However, coding is the only way to genuinely leverage this functionality.
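In a Fabric notebook you would typically use PySpark against Lakehouse files; the stdlib sketch below only mimics the shape of such a parameterized ingestion cell (the function name, parameters, and lineage column are invented for illustration, and in a real notebook the body would be a `spark.read(...).select(...)` over OneLake):

```python
import csv
import io

def ingest_source(raw_text: str, keep_columns: list[str], source_name: str) -> list[dict]:
    """Parameterized ingestion step: project the requested columns and
    tag each record with its source system for lineage."""
    reader = csv.DictReader(io.StringIO(raw_text))
    rows = []
    for row in reader:
        record = {col: row[col] for col in keep_columns}
        record["_source"] = source_name  # lineage metadata added at ingestion
        rows.append(record)
    return rows

# Parameters like these could be passed in from a pipeline run
raw = "invoice_id,amount,currency\n1001,250.00,EUR\n"
bronze_rows = ingest_source(raw, ["invoice_id", "amount"], "finance_excel")
print(bronze_rows)
```

The value of the notebook route is exactly this: parameters, branching, and reusable functions, which neither the copy wizard nor Power Query expresses as naturally.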


Let's go ahead and wrap up.

To conclude the topic of data ingestion in the bronze layer, let's list the critical aspects of the bronze layer grouped into four items:

  • Raw Data Ingestion:

  1. The bronze layer is where raw data from various sources is ingested.
  2. It retains the data in its original, unprocessed state.
  3. Data can come from batch processing or real-time streaming.
  4. Historical records are maintained, allowing for system state recreation.

  • Incremental Updates and Growth:

  1. Over time, the bronze layer accumulates data incrementally.
  2. New data is added, capturing changes and additions.
  3. This layer provides the foundation for subsequent processing.
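The incremental, append-only behaviour described above can be sketched as a toy store that stamps each batch with an ingestion time, a simplification of what Delta tables give you in the Lakehouse (the class and column names are invented for illustration):

```python
from datetime import datetime, timezone

class BronzeTable:
    """Toy append-only store: new batches are added, nothing is overwritten,
    so earlier system states can be recreated by filtering on _ingested_at."""

    def __init__(self):
        self.rows = []

    def append_batch(self, batch: list[dict]) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        for row in batch:
            self.rows.append({**row, "_ingested_at": stamp})

    def as_of(self, cutoff: str) -> list[dict]:
        """Recreate the table state at a given point in time."""
        return [r for r in self.rows if r["_ingested_at"] <= cutoff]

bronze = BronzeTable()
bronze.append_batch([{"invoice_id": "1001"}])
bronze.append_batch([{"invoice_id": "1002"}])
print(len(bronze.rows))  # 2
```

Delta Lake's time travel provides this "as of" capability natively, which is one reason the Lakehouse format suits the bronze layer so well.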

  • Data Quality and Validation in Silver Layer:

  1. The silver layer builds upon the bronze layer.
  2. It validates, cleans, and enriches the data.
  3. Data duplication is eliminated, and quality is improved.

  • Highly Refined Analytics in Gold Layer:

  1. The gold layer contains aggregated and refined data.
  2. It is the basis for analytics, machine learning, and production systems.

In the next article, we will discuss how this data is moved to the silver layer and how we represent that layer within Microsoft Fabric. Additionally, we will explore the importance of having this intermediate layer before defining the final data model in the gold layer. Stay tuned!






