Mounting Lakehouse in Fabric Notebook

To mount another Lakehouse in the notebook, use the code below. Note that Lakehouses mounted at runtime like this are not visible in the lineage view, at least for now.


mssparkutils.fs.mount("abfss://<workspace_id>@onelake.dfs.fabric.microsoft.com/<lakehouse_id>", "<mountPoint>")  # mountPoint such as '/lakehouse/default'
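If you need to remove a runtime mount once you are done with it, mssparkutils also exposes an unmount call. A minimal sketch, assuming the same <mountPoint> placeholder as above:


# Remove the runtime mount when it is no longer needed
mssparkutils.fs.unmount("<mountPoint>")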


After mounting, I am able to see the newly mounted Lakehouse, and its scope shows as "job" instead of "default_lh".
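You can verify this yourself by listing the current mounts. A minimal sketch, assuming mssparkutils is available as it is by default in Fabric notebooks:


# List all current mounts; the runtime-mounted Lakehouse should appear
# alongside the default Lakehouse, if one is attached
for mp in mssparkutils.fs.mounts():
    print(mp)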

Note that you do not need to mount a Lakehouse to read and write with Spark; you can use the full abfss path instead. Mounting is required for pandas, however, because pandas expects a local file path.
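For example, Spark can read a CSV file straight from OneLake over the abfss path. A minimal sketch, assuming the same placeholder IDs as above and a hypothetical file name (spark is the session pre-created in Fabric notebooks):


# Spark reads directly from the abfss path; no mount required
abfss_path = "abfss://<workspace_id>@onelake.dfs.fabric.microsoft.com/<lakehouse_id>/Files/<file_name.csv>"
df_spark = spark.read.format("csv").option("header", "true").load(abfss_path)
df_spark.show(5)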


You can use the code snippet below to dynamically mount any Lakehouse or Warehouse in a notebook and query its files and tables:


import os
import pandas as pd

workspace_id = "<>"
lakehouse_id = "<>"
mount_name = "/temp_mnt"

# Mount the Lakehouse at a runtime mount point
base_path = f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/{lakehouse_id}/"
mssparkutils.fs.mount(base_path, mount_name)

# Look up the local path that backs the mount point
mount_points = mssparkutils.fs.mounts()
local_path = next((mp["localPath"] for mp in mount_points if mp["mountPoint"] == mount_name), None)

print(local_path)
print(os.path.exists(local_path))          # check that the location exists
print(os.listdir(local_path + "/Files"))   # list files
print(os.listdir(local_path + "/Tables"))  # list tables

# pandas can now read through the local mounted path
df = pd.read_csv(local_path + "/Files/" + "<file_name.csv>")
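Continuing from the snippet above, Fabric's mssparkutils also provides getMountPath to resolve the local path in a single call, instead of scanning the mounts list. A minimal sketch, assuming the same mount_name:


# Resolve the local path for a mount point directly
local_path = mssparkutils.fs.getMountPath(mount_name)
print(os.listdir(local_path + "/Files"))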

