ADLS Datalake
In this section, we provide guides and references to use the ADLS Datalake connector. You can configure and schedule ADLS Datalake metadata and profiler workflows from the OpenMetadata UI.

How to Run the Connector Externally

To run the Ingestion via the UI you’ll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment. If, instead, you want to manage your workflows externally on your preferred orchestrator, check the docs on running the Ingestion Framework anywhere.

Requirements

Note: The ADLS Datalake connector supports extracting metadata from JSON, CSV, TSV, and Parquet file types.

ADLS Permissions

To extract metadata from Azure ADLS (Storage Account - StorageV2), you will need an App Registration with the following permissions on the Storage Account:
  • Storage Blob Data Reader
  • Storage Queue Data Reader
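
Both roles are typically assigned to the App Registration at the Storage Account scope. As a rough sketch (all identifiers below are placeholders), this can be done with the Azure CLI:

az role assignment create \
  --assignee "<app-registration-client-id>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

Repeat the same command with --role "Storage Queue Data Reader" to grant the second role.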

Python Requirements

We support Python versions 3.9 to 3.11.
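
If you want to double-check which interpreter you are running before installing, you can print its version:

python3 --version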

Azure installation

pip3 install "openmetadata-ingestion[datalake-azure]"
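
To verify the package installed correctly and the CLI entry point is available (assuming the metadata command is on your PATH), you can list its help:

metadata --help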

Metadata Ingestion

All connectors are defined as JSON Schemas, including the structure used to create a connection to Datalake. To create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration that can connect to the source, process the Entities if needed, and reach the OpenMetadata server. The workflow is modeled around the Datalake JSON Schema.

1. Define the YAML Config

This is a sample config for Datalake using Azure:
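
The snippet below is a minimal sketch of such a configuration, following the general source / sink / workflowConfig layout described above. The Azure-specific field names (clientId, clientSecret, tenantId, accountName, bucketName) and all values are illustrative placeholders, so verify them against the Datalake JSON Schema:

source:
  type: datalake
  serviceName: azure_datalake            # placeholder service name
  serviceConnection:
    config:
      type: Datalake
      configSource:
        securityConfig:
          clientId: <app-registration-client-id>
          clientSecret: <app-registration-client-secret>
          tenantId: <azure-tenant-id>
          accountName: <storage-account-name>
      bucketName: <container-name>       # container to scan
  sourceConfig:
    config:
      type: DatabaseMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api  # your OpenMetadata server
    authProvider: openmetadata
    securityConfig:
      jwtToken: <bot-jwt-token>

The workflowConfig section points the ingestion at your OpenMetadata server; replace the hostPort and JWT token with the values for your deployment.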

2. Run with the CLI

First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
metadata ingest -c <path-to-yaml>
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.

dbt Integration

You can learn more about how to ingest dbt models’ definitions and their lineage in the dbt integration documentation.