Azure Data Factory

Stage: PROD

Feature List: Pipelines, Pipeline Status, Lineage, Owners, Tags

In this section, we provide guides and references to use the Azure Data Factory connector.

Configure and schedule Azure Data Factory metadata and profiler workflows from the OpenMetadata UI:

To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.

If, instead, you want to manage your workflows externally on your preferred orchestrator, you can check the following docs to run the Ingestion Framework anywhere.

The Ingestion framework uses Azure Data Factory APIs to connect to the Data Factory and fetch metadata.

You can find further information on the Azure Data Factory connector in the docs.

Ensure that the service principal or managed identity you’re using has the necessary permissions in the Data Factory resource (Reader, Contributor or Data Factory Contributor role at minimum).

We support Python versions 3.8-3.11.

To run the Data Factory ingestion, you will need to install:
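For instance, assuming the connector ships as the datafactory extra of the openmetadata-ingestion package (check the connector requirements for the exact extra name):

```bash
# Install the ingestion framework together with the Azure Data Factory plugin.
# The "datafactory" extra name is an assumption; verify it against the connector docs.
pip3 install "openmetadata-ingestion[datafactory]"
```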

All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Data Factory.

To create and run a Metadata Ingestion workflow, we will create a YAML configuration that connects to the source, processes the Entities if needed, and reaches the OpenMetadata server.

The workflow is modeled around the following JSON Schema.

This is a sample config for Data Factory:
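The exact layout is defined by the JSON Schema linked above; the sketch below is a rough illustration built from the fields described in this section. The source type string, the configSource grouping key for the Azure credentials, and all bracketed values are assumptions or placeholders, not verbatim schema.

```yaml
source:
  type: datafactory
  serviceName: local_datafactory
  serviceConnection:
    config:
      type: DataFactory
      # Azure credentials; see the field descriptions below.
      # The "configSource" grouping key is an assumption; check the JSON Schema.
      configSource:
        clientId: <client-id>
        clientSecret: <client-secret>
        tenantId: <tenant-id>
        accountName: <storage-account-name>
      subscription_id: <subscription-id>
      resource_group_name: <resource-group-name>
      factory_name: <factory-name>
      run_filter_days: 7
  sourceConfig:
    config:
      type: PipelineMetadata
      includeTags: true
      markDeletedPipelines: true
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <jwt-token>
```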

clientId: To get the Client ID (also known as application ID), follow these steps:

  1. Log into Microsoft Azure.
  2. Search for App registrations and select the App registrations link.
  3. Select the Azure AD app you're using for this connection.
  4. From the Overview section, copy the Application (client) ID.

clientSecret: To get the client secret, follow these steps:

  1. Log into Microsoft Azure.
  2. Search for App registrations and select the App registrations link.
  3. Select the Azure AD app you're using for this connection.
  4. Under Manage, select Certificates & secrets.
  5. Under Client secrets, select New client secret.
  6. In the Add a client secret pop-up window, provide a description for your application secret. Choose when the application should expire, and select Add.
  7. From the Client secrets section, copy the string in the Value column of the newly created application secret.

tenantId: To get the tenant ID, follow these steps:

  1. Log into Microsoft Azure.
  2. Search for App registrations and select the App registrations link.
  3. Select the Azure AD app you're using for this connection.
  4. From the Overview section, copy the Directory (tenant) ID.

accountName: Here are the step-by-step instructions for finding the account name for an Azure Data Lake Storage account:

  1. Sign in to the Azure portal and navigate to the Storage accounts page.
  2. Find the Data Lake Storage account you want to access and click on its name.
  3. In the account overview page, locate the Account name field. This is the unique identifier for the Data Lake Storage account.
  4. You can use this account name to access and manage the resources associated with the account, such as creating and managing containers and directories.

subscription_id: Your Azure subscription’s unique identifier. In the Azure portal, navigate to Subscriptions > Your Subscription > Overview. You’ll see the subscription ID listed there.

resource_group_name: This is the name of the resource group that contains your Data Factory instance. In the Azure portal, navigate to Resource Groups. Find your resource group, and note the name.

factory_name: The name of your Data Factory instance. In the Azure portal, navigate to Data Factories and find your Data Factory. The Data Factory name will be listed there.

run_filter_days: The number of days back from the current date to look for pipeline runs; only runs within this period are ingested. Defaults to 7 days. Optional.

The sourceConfig is defined here:

  • dbServiceNames: Database Service Name for the creation of lineage, if the source supports it.

  • includeTags: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.

  • includeUnDeployedPipelines: Set the 'Include UnDeployed Pipelines' toggle to control whether to include un-deployed pipelines as part of metadata ingestion. By default, it is set to true.

  • markDeletedPipelines: Set the Mark Deleted Pipelines toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.

  • pipelineFilterPattern and chartFilterPattern: Both support regex as include or exclude patterns.

  • includeOwners: Set the 'Include Owners' toggle to control whether to add owners to the ingested entity when the owner email matches a user stored in the OM server. If the ingested entity already exists and has an owner, the owner will not be overwritten. It accepts a boolean value, either true or false.

  • overrideLineage: Set the 'Override Lineage' toggle to control whether to override the existing lineage. It accepts a boolean value, either true or false.

  • overrideMetadata: Set the 'Override Metadata' toggle to control whether to override the existing metadata in the OpenMetadata server with the metadata fetched from the source. If set to true, the metadata fetched from the source overrides the existing metadata in the OpenMetadata server; if set to false, it does not. This applies to fields like description, tags, owner, and displayName. It accepts a boolean value, either true or false.
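As a rough YAML sketch of these options (option names are taken from the toggles above; the filter values and the local_mysql service name are placeholders):

```yaml
sourceConfig:
  config:
    type: PipelineMetadata
    dbServiceNames:
      - local_mysql
    includeTags: true
    includeUnDeployedPipelines: true
    markDeletedPipelines: true
    includeOwners: true
    overrideLineage: false
    overrideMetadata: false
    pipelineFilterPattern:
      includes:
        - ".*etl.*"
      excludes:
        - ".*test.*"
```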

To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
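In the YAML, that is:

```yaml
sink:
  type: metadata-rest
  config: {}
```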

The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
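For example, pointing the workflow at a local OpenMetadata server (host and auth provider are illustrative):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
```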

Logger Level

You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
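For instance, to run an ingestion with debug traces (assuming loggerLevel sits at the workflowConfig level):

```yaml
workflowConfig:
  loggerLevel: DEBUG  # DEBUG, INFO, WARN or ERROR; defaults to INFO
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
```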

JWT Token

JWT tokens allow your clients to authenticate against the OpenMetadata server. You can find more details on enabling JWT tokens here.

You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
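A sketch of the securityConfig with a JWT token (the token value is a placeholder):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <jwt-token>
```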

Store Service Connection

If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager.

If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won't be sent to the OpenMetadata server.
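A minimal sketch, assuming the flag is exposed as storeServiceConnection under openMetadataServerConfig:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    storeServiceConnection: false  # keep connection details out of the OpenMetadata server
```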

SSL Configuration

If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or have it as validate, which will require you to set the sslConfig.caCertificate with a local path where your ingestion runs that points to the server certificate file.
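For example, validating the server certificate with a local CA bundle (the host and certificate path are placeholders):

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: https://<openmetadata-host>/api
    authProvider: openmetadata
    verifySSL: validate
    sslConfig:
      caCertificate: /local/path/to/server-cert.pem
```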

Find more information on how to troubleshoot SSL issues here.

First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
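For example, with the configuration saved as filename.yaml, the run uses the metadata CLI that ships with openmetadata-ingestion:

```bash
# Run the metadata ingestion workflow with the saved YAML configuration
metadata ingest -c filename.yaml
```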

Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.