Run Externally

Learn how to configure the dbt workflow externally to ingest dbt data from your data sources. Once the metadata ingestion runs correctly and we are able to explore the service entities, we can add the dbt information. This will populate the dbt tab on the Table entity page. We can create a workflow that obtains the dbt information from the dbt files and feeds it to OpenMetadata; the dbt ingestion will be in charge of obtaining this data.

1. Define the YAML Config

Select the YAML config from one of the sources below. The dbt files must be present on the chosen source, and you must have the necessary permissions to access them. Enter the name of your database service from OpenMetadata in the serviceName key of the YAML.

1. AWS S3 Buckets

In this configuration, we will be fetching the dbt manifest.json, catalog.json and run_results.json files from an S3 bucket.

dbtConfigType

  • dbtConfigType: s3

AWS Access Key Credentials

awsAccessKeyId and awsSecretAccessKey are used to authenticate and authorize programmatic requests to AWS services. An access key consists of:
  • Access Key ID (for example, AKIAIOSFODNN7EXAMPLE)
  • Secret Access Key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)
Both values must be provided together when using static credentials. For more information, see Managing access keys.

AWS Session Token

awsSessionToken is required when using temporary security credentials, such as those obtained via AWS STS. The session token must be provided along with the access key ID and secret access key for the duration of the session.

AWS Region

awsRegion specifies the AWS Region where the target service is deployed (for example, us-east-1). This is the only required parameter when configuring an AWS connection. Other credentials can be resolved automatically using environment variables, AWS profiles, or IAM roles. Learn more in the AWS Regions and Availability Zones documentation.

Custom Endpoint URL

endPointURL is an optional custom endpoint used to connect to an AWS service. You may want to specify this when:
  • Using VPC endpoints
  • Connecting to local or AWS-compatible services
  • Overriding the default regional endpoint
See AWS service endpoints for details.

AWS Profile Name

profileName specifies the AWS CLI profile to use for authentication. Profiles store credentials and configuration in AWS config files. If not specified, the default profile is used. Learn more about Named profiles for the AWS CLI.

Assume Role ARN

assumeRoleArn is the Amazon Resource Name (ARN) of the IAM role to assume. This is commonly used for:
  • Cross-account access
  • Delegated permissions
  • Enhanced security setups
This field is required when using Assume Role authentication. See the AssumeRole API reference.

Assume Role Session Name

assumeRoleSessionName identifies the assumed role session. This value helps uniquely identify a session when the same role is assumed multiple times or by different principals. If not provided, the default value OpenMetadataSession is used.

Assume Role Source Identity

assumeRoleSourceIdentity is an optional source identity passed when assuming a role. This value is recorded in AWS CloudTrail logs and can be used to trace actions performed using the assumed role. See Source Identity in AssumeRole.
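
To illustrate how these fields map onto an AWS session, here is a minimal sketch assuming boto3 is installed; the role ARN is a hypothetical placeholder, and this is not part of the workflow itself:

import boto3

session = boto3.Session(
    aws_access_key_id="KEY",          # awsAccessKeyId
    aws_secret_access_key="SECRET",   # awsSecretAccessKey
    region_name="us-east-2",          # awsRegion
)

# Optional: assume a role, mirroring assumeRoleArn / assumeRoleSessionName.
sts = session.client("sts")
temp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dbt-artifacts-reader",  # placeholder ARN
    RoleSessionName="OpenMetadataSession",  # the default session name
)["Credentials"]
print("temporary key:", temp["AccessKeyId"])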

dbt Prefix Configuration

dbtPrefixConfig: Optional config to specify the bucket name and directory path where the dbt files are stored. If this config is not provided, ingestion will scan all the buckets for dbt files.
  • dbtBucketName: Name of the bucket where the dbt files are stored.
  • dbtObjectPrefix: Path of the folder where the dbt files are stored.
Follow the documentation here to configure multiple dbt projects.
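
As a quick way to verify that dbtBucketName and dbtObjectPrefix point at the artifacts, here is a small sketch assuming boto3 and ambient AWS credentials; the bucket and prefix are the placeholder values from the YAML below:

import boto3

# List the objects under the configured bucket/prefix and check for manifest.json.
s3 = boto3.client("s3", region_name="us-east-2")
resp = s3.list_objects_v2(Bucket="bucket_name", Prefix="main_dir/dbt_files/")
keys = [obj["Key"] for obj in resp.get("Contents", [])]
assert any(k.endswith("manifest.json") for k in keys), "manifest.json not found"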

Source Config

dbtUpdateDescriptions: Configuration to update the description from dbt or not. If set to true, descriptions from dbt will override the descriptions already present on the entity. For more details visit here.
dbtUpdateOwners: Configuration to update the owner from dbt or not. If set to true, owners from dbt will override the owners already present on the entity. For more details visit here.
includeTags: true or false, to ingest tags from dbt. Default is true.
dbtClassificationName: Custom OpenMetadata Classification name for dbt tags.
databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Add filters to filter out models from the dbt manifest. Note that the filters support regex as include or exclude. You can find examples here.
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
Logger Level: You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
JWT Token: JWT tokens will allow your clients to authenticate against the OpenMetadata server. To enable JWT tokens, you will find more details here. You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
Store Service Connection: If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager. If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won’t be sent to the OpenMetadata server.
SSL Configuration: If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or to validate, which will require you to set sslConfig.caCertificate to a local path, on the host where your ingestion runs, that points to the server certificate file. Find more information on how to troubleshoot SSL issues here.
ingestionPipelineFQN: Fully qualified name of the ingestion pipeline, used to identify the current ingestion pipeline.
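
To illustrate how include/exclude regex filtering of this kind behaves, here is an illustrative sketch; the patterns mirror the commented examples in the YAML below, and this is not the Ingestion Framework's actual implementation:

import re

# A name passes if it matches any include pattern and no exclude pattern.
def is_included(name, includes=None, excludes=None):
    if includes and not any(re.match(p, name) for p in includes):
        return False
    if excludes and any(re.match(p, name) for p in excludes):
        return False
    return True

print(is_included("my_db", includes=[".*db.*"], excludes=[".*demo.*"]))    # True
print(is_included("demo_db", includes=[".*db.*"], excludes=[".*demo.*"]))  # False
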
source:
  type: dbt
  serviceName: service_name
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: s3
        dbtSecurityConfig:
          awsAccessKeyId: KEY
          awsSecretAccessKey: SECRET
          # awsSessionToken: TOKEN
          awsRegion: us-east-2
          # endPointURL: https://athena.us-east-2.amazonaws.com/custom
          # profileName: profile
          # assumeRoleArn: "arn:partition:service:region:account:resource"
          # assumeRoleSessionName: session
          # assumeRoleSourceIdentity: identity
        dbtPrefixConfig:
          dbtBucketName: bucket_name
          dbtObjectPrefix: main_dir/dbt_files/
      # dbtUpdateDescriptions: true or false
      # dbtUpdateOwners: true or false
      # includeTags: true or false
      # dbtClassificationName: dbtTags
      # databaseFilterPattern:
      #   includes:
      #     - .*db.*
      #   excludes:
      #     - .*demo.*
      # schemaFilterPattern:
      #   includes:
      #     - .*schema.*
      #   excludes:
      #     - .*demo.*
      # tableFilterPattern:
      #   includes:
      #     - .*table.*
      #   excludes:
      #     - .*demo.*
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: INFO  # DEBUG, INFO, WARNING or ERROR
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
    ## Store the service Connection information
    storeServiceConnection: true  # false
    ## Secrets Manager Configuration
    # secretsManagerProvider: aws, azure or noop
    # secretsManagerLoader: airflow or env
    ## If SSL, fill the following
    # verifySSL: validate  # or ignore
    # sslConfig:
    #   caCertificate: /local/path/to/certificate
# ingestionPipelineFQN: <service name>.<ingestion name> ## e.g., "my_redshift.metadata"

2. Google Cloud Storage Buckets

In this configuration, we will be fetching the dbt manifest.json, catalog.json and run_results.json files from a GCS bucket.

dbtConfigType

  • dbtConfigType: gcs

credentials

credentials: You can authenticate with your GCS instance using either a GCP Credentials Path, where you specify the file path of the service account key, or by choosing GCP Credentials Values and passing the values from the service account key file directly. You can check out this documentation on how to create the service account keys and download them.

gcpConfig:

1. Passing the raw credential values provided by GCS. This requires us to provide the following information, all provided by GCS:
  • type: Credentials Type is the type of the account, for a service account the value of this field is service_account. To fetch this key, look for the value associated with the type key in the service account key file.
  • projectId: A project ID is a unique string used to differentiate your project from all others in Google Cloud. To fetch this key, look for the value associated with the project_id key in the service account key file.
  • privateKeyId: This is a unique identifier for the private key associated with the service account. To fetch this key, look for the value associated with the private_key_id key in the service account file.
  • privateKey: This is the private key associated with the service account that is used to authenticate and authorize access to GCS. To fetch this key, look for the value associated with the private_key key in the service account file.
  • clientEmail: This is the email address associated with the service account. To fetch this key, look for the value associated with the client_email key in the service account key file.
  • clientId: This is a unique identifier for the service account. To fetch this key, look for the value associated with the client_id key in the service account key file.
  • authUri: This is the URI for the authorization server. To fetch this key, look for the value associated with the auth_uri key in the service account key file. The default value for Auth URI is https://accounts.google.com/o/oauth2/auth.
  • tokenUri: The Google Cloud Token URI is a specific endpoint used to obtain an OAuth 2.0 access token from the Google Cloud IAM service. This token allows you to authenticate and access various Google Cloud resources and APIs that require authorization. To fetch this key, look for the value associated with the token_uri key in the service account credentials file. The default value for Token URI is https://oauth2.googleapis.com/token.
  • authProviderX509CertUrl: This is the URL of the certificate that verifies the authenticity of the authorization server. To fetch this key, look for the value associated with the auth_provider_x509_cert_url key in the service account key file. The default value for Auth Provider X509 Cert URL is https://www.googleapis.com/oauth2/v1/certs.
  • clientX509CertUrl: This is the URL of the certificate that verifies the authenticity of the service account. To fetch this key, look for the value associated with the client_x509_cert_url key in the service account key file.
2. Passing a local file path that contains the credentials:
  • gcpCredentialsPath
  • If you prefer to pass the credentials file, you can do so as follows:
source:
  type: dbt
  serviceName: service_name
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: gcs
        dbtSecurityConfig:
          gcpConfig: <path to file>
  • If you want to use Application Default Credentials (ADC) authentication for GCS, you can simply leave the GCP credentials empty. This is why they are not marked as required.
source:
  type: dbt
  serviceName: service_name
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: gcs
        dbtSecurityConfig:
          gcpConfig: {}
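
As a quick sanity check that your GCP credentials resolve before running the workflow, here is a minimal sketch assuming the google-auth and google-cloud-storage packages; with an empty gcpConfig, Application Default Credentials are used, and the bucket and prefix are the placeholder values from the YAML below:

import google.auth
from google.cloud import storage

# Resolves ADC (e.g., GOOGLE_APPLICATION_CREDENTIALS) or the ambient identity.
creds, project_id = google.auth.default()
client = storage.Client(credentials=creds, project=project_id)

blobs = client.list_blobs("bucket_name", prefix="main_dir/dbt_files/")
print([blob.name for blob in blobs])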

dbt Prefix Configuration

dbtPrefixConfig: Optional config to specify the bucket name and directory path where the dbt files are stored. If this config is not provided, ingestion will scan all the buckets for dbt files.
  • dbtBucketName: Name of the bucket where the dbt files are stored.
  • dbtObjectPrefix: Path of the folder where the dbt files are stored.
Follow the documentation here to configure multiple dbt projects.

Source Config

dbtUpdateDescriptions: Configuration to update the description from dbt or not. If set to true, descriptions from dbt will override the descriptions already present on the entity. For more details visit here.
dbtUpdateOwners: Configuration to update the owner from dbt or not. If set to true, owners from dbt will override the owners already present on the entity. For more details visit here.
includeTags: true or false, to ingest tags from dbt. Default is true.
dbtClassificationName: Custom OpenMetadata Classification name for dbt tags.
databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Add filters to filter out models from the dbt manifest. Note that the filters support regex as include or exclude. You can find examples here.
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
Logger Level: You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
JWT Token: JWT tokens will allow your clients to authenticate against the OpenMetadata server. To enable JWT tokens, you will find more details here. You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
Store Service Connection: If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager. If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won’t be sent to the OpenMetadata server.
SSL Configuration: If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or to validate, which will require you to set sslConfig.caCertificate to a local path, on the host where your ingestion runs, that points to the server certificate file. Find more information on how to troubleshoot SSL issues here.
ingestionPipelineFQN: Fully qualified name of the ingestion pipeline, used to identify the current ingestion pipeline.
source:
  type: dbt
  serviceName: service_name
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: gcs
        dbtSecurityConfig:
          gcpConfig:
            type: service_account
            projectId: project ID
            privateKeyId: private key ID
            privateKey: |
              -----BEGIN PRIVATE KEY-----
              Super secret key
              -----END PRIVATE KEY-----
            clientEmail: gcpuser@project_id.iam.gserviceaccount.com
            clientId: 1234
            authUri: https://accounts.google.com/o/oauth2/auth  # default
            tokenUri: https://oauth2.googleapis.com/token  # default
            authProviderX509CertUrl: https://www.googleapis.com/oauth2/v1/certs  # default
            clientX509CertUrl: https://cert.url
        dbtPrefixConfig:
          dbtBucketName: bucket_name
          dbtObjectPrefix: main_dir/dbt_files/
      # dbtUpdateDescriptions: true or false
      # dbtUpdateOwners: true or false
      # includeTags: true or false
      # dbtClassificationName: dbtTags
      # databaseFilterPattern:
      #   includes:
      #     - .*db.*
      #   excludes:
      #     - .*demo.*
      # schemaFilterPattern:
      #   includes:
      #     - .*schema.*
      #   excludes:
      #     - .*demo.*
      # tableFilterPattern:
      #   includes:
      #     - .*table.*
      #   excludes:
      #     - .*demo.*
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: INFO  # DEBUG, INFO, WARNING or ERROR
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
    ## Store the service Connection information
    storeServiceConnection: true  # false
    ## Secrets Manager Configuration
    # secretsManagerProvider: aws, azure or noop
    # secretsManagerLoader: airflow or env
    ## If SSL, fill the following
    # verifySSL: validate  # or ignore
    # sslConfig:
    #   caCertificate: /local/path/to/certificate
# ingestionPipelineFQN: <service name>.<ingestion name> ## e.g., "my_redshift.metadata"

3. Azure Storage Buckets

In this configuration, we will be fetching the dbt manifest.json, catalog.json and run_results.json files from an Azure Storage bucket.

dbtConfigType

  • dbtConfigType: azure

Client ID

  • Client ID: This is the unique identifier for your application registered in Azure AD. It’s used in conjunction with the Client Secret to authenticate your application.

Client Secret

  • Client Secret: A key that your application uses, along with the Client ID, to access Azure resources.
  1. Log into Microsoft Azure.
  2. Search for App registrations and select the App registrations link.
  3. Select the Azure AD app you’re using for this connection.
  4. Under Manage, select Certificates & secrets.
  5. Under Client secrets, select New client secret.
  6. In the Add a client secret pop-up window, provide a description for your application secret. Choose when the application should expire, and select Add.
  7. From the Client secrets section, copy the string in the Value column of the newly created application secret.

Tenant ID

  • Tenant ID: The unique identifier of the Azure AD instance under which your account and application are registered.
To get the tenant ID, follow these steps:
  1. Log into Microsoft Azure.
  2. Search for App registrations and select the App registrations link.
  3. Select the Azure AD app you’re using for this connection.
  4. From the Overview section, copy the Directory (tenant) ID.

Account Name

  • Account Name: The name of your ADLS account.
Here are the step-by-step instructions for finding the account name for an Azure Data Lake Storage account:
  1. Sign in to the Azure portal and navigate to the Storage accounts page.
  2. Find the Data Lake Storage account you want to access and click on its name.
  3. In the account overview page, locate the Account name field. This is the unique identifier for the Data Lake Storage account.
  4. You can use this account name to access and manage the resources associated with the account, such as creating and managing containers and directories.
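
To confirm that these four values grant access before wiring them into the YAML, here is a minimal sketch assuming the azure-identity and azure-storage-blob packages; the container name and prefix are hypothetical placeholders matching the YAML below:

from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

# Authenticate with the service principal (Client ID / Client Secret / Tenant ID).
credential = ClientSecretCredential(
    tenant_id="tenantId",
    client_id="clientId",
    client_secret="clientSecret",
)
service = BlobServiceClient(
    account_url="https://accountName.blob.core.windows.net",  # built from Account Name
    credential=credential,
)
container = service.get_container_client("bucket_name")  # placeholder container
print([b.name for b in container.list_blobs(name_starts_with="main_dir/dbt_files/")])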

dbt Prefix Configuration

dbtPrefixConfig: Optional config to specify the bucket name and directory path where the dbt files are stored. If this config is not provided, ingestion will scan all the buckets for dbt files.
  • dbtBucketName: Name of the bucket where the dbt files are stored.
  • dbtObjectPrefix: Path of the folder where the dbt files are stored.
Follow the documentation here to configure multiple dbt projects.

Source Config

dbtUpdateDescriptions: Configuration to update the description from dbt or not. If set to true, descriptions from dbt will override the descriptions already present on the entity. For more details visit here.
dbtUpdateOwners: Configuration to update the owner from dbt or not. If set to true, owners from dbt will override the owners already present on the entity. For more details visit here.
includeTags: true or false, to ingest tags from dbt. Default is true.
dbtClassificationName: Custom OpenMetadata Classification name for dbt tags.
databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Add filters to filter out models from the dbt manifest. Note that the filters support regex as include or exclude. You can find examples here.
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
Logger Level: You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
JWT Token: JWT tokens will allow your clients to authenticate against the OpenMetadata server. To enable JWT tokens, you will find more details here. You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
Store Service Connection: If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager. If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won’t be sent to the OpenMetadata server.
SSL Configuration: If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or to validate, which will require you to set sslConfig.caCertificate to a local path, on the host where your ingestion runs, that points to the server certificate file. Find more information on how to troubleshoot SSL issues here.
ingestionPipelineFQN: Fully qualified name of the ingestion pipeline, used to identify the current ingestion pipeline.
source:
  type: dbt
  serviceName: service_name
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: azure
        dbtSecurityConfig:
          clientId: clientId
          clientSecret: clientSecret
          tenantId: tenantId
          accountName: accountName
        dbtPrefixConfig:
          dbtBucketName: bucket_name
          dbtObjectPrefix: main_dir/dbt_files/
      # dbtUpdateDescriptions: true or false
      # dbtUpdateOwners: true or false
      # includeTags: true or false
      # dbtClassificationName: dbtTags
      # databaseFilterPattern:
      #   includes:
      #     - .*db.*
      #   excludes:
      #     - .*demo.*
      # schemaFilterPattern:
      #   includes:
      #     - .*schema.*
      #   excludes:
      #     - .*demo.*
      # tableFilterPattern:
      #   includes:
      #     - .*table.*
      #   excludes:
      #     - .*demo.*
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: INFO  # DEBUG, INFO, WARNING or ERROR
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
    ## Store the service Connection information
    storeServiceConnection: true  # false
    ## Secrets Manager Configuration
    # secretsManagerProvider: aws, azure or noop
    # secretsManagerLoader: airflow or env
    ## If SSL, fill the following
    # verifySSL: validate  # or ignore
    # sslConfig:
    #   caCertificate: /local/path/to/certificate
# ingestionPipelineFQN: <service name>.<ingestion name> ## e.g., "my_redshift.metadata"

4. Local Storage

In this configuration, we will be fetching the dbt manifest.json, catalog.json and run_results.json files from the same host that is running the ingestion process.

dbtConfigType

  • dbtConfigType: local

dbtCatalogFilePath

  • dbtCatalogFilePath: catalog.json file path to extract dbt models with their column schemas.

dbtManifestFilePath

  • dbtManifestFilePath (Required): manifest.json file path to extract dbt models with their column schemas.

dbtRunResultsFilePath

  • dbtRunResultsFilePath: run_results.json file path to extract dbt model tests and test results metadata. Tests from dbt will only be ingested if this file is present.
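
As a quick sanity check that the files exist and parse before running the workflow, here is a minimal sketch; the paths are the placeholders from the YAML below:

import json
from pathlib import Path

# manifest.json is required; catalog.json and run_results.json are optional.
for path in ("path/to/manifest.json",
             "path/to/catalog.json",
             "path/to/run_results.json"):
    p = Path(path)
    if p.exists():
        data = json.loads(p.read_text())
        print(path, "->", data.get("metadata", {}).get("dbt_schema_version"))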

Source Config

dbtUpdateDescriptions: Configuration to update the description from dbt or not. If set to true, descriptions from dbt will override the descriptions already present on the entity. For more details visit here.
dbtUpdateOwners: Configuration to update the owner from dbt or not. If set to true, owners from dbt will override the owners already present on the entity. For more details visit here.
includeTags: true or false, to ingest tags from dbt. Default is true.
dbtClassificationName: Custom OpenMetadata Classification name for dbt tags.
databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Add filters to filter out models from the dbt manifest. Note that the filters support regex as include or exclude. You can find examples here.
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
Logger Level: You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
JWT Token: JWT tokens will allow your clients to authenticate against the OpenMetadata server. To enable JWT tokens, you will find more details here. You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
Store Service Connection: If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager. If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won’t be sent to the OpenMetadata server.
SSL Configuration: If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or to validate, which will require you to set sslConfig.caCertificate to a local path, on the host where your ingestion runs, that points to the server certificate file. Find more information on how to troubleshoot SSL issues here.
ingestionPipelineFQN: Fully qualified name of the ingestion pipeline, used to identify the current ingestion pipeline.
source:
  type: dbt
  serviceName: service_name
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: local
        dbtCatalogFilePath: path/to/catalog.json
        dbtManifestFilePath: path/to/manifest.json
        dbtRunResultsFilePath: path/to/run_results.json
      # dbtUpdateDescriptions: true or false
      # dbtUpdateOwners: true or false
      # includeTags: true or false
      # dbtClassificationName: dbtTags
      # databaseFilterPattern:
      #   includes:
      #     - .*db.*
      #   excludes:
      #     - .*demo.*
      # schemaFilterPattern:
      #   includes:
      #     - .*schema.*
      #   excludes:
      #     - .*demo.*
      # tableFilterPattern:
      #   includes:
      #     - .*table.*
      #   excludes:
      #     - .*demo.*
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: INFO  # DEBUG, INFO, WARNING or ERROR
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
    ## Store the service Connection information
    storeServiceConnection: true  # false
    ## Secrets Manager Configuration
    # secretsManagerProvider: aws, azure or noop
    # secretsManagerLoader: airflow or env
    ## If SSL, fill the following
    # verifySSL: validate  # or ignore
    # sslConfig:
    #   caCertificate: /local/path/to/certificate
# ingestionPipelineFQN: <service name>.<ingestion name> ## e.g., "my_redshift.metadata"

5. File Server

In this configuration, we will be fetching the dbt manifest.json, catalog.json and run_results.json files from an HTTP or file server.

dbtConfigType

  • dbtConfigType: http

dbtCatalogHttpPath

  • dbtCatalogHttpPath: catalog.json HTTP path to extract dbt models with their column schemas.

dbtManifestHttpPath

  • dbtManifestHttpPath (Required): manifest.json HTTP path to extract dbt models with their column schemas.

dbtRunResultsHttpPath

  • dbtRunResultsHttpPath: run_results.json HTTP path to extract dbt model tests and test results metadata. Tests from dbt will only be ingested if this file is present.
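
To check that the files are reachable over HTTP before running the workflow, here is a minimal sketch assuming the requests package; the URLs are the placeholders from the YAML below:

import requests

# manifest.json is required; catalog.json and run_results.json are optional.
for url in ("http://path-to-manifest.json",
            "http://path-to-catalog.json",
            "http://path-to-run_results.json"):
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    print(url, "->", resp.json().get("metadata", {}).get("dbt_schema_version"))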

Source Config

dbtUpdateDescriptions: Configuration to update the description from dbt or not. If set to true, descriptions from dbt will override the descriptions already present on the entity. For more details visit here.
dbtUpdateOwners: Configuration to update the owner from dbt or not. If set to true, owners from dbt will override the owners already present on the entity. For more details visit here.
includeTags: true or false, to ingest tags from dbt. Default is true.
dbtClassificationName: Custom OpenMetadata Classification name for dbt tags.
databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Add filters to filter out models from the dbt manifest. Note that the filters support regex as include or exclude. You can find examples here.
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
Logger Level: You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
JWT Token: JWT tokens will allow your clients to authenticate against the OpenMetadata server. To enable JWT tokens, you will find more details here. You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
Store Service Connection: If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager. If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won’t be sent to the OpenMetadata server.
SSL Configuration: If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or to validate, which will require you to set sslConfig.caCertificate to a local path, on the host where your ingestion runs, that points to the server certificate file. Find more information on how to troubleshoot SSL issues here.
ingestionPipelineFQN: Fully qualified name of the ingestion pipeline, used to identify the current ingestion pipeline.
source:
  type: dbt
  serviceName: service_name
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: http
        dbtCatalogHttpPath: http://path-to-catalog.json
        dbtManifestHttpPath: http://path-to-manifest.json
        dbtRunResultsHttpPath: http://path-to-run_results.json
      # dbtUpdateDescriptions: true or false
      # dbtUpdateOwners: true or false
      # includeTags: true or false
      # dbtClassificationName: dbtTags
      # databaseFilterPattern:
      #   includes:
      #     - .*db.*
      #   excludes:
      #     - .*demo.*
      # schemaFilterPattern:
      #   includes:
      #     - .*schema.*
      #   excludes:
      #     - .*demo.*
      # tableFilterPattern:
      #   includes:
      #     - .*table.*
      #   excludes:
      #     - .*demo.*
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: INFO  # DEBUG, INFO, WARNING or ERROR
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
    ## Store the service Connection information
    storeServiceConnection: true  # false
    ## Secrets Manager Configuration
    # secretsManagerProvider: aws, azure or noop
    # secretsManagerLoader: airflow or env
    ## If SSL, fill the following
    # verifySSL: validate  # or ignore
    # sslConfig:
    #   caCertificate: /local/path/to/certificate
# ingestionPipelineFQN: <service name>.<ingestion name> ## e.g., "my_redshift.metadata"

6. dbt Cloud

In this configuration, we will be fetching the dbt manifest.json, catalog.json and run_results.json files from the dbt Cloud APIs. The Account Viewer permission is the minimum requirement for the dbt Cloud token.
The dbt Cloud workflow leverages the dbt Cloud v2 APIs to retrieve dbt run artifacts (manifest.json, catalog.json, and run_results.json) and ingest the dbt metadata. It uses the /runs API to obtain the most recent successful dbt run, filtering by account_id, project_id and job_id if specified. The artifacts from this run are then collected using the /artifacts API. Refer to the code here.
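
To illustrate the two API calls involved, here is a minimal sketch against the dbt Cloud v2 API, assuming the requests package; the token, account ID and job ID are the placeholder values used elsewhere on this page, and status code 10 denotes a successful run:

import requests

BASE = "https://cloud.getdbt.com/api/v2/accounts/1234"  # dbtCloudUrl + account ID
HEADERS = {"Authorization": "Token AUTH_TOKEN"}         # dbtCloudAuthToken

# 1. Find the most recent successful run, optionally filtered by job ID.
runs = requests.get(
    f"{BASE}/runs/",
    headers=HEADERS,
    params={"order_by": "-finished_at", "status": 10, "job_definition_id": 553344},
).json()["data"]
run_id = runs[0]["id"]

# 2. Collect an artifact from that run.
manifest = requests.get(
    f"{BASE}/runs/{run_id}/artifacts/manifest.json", headers=HEADERS
).json()
print(manifest["metadata"]["dbt_schema_version"])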

dbtConfigType

  • dbtConfigType: cloud

dbtCloudAuthToken

  • dbtCloudAuthToken: Please follow the instructions in dbt Cloud’s API documentation to create a dbt Cloud authentication token. The Account Viewer permission is the minimum requirement for the dbt Cloud token.

dbtCloudAccountId

  • dbtCloudAccountId (Required): To obtain your dbt Cloud account ID, sign in to dbt Cloud in your browser. Take note of the number directly following the accounts path component of the URL — this is your account ID.
For example, if the URL is https://cloud.getdbt.com/#/accounts/1234/projects/6789/dashboard/, the account ID is 1234.

dbtCloudJobId

  • dbtCloudJobId: In case of multiple jobs in a dbt Cloud account, specify the ID of the job from which you want to extract the dbt run artifacts. If left empty, the dbt artifacts will be fetched from the most recent run on dbt Cloud.
After creating a dbt job, take note of the URL, which will be similar to https://cloud.getdbt.com/#/accounts/1234/projects/6789/jobs/553344/. The job ID is 553344. The value entered should be numeric.

dbtCloudProjectId

  • dbtCloudProjectId: In case of multiple projects in a dbt Cloud account, specify the ID of the project from which you want to extract the dbt run artifacts. If left empty, the dbt artifacts will be fetched from the most recent run on dbt Cloud.
To find your project ID, sign in to your dbt Cloud account and choose a specific project. Take note of the URL, which will be similar to https://cloud.getdbt.com/#/accounts/1234/settings/projects/6789/. The project ID is 6789. The value entered should be numeric.

dbtCloudUrl

  • dbtCloudUrl: URL to connect to your dbt Cloud instance. E.g., https://cloud.getdbt.com or https://emea.dbt.com/.

Source Config

dbtUpdateDescriptions: Configuration to update the description from dbt or not. If set to true, descriptions from dbt will override the descriptions already present on the entity. For more details visit here.
dbtUpdateOwners: Configuration to update the owner from dbt or not. If set to true, owners from dbt will override the owners already present on the entity. For more details visit here.
includeTags: true or false, to ingest tags from dbt. Default is true.
dbtClassificationName: Custom OpenMetadata Classification name for dbt tags.
databaseFilterPattern, schemaFilterPattern, tableFilterPattern: Add filters to filter out models from the dbt manifest. Note that the filters support regex as include or exclude. You can find examples here.
To send the metadata to OpenMetadata, it needs to be specified as type: metadata-rest.
The main property here is the openMetadataServerConfig, where you can define the host and security provider of your OpenMetadata installation.
Logger Level: You can specify the loggerLevel depending on your needs. If you are trying to troubleshoot an ingestion, running with DEBUG will give you far more traces for identifying issues.
JWT Token: JWT tokens will allow your clients to authenticate against the OpenMetadata server. To enable JWT tokens, you will find more details here. You can refer to the JWT Troubleshooting section for any issues in your JWT configuration.
Store Service Connection: If set to true (default), we will store the sensitive information either encrypted via the Fernet Key in the database or externally, if you have configured any Secrets Manager. If set to false, the service will be created, but the service connection information will only be used by the Ingestion Framework at runtime, and won’t be sent to the OpenMetadata server.
SSL Configuration: If you have added SSL to the OpenMetadata server, then you will need to handle the certificates when running the ingestion too. You can either set verifySSL to ignore, or to validate, which will require you to set sslConfig.caCertificate to a local path, on the host where your ingestion runs, that points to the server certificate file. Find more information on how to troubleshoot SSL issues here.
ingestionPipelineFQN: Fully qualified name of the ingestion pipeline, used to identify the current ingestion pipeline.
source:
  type: dbt
  serviceName: service_name
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: cloud
        dbtCloudAuthToken: AUTH_TOKEN
        dbtCloudAccountId: ACCOUNT_ID
        dbtCloudJobId: JOB_ID
        dbtCloudProjectId: PROJECT_ID
        dbtCloudUrl: https://cloud.getdbt.com
      # dbtUpdateDescriptions: true or false
      # dbtUpdateOwners: true or false
      # includeTags: true or false
      # dbtClassificationName: dbtTags
      # databaseFilterPattern:
      #   includes:
      #     - .*db.*
      #   excludes:
      #     - .*demo.*
      # schemaFilterPattern:
      #   includes:
      #     - .*schema.*
      #   excludes:
      #     - .*demo.*
      # tableFilterPattern:
      #   includes:
      #     - .*table.*
      #   excludes:
      #     - .*demo.*
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: INFO  # DEBUG, INFO, WARNING or ERROR
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
    ## Store the service Connection information
    storeServiceConnection: true  # false
    ## Secrets Manager Configuration
    # secretsManagerProvider: aws, azure or noop
    # secretsManagerLoader: airflow or env
    ## If SSL, fill the following
    # verifySSL: validate  # or ignore
    # sslConfig:
    #   caCertificate: /local/path/to/certificate
# ingestionPipelineFQN: <service name>.<ingestion name> ## e.g., "my_redshift.metadata"

2. Run the dbt ingestion

After saving the YAML config, we will run the following command for the dbt ingestion:
metadata ingest -c <path-to-yaml>