
dbt (PROD)

Feature List
✓ Metadata
✓ Queries
✓ Lineage
✓ Tags
✓ Tiers
✓ Domains
✓ Custom Properties
✓ Glossary
✓ Owners
✓ Descriptions
✓ Tests
✓ Exposures
How to Run the Connector Externally
To run the ingestion via the UI, you'll need the OpenMetadata Ingestion Container, which ships with custom Airflow plugins to handle workflow deployment. If, instead, you want to manage your workflows externally on your preferred orchestrator, check the following docs to run the Ingestion Framework anywhere.

Requirements
You must have access to dbt artifacts. At minimum, the manifest.json file is required. The catalog.json and run_results.json files are optional but recommended for richer metadata.
For dbt Cloud, create a service token with the Account Viewer permission and collect the account, project, and job IDs if you want to target a specific run.
Python Requirements
To run the dbt ingestion, install the dbt plugin for the Ingestion Framework.

dbt Ingestion
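A minimal sketch of the install step, assuming the package is published as `openmetadata-ingestion` with a `dbt` extra (verify the exact package name and any version pinning against your OpenMetadata release):

```shell
# Install the Ingestion Framework with the dbt extra
pip3 install "openmetadata-ingestion[dbt]"
```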
All connectors are defined as JSON Schemas. You can find the structure for dbt workflows in the OpenMetadata spec repository.

1. Define the YAML Config
Choose one of the following dbt artifact sources:

- AWS S3 Buckets
- Google Cloud Storage Buckets
- Azure Storage Buckets
- Local Storage
- File Server
- dbt Cloud
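As an illustration, here is a minimal sketch of a dbt workflow config using the Local Storage source. Field names follow the dbt workflow JSON Schema as commonly documented (`dbtConfigSource`, `dbtConfigType`, and the artifact path fields); the service name, file paths, and server settings are placeholders, so check them against the spec repository for your version:

```yaml
source:
  type: dbt
  serviceName: my_dbt_service   # must match an existing database service
  sourceConfig:
    config:
      type: DBT
      dbtConfigSource:
        dbtConfigType: local
        # manifest.json is required; catalog.json and run_results.json are optional
        dbtManifestFilePath: /path/to/manifest.json
        dbtCatalogFilePath: /path/to/catalog.json
        dbtRunResultsFilePath: /path/to/run_results.json
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
```

For the other sources (S3, GCS, Azure, File Server, dbt Cloud), the `dbtConfigSource` block changes to carry the relevant credentials and bucket or account/project/job identifiers instead of local paths.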