
Redshift
Stage: PROD

- Requirements
- Metadata Ingestion
- Query Usage
- Lineage
- Data Profiler
- Data Quality
- dbt Integration
- Enable Security
- Reverse Metadata
How to Run the Connector Externally
To run the Ingestion via the UI you’ll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment. If, instead, you want to manage your workflows externally on your preferred orchestrator, you can check the following docs to run the Ingestion Framework anywhere.

Requirements
The Redshift user must be granted the SELECT privilege on the SVV_TABLE_INFO table to fetch the metadata of tables and views. For more information, see the Amazon Redshift documentation on SVV_TABLE_INFO.
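As a sketch, assuming a hypothetical test_user ingestion user, the grant looks like this:

```sql
-- Hypothetical ingestion user; replace with your own user name and password.
CREATE USER test_user WITH PASSWORD 'password';
-- Required so the connector can read table and view metadata.
GRANT SELECT ON TABLE svv_table_info TO test_user;
```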
The Redshift connector supports both Amazon Redshift Provisioned (cluster) and Amazon Redshift Serverless starting from release 1.11.5. The connector automatically detects the deployment type during ingestion and uses the appropriate system views for query and lineage extraction. No additional configuration changes are required.
Python Requirements
To run the Redshift ingestion, you will need to install the connector’s Python package.
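A minimal sketch of the installation, using the redshift extra of the openmetadata-ingestion package (pin the version to match your OpenMetadata server release):

```bash
# Install the Redshift connector plugin for the Ingestion Framework
pip3 install "openmetadata-ingestion[redshift]"
```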
Metadata Ingestion

All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Redshift. In order to create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration able to connect to the source, process the Entities if needed, and reach the OpenMetadata server. The workflow is modeled around the following JSON Schema.

Note: During the metadata ingestion for Redshift, tables whose distribution style (DISTSTYLE) is not AUTO will be marked as partitioned tables.
It is recommended to exclude the information_schema schema from the metadata ingestion, as it contains system tables and views.
1. Define the YAML Config
This is a sample config for Redshift:
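A minimal sketch of that config follows; serviceName, hostPort, credentials, and the JWT token are placeholder values to replace with your own:

```yaml
source:
  type: redshift
  serviceName: aws_redshift            # placeholder: unique service name
  serviceConnection:
    config:
      type: Redshift
      hostPort: cluster.name.region.redshift.amazonaws.com:5439  # placeholder
      username: username               # placeholder
      password: password               # placeholder
      database: dev                    # placeholder
  sourceConfig:
    config:
      type: DatabaseMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"  # placeholder
    authProvider: openmetadata
    securityConfig:
      jwtToken: "<bot-jwt-token>"          # placeholder
```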
2. Run with the CLI

First, we will need to save the YAML file. Afterward, with all requirements installed, we can run:
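Assuming the config was saved to a local file:

```bash
# Run the metadata ingestion workflow from the saved config
metadata ingest -c <path-to-yaml>
```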
Query Usage

The Query Usage workflow will be using the query-parser processor.
After running a Metadata Ingestion workflow, we can run the Query Usage workflow. The serviceName should be the same as the one used during the Metadata Ingestion, so the ingestion bot can retrieve the serviceConnection details from the server.
1. Define the YAML Config
This is a sample config for Usage:
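A minimal sketch of the Usage config, with the same placeholder conventions as the metadata ingestion example:

```yaml
source:
  type: redshift-usage
  serviceName: aws_redshift            # must match the Metadata Ingestion service name
  sourceConfig:
    config:
      type: DatabaseUsage
      queryLogDuration: 1              # days of query history to process
processor:
  type: query-parser
  config: {}
stage:
  type: table-usage
  config:
    filename: /tmp/redshift_usage      # placeholder: temporary stage file
bulkSink:
  type: metadata-usage
  config:
    filename: /tmp/redshift_usage      # placeholder: must match the stage file
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"  # placeholder
    authProvider: openmetadata
    securityConfig:
      jwtToken: "<bot-jwt-token>"          # placeholder
```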
2. Run with the CLI

After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
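For the Usage workflow the CLI subcommand is usage:

```bash
# Note the usage subcommand instead of ingest
metadata usage -c <path-to-yaml>
```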
Lineage

After running a Metadata Ingestion workflow, we can run the Lineage workflow. The serviceName should be the same as the one used during the Metadata Ingestion, so the ingestion bot can retrieve the serviceConnection details from the server.
1. Define the YAML Config
This is a sample config for Lineage (a sketch is included after the note below):
- You can learn more about how to configure and run the Lineage Workflow to extract Lineage data from here.
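A minimal sketch of the Lineage config, with placeholders as before:

```yaml
source:
  type: redshift-lineage
  serviceName: aws_redshift            # must match the Metadata Ingestion service name
  sourceConfig:
    config:
      type: DatabaseLineage
      queryLogDuration: 1              # days of query history to parse
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"  # placeholder
    authProvider: openmetadata
    securityConfig:
      jwtToken: "<bot-jwt-token>"          # placeholder
```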
2. Run with the CLI
After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
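The Lineage workflow runs with the same ingest subcommand:

```bash
metadata ingest -c <path-to-yaml>
```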
Data Profiler

The Data Profiler workflow will be using the orm-profiler processor.
After running a Metadata Ingestion workflow, we can run the Data Profiler workflow.
The serviceName should be the same as the one used during the Metadata Ingestion, so the ingestion bot can retrieve the serviceConnection details from the server.
1. Define the YAML Config
This is a sample config for the profiler (a sketch is included after the note below):
- You can learn more about how to configure and run the Profiler Workflow to extract Profiler data and execute the Data Quality from here.
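A minimal sketch of the profiler config, with placeholders as before:

```yaml
source:
  type: redshift
  serviceName: aws_redshift            # must match the Metadata Ingestion service name
  sourceConfig:
    config:
      type: Profiler
      generateSampleData: true
processor:
  type: orm-profiler
  config: {}
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"  # placeholder
    authProvider: openmetadata
    securityConfig:
      jwtToken: "<bot-jwt-token>"          # placeholder
```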
2. Run with the CLI
After saving the YAML config, we will run the command the same way we did for the metadata ingestion. Note how instead of ingest, we are using the profile command to select the Profiler workflow:
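Sketch of the run:

```bash
# profile instead of ingest selects the Profiler workflow
metadata profile -c <path-to-yaml>
```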
Auto Classification
The Auto Classification workflow will be using the orm-profiler processor.
After running a Metadata Ingestion workflow, we can run the Auto Classification workflow.
The serviceName should be the same as the one used during the Metadata Ingestion, so the ingestion bot can retrieve the serviceConnection details from the server.
1. Define the YAML Config
This is a sample config for the Auto Classification Workflow:
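A minimal sketch, assuming the AutoClassification source config type used by recent OpenMetadata releases; placeholders as before:

```yaml
source:
  type: redshift
  serviceName: aws_redshift            # must match the Metadata Ingestion service name
  sourceConfig:
    config:
      type: AutoClassification
      storeSampleData: true
      enableAutoClassification: true
processor:
  type: orm-profiler
  config: {}
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"  # placeholder
    authProvider: openmetadata
    securityConfig:
      jwtToken: "<bot-jwt-token>"          # placeholder
```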
2. Run with the CLI

After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
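Recent releases expose a dedicated classify subcommand for this workflow (an assumption to verify against your installed version):

```bash
metadata classify -c <path-to-yaml>
```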
Data Quality

Adding Data Quality Test Cases from YAML Config
When creating a YAML config for a test workflow, the source configuration is very simple: it only needs the serviceName (this name needs to be unique) and entityFullyQualifiedName (the entity we’ll be executing tests against) keys.
Once you have defined your source configuration, you’ll need to define the processor configuration. The processor type will be "orm-test-runner". For accepted test definition names and parameter value names, refer to the tests page.
You can keep your YAML config as simple as follows if the table already has tests.
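A minimal sketch of such a config; the service name and entity FQN are placeholders:

```yaml
source:
  type: testsuite
  serviceName: MyTestSuiteService      # placeholder: unique service name
  sourceConfig:
    config:
      type: TestSuite
      entityFullyQualifiedName: <service>.<database>.<schema>.<table>  # placeholder
processor:
  type: orm-test-runner
  config: {}
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"  # placeholder
    authProvider: openmetadata
    securityConfig:
      jwtToken: "<bot-jwt-token>"          # placeholder
```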
Key reference:
- forceUpdate: if the test case exists (based on the test case name) for the entity, implements the strategy to follow when running the test (i.e. whether or not to update parameters)
- testCases: list of test cases to add to the referenced entity. Note that we will execute all the tests present in the Table.
- name: test case name
- testDefinitionName: test definition
- columnName: only applies to column tests. The name of the column to run the test against.
- parameterValues: parameter values of the test
The sink and workflowConfig will have the same settings as the ingestion and profiler workflows.
Full YAML Config Example
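A fuller sketch with an explicit test case; the service, table, column, and test case names are hypothetical:

```yaml
source:
  type: testsuite
  serviceName: MyTestSuiteService      # placeholder: unique service name
  sourceConfig:
    config:
      type: TestSuite
      entityFullyQualifiedName: aws_redshift.dev.public.customers  # placeholder
processor:
  type: orm-test-runner
  config:
    forceUpdate: false
    testCases:
      - name: column_values_length_customer_name   # hypothetical test case name
        testDefinitionName: columnValueLengthsToBeBetween
        columnName: customer_name                  # hypothetical column
        parameterValues:
          - name: minLength
            value: 1
          - name: maxLength
            value: 25
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"  # placeholder
    authProvider: openmetadata
    securityConfig:
      jwtToken: "<bot-jwt-token>"          # placeholder
```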
How to Run Tests
To run the tests from the CLI, execute the following command:
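Sketch of the run:

```bash
# test selects the Data Quality workflow
metadata test -c <path-to-yaml>
```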
Securing Redshift Connection with SSL in OpenMetadata

To configure SSL for secure connections between OpenMetadata and a Redshift database, Redshift offers various SSL modes, each providing a different level of connection security. When running the ingestion process externally, specify the SSL mode to be used for the Redshift connection, such as prefer, verify-ca, allow, and others. Once you’ve chosen the SSL mode, provide the CA certificate for SSL validation (caCertificate). Only the CA certificate is required for SSL validation in Redshift.
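A sketch of the relevant serviceConnection excerpt, assuming the sslMode and sslConfig keys of the Redshift connection schema; paths and values are placeholders:

```yaml
serviceConnection:
  config:
    type: Redshift
    hostPort: cluster.name.region.redshift.amazonaws.com:5439  # placeholder
    username: username                                         # placeholder
    database: dev                                              # placeholder
    sslMode: verify-ca
    sslConfig:
      caCertificate: "/path/to/ca-certificate.pem"             # placeholder path
```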