Snowplow

BETA
In this section, we provide guides and references to use the Snowplow connector. Configure and schedule Snowplow metadata workflows from the OpenMetadata UI:

How to Run the Connector Externally

To run the ingestion via the UI, you'll need to use the OpenMetadata Ingestion Container, which ships with custom Airflow plugins to handle workflow deployment. If, instead, you want to manage your workflows externally on your preferred orchestrator, you can check the following docs to run the Ingestion Framework anywhere.

Requirements

Snowplow BDP (Business Data Platform)

For Snowplow BDP deployments, you’ll need:
  • Console URL: The URL of your Snowplow Console (e.g., https://console.snowplowanalytics.com)
  • API Key: An API key with read access to your Snowplow organization
  • Organization ID: Your Snowplow BDP organization identifier
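
For illustration, these three values typically map onto the serviceConnection block of the ingestion YAML. The key names below are assumptions based on the list above, not confirmed fields of this beta connector; check the connector's connection schema for the exact keys.

serviceConnection:
  config:
    type: Snowplow
    consoleUrl: https://console.snowplowanalytics.com   # Console URL
    apiKey: <your-api-key>                               # key with read access to your organization
    organizationId: <your-organization-id>               # BDP organization identifier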

Snowplow Community Edition

For self-hosted Community Edition deployments, you’ll need:
  • Configuration Path: The path to your Snowplow configuration files
  • Iglu Server URL (optional): The URL of your Iglu Server, if you use one for schema management
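
As a similarly hedged sketch, a Community Edition connection would carry these values instead; the key names are assumptions, not confirmed fields.

serviceConnection:
  config:
    type: Snowplow
    configPath: /path/to/snowplow/config      # directory containing your configuration files
    igluServerUrl: https://iglu.example.com   # optional, only if you run an Iglu Server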

Metadata Ingestion

1. Define the YAML Config

This is a sample config for Snowplow:
  • You can learn more about how to configure and run the Ingestion Framework here.
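
The block below is a minimal sketch of what such a config could look like, assuming the standard Ingestion Framework layout (source, sourceConfig, sink, workflowConfig). The connection keys (consoleUrl, apiKey, organizationId) and the sourceConfig type are illustrative assumptions based on the Requirements section above, not confirmed field names for this beta connector.

source:
  type: snowplow
  serviceName: snowplow_prod            # the name the service will appear under in OpenMetadata
  serviceConnection:
    config:
      type: Snowplow
      # BDP connection; key names are assumptions, see the Requirements section
      consoleUrl: https://console.snowplowanalytics.com
      apiKey: <your-api-key>
      organizationId: <your-organization-id>
  sourceConfig:
    config:
      type: PipelineMetadata            # assumed, since the connector models Snowplow pipelines
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <your-jwt-token>

Save the config as, for example, snowplow.yaml and pass that path to the command in the next step.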

2. Run the Command

After saving the YAML config, run the following command:
metadata ingest -c <path-to-yaml>

Data Model

The Snowplow connector extracts the following metadata:
  • Pipelines: Each Snowplow pipeline is imported with its configuration
  • Pipeline Components: Collectors, enrichments, and loaders are imported as pipeline tasks
  • Event Schemas: Iglu schemas are imported as table entities showing the structure of events
  • Lineage: Data flow from pipelines to destination tables is captured

Supported Destinations

The connector can track lineage to the following Snowplow loader destinations:
  • Amazon Redshift
  • Google BigQuery
  • Snowflake
  • Databricks
  • PostgreSQL
  • Amazon S3 (Data Lake)
  • Google Cloud Storage
  • Azure Data Lake Storage

Troubleshooting

Connection Errors

If you encounter connection errors:
  1. For BDP: Verify your API key has the necessary permissions and the organization ID is correct
  2. For Community: Ensure the configuration path exists and is readable

Missing Schemas

If Iglu schemas are not being imported:
  1. For BDP: Check that your API key has access to the Iglu repositories
  2. For Community: Verify the Iglu server URL is accessible or local schema files are present

Performance

For large deployments with many schemas:
  • Use pipeline and schema filter patterns to limit the scope of ingestion (see the sketch after this list)
  • Consider running the ingestion during off-peak hours
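
As a sketch of the first point: filter patterns in OpenMetadata sourceConfig blocks are regex-based include/exclude lists. The exact keys exposed by this beta connector may differ, so pipelineFilterPattern below is an assumption modeled on other pipeline connectors.

sourceConfig:
  config:
    type: PipelineMetadata
    pipelineFilterPattern:
      includes:
        - ^prod_.*        # only ingest production pipelines
      excludes:
        - ^test_.*        # skip test pipelines
    # a schema filter pattern, if exposed by the connector, would follow the same includes/excludes shape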