S3 Storage

This page contains the setup guide and reference information for the S3 connector. Configure and schedule S3 metadata workflows from the CLI:

How to Run the Connector Externally

To run the Ingestion via the UI you’ll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment. If, instead, you want to manage your workflows externally on your preferred orchestrator, you can check the following docs to run the Ingestion Framework anywhere.

Requirements

OpenMetadata 1.0 or later

To deploy OpenMetadata, check the Deployment guides.
To run the metadata ingestion, we need the following permissions in AWS:

S3 Permissions

For all the buckets that we want to ingest, we need to provide the following:
  • s3:ListBucket
  • s3:GetObject
  • s3:GetBucketLocation
  • s3:ListAllMyBuckets

Note that the Resources should be all the buckets that you’d like to scan. A possible policy could be:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
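
As a sketch of how this policy could be attached (the policy and user names below are placeholders, not something the connector requires), save the JSON above to a file and use the AWS CLI:

aws iam create-policy \
    --policy-name OpenMetadataS3Read \
    --policy-document file://s3-read-policy.json

aws iam attach-user-policy \
    --user-name openmetadata-ingestion \
    --policy-arn arn:aws:iam::<account-id>:policy/OpenMetadataS3Read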

CloudWatch Permissions

CloudWatch metrics are used to fetch the total size in bytes of a bucket and the total number of files. This requires:
  • cloudwatch:GetMetricData
  • cloudwatch:ListMetrics

The policy would look like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics"
            ],
            "Resource": "*"
        }
    ]
}
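
As an optional sanity check (not part of the connector setup), you can confirm that the bucket storage metrics the connector reads, most likely BucketSizeBytes and NumberOfObjects in the AWS/S3 namespace, are visible with these permissions:

aws cloudwatch list-metrics --namespace AWS/S3 --metric-name BucketSizeBytes
aws cloudwatch list-metrics --namespace AWS/S3 --metric-name NumberOfObjects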

Python Requirements

We have support for Python versions 3.9-3.11.
To run the S3 ingestion, you will need to install:
pip3 install "openmetadata-ingestion[datalake]"

OpenMetadata Manifest

In any other connector, extracting metadata happens automatically. In this case, we will be able to extract high-level metadata from buckets, but in order to understand their internal structure, users need to provide an openmetadata.json file at the bucket root.

Supported File Formats: [ "csv", "tsv", "avro", "parquet", "json", "json.gz", "json.zip" ]

You can learn more about this here. Keep reading for an example of the shape of the manifest file.

OpenMetadata Manifest

Our manifest file is defined as a JSON Schema, and can look like this:
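(The field names below, dataPath, structureFormat, and isPartitioned, are an illustrative sketch; the authoritative shape is the manifest JSON Schema.)

{
    "entries": [
        {
            "dataPath": "transactions",
            "structureFormat": "csv",
            "isPartitioned": false
        },
        {
            "dataPath": "cities",
            "structureFormat": "parquet",
            "isPartitioned": true
        }
    ]
}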

Global Manifest

You can also manage a single manifest file, named openmetadata_storage_manifest.json, to centralize the ingestion process for any container. You can still keep a local openmetadata.json manifest in each container, but whenever a global manifest is present, we will always try to pick it up during the ingestion.

Metadata Ingestion

All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to S3. In order to create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration able to connect to the source, process the Entities if needed, and reach the OpenMetadata server. The workflow is modeled around the following JSON Schema.

1. Define the YAML Config
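
A minimal sketch of what the configuration could look like, assuming an S3 service connection with static credentials and a StorageMetadata source config (field names and placeholder values should be checked against the JSON Schema referenced above):

source:
  type: s3
  serviceName: s3_storage_demo
  serviceConnection:
    config:
      type: S3
      awsConfig:
        awsAccessKeyId: <aws-access-key-id>
        awsSecretAccessKey: <aws-secret-access-key>
        awsRegion: <aws-region>
  sourceConfig:
    config:
      type: StorageMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <jwt-token>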

2. Run with the CLI

First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
metadata ingest -c <path-to-yaml>
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.