Iceberg

BETA
In this section, we provide guides and references to use the Iceberg connector. Configure and schedule Iceberg metadata workflows from the OpenMetadata UI:

Requirements

The requirements depend on the Catalog and the FileSystem used. In short, the credentials you provide must be able to read the Catalog and the Iceberg metadata files.

Glue Catalog

The credentials must have the glue:GetDatabases and glue:GetTables permissions to read the Catalog, and the s3:GetObject permission on the location of the Iceberg tables.
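
As a quick pre-flight check, you can verify those permissions with a short boto3 script before deploying the ingestion. This is a minimal sketch and not part of the connector; the region, bucket, and metadata key below are placeholder values.

```python
import boto3

# Assumes credentials are resolved through the standard boto3 chain
# (environment variables, profile, or instance role).
glue = boto3.client("glue", region_name="us-east-1")  # placeholder region

# glue:GetDatabases / glue:GetTables — what the connector reads from the Catalog.
for db in glue.get_databases()["DatabaseList"]:
    tables = glue.get_tables(DatabaseName=db["Name"])["TableList"]
    print(db["Name"], [t["Name"] for t in tables])

# s3:GetObject — read one Iceberg metadata file from the table location.
s3 = boto3.client("s3", region_name="us-east-1")
s3.get_object(
    Bucket="my-bucket",                                   # placeholder
    Key="warehouse/my_table/metadata/v1.metadata.json",   # placeholder
)
```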

DynamoDB Catalog

The credentials must have the dynamodb:DescribeTable and dynamodb:GetItem permissions on the Iceberg Catalog table, and the s3:GetObject permission on the location of the Iceberg tables.
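
A similar hedged pre-flight check for the DynamoDB Catalog; the table name and key schema below are placeholders and should match however your catalog table is defined.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")  # placeholder region

# dynamodb:DescribeTable — confirm the catalog table is reachable.
dynamodb.describe_table(TableName="iceberg_catalog")          # placeholder name

# dynamodb:GetItem — read a single catalog entry. The key below is an
# assumption for illustration; use your table's actual key schema.
dynamodb.get_item(
    TableName="iceberg_catalog",
    Key={"identifier": {"S": "my_db.my_table"}},
)
```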

Hive / REST Catalog

The requirements depend on where and how the Hive / REST Catalog is set up and where the Iceberg files are stored.

Metadata Ingestion

1

Visit the Services Page

Click `Settings` in the side navigation bar and then `Services`. The first step is to ingest the metadata from your sources. To do that, you first need to create a Service connection. This Service will be the bridge between OpenMetadata and your source system. Once a Service is created, it can be used to configure your ingestion workflows.
2

Create a New Service

Click on _Add New Service_ to start the Service creation.
3

Select the Service Type

Select Iceberg as the Service type and click _Next_.
4

Name and Describe your Service

Provide a name and description for your Service.

Service Name

OpenMetadata uniquely identifies Services by their **Service Name**. Provide a name that distinguishes your deployment from other Services, including the other Iceberg Services that you might be ingesting metadata from. Note that when the name is set, it cannot be changed.
5

Configure the Service Connection

In this step, we will configure the connection settings required for Iceberg. Please follow the instructions below to properly configure the Service to read from your sources. You will also find helper documentation on the right-hand side panel in the UI.

Connection Details

1

Connection Details

When using a Hybrid Ingestion Runner, any sensitive credential fields—such as passwords, API keys, or private keys—must reference secrets using the following format:
password: secret:/my/database/password
This applies only to fields marked as secrets in the connection form (these typically mask input and show a visibility toggle icon). For a complete guide on managing secrets in hybrid setups, see the Hybrid Ingestion Runner Secret Management Guide.
Glue Catalog

  • AWS Credentials

DynamoDB Catalog

  • Table Name: DynamoDB Table that works as the Iceberg Catalog.
  • AWS Credentials

Hive Catalog

  • Uri: Uri to the Hive Metastore. For Example: ‘thrift://localhost:9083’
  • File System

REST Catalog

  • Uri: Uri to the REST Catalog. For Example: ‘http://rest-catalog/ws’. (A configuration sketch using these options follows this list.)
  • Credential (Optional): OAuth2 credential to be used on the authentication flow.
    • Client ID: OAuth2 Client ID.
    • Client Secret: OAuth2 Client Secret.
  • Token (Optional): Bearer Token to use for the ‘Authorization’ header.
  • SSL (Optional):
    • CA Certificate Path: Path to the CA Bundle.
    • Client Certificate Path: Path to the Client Certificate.
    • Private Key Path: Path to the Private Key Certificate.
  • Sigv4 (Optional): Needed if signing requests using the AWS SigV4 protocol.
    • Signing AWS Region: AWS Region to use when signing a request.
    • Signing Name: Name to use when signing a request.
  • File System

Common

  • Database Name (Optional): Custom Database Name for your Iceberg Service. If it is not set it will be ‘default’.
  • Warehouse Location (Optional): Custom Warehouse Location. Most Catalogs already have the Warehouse Location defined properly and this shouldn’t be needed. In case of a custom implementation you can pass the location here. For example: ‘s3://my-bucket/warehouse/’
  • Ownership Property: Table property to look for the Owner. It defaults to ‘owner’. The Owner should be the same e-mail set on the OpenMetadata user/group.
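
To make these options more concrete, here is a minimal, hedged sketch of how the REST Catalog settings above map onto a pyiceberg catalog configuration (the library commonly used to read Iceberg catalogs from Python). The property names follow pyiceberg's catalog configuration; the URI, credential, warehouse, and S3 values are placeholders.

```python
from pyiceberg.catalog import load_catalog

# A minimal sketch, assuming pyiceberg's REST catalog properties.
catalog = load_catalog(
    "my_rest_catalog",
    **{
        "type": "rest",
        "uri": "http://rest-catalog/ws",                 # Uri
        "credential": "my-client-id:my-client-secret",   # OAuth2 Client ID/Secret
        # "token": "my-bearer-token",                    # alternative: Bearer Token
        "warehouse": "s3://my-bucket/warehouse/",        # Warehouse Location
        # File System (S3) credentials used to read metadata and data files:
        "s3.access-key-id": "AKIAIOSFODNN7EXAMPLE",
        "s3.secret-access-key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "s3.region": "us-east-1",
    },
)

# List the namespaces and tables the ingestion would walk through.
for namespace in catalog.list_namespaces():
    print(namespace, catalog.list_tables(namespace))
```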

File System

AWS Credentials

  • AWS Access Key ID & AWS Secret Access Key: When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and authorize your requests (docs). Access keys consist of two parts: An access key ID (for example, AKIAIOSFODNN7EXAMPLE), and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). You must use both the access key ID and secret access key together to authenticate your requests. You can find further information on how to manage your access keys here.
  • AWS Region: Each AWS Region is a separate geographic area in which AWS clusters data centers (docs). As AWS can have instances in multiple regions, we need to know the region the service you want to reach belongs to. Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the services programmatically, there are different ways in which we can extract and use the rest of the AWS configuration. You can find further information about configuring your credentials here. A boto3 sketch showing how these fields are typically used follows this list.
  • AWS Session Token (optional): If you are using temporary credentials to access your services, you will need to provide the AWS Access Key ID and AWS Secret Access Key, and these temporary credentials will also include an AWS Session Token. You can find more information on Using temporary credentials with AWS resources.
  • Endpoint URL (optional): To connect programmatically to an AWS service, you use an endpoint. An endpoint is the URL of the entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests. Find more information on AWS service endpoints.
  • Profile Name (Not Supported): A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, the settings and credentials are used to run that command. Multiple named profiles can be stored in the config and credentials files. You can set this field if you’d like to use a profile other than default. Find more information about Named profiles for the AWS CLI here.
  • Assume Role Arn (Not Supported): Typically, you use AssumeRole within your account or for cross-account access. In this field you’ll set the ARN (Amazon Resource Name) of the policy of the other account. A user who wants to access a role in a different account must also have permissions that are delegated from the account administrator. The administrator must attach a policy that allows the user to call AssumeRole for the ARN of the role in the other account. This is a required field if you’d like to AssumeRole. Find more information on AssumeRole.
When using Assume Role authentication, ensure you provide the following details:
  • AWS Region: Specify the AWS region for your deployment.
  • Assume Role ARN: Provide the ARN of the role in your AWS account that OpenMetadata will assume.
  • Assume Role Session Name (Not Supported): An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role is assumed by different principals or for different reasons. By default, we’ll use the name OpenMetadataSession. Find more information about the Role Session Name.
  • Assume Role Source Identity (Not Supported): The source identity specified by the principal that is calling the AssumeRole operation. You can use source identity information in AWS CloudTrail logs to determine who took actions with a role. Find more information about Source Identity.
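
As an illustration of how these fields map onto an AWS client, here is a hedged boto3 sketch; every value is a placeholder, and the assume-role call mirrors the fields described above.

```python
import boto3

# Static credentials: Access Key ID, Secret Access Key, optional Session Token.
session = boto3.Session(
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",                          # placeholder
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # placeholder
    aws_session_token=None,          # only needed with temporary credentials
    region_name="us-east-1",         # AWS Region
)

# Endpoint URL: point the client at an alternate (e.g. S3-compatible) endpoint.
s3 = session.client("s3", endpoint_url="https://s3.us-east-1.amazonaws.com")

# Assume Role: exchange the base credentials for temporary role credentials.
sts = session.client("sts")
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/OpenMetadataIngestion",  # placeholder ARN
    RoleSessionName="OpenMetadataSession",
)["Credentials"]
```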

Azure Credentials

  • Client ID: Client ID of the data storage account.
  • Client Secret: Client Secret of the account.
  • Tenant ID: Tenant ID under which the data storage account falls.
  • Account Name: Account Name of the data storage account (see the sketch below for how these fields are typically used).
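
As a hedged illustration of how these Azure fields are typically consumed, here is a short sketch using the azure-identity and azure-storage-blob packages; the tenant, client, secret, and account names are placeholders.

```python
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

# Tenant ID, Client ID, and Client Secret identify the service principal.
credential = ClientSecretCredential(
    tenant_id="00000000-0000-0000-0000-000000000000",   # placeholder
    client_id="11111111-1111-1111-1111-111111111111",   # placeholder
    client_secret="my-client-secret",                   # placeholder
)

# Account Name identifies the storage account holding the Iceberg files.
service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    credential=credential,
)
print([container.name for container in service.list_containers()])
```
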
2

Test the Connection

Once the credentials have been added, click on Test Connection and Save the changes.
3

Configure Metadata Ingestion

In this step we will configure the metadata ingestion pipeline. Please follow the instructions below.

Metadata Ingestion Options

If the owner’s name is openmetadata, you need to enter [email protected] in the name section of the add team/user form; click here for more info.
  • Name: This field refers to the name of the ingestion pipeline; you can customize the name or use the generated one.
  • Database Filter Pattern (Optional): Use the database filter patterns to control whether or not to include databases as part of metadata ingestion (a regex sketch illustrating this behaviour appears after this list).
    • Include: Explicitly include databases by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all databases with names matching one or more of the supplied regular expressions. All other databases will be excluded.
    • Exclude: Explicitly exclude databases by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all databases with names matching one or more of the supplied regular expressions. All other databases will be included.
  • Schema Filter Pattern (Optional): Use the schema filter patterns to control whether to include schemas as part of metadata ingestion.
    • Include: Explicitly include schemas by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all schemas with names matching one or more of the supplied regular expressions. All other schemas will be excluded.
    • Exclude: Explicitly exclude schemas by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all schemas with names matching one or more of the supplied regular expressions. All other schemas will be included.
  • Table Filter Pattern (Optional): Use the table filter patterns to control whether to include tables as part of metadata ingestion.
    • Include: Explicitly include tables by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all tables with names matching one or more of the supplied regular expressions. All other tables will be excluded.
    • Exclude: Explicitly exclude tables by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all tables with names matching one or more of the supplied regular expressions. All other tables will be included.
  • Enable Debug Log (toggle): Set the Enable Debug Log toggle to set the default log level to debug.
  • Mark Deleted Tables (toggle): Set the Mark Deleted Tables toggle to flag tables as soft-deleted if they are not present anymore in the source system.
  • Mark Deleted Tables from Filter Only (toggle): Set the Mark Deleted Tables from Filter Only toggle to flag tables as soft-deleted only if they are no longer present within the filtered schemas or databases. This flag is useful when you have more than one ingestion pipeline for the same service: for example, if each pipeline ingests a different schema, each pipeline should only mark tables as deleted within its own filtered scope.
  • includeTables (toggle): Optional configuration to turn off fetching metadata for tables.
  • includeViews (toggle): Set the Include views toggle to control whether to include views as part of metadata ingestion.
  • includeTags (toggle): Set the ‘Include Tags’ toggle to control whether to include tags as part of metadata ingestion.
  • includeOwners (toggle): Set the ‘Include Owners’ toggle to control whether to include owners to the ingested entity if the owner email matches with a user stored in the OM server as part of metadata ingestion. If the ingested entity already exists and has an owner, the owner will not be overwritten.
  • includeStoredProcedures (toggle): Optional configuration to toggle the Stored Procedures ingestion.
  • includeDDL (toggle): Optional configuration to toggle the DDL Statements ingestion.
  • queryLogDuration (Optional): Configuration to tune how far we want to look back in query logs to process Stored Procedures results.
  • queryParsingTimeoutLimit (Optional): Configuration to set the timeout for parsing the query in seconds.
  • useFqnForFiltering (toggle): Regex will be applied on fully qualified name (e.g service_name.db_name.schema_name.table_name) instead of raw name (e.g. table_name).
  • Incremental (Beta): Use Incremental Metadata Extraction after the first execution. This is done by getting the changed tables instead of all of them. Only Available for BigQuery, Redshift and Snowflake
    • Enabled: If True, enables Metadata Extraction to be Incremental.
    • lookback Days: Number of days to search back for a successful pipeline run. The timestamp of the last found successful pipeline run will be used as a base to search for updated entities.
    • Safety Margin Days: Number of days to add to the last successful pipeline run timestamp to search for updated entities.
  • Threads (Beta): Use a Multithread approach for Metadata Extraction. You can define here the number of threads you would like to run concurrently. For further information please check the documentation on Metadata Ingestion - Multithreading
Note that the right-hand side panel in the OpenMetadata UI will also share useful documentation when configuring the ingestion.
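
To illustrate the include/exclude semantics described above, here is a small hedged Python sketch; the patterns and schema names are made up, and the actual matching is performed by the ingestion framework, not by this snippet.

```python
import re

include_patterns = [r"^sales_.*", r"^finance$"]   # hypothetical Include field
exclude_patterns = [r".*_staging$"]               # hypothetical Exclude field

def is_ingested(name: str) -> bool:
    """Mirror the include/exclude behaviour described above."""
    included = any(re.match(p, name) for p in include_patterns) if include_patterns else True
    excluded = any(re.match(p, name) for p in exclude_patterns)
    return included and not excluded

for schema in ["sales_emea", "sales_emea_staging", "finance", "hr"]:
    print(schema, "ingested" if is_ingested(schema) else "skipped")
# sales_emea: ingested, sales_emea_staging: skipped, finance: ingested, hr: skipped
```
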
4

Schedule the Ingestion and Deploy

Scheduling can be set up at an hourly, daily, weekly, or manual cadence. The timezone is in UTC. Select a Start Date to schedule the ingestion; adding an End Date is optional. Review your configuration settings. If they match what you intended, click Deploy to create the Service and schedule metadata ingestion. If something doesn’t look right, click the Back button to return to the appropriate step and change the settings as needed.
5

View the Ingestion Pipeline

Once the workflow has been successfully deployed, you can view the Ingestion Pipeline running from the Service Page.
If AutoPilot is enabled, workflows like usage tracking, data lineage, and similar tasks will be handled automatically. Users don’t need to set up or manage them - AutoPilot takes care of everything in the system.

Securing Rest Catalog Connection with SSL in OpenMetadata

When using SSL to establish secure connections between OpenMetadata and Rest Catalog, you can specify the caCertificate to provide the CA certificate used for SSL validation. Alternatively, if both client and server require mutual authentication, you’ll need to use all three parameters: ssl_key, ssl_cert, and ssl_ca. In this case, ssl_cert is used for the client’s SSL certificate, ssl_key for the private key associated with the SSL certificate, and ssl_ca for the CA certificate to validate the server’s certificate.
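
As an illustration of how those three parameters are typically used by an HTTPS client, here is a hedged Python sketch based on the requests library; the URL and file paths are placeholders, and the connector performs this wiring internally.

```python
import requests

ca_path = "/path/to/ca-bundle.pem"        # CA Certificate Path (placeholder)
cert_path = "/path/to/client-cert.pem"    # Client Certificate Path (placeholder)
key_path = "/path/to/client-key.pem"      # Private Key Path (placeholder)

# Server validation only: check the REST Catalog's certificate against the CA bundle.
requests.get("https://rest-catalog/ws/v1/config", verify=ca_path)

# Mutual TLS: additionally present the client certificate and its private key.
requests.get(
    "https://rest-catalog/ws/v1/config",
    verify=ca_path,
    cert=(cert_path, key_path),
)
```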