
Upgrade on Docker

To run OpenMetadata with Docker, you can simply download the docker-compose.yml file. We also include some named volumes to handle data persistence.

You can find more details about Docker deployment here.

Below we have highlighted the steps needed to upgrade to the latest version with Docker. Make sure to also look here for the specific details related to upgrading to 1.0.0.

Prerequisites

Every time you plan to upgrade OpenMetadata to a newer version, make sure to go over all these steps:

Before upgrading your OpenMetadata version, we strongly recommend backing up the metadata.

The source of truth is stored in the underlying database (MySQL and Postgres are supported). During each version upgrade there is a database migration process that needs to run. It will directly modify your database and update the shape of the data to match the newest OpenMetadata release.

It is important to back up the data so that, if we face any unexpected issues during the upgrade process, you can roll back to the previous version without any loss.

You can learn more about how the migration process works here.

During the upgrade, please note that the backup is only for safety and should not be used to restore data to a higher version.

Since version 1.4.0, OpenMetadata encourages using the database's built-in tools for creating logical backups of the metadata:
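For reference, a minimal sketch using the standard mysqldump and pg_dump clients (host, user, and database names are placeholders for your own deployment):

```bash
# MySQL: logical backup of the OpenMetadata database
mysqldump -h <db-host> -u <db-user> -p <openmetadata-database> > openmetadata_backup.sql

# PostgreSQL: logical backup of the OpenMetadata database
pg_dump -h <db-host> -U <db-user> -d <openmetadata-database> -f openmetadata_backup.sql
```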

For production deployments, we recommend relying on cloud services for your database, be it AWS RDS, Azure SQL, or GCP Cloud SQL.

If you're a user of these services, you can leverage their backup capabilities directly:

You can refer to the following guide for more details about backup and restore:

Before running the migrations, it is important to update these parameters (sort_buffer_size for MySQL, work_mem for Postgres) to ensure there are no runtime errors. A safe value is 20MB.

If using MySQL

You can update it via SQL (note that it will reset after the server restarts):
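For example, from a MySQL client with administrative privileges (a sketch; 20971520 bytes is the 20MB value mentioned above):

```sql
-- Requires the SUPER or SYSTEM_VARIABLES_ADMIN privilege;
-- the value resets when the MySQL server restarts.
SET GLOBAL sort_buffer_size = 20971520;

-- Verify the new global value (existing sessions keep their old value)
SHOW GLOBAL VARIABLES LIKE 'sort_buffer_size';
```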

To make the configuration persistent, you'd need to navigate to your MySQL Server install directory and update the my.ini or my.cnf files with sort_buffer_size = 20971520.

If using RDS, you will need to update your instance's Parameter Group to include the above change.

If using Postgres

You can update it via SQL (note that it will reset after the server restarts):
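For example, from a psql session (a sketch; note that a plain SET is session-scoped, so run it on the same connection that executes the migration, or configure it at the database or role level instead):

```sql
-- Applies to the current session only; new sessions fall back to postgresql.conf
SET work_mem = '20MB';

-- Verify the value
SHOW work_mem;
```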

To make the configuration persistent, you'll need to update the postgresql.conf file with work_mem = 20MB.

If using RDS, you will need to update your instance's Parameter Group to include the above change.

Note that this value depends on the size of your query_entity table. If you still see Out of Sort Memory errors during the migration after bumping this value, you can increase it further.

After the migration is finished, you can revert these changes.

Backward Incompatible Changes

We are introducing a new feature that allows users to execute logical test suites. This feature will allow users to run groups of Data Quality tests, even if they belong to different tables (or even services!). Note that before, you could only schedule and execute the tests for each of the tables.

From the UI, you can now create a new Test Suite, add any tests you want, and create and schedule the run.

This change, however, requires some adjustments if you are directly interacting with the OpenMetadata API or if you are running the ingestions externally:

CRUD operations around "executable" Test Suites - the ones directly related to a single table - were managed by the /executable endpoints, e.g., POST /v1/dataQuality/testSuites/executable. We'll keep these endpoints until the next release, but users should update their operations to use the new /base endpoints, e.g., POST /v1/dataQuality/testSuites/base.

This adjusts the naming convention: since all Test Suites are executable, we now differentiate between "base" and "logical" Test Suites.

In the meantime, you can use the /executable endpoints to create and manage the Test Suites, but you'll get deprecation headers in the response. We recommend migrating to the new endpoints as soon as possible to avoid any issues when the /executable endpoints get completely removed.

If you're running the DQ Workflows externally AND YOU ARE NOT STORING THE SERVICE INFORMATION IN OPENMETADATA, this is how they'll change:

A YAML file for 1.5.x would look like this:
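Here is a hedged sketch of what such a 1.5.x configuration looked like (service names, connection options, and the JWT token are placeholders; the serviceConnection section varies per source type, so check the Data Quality docs for your connector):

```yaml
source:
  type: TestSuite
  serviceName: MyTestSuiteService
  sourceConfig:
    config:
      type: TestSuite
      entityFullyQualifiedName: MyService.MyDatabase.MySchema.MyTable
  # Connection details passed directly because they are not stored in OpenMetadata
  serviceConnection:
    config:
      type: Snowflake
      username: <username>
      password: <password>
      account: <account>
      warehouse: <warehouse>
processor:
  type: orm-test-runner
  config: {}
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <jwt-token>
```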

Basically, if you are not storing the service connection in OpenMetadata, you could leverage the source.serviceConnection entry to pass that information.

However, with the ability to execute Logical Test Suites, you can now have multiple tests from different services! This means that the connection information needs to be placed differently. The new YAML file would look like this:
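A hedged sketch of the new shape (field names and placeholders are illustrative; check the generated JSON Schemas for the exact structure in your version):

```yaml
source:
  type: TestSuite
  serviceName: MyLogicalTestSuite
  sourceConfig:
    config:
      type: TestSuite
      # One entry per service the tests belong to
      serviceConnections:
        - serviceName: MySnowflakeService
          serviceConnection:
            config:
              type: Snowflake
              username: <username>
              password: <password>
              account: <account>
              warehouse: <warehouse>
        - serviceName: MyRedshiftService
          serviceConnection:
            config:
              type: Redshift
              username: <username>
              password: <password>
              hostPort: <host>:<port>
              database: <database>
# processor, sink and workflowConfig remain the same as before
```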

As you can see, you can pass multiple serviceConnections to the sourceConfig entry, each one with the connection information and the serviceName they are linked to.

If you are already storing the service connection information in OpenMetadata (e.g., because you have created the services via the UI), there's nothing you need to do. The ingestion will automatically pick up the connection information from the service.

We are updating how we compute the success percentage. Previously, we only took the results of the Source into account for partial success (e.g., the tables we were able to properly retrieve from Snowflake, Redshift, etc.). This meant that we had an error threshold where the workflow would still be considered successful as long as roughly 90% of the tables were successfully ingested. However, any errors when sending the information to OpenMetadata would be considered a failure.

Now, we're changing this behavior to consider the success rate of all the steps involved in the workflow. The UI will then show more Partial Success statuses rather than Failed, properly reflecting the real state of the workflow.

With the 1.6 release, we are moving the View Lineage & Stored Procedure Lineage computation from the metadata workflow to the lineage workflow.

This means that we are removing the overrideViewLineage property from the DatabaseServiceMetadataPipeline schema; it will be moved to the DatabaseServiceQueryLineagePipeline schema.

We are creating a new Auto Classification workflow that will take care of managing the sample data and PII classification, which was previously done by the Profiler workflow. This change will allow us to have a more modular and scalable system.

The Profiler workflow will now only focus on the profiling part of the data, while the Auto Classification will take care of the rest.

This means that we are removing these properties from the DatabaseServiceProfilerPipeline schema:

  • generateSampleData
  • processPiiSensitive
  • confidence

These properties will be moved to the new DatabaseServiceAutoClassificationPipeline schema.

What you will need to do:

  • If you are using the EXTERNAL ingestion for the profiler (YAML configuration), you will need to update your configuration, removing these properties as well (see the sketch after this list).
  • If you still want to use the Auto PII Classification and sampling features, you can create the new workflow from the UI.
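As a reference, a hedged sketch of an external profiler YAML after the change (service names, sampling values, and server settings are placeholders; the point is that the classification-related properties no longer live under the Profiler sourceConfig):

```yaml
source:
  type: snowflake
  serviceName: MySnowflakeService
  sourceConfig:
    config:
      type: Profiler
      # generateSampleData, processPiiSensitive and confidence must be removed here;
      # they now belong to the Auto Classification workflow configuration.
      profileSample: 50
processor:
  type: orm-profiler
  config: {}
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <jwt-token>
```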

We have given more granularity to the EditTags policy. Previously, it was a single policy that allowed the user to manage any kind of tagging on assets, including adding tags, glossary terms, and Tiers.

Now, we have split this policy to give further control over which kind of tagging the user can manage. The EditTags policy has been split into:

  • EditTags: to add tags.
  • EditGlossaryTerms: to add Glossary Terms.
  • EditTier: to add Tier tags.

Since we are introducing the Auto Classification workflow, we are going to remove the ML Tagging action from the Metadata Actions in 1.7. That feature is already covered by the Auto Classification workflow, which brings even more flexibility by allowing on-the-fly usage of the sample data for classification purposes without having to store it in the database.

This impacts users who maintain their own connectors for the ingestion framework that are NOT part of the OpenMetadata Python library (openmetadata-ingestion). We are introducing the connector specification class (ServiceSpec). The ServiceSpec class serves as the entrypoint for the connector and holds the references for the classes that will be used to ingest and process the metadata from the source. You can look at the Postgres connector for an implementation example.

The filtering of Fivetran pipelines now supports using their names instead of IDs. This change may affect existing configurations that rely on pipeline IDs for filtering.

We are removing the jobId field, which was required to ingest dbt metadata from a specific job. Instead, we have added a new jobIds field, which accepts multiple job IDs so that metadata can be ingested from multiple jobs.

The serviceType for MicroStrategy connector is renamed from Mstr to MicroStrategy.

Upgrade Process

  • Stop the running compose deployment (see the command sketch after this list)
  • Download the Docker Compose Service File from the OpenMetadata GitHub release page here
  • Replace the existing Docker Compose Service File with the one downloaded in the step above
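For reference, a minimal sketch (run from the directory that contains your OpenMetadata compose file; the file name may differ in your setup):

```bash
docker compose -f docker-compose.yml down
```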

Please make sure to go through the breaking changes and release highlights.

  • Start the Docker Compose Service with the command below
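For reference, a minimal sketch using the newly downloaded compose file (the file name may differ in your setup):

```bash
docker compose -f docker-compose.yml up -d
```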

Post-Upgrade Steps

Go to Settings -> Applications -> Search Indexing


Before initiating the process by clicking Run Now, ensure that the Recreate Indexes option is enabled to allow rebuilding the indexes as needed.

In the configuration section, you can select the entities you want to reindex.


Since this is required after the upgrade, we want to reindex All the entities.

If you are running the ingestion workflows externally or using a custom Airflow installation, you need to make sure that the Python Client you use is aligned with the OpenMetadata server version.

For example, if you are upgrading the server to version x.y.z, you will need to update your client to the same version:
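A minimal sketch (replace x.y.z with the server version and the bracketed plugin list with the connectors you need):

```bash
pip install "openmetadata-ingestion[<plugin>]==x.y.z"
```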

The plugin parameter is a list of the sources that we want to ingest. An example would look like this: openmetadata-ingestion[mysql,snowflake,s3]==1.2.0. You will find specific instructions for each connector here.

Moreover, if you are working with your own Airflow deployment (not the openmetadata-ingestion image), you will also need to upgrade the openmetadata-managed-apis version:
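A minimal sketch, following the same versioning rule:

```bash
pip install "openmetadata-managed-apis==x.y.z"
```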

Follow these steps to reindex using the CLI. A command sketch covering all four steps follows this list:

  1. List the CronJobs: check which CronJobs are available in your deployment.
  2. Create a Job from a CronJob: create a one-time job from the existing reindex CronJob. Replace <job_name> with the name you want to give the job.
  3. Check the Job Status: verify the status of the created job and wait for it to complete.
  4. View Logs: check the job logs to confirm the reindexing finished successfully. Replace <job_name> with the actual job name.
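Here is a hedged sketch of the four steps, assuming a Kubernetes-based deployment where the reindexing job is exposed as a CronJob (the CronJob and job names are placeholders for your environment):

```bash
# 1. List the available CronJobs
kubectl get cronjobs

# 2. Create a one-time Job from the reindex CronJob
kubectl create job <job_name> --from=cronjob/<reindex_cronjob_name>

# 3. Check the status of the created Job
kubectl get jobs <job_name>

# 4. View the logs of the Job
kubectl logs job/<job_name>
```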

Go to Settings -> {Services} -> {Databases} -> Pipelines


Select the pipelines you want to re-deploy and click Re Deploy.

Follow these steps to deploy the pipelines using the CLI. A command sketch covering all four steps follows this list:

  1. List the CronJobs: check which CronJobs are available in your deployment.
  2. Create a Job from a CronJob: create a one-time job from the existing pipeline-deployment CronJob. Replace <job_name> with the name you want to give the job.
  3. Check the Job Status: verify the status of the created job and wait for it to complete.
  4. View Logs: check the job logs to confirm the pipelines were deployed successfully. Replace <job_name> with the actual job name.
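The same hedged kubectl sketch applies here, this time targeting the pipeline-deployment CronJob (names are placeholders for your environment):

```bash
kubectl get cronjobs
kubectl create job <job_name> --from=cronjob/<deploy_pipelines_cronjob_name>
kubectl get jobs <job_name>
kubectl logs job/<job_name>
```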

If you are seeing broken DAGs, select all the pipelines from all the services and re-deploy them.

Openmetadata-ops Script

The openmetadata-ops script is designed to manage and migrate databases and search indexes, reindex existing data into Elasticsearch or OpenSearch, and redeploy service pipelines.

  • analyze-tables

Migrates secrets from the database to the configured Secrets Manager. Note that this command does not support migrating between external Secrets Managers.

  • changelog

Prints the change log of database migration.

  • check-connection

Checks if a connection can be successfully obtained for the target database.

  • deploy-pipelines

Deploys all the service pipelines.

  • drop-create

Deletes any tables in the configured database and creates new tables based on the current version of OpenMetadata. This command also re-creates the search indexes.

  • info

Shows the list of migrations applied and the pending migrations waiting to be applied on the target database.

  • migrate

Migrates the OpenMetadata database schema and search index mappings.

  • migrate-secrets

Migrates secrets from the database to the configured Secrets Manager. Note that this command does not support migrating between external Secrets Managers.

  • reindex

Reindexes data into the search engine from the command line.

  • repair

Repairs the DATABASE_CHANGE_LOG table, which is used to track all the migrations on the target database. This involves removing entries for the failed migrations and updating the checksum of migrations already applied on the target database.

  • validate

Checks if all the migrations have been applied on the target database.

Display Help

To display the help message:
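For example, assuming the script shipped under bootstrap/ in the OpenMetadata server distribution (the path may differ in your setup):

```bash
./bootstrap/openmetadata-ops.sh --help
```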

To migrate the database schema and search index mappings:
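Using the same assumed script location:

```bash
./bootstrap/openmetadata-ops.sh migrate
```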

To reindex data into the search engine:
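Using the same assumed script location:

```bash
./bootstrap/openmetadata-ops.sh reindex
```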

Troubleshooting

If a Permission Denied error is thrown when running metadata openmetadata-imports-migration --change-config-file-path, you might need to change the permissions on the /opt/airflow/dags folder. SSH into the ingestion container and check the permissions on the folder as sketched below.
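A sketch of the check, assuming the default /opt/airflow layout of the ingestion container:

```bash
ls -la /opt/airflow/
ls -la /opt/airflow/dags/
```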

Both the dags folder and the files inside dags/ should be owned by the airflow user and the root group. If this is not the case, fix the ownership as sketched below.
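A sketch of the fix, run inside the ingestion container (assumes the airflow user and root group noted above):

```bash
chown -R airflow:root /opt/airflow/dags
```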

You might also need to change the permissions on the /opt/airflow/dag_generated_config folder. SSH into the ingestion container and check the permissions on the folder as sketched below.
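A sketch of the check, with the same assumptions as above:

```bash
ls -la /opt/airflow/
ls -la /opt/airflow/dag_generated_config/
```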

Both the dag_generated_config folder and the files inside it should be owned by the airflow user and the root group. If this is not the case, fix the ownership as sketched below.
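A sketch of the fix, with the same assumptions as above:

```bash
chown -R airflow:root /opt/airflow/dag_generated_config
```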