
Import & Export

Export table metadata (columns, owners, tags, descriptions) to CSV and import changes back. Supports both synchronous and asynchronous operations.

Export to CSV

GET /v1/tables/name/{fqn}/export

fqn (string, required): Fully qualified name of the table (e.g., snowflake_prod.analytics.public.customers).
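
If you call the REST endpoint directly instead of using the SDK, the export is a plain HTTP GET. Below is a minimal sketch with Python's requests library; the host, token, and bearer-style Authorization header are assumptions for illustration, and the response handling assumes the CSV comes back as the raw body.

import requests

HOST = "https://your-company.getcollate.io/api"   # assumed host
TOKEN = "your-jwt-token"                           # assumed JWT
fqn = "snowflake_prod.analytics.public.customers"

# GET /v1/tables/name/{fqn}/export
resp = requests.get(
    f"{HOST}/v1/tables/name/{fqn}/export",
    headers={"Authorization": f"Bearer {TOKEN}"},  # assumed auth scheme
)
resp.raise_for_status()
csv_data = resp.text  # exported CSV text (response shape is an assumption)
print(csv_data)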

Export Async

GET /v1/tables/name/{fqn}/exportAsync

Returns a job ID for large exports that can be polled for completion.
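
Triggering the async variant over HTTP looks the same; only the path changes. The sketch below simply prints the response, since the exact shape of the job payload (and the endpoint for polling its status) depends on your deployment and is left as an assumption.

import requests

HOST = "https://your-company.getcollate.io/api"   # assumed host
TOKEN = "your-jwt-token"                           # assumed JWT
fqn = "snowflake_prod.analytics.public.customers"

# GET /v1/tables/name/{fqn}/exportAsync kicks off the export and returns a job ID.
resp = requests.get(
    f"{HOST}/v1/tables/name/{fqn}/exportAsync",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json())  # assumed to contain the job identifier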

Import from CSV

PUT /v1/tables/name/{fqn}/import

fqn (string, required): Fully qualified name of the table.

dryRun (boolean, default: true): If true, validates the CSV without applying changes. Set to false to apply.
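
The dry-run-then-apply flow is also available over raw HTTP. The sketch below is an assumption-heavy illustration: it sends the CSV as a plain-text request body and passes dryRun as a query parameter; confirm the expected content type against your server before relying on it.

import requests

HOST = "https://your-company.getcollate.io/api"   # assumed host
TOKEN = "your-jwt-token"                           # assumed JWT
fqn = "snowflake_prod.analytics.public.customers"
csv_data = "name,displayName,description,dataType,tags\n"  # CSV text from a prior export (truncated)

# PUT /v1/tables/name/{fqn}/import with dryRun=true validates without applying.
resp = requests.put(
    f"{HOST}/v1/tables/name/{fqn}/import",
    params={"dryRun": "true"},
    data=csv_data.encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "text/plain",  # assumed content type for the CSV body
    },
)
resp.raise_for_status()
print(resp.json())  # validation summary; repeat with dryRun=false to apply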

Import Async

PUT /v1/tables/name/{fqn}/importAsync

For large imports, use the async variant, which returns a job ID.
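
For the async import through the SDK, a sketch follows. It assumes configure() has already been called as in the full example below, and that the import builder exposes the same execute_async() that export_csv() provides.

from metadata.sdk.entities import Tables

fqn = "snowflake_prod.analytics.public.customers"
csv_data = Tables.export_csv(fqn).execute()  # CSV to import back

# Assumption: import_csv() supports execute_async() like export_csv() does.
job = (
    Tables.import_csv(fqn)
    .with_data(csv_data)
    .set_dry_run(False)
    .execute_async()
)
print(f"Import job: {job}")
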
Python SDK Example
from metadata.sdk import configure
from metadata.sdk.entities import Tables

configure(
    host="https://your-company.getcollate.io/api",
    jwt_token="your-jwt-token"
)

# Synchronous export
csv_data = Tables.export_csv("snowflake_prod.analytics.public.customers").execute()
print(csv_data)

# Async export
job = Tables.export_csv("snowflake_prod.analytics.public.customers").execute_async()
print(f"Export job: {job}")

# Synchronous import (dry run first)
result = (
    Tables.import_csv("snowflake_prod.analytics.public.customers")
    .with_data(csv_data)
    .set_dry_run(True)
    .execute()
)
print(f"Dry run result: {result}")

# Apply the import
result = (
    Tables.import_csv("snowflake_prod.analytics.public.customers")
    .with_data(csv_data)
    .set_dry_run(False)
    .execute()
)

Example exported CSV:

name,displayName,description,dataType,tags
agent_id,,Agent identifier,VARCHAR,
performance_score,,Overall performance score,DECIMAL,

Returns

Export returns CSV text with a header row and one row per column in the table. Import returns a summary of the changes applied (or validation results for a dry run).
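
A common workflow is to export, edit descriptions or tags offline, and import the result back. The sketch below assumes the connection is already configured as in the example above and that the column layout matches the sample CSV; it edits the exported text in memory with Python's standard csv module before re-importing through the SDK.

import csv
import io

from metadata.sdk.entities import Tables

fqn = "snowflake_prod.analytics.public.customers"

# 1. Export the current metadata as CSV text.
csv_data = Tables.export_csv(fqn).execute()

# 2. Edit it in memory: fill in a missing description
#    (column names assumed to match the sample CSV above).
rows = list(csv.DictReader(io.StringIO(csv_data)))
for row in rows:
    if row["name"] == "agent_id" and not row["description"]:
        row["description"] = "Unique identifier of the agent"

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
updated_csv = buffer.getvalue()

# 3. Validate first, then apply.
dry = Tables.import_csv(fqn).with_data(updated_csv).set_dry_run(True).execute()
print(f"Dry run: {dry}")

applied = Tables.import_csv(fqn).with_data(updated_csv).set_dry_run(False).execute()
print(f"Applied: {applied}")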

Error Handling

Code  Error Type     Description
401   UNAUTHORIZED   Invalid or missing authentication token
403   FORBIDDEN      User lacks permission for import/export
404   NOT_FOUND      Table with given FQN does not exist
400   BAD_REQUEST    Invalid CSV format or content
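
When calling the REST endpoints directly, these appear as ordinary HTTP status codes. A minimal sketch of surfacing them with requests, reusing the assumed host and token from the sketches above:

import requests

HOST = "https://your-company.getcollate.io/api"   # assumed host
TOKEN = "your-jwt-token"                           # assumed JWT
fqn = "snowflake_prod.analytics.public.customers"

try:
    resp = requests.get(
        f"{HOST}/v1/tables/name/{fqn}/export",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
except requests.HTTPError as err:
    status = err.response.status_code
    if status == 401:
        print("Check that the JWT token is present and not expired")
    elif status == 403:
        print("The authenticated user lacks import/export permission")
    elif status == 404:
        print("No table found for the given fully qualified name")
    elif status == 400:
        print("The CSV is malformed; run a dry-run import to see validation details")
    else:
        raise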