
Status

Current state"Under Discussion"

...

  • Depending on the context, the resolver is configured in streaming or batch mode.
  • Depending on the context, the resolver supports metadata columns (not supported for DataStream API conversion); see the sketch after this list.
  • Instances are provided by the CatalogManager
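
To make these points concrete, the resolver contract could look roughly as follows. The resolve method matches the SchemaResolver used later in this document; the two boolean accessors and their names are illustrative assumptions, not part of the proposal:

interface SchemaResolver {

    // resolves column data types, computed column expressions, watermark
    // specs, and constraints of the given unresolved schema
    ResolvedSchema resolve(Schema schema);

    // hypothetical accessors for the configured context; the names are
    // assumptions for illustration only
    boolean isStreamingMode();

    boolean supportsMetadataColumns();
}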

...

We suggest the following interface additions:

CatalogBaseTable {

    Schema getUnresolvedSchema();
}

ResolvedCatalogTable implements CatalogTable {

    ResolvedSchema getResolvedSchema();
}

ResolvedCatalogView implements CatalogView {

    ResolvedSchema getResolvedSchema();
}

DynamicTableFactoryContext#getCatalogTable(): ResolvedCatalogTable
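
For illustration, a connector factory could then access the resolved schema as follows. This is a hypothetical excerpt: MyTableFactory, MyTableSource, and the toPhysicalRowDataType() helper are assumptions, not part of the proposed interfaces.

class MyTableFactory implements DynamicTableSourceFactory {

    @Override
    public DynamicTableSource createDynamicTableSource(Context context) {
        // the factory context now returns a fully resolved table
        ResolvedCatalogTable table = context.getCatalogTable();
        ResolvedSchema schema = table.getResolvedSchema();

        // e.g. derive the physical row type from the resolved columns;
        // toPhysicalRowDataType() is an assumed helper for illustration
        DataType physicalRowType = schema.toPhysicalRowDataType();
        return new MyTableSource(physicalRowType);
    }

    // other factory methods omitted
}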



...

Notes:

  • The property design of CatalogTable#toProperties and CatalogTable#fromProperties will not change.
  • However, once we drop CatalogTable#getSchema(): TableSchema, only ResolvedCatalogTable#toProperties will be supported.
  • CatalogTable#fromProperties will always return an unresolved catalog table.
  • For backwards compatibility, we leave Catalog#createTable and Catalog#alterTable untouched. The catalog manager will call them with resolved tables/views only, but how those instances are handled is left to the catalog implementation. A sketch of the resulting property round trip follows this list.
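
As a rough sketch of the property round trip described in these notes (assuming a static CatalogTable.fromProperties factory method corresponding to the interfaces listed above):

// serialization is only offered on the resolved table
Map<String, String> properties = resolvedCatalogTable.toProperties();

// deserialization always yields an unresolved table; the CatalogManager
// resolves it later via the SchemaResolver
CatalogTable unresolvedTable = CatalogTable.fromProperties(properties);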

Updated Other Parts of the API

Because we deprecate TableSchema, we need to update other locations as well to provide a consistent experience for accessing a ResolvedSchema:

org.apache.flink.table.api.Table#getResolvedSchema(): ResolvedSchema
org.apache.flink.table.operations.QueryOperation#getResolvedSchema(): ResolvedSchema
org.apache.flink.table.api.TableResult#getResolvedSchema(): ResolvedSchema
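
For example, the resolved schema of a Table API query could then be inspected directly. A hypothetical snippet; getColumnNames() is an assumed accessor on ResolvedSchema:

Table table = tableEnv.sqlQuery("SELECT user_id, amount FROM Orders");

// direct access to the resolved schema instead of a legacy TableSchema
ResolvedSchema schema = table.getResolvedSchema();

// print all column names of the resolved schema
schema.getColumnNames().forEach(System.out::println);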

Notes:

  • There might be other locations where TableSchema is used; we will update those gradually.
  • TableSchema is mostly used in legacy interfaces.
  • In newer interfaces for connectors, DataType and UniqueConstraint should be passed around. The entire schema is usually not required.

Compatibility, Deprecation, and Migration Plan

We aim for full backwards compatibility in the next release.

We deprecate TableSchema and related interfaces.

Implementation Plan

  1. Implement Schema, ResolvedSchema, SchemaResolver and property conversion utils
  2. Update the Catalog APIs
  3. Update other parts of the API gradually

Rejected Alternatives

CatalogTable is fully resolved when returned by the Catalog

Schema
--> stores Expression, AbstractDataType

ResolvedSchema
--> stores ResolvedExpression, DataType

CatalogBaseTable.getResolvedSchema(): ResolvedSchema
--> stores ResolvedSchema

Catalog.getTable(ObjectPath, SchemaResolver): CatalogBaseTable
--> creates Schema and resolves it with the help of SchemaResolver

SchemaResolver.resolve(Schema): ResolvedSchema
--> references parser, catalog manager, etc. for resolving SQL and Table API expressions

CatalogTableImpl.fromProperties(Map<String, String> properties, SchemaResolver): CatalogTableImpl
--> construct Schema -> create ResolvedSchema -> verify against remaining properties

CatalogTableImpl.toProperties(): Map<String, String>
--> no change in properties yet