Status
Current state: ["Under Discussion"]
Discussion thread: here http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html
JIRA: here: TBD
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation:
In Flink 1.9, we introduced some awesome features such as complete catalog support [1] and SQL DDL support [2]. These features are a critical step toward Flink being able to manage data and metadata like a classic RDBMS, and they make it much easier for developers to build a real-time/offline warehouse or similar systems on top of Flink.
Despite that big advancement, Flink SQL DDL still lacks support for managing metadata and data, mainly in the TableEnvironment API and partially in the SQL Client:
- No support for show/describe/use catalog(s)
- No support for create/drop/alter/use/show/describe database(s)
- No support for show/describe table(s) and CREATE TABLE AS SELECT
- No support for create/drop/show function(s)
- No support for create/drop/show/describe view(s)
Goals:
We would like to achieve the following goals in this FLIP.
- Add Catalog DDL enhancement support
- Add Database DDL enhancement support
- Add Table DDL enhancement support
- Add Function DDL enhancement support
Note:
The following issues are out of the scope of this FLIP. They will be taken care of by other teams/committers.
- DDL related to View(s)
- DDL related to Partition(s)
- DDL related to Function(s) (covered by a separate FLIP, FLIP-79)
- Making the SQL Client and tEnv.sqlUpdate support the newly added DDLs in a unified way
Public Interfaces
Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.
- Unify the way the SQL Client and the TableEnvironment parse SQL through the common Flink SQL parser module
Proposed DDL Changes:
A few of the DDLs below are already supported in the SQL Client. We want them to be migrated properly to the Flink SQL parser approach.
Catalog DDL:
We propose to add the following DDLs related to catalog operations.
showCatalogsStatement:

SHOW CATALOGS

Return all registered catalogs.
note: already supported in the SQL Client, but needs support in the TableEnvironment.
...
describeCatalogStatement:

DESCRIBE CATALOG [ EXTENDED ] catalogName

Return the default database name and the explainCatalog content (a newly added method in the catalog) of an existing catalog.
EXTENDED:
Display the catalog properties.

Newly added method in Catalog.java:

/**
 * Get a user defined catalog description.
 */
String explainCatalog();

use eg:
Flink SQL> describe catalog hiveCatalog;

useCatalogStatement:

USE CATALOG catalogName

Set the current catalog.
note: already supported in the SQL Client, but needs support in tEnv.sqlUpdate().
Database DDL:
We propose to add the following DDLs related to database operations.
createDatabaseStatement:
CREATE DATABASE [ IF NOT EXISTS ] [ catalogName.] dataBaseName
  [ COMMENT database_comment ]
  [ WITH ( name=value [, name=value ]* ) ]
IF NOT EXISTS:
If a database with the same name already exists, nothing will happen.
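To make the IF NOT EXISTS behavior concrete, here is a minimal sketch of the intended semantics using a hypothetical in-memory catalog (CreateDatabaseSketch is illustrative only, not Flink's actual Catalog API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative in-memory catalog; not Flink's actual Catalog interface.
public class CreateDatabaseSketch {
    private final Map<String, Map<String, String>> databases = new HashMap<>();

    // ignoreIfExists corresponds to the IF NOT EXISTS clause.
    public void createDatabase(String name, Map<String, String> properties, boolean ignoreIfExists) {
        if (databases.containsKey(name)) {
            if (ignoreIfExists) {
                return; // IF NOT EXISTS: a duplicate create is a no-op
            }
            throw new IllegalStateException("Database " + name + " already exists");
        }
        databases.put(name, new HashMap<>(properties));
    }

    public boolean databaseExists(String name) {
        return databases.containsKey(name);
    }
}
```

Without IF NOT EXISTS a duplicate create fails; with it, the statement silently does nothing.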
dropDatabaseStatement:
DROP DATABASE [ IF EXISTS ] [ catalogName.] dataBaseName [ RESTRICT | CASCADE ]
IF EXISTS
If the database to drop does not exist, nothing happens.
RESTRICT
Dropping a non-empty database triggers an exception. Enabled by default.
CASCADE
Dropping a non-empty database also drops all associated tables and functions.
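The interaction of IF EXISTS, RESTRICT, and CASCADE can be sketched with a small in-memory model (DropDatabaseSketch is illustrative only, not Flink's actual catalog implementation):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;

// Illustrative in-memory catalog; not Flink's actual Catalog interface.
public class DropDatabaseSketch {
    private final Map<String, Set<String>> tablesByDatabase = new HashMap<>();

    public void createDatabase(String db) {
        tablesByDatabase.putIfAbsent(db, new HashSet<>());
    }

    public void createTable(String db, String table) {
        tablesByDatabase.get(db).add(table);
    }

    // ignoreIfNotExists corresponds to IF EXISTS; cascade selects CASCADE over RESTRICT.
    public void dropDatabase(String db, boolean ignoreIfNotExists, boolean cascade) {
        Set<String> tables = tablesByDatabase.get(db);
        if (tables == null) {
            if (ignoreIfNotExists) {
                return; // IF EXISTS: dropping a missing database is a no-op
            }
            throw new NoSuchElementException("Database " + db + " does not exist");
        }
        if (!tables.isEmpty() && !cascade) {
            // RESTRICT (the default): refuse to drop a non-empty database
            throw new IllegalStateException("Database " + db + " is not empty");
        }
        // CASCADE (or an empty database): drop the database and its tables
        tablesByDatabase.remove(db);
    }

    public boolean databaseExists(String db) {
        return tablesByDatabase.containsKey(db);
    }
}
```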
The Catalog.dropDatabase method should add a new cascade flag:

/**
 * @param cascade Flag to specify behavior when the database is not empty
 */
void dropDatabase(String name, boolean ignoreIfNotExists, boolean cascade)
    throws DatabaseNotExistException, DatabaseNotEmptyException, CatalogException;

alterDatabaseStatement:

ALTER DATABASE [ catalogName.] dataBaseName SET ( name=value [, name=value ]* )
Set one or more properties in the specified database. If a particular property is already set in the database, override the old value with the new one.
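The override semantics amount to a map merge where the new key/value pairs win. A minimal sketch (applySet is a hypothetical helper, not Flink code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of ALTER DATABASE ... SET semantics: new properties are merged in,
// and an already-set key is overridden with the new value.
public class AlterDatabaseSketch {
    public static Map<String, String> applySet(Map<String, String> current, Map<String, String> toSet) {
        Map<String, String> merged = new HashMap<>(current);
        merged.putAll(toSet); // existing keys are overridden, new keys are added
        return merged;
    }
}
```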
useDatabaseStatement:
USE [ catalogName.] dataBaseName

Set the current database. All subsequent commands that do not explicitly specify a database will use this one. If the provided database does not exist, an exception is thrown.
note: already supported in the SQL Client, but needs support in the TableEnvironment.
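The resolution of the optional catalogName prefix can be sketched as follows: a one-part identifier is resolved against the current catalog, while a two-part identifier names the catalog explicitly (resolve is a hypothetical helper, not Flink's actual implementation):

```java
// Sketch of USE [catalogName.]dataBaseName identifier resolution.
// Returns {catalog, database}; illustrative only, not Flink code.
public class UseDatabaseSketch {
    public static String[] resolve(String currentCatalog, String identifier) {
        String[] parts = identifier.split("\\.", 2);
        if (parts.length == 1) {
            // Unqualified name: fall back to the current catalog
            return new String[] { currentCatalog, parts[0] };
        }
        // Qualified name: catalog is given explicitly
        return new String[] { parts[0], parts[1] };
    }
}
```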
showDatabasesStatement:
SHOW DATABASES

Return all databases in the current catalog.
note: already supported in the SQL Client, but needs support in the TableEnvironment.
descDatabaseStatement:
DESCRIBE DATABASE [ EXTENDED ] [ catalogName.] dataBaseName

Return the metadata of an existing database (name, comment).
EXTENDED:
Return the metadata of an existing database (name, comment, and properties).

use eg:
Flink SQL> describe database hive.default;
Table DDL:
We propose to add the following DDLs related to table operations.
showTablesStatement:
SHOW TABLES

Return all tables in the current database.
note: already supported in the SQL Client, but needs support in the TableEnvironment.
descTableStatement:
DESCRIBE [ EXTENDED ] [[ catalogName.] dataBaseName.] tableName

Return the metadata of an existing table (column names, data types, and comments).
EXTENDED
Display detailed information about the table, including table type, properties, and so on.
note: already supported in the SQL Client, but needs support in the TableEnvironment.
alterTableStatement:
ALTER TABLE [[ catalogName.] dataBaseName.] tableName RENAME TO newTableName

Rename an existing table.

ALTER TABLE [[ catalogName.] dataBaseName.] tableName SET ( name=value [, name=value ]* )

Set the properties of an existing table.
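The rename semantics boil down to moving the table's metadata entry to the new name, failing if the source is missing or the target already exists. A minimal sketch (RenameTableSketch is illustrative only, not Flink's catalog implementation):

```java
import java.util.Map;
import java.util.NoSuchElementException;

// Sketch of ALTER TABLE ... RENAME TO semantics over a name -> metadata map.
public class RenameTableSketch {
    public static void renameTable(Map<String, String> tables, String oldName, String newName) {
        if (!tables.containsKey(oldName)) {
            throw new NoSuchElementException("Table " + oldName + " does not exist");
        }
        if (tables.containsKey(newName)) {
            throw new IllegalStateException("Table " + newName + " already exists");
        }
        // Move the metadata entry to the new name
        tables.put(newName, tables.remove(oldName));
    }
}
```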
Function DDL:
We propose to add the following DDLs related to function operations.
createFunctionStatement:
CREATE FUNCTION [ IF NOT EXISTS ] [[ catalogName.] databaseName.] functionName AS className

Create a function implemented by the given className.
IF NOT EXISTS
If a function with the same name already exists, nothing will happen.
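Since the statement only carries a class name, the catalog can store the name and instantiate the function reflectively when it is used. A minimal sketch (FunctionRegistry is a hypothetical helper, not Flink's function catalog; java.lang.StringBuilder stands in for a real UDF class):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of CREATE FUNCTION ... AS className semantics; illustrative only.
public class FunctionRegistry {
    private final Map<String, String> classNameByFunction = new HashMap<>();

    // ignoreIfExists corresponds to the IF NOT EXISTS clause.
    public void createFunction(String name, String className, boolean ignoreIfExists) {
        if (classNameByFunction.containsKey(name)) {
            if (ignoreIfExists) {
                return; // IF NOT EXISTS: a duplicate create is a no-op
            }
            throw new IllegalStateException("Function " + name + " already exists");
        }
        classNameByFunction.put(name, className);
    }

    // Instantiate the registered class via its no-arg constructor.
    public Object instantiate(String name) {
        String className = classNameByFunction.get(name);
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot instantiate function " + name, e);
        }
    }
}
```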
dropFunctionStatement:
DROP FUNCTION [ IF EXISTS ] [[ catalogName.] databaseName.] functionName

Drop an existing function.
IF EXISTS
If a function with the given name does not exist, nothing will happen.
alterFunctionStatement:
ALTER FUNCTION [ IF EXISTS ] [[ catalogName.] databaseName.] functionName RENAME TO new_name

Rename a non-temporary function to a new name.
IF EXISTS
If a function with the given name does not exist, nothing will happen.
showFunctionsStatement:
SHOW FUNCTIONS

Return all functions in the current database.
note: already supported in the SQL Client, but needs support in tEnv.sqlQuery().
Implementation Changes:
Notice: the TableEnvironment API changes are not in the scope of this FLIP anymore; there is a dedicated FLIP, FLIP-84, to discuss them, so that part has been removed. Anyone interested can refer to the archive of the Table API change discussion.
Support these DDLs in the TableEnvironment; in addition, a few DDLs implemented by the SQL Client should be migrated to the Calcite parser.
Implementation Plan:
- Implement all parser logic in the flink-sql-parser module
- After FLIP-84 is completed, add DDL support one by one through `executeString(String sql)` or a similar interface.
- After steps 1 and 2 are finished, the SQL CLI can easily support the corresponding DDLs by calling the TableEnvironment's execute-SQL method.
- Unify the parsing path of the SQL CLI and the Flink SQL parser.
- Add the DDLs mentioned above one by one to the Flink SQL parser, and support executing them both in the SQL CLI and via TableEnvironment.sqlQuery() or TableEnvironment.sqlUpdate().
Compatibility, Deprecation, and Migration Plan
- What impact (if any) will there be on existing users?
Not Applicable.
- If we are changing behavior how will we phase out the older behavior?
Not Applicable.
- If we need special migration tools, describe them here.
Not Applicable.
- When will we remove the existing behavior?
Not Applicable.
Test Plan
- New syntax added to the Calcite parser can be tested in FlinkSqlParserImplTest by adding test cases.
- DDLs supported in the TableEnvironment can be verified by unit tests.
Rejected Alternatives
No rejected alternatives.
Reference:
- https://issues.apache.org/jira/browse/FLINK-11275
- https://issues.apache.org/jira/browse/FLINK-10232
- Hive DDL: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
- Spark DDL: https://docs.databricks.com/spark/latest/spark-sql/index.html
- Flink SQL DDL design: https://docs.google.com/document/d/1TTP-GCC8wSsibJaSUyFZ_5NBAHYEB1FVmPpP7RgDGBA/edit