
Example 21 with TableSchema

Use of org.apache.flink.table.api.TableSchema in project flink by apache.

From class SqlToOperationConverterTest, method before().

@Before
public void before() throws TableAlreadyExistException, DatabaseNotExistException {
    catalogManager.initSchemaResolver(
            isStreamingMode,
            ExpressionResolverMocks.basicResolver(catalogManager, functionCatalog, parser));
    final ObjectPath path1 = new ObjectPath(catalogManager.getCurrentDatabase(), "t1");
    final ObjectPath path2 = new ObjectPath(catalogManager.getCurrentDatabase(), "t2");
    // Schema shared by both test tables t1 and t2.
    final TableSchema tableSchema =
            TableSchema.builder()
                    .field("a", DataTypes.BIGINT())
                    .field("b", DataTypes.VARCHAR(Integer.MAX_VALUE))
                    .field("c", DataTypes.INT())
                    .field("d", DataTypes.VARCHAR(Integer.MAX_VALUE))
                    .build();
    Map<String, String> options = new HashMap<>();
    options.put("connector", "COLLECTION");
    final CatalogTable catalogTable = new CatalogTableImpl(tableSchema, options, "");
    catalog.createTable(path1, catalogTable, true); // ignoreIfExists = true
    catalog.createTable(path2, catalogTable, true);
}
Also used : ObjectPath(org.apache.flink.table.catalog.ObjectPath) TableSchema(org.apache.flink.table.api.TableSchema) HashMap(java.util.HashMap) CatalogTableImpl(org.apache.flink.table.catalog.CatalogTableImpl) CatalogTable(org.apache.flink.table.catalog.CatalogTable) Before(org.junit.Before)
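
The setup above relies only on the legacy TableSchema builder. For reference, here is a minimal standalone sketch of the same schema construction and inspection; it is not taken from the Flink sources, and the class name TableSchemaSketch and the printed output are illustrative only:

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.types.DataType;

public class TableSchemaSketch {
    public static void main(String[] args) {
        // Same four-column schema as in the test fixture above.
        TableSchema schema = TableSchema.builder()
                .field("a", DataTypes.BIGINT())
                .field("b", DataTypes.VARCHAR(Integer.MAX_VALUE))
                .field("c", DataTypes.INT())
                .field("d", DataTypes.VARCHAR(Integer.MAX_VALUE))
                .build();
        // Field names and data types are reported in declaration order.
        String[] names = schema.getFieldNames();
        DataType[] types = schema.getFieldDataTypes();
        for (int i = 0; i < schema.getFieldCount(); i++) {
            System.out.println(names[i] + ": " + types[i]);
        }
    }
}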

Example 22 with TableSchema

Use of org.apache.flink.table.api.TableSchema in project flink by apache.

From class SqlToOperationConverterTest, method testCreateTableWithPrimaryKey().

@Test
public void testCreateTableWithPrimaryKey() {
    final String sql =
            "CREATE TABLE tbl1 (\n"
                    + "  a bigint,\n"
                    + "  b varchar, \n"
                    + "  c int, \n"
                    + "  d varchar, \n"
                    + "  constraint ct1 primary key(a, b) not enforced\n"
                    + ") with (\n"
                    + "  'connector' = 'kafka', \n"
                    + "  'kafka.topic' = 'log.test'\n"
                    + ")\n";
    FlinkPlannerImpl planner = getPlannerBySqlDialect(SqlDialect.DEFAULT);
    final CalciteParser parser = getParserBySqlDialect(SqlDialect.DEFAULT);
    Operation operation = parse(sql, planner, parser);
    assertThat(operation).isInstanceOf(CreateTableOperation.class);
    CreateTableOperation op = (CreateTableOperation) operation;
    CatalogTable catalogTable = op.getCatalogTable();
    TableSchema tableSchema = catalogTable.getSchema();
    // The NOT ENFORCED primary key surfaces as a UniqueConstraint on the schema.
    assertThat(tableSchema.getPrimaryKey().map(UniqueConstraint::asSummaryString).orElse("fakeVal")).isEqualTo("CONSTRAINT ct1 PRIMARY KEY (a, b)");
    assertThat(tableSchema.getFieldNames()).isEqualTo(new String[] { "a", "b", "c", "d" });
    // Primary key columns are resolved to NOT NULL data types.
    assertThat(tableSchema.getFieldDataTypes()).isEqualTo(new DataType[] { DataTypes.BIGINT().notNull(), DataTypes.STRING().notNull(), DataTypes.INT(), DataTypes.STRING() });
}
Also used : TableSchema(org.apache.flink.table.api.TableSchema) FlinkPlannerImpl(org.apache.flink.table.planner.calcite.FlinkPlannerImpl) OperationMatchers.isCreateTableOperation(org.apache.flink.table.planner.utils.OperationMatchers.isCreateTableOperation) DropDatabaseOperation(org.apache.flink.table.operations.ddl.DropDatabaseOperation) SinkModifyOperation(org.apache.flink.table.operations.SinkModifyOperation) AlterTableOptionsOperation(org.apache.flink.table.operations.ddl.AlterTableOptionsOperation) AlterTableDropConstraintOperation(org.apache.flink.table.operations.ddl.AlterTableDropConstraintOperation) UseCatalogOperation(org.apache.flink.table.operations.UseCatalogOperation) UseDatabaseOperation(org.apache.flink.table.operations.UseDatabaseOperation) CreateViewOperation(org.apache.flink.table.operations.ddl.CreateViewOperation) ShowJarsOperation(org.apache.flink.table.operations.command.ShowJarsOperation) AlterDatabaseOperation(org.apache.flink.table.operations.ddl.AlterDatabaseOperation) QueryOperation(org.apache.flink.table.operations.QueryOperation) EndStatementSetOperation(org.apache.flink.table.operations.EndStatementSetOperation) UseModulesOperation(org.apache.flink.table.operations.UseModulesOperation) ShowFunctionsOperation(org.apache.flink.table.operations.ShowFunctionsOperation) CreateDatabaseOperation(org.apache.flink.table.operations.ddl.CreateDatabaseOperation) SetOperation(org.apache.flink.table.operations.command.SetOperation) LoadModuleOperation(org.apache.flink.table.operations.LoadModuleOperation) Operation(org.apache.flink.table.operations.Operation) ShowModulesOperation(org.apache.flink.table.operations.ShowModulesOperation) SourceQueryOperation(org.apache.flink.table.operations.SourceQueryOperation) UnloadModuleOperation(org.apache.flink.table.operations.UnloadModuleOperation) CreateTableOperation(org.apache.flink.table.operations.ddl.CreateTableOperation) RemoveJarOperation(org.apache.flink.table.operations.command.RemoveJarOperation) BeginStatementSetOperation(org.apache.flink.table.operations.BeginStatementSetOperation) AddJarOperation(org.apache.flink.table.operations.command.AddJarOperation) AlterTableAddConstraintOperation(org.apache.flink.table.operations.ddl.AlterTableAddConstraintOperation) ExplainOperation(org.apache.flink.table.operations.ExplainOperation) ResetOperation(org.apache.flink.table.operations.command.ResetOperation) StatementSetOperation(org.apache.flink.table.operations.StatementSetOperation) AlterTableRenameOperation(org.apache.flink.table.operations.ddl.AlterTableRenameOperation) OperationMatchers.isCreateTableOperation(org.apache.flink.table.planner.utils.OperationMatchers.isCreateTableOperation) CreateTableOperation(org.apache.flink.table.operations.ddl.CreateTableOperation) CatalogTable(org.apache.flink.table.catalog.CatalogTable) CalciteParser(org.apache.flink.table.planner.parse.CalciteParser) Test(org.junit.Test)
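
A primary key does not have to come from DDL; the TableSchema builder can declare one directly, and getPrimaryKey() then yields the same UniqueConstraint summary that the test checks. A hedged sketch, assuming the Builder#primaryKey(String, String[]) variant; the class name PrimaryKeySketch is illustrative and the snippet is not part of the test class:

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.api.constraints.UniqueConstraint;

public class PrimaryKeySketch {
    public static void main(String[] args) {
        // Primary key columns are declared NOT NULL, matching the derived schema above.
        TableSchema schema = TableSchema.builder()
                .field("a", DataTypes.BIGINT().notNull())
                .field("b", DataTypes.STRING().notNull())
                .field("c", DataTypes.INT())
                .field("d", DataTypes.STRING())
                .primaryKey("ct1", new String[] {"a", "b"})
                .build();
        // Expected to print: CONSTRAINT ct1 PRIMARY KEY (a, b)
        schema.getPrimaryKey()
                .map(UniqueConstraint::asSummaryString)
                .ifPresent(System.out::println);
    }
}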

Example 23 with TableSchema

Use of org.apache.flink.table.api.TableSchema in project flink by apache.

From class MergeTableLikeUtilTest, method mergeWithIncludeFailsOnDuplicateRegularColumnAndComputeColumn().

@Test
public void mergeWithIncludeFailsOnDuplicateRegularColumnAndComputeColumn() {
    TableSchema sourceSchema =
            TableSchema.builder().add(TableColumn.physical("one", DataTypes.INT())).build();
    // 'three' is declared both as a computed column and as a regular column, which must fail.
    List<SqlNode> derivedColumns =
            Arrays.asList(
                    regularColumn("two", DataTypes.INT()),
                    computedColumn("three", plus("two", "3")),
                    regularColumn("three", DataTypes.INT()),
                    regularColumn("four", DataTypes.STRING()));
    thrown.expect(ValidationException.class);
    thrown.expectMessage("A column named 'three' already exists in the table. " + "Duplicate columns exist in the compute column and regular column. ");
    util.mergeTables(getDefaultMergingStrategies(), sourceSchema, derivedColumns, Collections.emptyList(), null);
}
Also used : TableSchema(org.apache.flink.table.api.TableSchema) SqlNode(org.apache.calcite.sql.SqlNode) Test(org.junit.Test)

Example 24 with TableSchema

Use of org.apache.flink.table.api.TableSchema in project flink by apache.

From class MergeTableLikeUtilTest, method mergeIncludingWatermarksFailsOnDuplicate().

@Test
public void mergeIncludingWatermarksFailsOnDuplicate() {
    // The base table already declares a watermark on the 'timestamp' column.
    TableSchema sourceSchema =
            TableSchema.builder()
                    .add(TableColumn.physical("one", DataTypes.INT()))
                    .add(TableColumn.physical("timestamp", DataTypes.TIMESTAMP()))
                    .watermark("timestamp", "timestamp - INTERVAL '5' SECOND", DataTypes.TIMESTAMP())
                    .build();
    List<SqlWatermark> derivedWatermarkSpecs =
            Collections.singletonList(new SqlWatermark(SqlParserPos.ZERO, identifier("timestamp"), boundedStrategy("timestamp", "10")));
    thrown.expect(ValidationException.class);
    thrown.expectMessage("There already exists a watermark spec for column 'timestamp' in the " + "base table. You might want to specify EXCLUDING WATERMARKS or OVERWRITING WATERMARKS.");
    util.mergeTables(getDefaultMergingStrategies(), sourceSchema, Collections.emptyList(), derivedWatermarkSpecs, null);
}
Also used : TableSchema(org.apache.flink.table.api.TableSchema) SqlWatermark(org.apache.flink.sql.parser.ddl.SqlWatermark) Test(org.junit.Test)
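
For the watermark side of this check, a watermark attached via the TableSchema builder can be read back through getWatermarkSpecs(). A small hedged sketch using the legacy API; the class name WatermarkSketch and the column name ts are illustrative and not taken from the test:

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableColumn;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.api.WatermarkSpec;

public class WatermarkSketch {
    public static void main(String[] args) {
        TableSchema schema = TableSchema.builder()
                .add(TableColumn.physical("one", DataTypes.INT()))
                .add(TableColumn.physical("ts", DataTypes.TIMESTAMP(3)))
                .watermark("ts", "ts - INTERVAL '5' SECOND", DataTypes.TIMESTAMP(3))
                .build();
        // Each spec records the rowtime attribute and the watermark expression.
        for (WatermarkSpec spec : schema.getWatermarkSpecs()) {
            System.out.println(spec.getRowtimeAttribute() + " -> " + spec.getWatermarkExpr());
        }
    }
}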

Example 25 with TableSchema

Use of org.apache.flink.table.api.TableSchema in project flink by apache.

From class MergeTableLikeUtilTest, method mergeWithIncludeFailsOnDuplicateColumn().

@Test
public void mergeWithIncludeFailsOnDuplicateColumn() {
    TableSchema sourceSchema =
            TableSchema.builder().add(TableColumn.physical("one", DataTypes.INT())).build();
    // The derived columns re-declare 'one', which already exists in the base table.
    List<SqlNode> derivedColumns =
            Arrays.asList(regularColumn("one", DataTypes.INT()), regularColumn("four", DataTypes.STRING()));
    thrown.expect(ValidationException.class);
    thrown.expectMessage("A column named 'one' already exists in the base table.");
    util.mergeTables(getDefaultMergingStrategies(), sourceSchema, derivedColumns, Collections.emptyList(), null);
}
Also used : TableSchema(org.apache.flink.table.api.TableSchema) SqlNode(org.apache.calcite.sql.SqlNode) Test(org.junit.Test)

Aggregations

TableSchema (org.apache.flink.table.api.TableSchema): 86 usages
Test (org.junit.Test): 54 usages
HashMap (java.util.HashMap): 26 usages
CatalogTableImpl (org.apache.flink.table.catalog.CatalogTableImpl): 21 usages
SqlNode (org.apache.calcite.sql.SqlNode): 19 usages
ObjectPath (org.apache.flink.table.catalog.ObjectPath): 19 usages
CatalogTable (org.apache.flink.table.catalog.CatalogTable): 18 usages
DataType (org.apache.flink.table.types.DataType): 16 usages
ValidationException (org.apache.flink.table.api.ValidationException): 14 usages
TableColumn (org.apache.flink.table.api.TableColumn): 10 usages
UniqueConstraint (org.apache.flink.table.api.constraints.UniqueConstraint): 10 usages
ArrayList (java.util.ArrayList): 9 usages
List (java.util.List): 9 usages
Map (java.util.Map): 9 usages
FeatureOption (org.apache.flink.sql.parser.ddl.SqlTableLike.FeatureOption): 9 usages
MergingStrategy (org.apache.flink.sql.parser.ddl.SqlTableLike.MergingStrategy): 9 usages
CatalogBaseTable (org.apache.flink.table.catalog.CatalogBaseTable): 8 usages
ObjectIdentifier (org.apache.flink.table.catalog.ObjectIdentifier): 8 usages
Arrays (java.util.Arrays): 7 usages
Configuration (org.apache.flink.configuration.Configuration): 7 usages