
Example 1 with SqlWatermark

Use of org.apache.flink.sql.parser.ddl.SqlWatermark in project flink by apache.

From class MergeTableLikeUtilTest, method mergeIncludingWatermarksFailsOnDuplicate:

@Test
public void mergeIncludingWatermarksFailsOnDuplicate() {
    TableSchema sourceSchema = TableSchema.builder()
            .add(TableColumn.physical("one", DataTypes.INT()))
            .add(TableColumn.physical("timestamp", DataTypes.TIMESTAMP()))
            .watermark("timestamp", "timestamp - INTERVAL '5' SECOND", DataTypes.TIMESTAMP())
            .build();
    List<SqlWatermark> derivedWatermarkSpecs = Collections.singletonList(
            new SqlWatermark(SqlParserPos.ZERO, identifier("timestamp"), boundedStrategy("timestamp", "10")));
    thrown.expect(ValidationException.class);
    thrown.expectMessage("There already exists a watermark spec for column 'timestamp' in the "
            + "base table. You might want to specify EXCLUDING WATERMARKS or OVERWRITING WATERMARKS.");
    util.mergeTables(getDefaultMergingStrategies(), sourceSchema, Collections.emptyList(), derivedWatermarkSpecs, null);
}
Also used: TableSchema (org.apache.flink.table.api.TableSchema), SqlWatermark (org.apache.flink.sql.parser.ddl.SqlWatermark), Test (org.junit.Test)

Example 2 with SqlWatermark

Use of org.apache.flink.sql.parser.ddl.SqlWatermark in project flink by apache.

From class MergeTableLikeUtilTest, method mergeExcludingWatermarksDuplicate:

@Test
public void mergeExcludingWatermarksDuplicate() {
    TableSchema sourceSchema = TableSchema.builder()
            .add(TableColumn.physical("one", DataTypes.INT()))
            .add(TableColumn.physical("timestamp", DataTypes.TIMESTAMP()))
            .watermark("timestamp", "timestamp - INTERVAL '5' SECOND", DataTypes.TIMESTAMP())
            .build();
    List<SqlWatermark> derivedWatermarkSpecs = Collections.singletonList(
            new SqlWatermark(SqlParserPos.ZERO, identifier("timestamp"), boundedStrategy("timestamp", "10")));
    Map<FeatureOption, MergingStrategy> mergingStrategies = getDefaultMergingStrategies();
    mergingStrategies.put(FeatureOption.WATERMARKS, MergingStrategy.EXCLUDING);
    TableSchema mergedSchema = util.mergeTables(
            mergingStrategies, sourceSchema, Collections.emptyList(), derivedWatermarkSpecs, null);
    TableSchema expectedSchema = TableSchema.builder()
            .add(TableColumn.physical("one", DataTypes.INT()))
            .add(TableColumn.physical("timestamp", DataTypes.TIMESTAMP()))
            .watermark("timestamp", "`timestamp` - INTERVAL '10' SECOND", DataTypes.TIMESTAMP())
            .build();
    assertThat(mergedSchema, equalTo(expectedSchema));
}
Also used: FeatureOption (org.apache.flink.sql.parser.ddl.SqlTableLike.FeatureOption), TableSchema (org.apache.flink.table.api.TableSchema), SqlWatermark (org.apache.flink.sql.parser.ddl.SqlWatermark), MergingStrategy (org.apache.flink.sql.parser.ddl.SqlTableLike.MergingStrategy), Test (org.junit.Test)

Example 3 with SqlWatermark

Use of org.apache.flink.sql.parser.ddl.SqlWatermark in project flink by apache.

From class MergeTableLikeUtilTest, method mergeOverwritingWatermarksDuplicate:

@Test
public void mergeOverwritingWatermarksDuplicate() {
    TableSchema sourceSchema = TableSchema.builder()
            .add(TableColumn.physical("one", DataTypes.INT()))
            .add(TableColumn.physical("timestamp", DataTypes.TIMESTAMP()))
            .watermark("timestamp", "timestamp - INTERVAL '5' SECOND", DataTypes.TIMESTAMP())
            .build();
    List<SqlWatermark> derivedWatermarkSpecs = Collections.singletonList(
            new SqlWatermark(SqlParserPos.ZERO, identifier("timestamp"), boundedStrategy("timestamp", "10")));
    Map<FeatureOption, MergingStrategy> mergingStrategies = getDefaultMergingStrategies();
    mergingStrategies.put(FeatureOption.WATERMARKS, MergingStrategy.OVERWRITING);
    TableSchema mergedSchema = util.mergeTables(
            mergingStrategies, sourceSchema, Collections.emptyList(), derivedWatermarkSpecs, null);
    TableSchema expectedSchema = TableSchema.builder()
            .add(TableColumn.physical("one", DataTypes.INT()))
            .add(TableColumn.physical("timestamp", DataTypes.TIMESTAMP()))
            .watermark("timestamp", "`timestamp` - INTERVAL '10' SECOND", DataTypes.TIMESTAMP())
            .build();
    assertThat(mergedSchema, equalTo(expectedSchema));
}
Also used: FeatureOption (org.apache.flink.sql.parser.ddl.SqlTableLike.FeatureOption), TableSchema (org.apache.flink.table.api.TableSchema), SqlWatermark (org.apache.flink.sql.parser.ddl.SqlWatermark), MergingStrategy (org.apache.flink.sql.parser.ddl.SqlTableLike.MergingStrategy), Test (org.junit.Test)
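The three tests above exercise the same duplicate-watermark situation under three different merging strategies: the default (INCLUDING) fails on the conflict, while EXCLUDING and OVERWRITING both let the derived spec win. The resolution rule can be sketched in plain Java; this is a simplified illustrative model, and the `MergingStrategy` enum and `merge` method below are stand-ins, not Flink's actual `MergeTableLikeUtil` API:

```java
public class WatermarkMergeSketch {

    enum MergingStrategy { INCLUDING, EXCLUDING, OVERWRITING }

    // Decides which watermark expression survives when a base table and a
    // derived CREATE TABLE ... LIKE definition may each declare one for the
    // same column. Either expression may be null (no spec declared).
    static String merge(MergingStrategy strategy, String baseExpr, String derivedExpr) {
        switch (strategy) {
            case EXCLUDING:
                // Base specs are never inherited; only the derived one (if any) remains.
                return derivedExpr;
            case OVERWRITING:
                // Base spec is inherited, but a derived spec takes precedence.
                return derivedExpr != null ? derivedExpr : baseExpr;
            case INCLUDING:
                // Default strategy: inheriting and redefining the same column is an error.
                if (baseExpr != null && derivedExpr != null) {
                    throw new IllegalStateException(
                            "There already exists a watermark spec for this column in the base table.");
                }
                return derivedExpr != null ? derivedExpr : baseExpr;
            default:
                throw new IllegalArgumentException("Unknown strategy: " + strategy);
        }
    }

    public static void main(String[] args) {
        String base = "timestamp - INTERVAL '5' SECOND";
        String derived = "`timestamp` - INTERVAL '10' SECOND";

        // EXCLUDING and OVERWRITING both resolve to the derived spec,
        // matching the expected schemas in Examples 2 and 3.
        System.out.println(merge(MergingStrategy.EXCLUDING, base, derived));
        System.out.println(merge(MergingStrategy.OVERWRITING, base, derived));

        // The default INCLUDING strategy rejects the duplicate, as in Example 1.
        try {
            merge(MergingStrategy.INCLUDING, base, derived);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The sketch also shows where EXCLUDING and OVERWRITING differ: when the derived definition declares no watermark, EXCLUDING leaves the column without one, while OVERWRITING keeps the inherited base spec.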

Aggregations

SqlWatermark (org.apache.flink.sql.parser.ddl.SqlWatermark): 3 uses
TableSchema (org.apache.flink.table.api.TableSchema): 3 uses
Test (org.junit.Test): 3 uses
FeatureOption (org.apache.flink.sql.parser.ddl.SqlTableLike.FeatureOption): 2 uses
MergingStrategy (org.apache.flink.sql.parser.ddl.SqlTableLike.MergingStrategy): 2 uses