Example 16 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class HiveTableFactoryTest, method testHiveTable.

@Test
public void testHiveTable() throws Exception {
    final ResolvedSchema schema = ResolvedSchema.of(
            Column.physical("name", DataTypes.STRING()), Column.physical("age", DataTypes.INT()));
    catalog.createDatabase("mydb", new CatalogDatabaseImpl(new HashMap<>(), ""), true);
    final Map<String, String> options =
            Collections.singletonMap(FactoryUtil.CONNECTOR.key(), SqlCreateHiveTable.IDENTIFIER);
    final CatalogTable table =
            new CatalogTableImpl(TableSchema.fromResolvedSchema(schema), options, "hive table");
    catalog.createTable(new ObjectPath("mydb", "mytable"), table, true);
    // The factory obtained from the Hive catalog should produce Hive-specific source and sink implementations.
    final DynamicTableSource tableSource = FactoryUtil.createDynamicTableSource(
            (DynamicTableSourceFactory) catalog.getFactory().orElseThrow(IllegalStateException::new),
            ObjectIdentifier.of("mycatalog", "mydb", "mytable"), new ResolvedCatalogTable(table, schema),
            new Configuration(), Thread.currentThread().getContextClassLoader(), false);
    assertTrue(tableSource instanceof HiveTableSource);
    final DynamicTableSink tableSink = FactoryUtil.createDynamicTableSink(
            (DynamicTableSinkFactory) catalog.getFactory().orElseThrow(IllegalStateException::new),
            ObjectIdentifier.of("mycatalog", "mydb", "mytable"), new ResolvedCatalogTable(table, schema),
            new Configuration(), Thread.currentThread().getContextClassLoader(), false);
    assertTrue(tableSink instanceof HiveTableSink);
}
Also used : ObjectPath(org.apache.flink.table.catalog.ObjectPath) Configuration(org.apache.flink.configuration.Configuration) HashMap(java.util.HashMap) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) CatalogTable(org.apache.flink.table.catalog.CatalogTable) ResolvedCatalogTable(org.apache.flink.table.catalog.ResolvedCatalogTable) CatalogDatabaseImpl(org.apache.flink.table.catalog.CatalogDatabaseImpl) CatalogTableImpl(org.apache.flink.table.catalog.CatalogTableImpl) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) DynamicTableSource(org.apache.flink.table.connector.source.DynamicTableSource) Test(org.junit.Test)
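
The pattern above, building a ResolvedSchema from physical columns and pairing it with a CatalogTable through ResolvedCatalogTable, recurs throughout these examples. Below is a minimal, standalone sketch of just the schema construction; the catalog and factory plumbing from the test is omitted, and the class name is purely illustrative.

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ResolvedSchema;

public class ResolvedSchemaSketch {
    public static void main(String[] args) {
        // Same two physical columns as in the Hive test above.
        ResolvedSchema schema = ResolvedSchema.of(
                Column.physical("name", DataTypes.STRING()),
                Column.physical("age", DataTypes.INT()));
        // A ResolvedSchema exposes the fully resolved column metadata directly.
        System.out.println(schema.getColumnNames());     // [name, age]
        System.out.println(schema.getColumnDataTypes()); // [STRING, INT]
    }
}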

Example 17 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class KinesisDynamicTableFactoryTest, method testGoodTableSource.

// --------------------------------------------------------------------------------------------
// Positive tests
// --------------------------------------------------------------------------------------------
@Test
public void testGoodTableSource() {
    ResolvedSchema sourceSchema = defaultSourceSchema();
    Map<String, String> sourceOptions = defaultTableOptions().build();
    // Construct the actual DynamicTableSource using FactoryUtil
    KinesisDynamicSource actualSource =
            (KinesisDynamicSource) createTableSource(sourceSchema, sourceOptions);
    // Construct the expected DynamicTableSource directly
    KinesisDynamicSource expectedSource = new KinesisDynamicSource(
            sourceSchema.toPhysicalRowDataType(),
            STREAM_NAME,
            defaultConsumerProperties(),
            new TestFormatFactory.DecodingFormatMock(",", true));
    // verify that the constructed DynamicTableSource is as expected
    assertEquals(expectedSource, actualSource);
    // verify that a copy of the constructed DynamicTableSource is as expected
    assertEquals(expectedSource, actualSource.copy());
    // verify the produced source function
    ScanTableSource.ScanRuntimeProvider functionProvider =
            actualSource.getScanRuntimeProvider(ScanRuntimeProviderContext.INSTANCE);
    SourceFunction<RowData> sourceFunction =
            as(functionProvider, SourceFunctionProvider.class).createSourceFunction();
    assertThat(sourceFunction, instanceOf(FlinkKinesisConsumer.class));
}
Also used : ScanTableSource(org.apache.flink.table.connector.source.ScanTableSource) RowData(org.apache.flink.table.data.RowData) FlinkKinesisConsumer(org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer) SourceFunctionProvider(org.apache.flink.table.connector.source.SourceFunctionProvider) TestFormatFactory(org.apache.flink.table.factories.TestFormatFactory) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
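
The source test feeds sourceSchema.toPhysicalRowDataType() into the expected source. Here is a minimal sketch of what that conversion produces, using an illustrative two-column schema (defaultSourceSchema() itself is not shown in this excerpt, so the columns are assumptions):

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.types.DataType;

public class PhysicalRowDataTypeSketch {
    public static void main(String[] args) {
        // Illustrative columns, not the ones from defaultSourceSchema().
        ResolvedSchema schema = ResolvedSchema.of(
                Column.physical("id", DataTypes.BIGINT()),
                Column.physical("payload", DataTypes.STRING()));
        // toPhysicalRowDataType() folds the physical columns into a single ROW data type,
        // which is the shape connector implementations such as KinesisDynamicSource consume.
        DataType physicalRowDataType = schema.toPhysicalRowDataType();
        System.out.println(physicalRowDataType); // prints a ROW type combining both columns
    }
}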

Example 18 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class KinesisDynamicTableFactoryTest, method testGoodTableSinkCopyForPartitionedTable.

@Test
public void testGoodTableSinkCopyForPartitionedTable() {
    ResolvedSchema sinkSchema = defaultSinkSchema();
    DataType physicalDataType = sinkSchema.toPhysicalRowDataType();
    Map<String, String> sinkOptions = defaultSinkTableOptions().build();
    List<String> sinkPartitionKeys = Arrays.asList("name", "curr_id");
    // Construct the actual DynamicTableSink using FactoryUtil
    KinesisDynamicSink actualSink =
            (KinesisDynamicSink) createTableSink(sinkSchema, sinkPartitionKeys, sinkOptions);
    // Construct the expected DynamicTableSink directly via the builder
    KinesisDynamicSink expectedSink = (KinesisDynamicSink) new KinesisDynamicSink.KinesisDynamicTableSinkBuilder()
            .setConsumedDataType(physicalDataType)
            .setStream(STREAM_NAME)
            .setKinesisClientProperties(defaultProducerProperties())
            .setEncodingFormat(new TestFormatFactory.EncodingFormatMock(","))
            .setPartitioner(new RowDataFieldsKinesisPartitionKeyGenerator(
                    (RowType) physicalDataType.getLogicalType(), sinkPartitionKeys))
            .build();
    // The sink and its copy must be equal, but copy() must return a new instance.
    Assertions.assertThat(actualSink).isEqualTo(expectedSink.copy());
    Assertions.assertThat(expectedSink).isNotSameAs(expectedSink.copy());
}
Also used : RowDataFieldsKinesisPartitionKeyGenerator(org.apache.flink.connector.kinesis.table.RowDataFieldsKinesisPartitionKeyGenerator) KinesisDynamicSink(org.apache.flink.connector.kinesis.table.KinesisDynamicSink) DataType(org.apache.flink.table.types.DataType) RowType(org.apache.flink.table.types.logical.RowType) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
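
The partitioned-sink test derives a RowType from the schema's physical row data type before handing it, together with the partition keys, to the partition key generator. A minimal sketch of that derivation follows, with illustrative columns and the Kinesis-specific classes left out:

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.logical.RowType;

public class RowTypeFromSchemaSketch {
    public static void main(String[] args) {
        // Illustrative stand-in for defaultSinkSchema().
        ResolvedSchema sinkSchema = ResolvedSchema.of(
                Column.physical("name", DataTypes.STRING()),
                Column.physical("curr_id", DataTypes.BIGINT()));
        DataType physicalDataType = sinkSchema.toPhysicalRowDataType();
        // The logical type behind the physical row data type is a RowType; its field names
        // are what allow partition keys such as "name" and "curr_id" to be resolved.
        RowType rowType = (RowType) physicalDataType.getLogicalType();
        System.out.println(rowType.getFieldNames()); // [name, curr_id]
    }
}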

Example 19 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class JdbcDynamicTableSinkITCase, method testFlushBufferWhenCheckpoint.

@Test
public void testFlushBufferWhenCheckpoint() throws Exception {
    Map<String, String> options = new HashMap<>();
    options.put("connector", "jdbc");
    options.put("url", DB_URL);
    options.put("table-name", OUTPUT_TABLE5);
    options.put("sink.buffer-flush.interval", "0");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical("id", DataTypes.BIGINT().notNull()));
    DynamicTableSink tableSink = createTableSink(schema, options);
    SinkRuntimeProviderContext context = new SinkRuntimeProviderContext(false);
    SinkFunctionProvider sinkProvider = (SinkFunctionProvider) tableSink.getSinkRuntimeProvider(context);
    GenericJdbcSinkFunction<RowData> sinkFunction =
            (GenericJdbcSinkFunction<RowData>) sinkProvider.createSinkFunction();
    sinkFunction.setRuntimeContext(new MockStreamingRuntimeContext(true, 1, 0));
    sinkFunction.open(new Configuration());
    sinkFunction.invoke(GenericRowData.of(1L), SinkContextUtil.forTimestamp(1));
    sinkFunction.invoke(GenericRowData.of(2L), SinkContextUtil.forTimestamp(1));
    // With interval-based flushing disabled, nothing is written before the checkpoint ...
    check(new Row[] {}, DB_URL, OUTPUT_TABLE5, new String[] { "id" });
    sinkFunction.snapshotState(new StateSnapshotContextSynchronousImpl(1, 1));
    // ... and the buffered rows are flushed when the state snapshot is taken.
    check(new Row[] { Row.of(1L), Row.of(2L) }, DB_URL, OUTPUT_TABLE5, new String[] { "id" });
    sinkFunction.close();
}
Also used : SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) MockStreamingRuntimeContext(org.apache.flink.streaming.util.MockStreamingRuntimeContext) Configuration(org.apache.flink.configuration.Configuration) HashMap(java.util.HashMap) StateSnapshotContextSynchronousImpl(org.apache.flink.runtime.state.StateSnapshotContextSynchronousImpl) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) SinkFunctionProvider(org.apache.flink.table.connector.sink.SinkFunctionProvider) GenericRowData(org.apache.flink.table.data.GenericRowData) RowData(org.apache.flink.table.data.RowData) GenericJdbcSinkFunction(org.apache.flink.connector.jdbc.internal.GenericJdbcSinkFunction) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
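
The JDBC test declares its single column as BIGINT NOT NULL. Nullability lives on the DataType itself, so it travels with the ResolvedSchema. A minimal sketch of declaring and inspecting that constraint (the printout is just for illustration):

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ResolvedSchema;

public class NotNullColumnSketch {
    public static void main(String[] args) {
        // Same non-nullable column as in the JDBC sink test above.
        ResolvedSchema schema =
                ResolvedSchema.of(Column.physical("id", DataTypes.BIGINT().notNull()));
        boolean nullable = schema.getColumn("id")
                .orElseThrow(IllegalStateException::new)
                .getDataType()
                .getLogicalType()
                .isNullable();
        System.out.println(nullable); // false
    }
}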

Example 20 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class DataTypeUtilsTest, method testExpandDistinctType.

@Test
public void testExpandDistinctType() {
    FieldsDataType dataType = (FieldsDataType) ROW(
            FIELD("f0", INT()), FIELD("f1", STRING()),
            FIELD("f2", TIMESTAMP(5).bridgedTo(Timestamp.class)), FIELD("f3", TIMESTAMP(3)));
    LogicalType originalLogicalType = dataType.getLogicalType();
    DistinctType distinctLogicalType = DistinctType.newBuilder(
            ObjectIdentifier.of("catalog", "database", "type"), originalLogicalType).build();
    DataType distinctDataType = new FieldsDataType(distinctLogicalType, dataType.getChildren());
    // Expanding the distinct type yields one physical column per field of the underlying ROW.
    ResolvedSchema schema = DataTypeUtils.expandCompositeTypeToSchema(distinctDataType);
    assertThat(schema).isEqualTo(ResolvedSchema.of(
            Column.physical("f0", INT()), Column.physical("f1", STRING()),
            Column.physical("f2", TIMESTAMP(5).bridgedTo(Timestamp.class)),
            Column.physical("f3", TIMESTAMP(3).bridgedTo(LocalDateTime.class))));
}
Also used : LocalDateTime(java.time.LocalDateTime) FieldsDataType(org.apache.flink.table.types.FieldsDataType) DistinctType(org.apache.flink.table.types.logical.DistinctType) LogicalType(org.apache.flink.table.types.logical.LogicalType) DataType(org.apache.flink.table.types.DataType) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Timestamp(java.sql.Timestamp) Test(org.junit.Test)
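
DataTypeUtils.expandCompositeTypeToSchema goes in the opposite direction of the earlier examples: it turns a composite DataType back into a ResolvedSchema. A minimal sketch of the plain-ROW case, assuming the helper lives at org.apache.flink.table.types.utils.DataTypeUtils (the import path is not shown in the test above):

import static org.apache.flink.table.api.DataTypes.FIELD;
import static org.apache.flink.table.api.DataTypes.INT;
import static org.apache.flink.table.api.DataTypes.ROW;
import static org.apache.flink.table.api.DataTypes.STRING;

import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.utils.DataTypeUtils;

public class ExpandRowToSchemaSketch {
    public static void main(String[] args) {
        DataType rowType = ROW(FIELD("f0", INT()), FIELD("f1", STRING()));
        // Each top-level field of the ROW becomes one physical column of the schema.
        ResolvedSchema schema = DataTypeUtils.expandCompositeTypeToSchema(rowType);
        System.out.println(schema.getColumnNames()); // [f0, f1]
    }
}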

Aggregations

ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema) 84
Test (org.junit.Test) 50
DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink) 20
DataType (org.apache.flink.table.types.DataType) 20
RowData (org.apache.flink.table.data.RowData) 17
ValidationException (org.apache.flink.table.api.ValidationException) 14
ResolvedCatalogTable (org.apache.flink.table.catalog.ResolvedCatalogTable) 14
List (java.util.List) 11
SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) 11
DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource) 10
Column (org.apache.flink.table.catalog.Column) 9
LogicalType (org.apache.flink.table.types.logical.LogicalType) 9
RowType (org.apache.flink.table.types.logical.RowType) 9
HashMap (java.util.HashMap) 8
Collectors (java.util.stream.Collectors) 8
RelDataType (org.apache.calcite.rel.type.RelDataType) 8
Internal (org.apache.flink.annotation.Internal) 8
HBaseWriteOptions (org.apache.flink.connector.hbase.options.HBaseWriteOptions) 6
FlinkTypeFactory (org.apache.flink.table.planner.calcite.FlinkTypeFactory) 6
Row (org.apache.flink.types.Row) 6