
Example 6 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class KinesisDynamicTableSinkFactoryTest, method testGoodTableSinkForPartitionedTable.

@Test
public void testGoodTableSinkForPartitionedTable() {
    ResolvedSchema sinkSchema = defaultSinkSchema();
    DataType physicalDataType = sinkSchema.toPhysicalRowDataType();
    Map<String, String> sinkOptions = defaultTableOptions().build();
    List<String> sinkPartitionKeys = Arrays.asList("name", "curr_id");
    // Construct actual DynamicTableSink using FactoryUtil
    KinesisDynamicSink actualSink = (KinesisDynamicSink) createTableSink(sinkSchema, sinkPartitionKeys, sinkOptions);
    // Construct expected DynamicTableSink using factory under test
    KinesisDynamicSink expectedSink = (KinesisDynamicSink) new KinesisDynamicSink.KinesisDynamicTableSinkBuilder()
            .setConsumedDataType(physicalDataType)
            .setStream(STREAM_NAME)
            .setKinesisClientProperties(defaultProducerProperties())
            .setEncodingFormat(new TestFormatFactory.EncodingFormatMock(","))
            .setPartitioner(new RowDataFieldsKinesisPartitionKeyGenerator((RowType) physicalDataType.getLogicalType(), sinkPartitionKeys))
            .build();
    // verify that the constructed DynamicTableSink is as expected
    Assertions.assertThat(actualSink).isEqualTo(expectedSink);
    // verify the produced sink
    DynamicTableSink.SinkRuntimeProvider sinkFunctionProvider = actualSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(false));
    Sink<RowData> sinkFunction = ((SinkV2Provider) sinkFunctionProvider).createSink();
    Assertions.assertThat(sinkFunction).isInstanceOf(KinesisDataStreamsSink.class);
}
Also used: SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext), RowType (org.apache.flink.table.types.logical.RowType), DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink), RowData (org.apache.flink.table.data.RowData), DataType (org.apache.flink.table.types.DataType), SinkV2Provider (org.apache.flink.table.connector.sink.SinkV2Provider), ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema), Test (org.junit.Test)
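
defaultSinkSchema() and defaultProducerProperties() are private helpers of the test class and are not shown in this snippet. Below is a minimal sketch of what a helper like defaultSinkSchema() might return, assuming a schema that contains the two partition-key columns used above; the data types and the extra "time" column are placeholders, not the actual test fixture.

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ResolvedSchema;

// Hypothetical sketch of the defaultSinkSchema() helper. Only "name" and
// "curr_id" are known to exist (they are used as partition keys above); the
// data types and the extra "time" column are placeholders.
private static ResolvedSchema defaultSinkSchema() {
    return ResolvedSchema.of(
            Column.physical("name", DataTypes.STRING()),
            Column.physical("curr_id", DataTypes.BIGINT()),
            Column.physical("time", DataTypes.TIMESTAMP(3)));
}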

Example 7 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class HBaseDynamicTableFactoryTest, method testParallelismOptions.

@Test
public void testParallelismOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.parallelism", "2");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    assertTrue(sink instanceof HBaseDynamicTableSink);
    HBaseDynamicTableSink hbaseSink = (HBaseDynamicTableSink) sink;
    SinkFunctionProvider provider = (SinkFunctionProvider) hbaseSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(false));
    assertEquals(2, (long) provider.getParallelism().get());
}
Also used: SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext), HBaseDynamicTableSink (org.apache.flink.connector.hbase2.sink.HBaseDynamicTableSink), DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink), SinkFunctionProvider (org.apache.flink.table.connector.sink.SinkFunctionProvider), ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema), Test (org.junit.Test)
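
getAllOptions() is likewise a test helper that is not shown. A hedged sketch of what it might return for the HBase 2.x connector follows; the option keys (connector, table-name, zookeeper.quorum) are the connector's documented options, while the concrete values are placeholders.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the getAllOptions() helper: the minimum options the
// hbase-2.2 table factory needs, with placeholder values.
private static Map<String, String> getAllOptions() {
    Map<String, String> options = new HashMap<>();
    options.put("connector", "hbase-2.2");
    options.put("table-name", "testTable");            // placeholder table name
    options.put("zookeeper.quorum", "localhost:2181"); // placeholder quorum
    return options;
}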

Example 8 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class HBaseDynamicTableFactoryTest, method testLookupOptions.

@Test
public void testLookupOptions() {
    Map<String, String> options = getAllOptions();
    options.put("lookup.cache.max-rows", "1000");
    options.put("lookup.cache.ttl", "10s");
    options.put("lookup.max-retries", "10");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()), Column.physical(FAMILY1, ROW(FIELD(COL1, DOUBLE()), FIELD(COL2, INT()))));
    DynamicTableSource source = createTableSource(schema, options);
    HBaseLookupOptions actual = ((HBaseDynamicTableSource) source).getLookupOptions();
    HBaseLookupOptions expected = HBaseLookupOptions.builder().setCacheMaxSize(1000).setCacheExpireMs(10_000).setMaxRetryTimes(10).build();
    assertEquals(expected, actual);
}
Also used: HBaseLookupOptions (org.apache.flink.connector.hbase.options.HBaseLookupOptions), HBaseDynamicTableSource (org.apache.flink.connector.hbase2.source.HBaseDynamicTableSource), ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema), DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource), Test (org.junit.Test)
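
The point of the assertion is the unit conversion: a lookup.cache.ttl of "10s" is expected to surface as setCacheExpireMs(10_000). The sketch below spells that mapping out by hand; it is illustrative only and does not reproduce the factory's actual option-parsing code.

import java.util.Map;

import org.apache.flink.connector.hbase.options.HBaseLookupOptions;

// Illustrative translation of the string options into HBaseLookupOptions,
// matching the values asserted above. Flink's factory parses "10s" with its
// own duration handling; the millisecond value is written out directly here.
private static HBaseLookupOptions lookupOptionsFrom(Map<String, String> options) {
    long cacheMaxRows = Long.parseLong(options.get("lookup.cache.max-rows")); // 1000
    long cacheTtlMs = 10_000L;                                                // "10s"
    int maxRetries = Integer.parseInt(options.get("lookup.max-retries"));     // 10
    return HBaseLookupOptions.builder()
            .setCacheMaxSize(cacheMaxRows)
            .setCacheExpireMs(cacheTtlMs)
            .setMaxRetryTimes(maxRetries)
            .build();
}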

Example 9 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class HBaseDynamicTableFactoryTest, method testDisabledBufferFlushOptions (HBase 2.x connector, per the org.apache.flink.connector.hbase2 imports listed below).

@Test
public void testDisabledBufferFlushOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.buffer-flush.max-size", "0");
    options.put("sink.buffer-flush.max-rows", "0");
    options.put("sink.buffer-flush.interval", "0");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    HBaseWriteOptions expected = HBaseWriteOptions.builder().setBufferFlushMaxRows(0).setBufferFlushIntervalMillis(0).setBufferFlushMaxSizeInBytes(0).build();
    HBaseWriteOptions actual = ((HBaseDynamicTableSink) sink).getWriteOptions();
    assertEquals(expected, actual);
}
Also used: HBaseDynamicTableSink (org.apache.flink.connector.hbase2.sink.HBaseDynamicTableSink), DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink), ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema), HBaseWriteOptions (org.apache.flink.connector.hbase.options.HBaseWriteOptions), Test (org.junit.Test)
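
Setting all three sink.buffer-flush.* options to "0" disables buffering. For contrast, here is a hedged sketch of the buffered counterpart; the option values ("2mb", "1000", "1s") are placeholders, and the comments indicate how such values would be parsed into bytes and milliseconds.

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.connector.hbase.options.HBaseWriteOptions;

// Hypothetical counterpart with buffering enabled; values are placeholders.
Map<String, String> bufferedOptions = new HashMap<>(getAllOptions());
bufferedOptions.put("sink.buffer-flush.max-size", "2mb");  // parsed to 2 * 1024 * 1024 bytes
bufferedOptions.put("sink.buffer-flush.max-rows", "1000");
bufferedOptions.put("sink.buffer-flush.interval", "1s");   // parsed to 1000 ms

HBaseWriteOptions buffered = HBaseWriteOptions.builder()
        .setBufferFlushMaxSizeInBytes(2 * 1024 * 1024)
        .setBufferFlushMaxRows(1000)
        .setBufferFlushIntervalMillis(1000)
        .build();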

Example 10 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

From class HBaseDynamicTableFactoryTest, method testDisabledBufferFlushOptions (HBase 1.x connector, per the org.apache.flink.connector.hbase1 imports listed below; the test body is identical to Example 9).

@Test
public void testDisabledBufferFlushOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.buffer-flush.max-size", "0");
    options.put("sink.buffer-flush.max-rows", "0");
    options.put("sink.buffer-flush.interval", "0");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    HBaseWriteOptions expected = HBaseWriteOptions.builder().setBufferFlushMaxRows(0).setBufferFlushIntervalMillis(0).setBufferFlushMaxSizeInBytes(0).build();
    HBaseWriteOptions actual = ((HBaseDynamicTableSink) sink).getWriteOptions();
    assertEquals(expected, actual);
}
Also used: HBaseDynamicTableSink (org.apache.flink.connector.hbase1.sink.HBaseDynamicTableSink), DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink), ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema), HBaseWriteOptions (org.apache.flink.connector.hbase.options.HBaseWriteOptions), Test (org.junit.Test)

Aggregations

ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema): 84
Test (org.junit.Test): 50
DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink): 20
DataType (org.apache.flink.table.types.DataType): 20
RowData (org.apache.flink.table.data.RowData): 17
ValidationException (org.apache.flink.table.api.ValidationException): 14
ResolvedCatalogTable (org.apache.flink.table.catalog.ResolvedCatalogTable): 14
List (java.util.List): 11
SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext): 11
DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource): 10
Column (org.apache.flink.table.catalog.Column): 9
LogicalType (org.apache.flink.table.types.logical.LogicalType): 9
RowType (org.apache.flink.table.types.logical.RowType): 9
HashMap (java.util.HashMap): 8
Collectors (java.util.stream.Collectors): 8
RelDataType (org.apache.calcite.rel.type.RelDataType): 8
Internal (org.apache.flink.annotation.Internal): 8
HBaseWriteOptions (org.apache.flink.connector.hbase.options.HBaseWriteOptions): 6
FlinkTypeFactory (org.apache.flink.table.planner.calcite.FlinkTypeFactory): 6
Row (org.apache.flink.types.Row): 6
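
All of the examples above share the same shape: build a ResolvedSchema from physical columns, pair it with a map of connector options, and hand both to the createTableSink / createTableSource test helpers. A minimal recap sketch with placeholder columns and options follows (createTableSink is the test utility used in the examples, not a public Flink API).

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.connector.sink.DynamicTableSink;

// Placeholder schema and options; createTableSink(...) is the shared test
// helper used throughout the examples above.
ResolvedSchema schema = ResolvedSchema.of(
        Column.physical("id", DataTypes.BIGINT()),
        Column.physical("name", DataTypes.STRING()));

Map<String, String> options = new HashMap<>();
options.put("connector", "hbase-2.2");              // or any other factory identifier
options.put("table-name", "testTable");             // placeholder
options.put("zookeeper.quorum", "localhost:2181");  // placeholder

DynamicTableSink sink = createTableSink(schema, options);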