
Example 6 with DynamicTableSink

Use of org.apache.flink.table.connector.sink.DynamicTableSink in project flink by apache.

From the class HBaseDynamicTableFactoryTest (hbase2 connector module), method testParallelismOptions.

@Test
public void testParallelismOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.parallelism", "2");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    assertTrue(sink instanceof HBaseDynamicTableSink);
    HBaseDynamicTableSink hbaseSink = (HBaseDynamicTableSink) sink;
    SinkFunctionProvider provider = (SinkFunctionProvider) hbaseSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(false));
    assertEquals(2, (long) provider.getParallelism().get());
}
Also used : SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) HBaseDynamicTableSink(org.apache.flink.connector.hbase2.sink.HBaseDynamicTableSink) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) SinkFunctionProvider(org.apache.flink.table.connector.sink.SinkFunctionProvider) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
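
For readers implementing their own connector, the assertion above (provider.getParallelism() returning the configured value) hinges on the sink returning SinkFunctionProvider.of(sinkFunction, parallelism) from getSinkRuntimeProvider. Below is a minimal, hypothetical sketch of such a sink; FixedParallelismSink and its use of PrintSinkFunction as a stand-in sink function are illustrative and not part of the Flink codebase.

import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.connector.sink.SinkFunctionProvider;
import org.apache.flink.table.data.RowData;

// Hypothetical sketch: a DynamicTableSink that forwards a configured parallelism
// through SinkFunctionProvider, which is what the test above asserts against.
public class FixedParallelismSink implements DynamicTableSink {

    // e.g. the value parsed from the 'sink.parallelism' option; may be null if not set
    private final Integer parallelism;

    public FixedParallelismSink(Integer parallelism) {
        this.parallelism = parallelism;
    }

    @Override
    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
        return ChangelogMode.insertOnly();
    }

    @Override
    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
        // SinkFunctionProvider.of(fn, parallelism) is what makes
        // provider.getParallelism() return Optional.of(parallelism).
        return SinkFunctionProvider.of(new PrintSinkFunction<RowData>(), parallelism);
    }

    @Override
    public DynamicTableSink copy() {
        return new FixedParallelismSink(parallelism);
    }

    @Override
    public String asSummaryString() {
        return "FixedParallelismSink(parallelism=" + parallelism + ")";
    }
}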

Example 7 with DynamicTableSink

Use of org.apache.flink.table.connector.sink.DynamicTableSink in project flink by apache.

From the class HBaseDynamicTableFactoryTest (hbase2 connector module), method testDisabledBufferFlushOptions.

@Test
public void testDisabledBufferFlushOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.buffer-flush.max-size", "0");
    options.put("sink.buffer-flush.max-rows", "0");
    options.put("sink.buffer-flush.interval", "0");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    HBaseWriteOptions expected = HBaseWriteOptions.builder().setBufferFlushMaxRows(0).setBufferFlushIntervalMillis(0).setBufferFlushMaxSizeInBytes(0).build();
    HBaseWriteOptions actual = ((HBaseDynamicTableSink) sink).getWriteOptions();
    assertEquals(expected, actual);
}
Also used : HBaseDynamicTableSink(org.apache.flink.connector.hbase2.sink.HBaseDynamicTableSink) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) HBaseWriteOptions(org.apache.flink.connector.hbase.options.HBaseWriteOptions) Test(org.junit.Test)
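
For context, a hedged sketch of how these flush options surface in user code is shown below; it is not part of the test suite. The DDL assumes the standard HBase SQL connector option keys ('connector' = 'hbase-2.2', 'table-name', 'zookeeper.quorum'); the table name, ZooKeeper address, and schema are placeholders, while the three sink.buffer-flush.* keys are exactly the ones exercised by the test.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Illustrative sketch only: registers an HBase sink table with buffering disabled,
// mirroring the option values that testDisabledBufferFlushOptions verifies.
public class DisabledBufferFlushExample {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Setting max-size, max-rows and interval to 0 disables client-side buffering,
        // which the test above checks against the resulting HBaseWriteOptions.
        tEnv.executeSql(
                "CREATE TABLE hbase_sink (\n"
                        + "  rowkey STRING,\n"
                        + "  cf ROW<q1 STRING>,\n"
                        + "  PRIMARY KEY (rowkey) NOT ENFORCED\n"
                        + ") WITH (\n"
                        + "  'connector' = 'hbase-2.2',\n"
                        + "  'table-name' = 'my_table',\n"
                        + "  'zookeeper.quorum' = 'localhost:2181',\n"
                        + "  'sink.buffer-flush.max-size' = '0',\n"
                        + "  'sink.buffer-flush.max-rows' = '0',\n"
                        + "  'sink.buffer-flush.interval' = '0'\n"
                        + ")");
    }
}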

Example 8 with DynamicTableSink

Use of org.apache.flink.table.connector.sink.DynamicTableSink in project flink by apache.

From the class HiveDynamicTableFactory, method createDynamicTableSink.

@Override
public DynamicTableSink createDynamicTableSink(Context context) {
    final boolean isHiveTable = HiveCatalog.isHiveTable(context.getCatalogTable().getOptions());
    // we don't support temporary hive tables yet
    if (!isHiveTable || context.isTemporary()) {
        DynamicTableSink sink = FactoryUtil.createDynamicTableSink(null, context.getObjectIdentifier(), context.getCatalogTable(), context.getConfiguration(), context.getClassLoader(), context.isTemporary());
        if (sink instanceof RequireCatalogLock) {
            ((RequireCatalogLock) sink).setLockFactory(HiveCatalogLock.createFactory(hiveConf));
        }
        return sink;
    }
    final Integer configuredParallelism = Configuration.fromMap(context.getCatalogTable().getOptions()).get(FileSystemConnectorOptions.SINK_PARALLELISM);
    final JobConf jobConf = JobConfUtils.createJobConfWithCredentials(hiveConf);
    return new HiveTableSink(context.getConfiguration(), jobConf, context.getObjectIdentifier(), context.getCatalogTable(), configuredParallelism);
}
Also used : RequireCatalogLock(org.apache.flink.table.connector.RequireCatalogLock) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) JobConf(org.apache.hadoop.mapred.JobConf)
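
The first branch above shows a common delegation pattern: when the factory does not handle the table itself (here, non-Hive or temporary tables), it hands the context back to FactoryUtil so the connector declared in the table options is discovered instead. A condensed, hypothetical sketch of just that fallback step is shown below; the helper class and method names are illustrative, while the FactoryUtil call reuses the exact shape seen in the example.

import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.factories.DynamicTableFactory;
import org.apache.flink.table.factories.FactoryUtil;

// Hypothetical helper: not part of Flink, just the fallback step from the example above.
final class SinkDelegation {

    private SinkDelegation() {
    }

    static DynamicTableSink delegateToDeclaredConnector(DynamicTableFactory.Context context) {
        // Passing null as the preferred factory lets FactoryUtil resolve the sink factory
        // from the 'connector' option of the catalog table, as HiveDynamicTableFactory does
        // for non-Hive and temporary tables.
        return FactoryUtil.createDynamicTableSink(
                null,
                context.getObjectIdentifier(),
                context.getCatalogTable(),
                context.getConfiguration(),
                context.getClassLoader(),
                context.isTemporary());
    }
}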

Example 9 with DynamicTableSink

Use of org.apache.flink.table.connector.sink.DynamicTableSink in project flink by apache.

From the class HBaseDynamicTableFactoryTest (hbase1 connector module), method testDisabledBufferFlushOptions; this is the hbase1 counterpart of Example 7.

@Test
public void testDisabledBufferFlushOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.buffer-flush.max-size", "0");
    options.put("sink.buffer-flush.max-rows", "0");
    options.put("sink.buffer-flush.interval", "0");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    HBaseWriteOptions expected = HBaseWriteOptions.builder().setBufferFlushMaxRows(0).setBufferFlushIntervalMillis(0).setBufferFlushMaxSizeInBytes(0).build();
    HBaseWriteOptions actual = ((HBaseDynamicTableSink) sink).getWriteOptions();
    assertEquals(expected, actual);
}
Also used : HBaseDynamicTableSink(org.apache.flink.connector.hbase1.sink.HBaseDynamicTableSink) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) HBaseWriteOptions(org.apache.flink.connector.hbase.options.HBaseWriteOptions) Test(org.junit.Test)

Example 10 with DynamicTableSink

Use of org.apache.flink.table.connector.sink.DynamicTableSink in project flink by apache.

From the class HBaseDynamicTableFactoryTest (hbase1 connector module), method testParallelismOptions; this is the hbase1 counterpart of Example 6.

@Test
public void testParallelismOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.parallelism", "2");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    assertTrue(sink instanceof HBaseDynamicTableSink);
    HBaseDynamicTableSink hbaseSink = (HBaseDynamicTableSink) sink;
    SinkFunctionProvider provider = (SinkFunctionProvider) hbaseSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(false));
    assertEquals(2, (long) provider.getParallelism().get());
}
Also used : SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) HBaseDynamicTableSink(org.apache.flink.connector.hbase1.sink.HBaseDynamicTableSink) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) SinkFunctionProvider(org.apache.flink.table.connector.sink.SinkFunctionProvider) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)

Aggregations

DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink): 54
Test (org.junit.Test): 34
SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext): 23
RowData (org.apache.flink.table.data.RowData): 21
ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema): 19
DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource): 14
SinkV2Provider (org.apache.flink.table.connector.sink.SinkV2Provider): 12
TestDynamicTableFactory (org.apache.flink.table.factories.TestDynamicTableFactory): 12
Test (org.junit.jupiter.api.Test): 10
EncodingFormatMock (org.apache.flink.table.factories.TestFormatFactory.EncodingFormatMock): 8
HashMap (java.util.HashMap): 7
HBaseWriteOptions (org.apache.flink.connector.hbase.options.HBaseWriteOptions): 6
AvroRowDataSerializationSchema (org.apache.flink.formats.avro.AvroRowDataSerializationSchema): 6
SinkFunctionProvider (org.apache.flink.table.connector.sink.SinkFunctionProvider): 5
Collections (java.util.Collections): 4
HBaseDynamicTableSink (org.apache.flink.connector.hbase2.sink.HBaseDynamicTableSink): 4
SupportsPartitioning (org.apache.flink.table.connector.sink.abilities.SupportsPartitioning): 4
DataType (org.apache.flink.table.types.DataType): 4
RowType (org.apache.flink.table.types.logical.RowType): 4
ArrayList (java.util.ArrayList): 3