
Example 6 with SinkRuntimeProviderContext

Use of org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext in project flink by apache.

From the class HBaseDynamicTableFactoryTest (hbase2 connector), method testParallelismOptions:

@Test
public void testParallelismOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.parallelism", "2");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    assertTrue(sink instanceof HBaseDynamicTableSink);
    HBaseDynamicTableSink hbaseSink = (HBaseDynamicTableSink) sink;
    SinkFunctionProvider provider = (SinkFunctionProvider) hbaseSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(false));
    assertEquals(2, (long) provider.getParallelism().get());
}
Also used: SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) HBaseDynamicTableSink(org.apache.flink.connector.hbase2.sink.HBaseDynamicTableSink) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) SinkFunctionProvider(org.apache.flink.table.connector.sink.SinkFunctionProvider) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
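The test above hinges on the runtime provider carrying the user-configured "sink.parallelism" back to the planner as an Optional. As a dependency-free sketch of that contract (the types below are hypothetical stand-ins mirroring the shape of Flink's SinkFunctionProvider, not Flink's actual classes):

```java
import java.util.Optional;

// Hypothetical stand-in mirroring the shape of Flink's SinkFunctionProvider:
// a provider that may or may not carry a user-configured sink parallelism.
public class ParallelismProviderSketch {

    interface SinkProvider {
        Optional<Integer> getParallelism();
    }

    // Mirrors the of(fn, parallelism) factory shape: null means "option not set".
    static SinkProvider of(Integer parallelism) {
        return () -> Optional.ofNullable(parallelism);
    }

    public static void main(String[] args) {
        SinkProvider configured = of(2);      // "sink.parallelism" = "2"
        SinkProvider unconfigured = of(null); // option absent

        System.out.println(configured.getParallelism().orElse(-1));   // 2
        System.out.println(unconfigured.getParallelism().orElse(-1)); // -1
    }
}
```

When the Optional is empty, the planner falls back to the default parallelism instead of forcing one on the sink operator.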

Example 7 with SinkRuntimeProviderContext

Use of org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext in project flink by apache.

From the class HBaseDynamicTableFactoryTest (hbase1 connector), method testParallelismOptions:

@Test
public void testParallelismOptions() {
    Map<String, String> options = getAllOptions();
    options.put("sink.parallelism", "2");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical(ROWKEY, STRING()));
    DynamicTableSink sink = createTableSink(schema, options);
    assertTrue(sink instanceof HBaseDynamicTableSink);
    HBaseDynamicTableSink hbaseSink = (HBaseDynamicTableSink) sink;
    SinkFunctionProvider provider = (SinkFunctionProvider) hbaseSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(false));
    assertEquals(2, (long) provider.getParallelism().get());
}
Also used: SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) HBaseDynamicTableSink(org.apache.flink.connector.hbase1.sink.HBaseDynamicTableSink) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) SinkFunctionProvider(org.apache.flink.table.connector.sink.SinkFunctionProvider) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)

Example 8 with SinkRuntimeProviderContext

Use of org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext in project flink by apache.

From the class DebeziumAvroFormatFactoryTest, method createSerializationSchema:

private static SerializationSchema<RowData> createSerializationSchema(Map<String, String> options) {
    final DynamicTableSink actualSink = createTableSink(SCHEMA, options);
    assertThat(actualSink, instanceOf(TestDynamicTableFactory.DynamicTableSinkMock.class));
    TestDynamicTableFactory.DynamicTableSinkMock sinkMock = (TestDynamicTableFactory.DynamicTableSinkMock) actualSink;
    return sinkMock.valueFormat.createRuntimeEncoder(new SinkRuntimeProviderContext(false), SCHEMA.toPhysicalRowDataType());
}
Also used: SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) TestDynamicTableFactory(org.apache.flink.table.factories.TestDynamicTableFactory)
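createRuntimeEncoder takes the provider context plus the physical row data type and hands back the serializer the sink will use at runtime. A minimal, dependency-free illustration of that factory shape (the names here are hypothetical stand-ins, not Flink's API):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical stand-in for an encoding format: given context information and
// a target type, produce an encoder that turns a value into bytes.
public class EncoderFactorySketch {

    interface Encoder<T> {
        byte[] encode(T value);
    }

    // Mirrors the createRuntimeEncoder(context, dataType) shape: the returned
    // encoder closes over whatever the context and type dictate.
    static Encoder<String> createRuntimeEncoder(boolean isBounded, String typeName) {
        String prefix = typeName + (isBounded ? "/bounded:" : "/unbounded:");
        return value -> (prefix + value).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        Encoder<String> enc = createRuntimeEncoder(false, "ROW<id BIGINT>");
        System.out.println(new String(enc.encode("42"), StandardCharsets.UTF_8));
        // prints "ROW<id BIGINT>/unbounded:42"
    }
}
```

The point of the two-step factory is that the expensive, non-serializable setup happens at planning time, while the returned encoder itself is a small serializable function shipped to the task managers.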

Example 9 with SinkRuntimeProviderContext

Use of org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext in project flink by apache.

From the class JdbcDynamicTableSinkITCase, method testFlushBufferWhenCheckpoint:

@Test
public void testFlushBufferWhenCheckpoint() throws Exception {
    Map<String, String> options = new HashMap<>();
    options.put("connector", "jdbc");
    options.put("url", DB_URL);
    options.put("table-name", OUTPUT_TABLE5);
    options.put("sink.buffer-flush.interval", "0");
    ResolvedSchema schema = ResolvedSchema.of(Column.physical("id", DataTypes.BIGINT().notNull()));
    DynamicTableSink tableSink = createTableSink(schema, options);
    SinkRuntimeProviderContext context = new SinkRuntimeProviderContext(false);
    SinkFunctionProvider sinkProvider = (SinkFunctionProvider) tableSink.getSinkRuntimeProvider(context);
    GenericJdbcSinkFunction<RowData> sinkFunction = (GenericJdbcSinkFunction<RowData>) sinkProvider.createSinkFunction();
    sinkFunction.setRuntimeContext(new MockStreamingRuntimeContext(true, 1, 0));
    sinkFunction.open(new Configuration());
    sinkFunction.invoke(GenericRowData.of(1L), SinkContextUtil.forTimestamp(1));
    sinkFunction.invoke(GenericRowData.of(2L), SinkContextUtil.forTimestamp(1));
    check(new Row[] {}, DB_URL, OUTPUT_TABLE5, new String[] { "id" });
    sinkFunction.snapshotState(new StateSnapshotContextSynchronousImpl(1, 1));
    check(new Row[] { Row.of(1L), Row.of(2L) }, DB_URL, OUTPUT_TABLE5, new String[] { "id" });
    sinkFunction.close();
}
Also used: SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) MockStreamingRuntimeContext(org.apache.flink.streaming.util.MockStreamingRuntimeContext) Configuration(org.apache.flink.configuration.Configuration) HashMap(java.util.HashMap) StateSnapshotContextSynchronousImpl(org.apache.flink.runtime.state.StateSnapshotContextSynchronousImpl) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) SinkFunctionProvider(org.apache.flink.table.connector.sink.SinkFunctionProvider) GenericRowData(org.apache.flink.table.data.GenericRowData) RowData(org.apache.flink.table.data.RowData) GenericJdbcSinkFunction(org.apache.flink.connector.jdbc.internal.GenericJdbcSinkFunction) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
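The test verifies the buffering semantics of the JDBC sink: with "sink.buffer-flush.interval" set to 0, invoke() only buffers rows, and the buffer is written to the database when a checkpoint snapshot is taken (hence the first check expects an empty table and the second expects both rows). A dependency-free sketch of that buffer-until-checkpoint contract (hypothetical types, not Flink's):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a sink that buffers records in invoke() and only
// writes them out when a checkpoint snapshot is taken.
public class BufferedSinkSketch {

    private final List<Long> buffer = new ArrayList<>();
    private final List<Long> database = new ArrayList<>(); // stands in for the JDBC table

    void invoke(long row) {
        buffer.add(row); // no database write yet
    }

    void snapshotState() {
        database.addAll(buffer); // flush on checkpoint
        buffer.clear();
    }

    List<Long> query() {
        return new ArrayList<>(database);
    }

    public static void main(String[] args) {
        BufferedSinkSketch sink = new BufferedSinkSketch();
        sink.invoke(1L);
        sink.invoke(2L);
        System.out.println(sink.query()); // prints "[]" (nothing flushed yet)
        sink.snapshotState();
        System.out.println(sink.query()); // prints "[1, 2]"
    }
}
```

Tying the flush to the checkpoint is what gives the sink at-least-once delivery: rows are only durably written once the state they belong to has been snapshotted.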

Example 10 with SinkRuntimeProviderContext

Use of org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext in project flink by apache.

From the class UpsertKafkaDynamicTableFactoryTest, method verifyEncoderSubject:

private void verifyEncoderSubject(Consumer<Map<String, String>> optionModifier, String expectedValueSubject, String expectedKeySubject) {
    Map<String, String> options = new HashMap<>();
    // Kafka specific options.
    options.put("connector", UpsertKafkaDynamicTableFactory.IDENTIFIER);
    options.put("topic", SINK_TOPIC);
    options.put("properties.group.id", "dummy");
    options.put("properties.bootstrap.servers", "dummy");
    optionModifier.accept(options);
    final RowType rowType = (RowType) SINK_SCHEMA.toSinkRowDataType().getLogicalType();
    final String valueFormat = options.getOrDefault(FactoryUtil.FORMAT.key(), options.get(KafkaConnectorOptions.VALUE_FORMAT.key()));
    final String keyFormat = options.get(KafkaConnectorOptions.KEY_FORMAT.key());
    KafkaDynamicSink sink = (KafkaDynamicSink) createTableSink(SINK_SCHEMA, options);
    if (AVRO_CONFLUENT.equals(valueFormat)) {
        SerializationSchema<RowData> actualValueEncoder = sink.valueEncodingFormat.createRuntimeEncoder(new SinkRuntimeProviderContext(false), SINK_SCHEMA.toSinkRowDataType());
        assertEquals(createConfluentAvroSerSchema(rowType, expectedValueSubject), actualValueEncoder);
    }
    if (AVRO_CONFLUENT.equals(keyFormat)) {
        assert sink.keyEncodingFormat != null;
        SerializationSchema<RowData> actualKeyEncoder = sink.keyEncodingFormat.createRuntimeEncoder(new SinkRuntimeProviderContext(false), SINK_SCHEMA.toSinkRowDataType());
        assertEquals(createConfluentAvroSerSchema(rowType, expectedKeySubject), actualKeyEncoder);
    }
}
Also used: SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) RowData(org.apache.flink.table.data.RowData) BinaryRowData(org.apache.flink.table.data.binary.BinaryRowData) HashMap(java.util.HashMap) RowType(org.apache.flink.table.types.logical.RowType)

Aggregations

SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext): 27
DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink): 23
RowData (org.apache.flink.table.data.RowData): 19
Test (org.junit.Test): 17
ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema): 12
SinkV2Provider (org.apache.flink.table.connector.sink.SinkV2Provider): 11
TestDynamicTableFactory (org.apache.flink.table.factories.TestDynamicTableFactory): 7
HashMap (java.util.HashMap): 5
RowType (org.apache.flink.table.types.logical.RowType): 5
SinkFunctionProvider (org.apache.flink.table.connector.sink.SinkFunctionProvider): 4
SerializationSchema (org.apache.flink.api.common.serialization.SerializationSchema): 3
AvroRowDataSerializationSchema (org.apache.flink.formats.avro.AvroRowDataSerializationSchema): 3
ConfluentRegistryAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroSerializationSchema): 3
DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource): 3
BinaryRowData (org.apache.flink.table.data.binary.BinaryRowData): 3
EncodingFormatMock (org.apache.flink.table.factories.TestFormatFactory.EncodingFormatMock): 3
Test (org.junit.jupiter.api.Test): 3
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 3
Transformation (org.apache.flink.api.dag.Transformation): 2
Configuration (org.apache.flink.configuration.Configuration): 2