Example 1 with SerializationSchema

Use of org.apache.flink.api.common.serialization.SerializationSchema in project flink by apache.

In class KafkaDynamicTableFactory, method createDynamicTableSink:

@Override
public DynamicTableSink createDynamicTableSink(Context context) {
    final TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, autoCompleteSchemaRegistrySubject(context));
    final Optional<EncodingFormat<SerializationSchema<RowData>>> keyEncodingFormat = getKeyEncodingFormat(helper);
    final EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat = getValueEncodingFormat(helper);
    helper.validateExcept(PROPERTIES_PREFIX);
    final ReadableConfig tableOptions = helper.getOptions();
    final DeliveryGuarantee deliveryGuarantee = validateDeprecatedSemantic(tableOptions);
    validateTableSinkOptions(tableOptions);
    KafkaConnectorOptionsUtil.validateDeliveryGuarantee(tableOptions);
    validatePKConstraints(context.getObjectIdentifier(), context.getPrimaryKeyIndexes(), context.getCatalogTable().getOptions(), valueEncodingFormat);
    final DataType physicalDataType = context.getPhysicalRowDataType();
    final int[] keyProjection = createKeyFormatProjection(tableOptions, physicalDataType);
    final int[] valueProjection = createValueFormatProjection(tableOptions, physicalDataType);
    final String keyPrefix = tableOptions.getOptional(KEY_FIELDS_PREFIX).orElse(null);
    final Integer parallelism = tableOptions.getOptional(SINK_PARALLELISM).orElse(null);
    return createKafkaTableSink(
            physicalDataType, keyEncodingFormat.orElse(null), valueEncodingFormat,
            keyProjection, valueProjection, keyPrefix,
            tableOptions.get(TOPIC).get(0),
            getKafkaProperties(context.getCatalogTable().getOptions()),
            getFlinkKafkaPartitioner(tableOptions, context.getClassLoader()).orElse(null),
            deliveryGuarantee, parallelism, tableOptions.get(TRANSACTIONAL_ID_PREFIX));
}
Also used: EncodingFormat (org.apache.flink.table.connector.format.EncodingFormat), RowData (org.apache.flink.table.data.RowData), ReadableConfig (org.apache.flink.configuration.ReadableConfig), DeliveryGuarantee (org.apache.flink.connector.base.DeliveryGuarantee), SerializationSchema (org.apache.flink.api.common.serialization.SerializationSchema), TableFactoryHelper (org.apache.flink.table.factories.FactoryUtil.TableFactoryHelper), DataType (org.apache.flink.table.types.DataType)
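
For reference, the SerializationSchema contract itself is small: a single serialize call that turns a record into the bytes written to the external system. The Utf8StringSchema below is a hypothetical minimal implementation for illustration only; it is not part of the Flink sources quoted above.

import java.nio.charset.StandardCharsets;
import org.apache.flink.api.common.serialization.SerializationSchema;

// Hypothetical example implementation of the SerializationSchema contract.
public class Utf8StringSchema implements SerializationSchema<String> {
    @Override
    public byte[] serialize(String element) {
        // Turn each record into the raw bytes handed to the sink.
        return element.getBytes(StandardCharsets.UTF_8);
    }
}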

Example 2 with SerializationSchema

Use of org.apache.flink.api.common.serialization.SerializationSchema in project flink by apache.

In class KafkaDynamicTableFactoryTest, method testTableSinkWithParallelism:

@Test
public void testTableSinkWithParallelism() {
    final Map<String, String> modifiedOptions = getModifiedOptions(getBasicSinkOptions(), options -> options.put("sink.parallelism", "100"));
    KafkaDynamicSink actualSink = (KafkaDynamicSink) createTableSink(SCHEMA, modifiedOptions);
    final EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat = new EncodingFormatMock(",");
    final DynamicTableSink expectedSink = createExpectedSink(
            SCHEMA_DATA_TYPE, null, valueEncodingFormat,
            new int[0], new int[] {0, 1, 2}, null,
            TOPIC, KAFKA_SINK_PROPERTIES, new FlinkFixedPartitioner<>(),
            DeliveryGuarantee.EXACTLY_ONCE, 100, "kafka-sink");
    assertThat(actualSink).isEqualTo(expectedSink);
    final DynamicTableSink.SinkRuntimeProvider provider = actualSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(false));
    assertThat(provider).isInstanceOf(SinkV2Provider.class);
    final SinkV2Provider sinkProvider = (SinkV2Provider) provider;
    assertThat(sinkProvider.getParallelism().isPresent()).isTrue();
    assertThat((long) sinkProvider.getParallelism().get()).isEqualTo(100);
}
Also used: SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext), EncodingFormatMock (org.apache.flink.table.factories.TestFormatFactory.EncodingFormatMock), ConfluentRegistryAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroSerializationSchema), AvroRowDataSerializationSchema (org.apache.flink.formats.avro.AvroRowDataSerializationSchema), SerializationSchema (org.apache.flink.api.common.serialization.SerializationSchema), DebeziumAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroSerializationSchema), SinkV2Provider (org.apache.flink.table.connector.sink.SinkV2Provider), DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink), Test (org.junit.jupiter.api.Test), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)
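
The "sink.parallelism" option injected by this test is the same key a user sets in table DDL. A minimal sketch of such a definition via the Table API; the table name, schema, topic, and broker address are placeholders, not values from the test above.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SinkParallelismSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // 'sink.parallelism' = '100' pins the sink operator's parallelism,
        // which is what the test asserts through SinkV2Provider.getParallelism().
        tEnv.executeSql(
                "CREATE TABLE kafka_sink (id BIGINT, name STRING) WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'example-topic',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'format' = 'json',\n"
                        + "  'sink.parallelism' = '100')");
    }
}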

Example 3 with SerializationSchema

Use of org.apache.flink.api.common.serialization.SerializationSchema in project flink by apache.

In class KafkaDynamicTableFactoryTest, method testTableSinkSemanticTranslation:

@Test
public void testTableSinkSemanticTranslation() {
    final List<String> semantics = ImmutableList.of("exactly-once", "at-least-once", "none");
    final EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat = new EncodingFormatMock(",");
    for (final String semantic : semantics) {
        final Map<String, String> modifiedOptions = getModifiedOptions(getBasicSinkOptions(), options -> {
            options.put("sink.semantic", semantic);
            options.put("sink.transactional-id-prefix", "kafka-sink");
        });
        final DynamicTableSink actualSink = createTableSink(SCHEMA, modifiedOptions);
        final DynamicTableSink expectedSink = createExpectedSink(
                SCHEMA_DATA_TYPE, null, valueEncodingFormat,
                new int[0], new int[] {0, 1, 2}, null,
                TOPIC, KAFKA_SINK_PROPERTIES, new FlinkFixedPartitioner<>(),
                DeliveryGuarantee.valueOf(semantic.toUpperCase().replace("-", "_")),
                null, "kafka-sink");
        assertThat(actualSink).isEqualTo(expectedSink);
    }
}
Also used: EncodingFormatMock (org.apache.flink.table.factories.TestFormatFactory.EncodingFormatMock), ConfluentRegistryAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroSerializationSchema), AvroRowDataSerializationSchema (org.apache.flink.formats.avro.AvroRowDataSerializationSchema), SerializationSchema (org.apache.flink.api.common.serialization.SerializationSchema), DebeziumAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroSerializationSchema), DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink), Test (org.junit.jupiter.api.Test), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)
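
The translation exercised by the loop is a plain enum lookup after normalizing the option string. A standalone sketch of that mapping, using only the DeliveryGuarantee constants referenced above:

import org.apache.flink.connector.base.DeliveryGuarantee;

public class SemanticTranslationSketch {
    public static void main(String[] args) {
        for (String semantic : new String[] {"exactly-once", "at-least-once", "none"}) {
            // "exactly-once" -> EXACTLY_ONCE, "at-least-once" -> AT_LEAST_ONCE, "none" -> NONE.
            DeliveryGuarantee guarantee =
                    DeliveryGuarantee.valueOf(semantic.toUpperCase().replace("-", "_"));
            System.out.println(semantic + " -> " + guarantee);
        }
    }
}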

Example 4 with SerializationSchema

Use of org.apache.flink.api.common.serialization.SerializationSchema in project flink by apache.

In class KafkaDynamicTableFactoryTest, method testTableSink:

@Test
public void testTableSink() {
    final Map<String, String> modifiedOptions = getModifiedOptions(getBasicSinkOptions(), options -> {
        options.put("sink.delivery-guarantee", "exactly-once");
        options.put("sink.transactional-id-prefix", "kafka-sink");
    });
    final DynamicTableSink actualSink = createTableSink(SCHEMA, modifiedOptions);
    final EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat = new EncodingFormatMock(",");
    final DynamicTableSink expectedSink = createExpectedSink(
            SCHEMA_DATA_TYPE, null, valueEncodingFormat,
            new int[0], new int[] {0, 1, 2}, null,
            TOPIC, KAFKA_SINK_PROPERTIES, new FlinkFixedPartitioner<>(),
            DeliveryGuarantee.EXACTLY_ONCE, null, "kafka-sink");
    assertThat(actualSink).isEqualTo(expectedSink);
    // Test the Kafka producer.
    final KafkaDynamicSink actualKafkaSink = (KafkaDynamicSink) actualSink;
    DynamicTableSink.SinkRuntimeProvider provider = actualKafkaSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(false));
    assertThat(provider).isInstanceOf(SinkV2Provider.class);
    final SinkV2Provider sinkProvider = (SinkV2Provider) provider;
    final Sink<RowData> sinkFunction = sinkProvider.createSink();
    assertThat(sinkFunction).isInstanceOf(KafkaSink.class);
}
Also used: SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext), EncodingFormatMock (org.apache.flink.table.factories.TestFormatFactory.EncodingFormatMock), ConfluentRegistryAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroSerializationSchema), AvroRowDataSerializationSchema (org.apache.flink.formats.avro.AvroRowDataSerializationSchema), SerializationSchema (org.apache.flink.api.common.serialization.SerializationSchema), DebeziumAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroSerializationSchema), DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink), RowData (org.apache.flink.table.data.RowData), SinkV2Provider (org.apache.flink.table.connector.sink.SinkV2Provider), Test (org.junit.jupiter.api.Test), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)
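
The SinkV2Provider asserted here ultimately builds a KafkaSink, and the same sink can be assembled directly in the DataStream API. A minimal sketch assuming a placeholder broker and topic, with Flink's SimpleStringSchema standing in as the value SerializationSchema:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class ExactlyOnceKafkaSinkSketch {
    public static KafkaSink<String> build() {
        // Exactly-once delivery requires a transactional id prefix, mirroring the
        // 'sink.delivery-guarantee' and 'sink.transactional-id-prefix' options in the test.
        return KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("example-topic")
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("kafka-sink")
                .build();
    }
}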

Example 5 with SerializationSchema

Use of org.apache.flink.api.common.serialization.SerializationSchema in project flink by apache.

In class UpsertKafkaDynamicTableFactory, method createDynamicTableSink:

@Override
public DynamicTableSink createDynamicTableSink(Context context) {
    FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, autoCompleteSchemaRegistrySubject(context));
    final ReadableConfig tableOptions = helper.getOptions();
    EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat = helper.discoverEncodingFormat(SerializationFormatFactory.class, KEY_FORMAT);
    EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat = helper.discoverEncodingFormat(SerializationFormatFactory.class, VALUE_FORMAT);
    // Validate the option data type.
    helper.validateExcept(PROPERTIES_PREFIX);
    validateSink(tableOptions, keyEncodingFormat, valueEncodingFormat, context.getPrimaryKeyIndexes());
    Tuple2<int[], int[]> keyValueProjections = createKeyValueProjections(context.getCatalogTable());
    final String keyPrefix = tableOptions.getOptional(KEY_FIELDS_PREFIX).orElse(null);
    final Properties properties = getKafkaProperties(context.getCatalogTable().getOptions());
    Integer parallelism = tableOptions.get(SINK_PARALLELISM);
    int batchSize = tableOptions.get(SINK_BUFFER_FLUSH_MAX_ROWS);
    Duration batchInterval = tableOptions.get(SINK_BUFFER_FLUSH_INTERVAL);
    SinkBufferFlushMode flushMode = new SinkBufferFlushMode(batchSize, batchInterval.toMillis());
    // Uses hash partitioning when a key is set, otherwise round-robin.
    return new KafkaDynamicSink(
            context.getPhysicalRowDataType(), context.getPhysicalRowDataType(),
            keyEncodingFormat, new EncodingFormatWrapper(valueEncodingFormat),
            keyValueProjections.f0, keyValueProjections.f1, keyPrefix,
            tableOptions.get(TOPIC).get(0), properties, null,
            DeliveryGuarantee.AT_LEAST_ONCE, true, flushMode,
            parallelism, tableOptions.get(TRANSACTIONAL_ID_PREFIX));
}
Also used: FactoryUtil (org.apache.flink.table.factories.FactoryUtil), SerializationSchema (org.apache.flink.api.common.serialization.SerializationSchema), Duration (java.time.Duration), KafkaConnectorOptionsUtil.getKafkaProperties (org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptionsUtil.getKafkaProperties), Properties (java.util.Properties), ReadableConfig (org.apache.flink.configuration.ReadableConfig)
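
The option keys this factory reads ('key.format', 'value.format', 'sink.buffer-flush.max-rows', 'sink.buffer-flush.interval') correspond directly to upsert-kafka DDL. A minimal sketch via the Table API; topic, broker address, and schema are placeholders:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertKafkaSinkSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // upsert-kafka requires a primary key; buffering lets the sink collapse
        // multiple updates on the same key before writing.
        tEnv.executeSql(
                "CREATE TABLE upsert_sink (\n"
                        + "  id BIGINT,\n"
                        + "  name STRING,\n"
                        + "  PRIMARY KEY (id) NOT ENFORCED\n"
                        + ") WITH (\n"
                        + "  'connector' = 'upsert-kafka',\n"
                        + "  'topic' = 'example-topic',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'key.format' = 'json',\n"
                        + "  'value.format' = 'json',\n"
                        + "  'sink.buffer-flush.max-rows' = '1000',\n"
                        + "  'sink.buffer-flush.interval' = '1s')");
    }
}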

Aggregations

SerializationSchema (org.apache.flink.api.common.serialization.SerializationSchema): 6 usages
AvroRowDataSerializationSchema (org.apache.flink.formats.avro.AvroRowDataSerializationSchema): 3 usages
ConfluentRegistryAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroSerializationSchema): 3 usages
DebeziumAvroSerializationSchema (org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroSerializationSchema): 3 usages
DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink): 3 usages
RowData (org.apache.flink.table.data.RowData): 3 usages
EncodingFormatMock (org.apache.flink.table.factories.TestFormatFactory.EncodingFormatMock): 3 usages
Test (org.junit.jupiter.api.Test): 3 usages
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 3 usages
ReadableConfig (org.apache.flink.configuration.ReadableConfig): 2 usages
EncodingFormat (org.apache.flink.table.connector.format.EncodingFormat): 2 usages
SinkV2Provider (org.apache.flink.table.connector.sink.SinkV2Provider): 2 usages
TableFactoryHelper (org.apache.flink.table.factories.FactoryUtil.TableFactoryHelper): 2 usages
SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext): 2 usages
Duration (java.time.Duration): 1 usage
Properties (java.util.Properties): 1 usage
DeliveryGuarantee (org.apache.flink.connector.base.DeliveryGuarantee): 1 usage
KafkaConnectorOptionsUtil.getKafkaProperties (org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptionsUtil.getKafkaProperties): 1 usage
FactoryUtil (org.apache.flink.table.factories.FactoryUtil): 1 usage
DataType (org.apache.flink.table.types.DataType): 1 usage