
Example 6 with TestingReaderContext

Use of org.apache.flink.connector.testutils.source.reader.TestingReaderContext in project flink by apache.

The class SourceReaderBaseTest, method testExceptionInSplitReader.

@Test
void testExceptionInSplitReader() {
    assertThatThrownBy(() -> {
        final String errMsg = "Testing Exception";
        FutureCompletingBlockingQueue<RecordsWithSplitIds<int[]>> elementsQueue = new FutureCompletingBlockingQueue<>();
        // We have to handle split changes first, otherwise fetch will not be called.
        try (MockSourceReader reader = new MockSourceReader(
                elementsQueue,
                () -> new SplitReader<int[], MockSourceSplit>() {

            @Override
            public RecordsWithSplitIds<int[]> fetch() {
                throw new RuntimeException(errMsg);
            }

            @Override
            public void handleSplitsChanges(SplitsChange<MockSourceSplit> splitsChanges) {
            }

            @Override
            public void wakeUp() {
            }

            @Override
            public void close() {
            }
        }, getConfig(), new TestingReaderContext())) {
            ValidatingSourceOutput output = new ValidatingSourceOutput();
            reader.addSplits(Collections.singletonList(getSplit(0, NUM_RECORDS_PER_SPLIT, Boundedness.CONTINUOUS_UNBOUNDED)));
            reader.notifyNoMoreSplits();
            // This is not a real infinite loop; it is supposed to throw an exception after two polls.
            while (true) {
                InputStatus inputStatus = reader.pollNext(output);
                assertThat(inputStatus).isNotEqualTo(InputStatus.END_OF_INPUT);
                // Add a sleep to avoid tight loop.
                Thread.sleep(1);
            }
        }
    }).isInstanceOf(RuntimeException.class).hasMessage("One or more fetchers have encountered exception");
}
Also used : MockSourceReader(org.apache.flink.connector.base.source.reader.mocks.MockSourceReader) TestingRecordsWithSplitIds(org.apache.flink.connector.base.source.reader.mocks.TestingRecordsWithSplitIds) FutureCompletingBlockingQueue(org.apache.flink.connector.base.source.reader.synchronization.FutureCompletingBlockingQueue) InputStatus(org.apache.flink.core.io.InputStatus) TestingReaderContext(org.apache.flink.connector.testutils.source.reader.TestingReaderContext) MockSourceSplit(org.apache.flink.api.connector.source.mocks.MockSourceSplit) Test(org.junit.jupiter.api.Test)
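The anonymous SplitReader above exists only to make fetch() throw. When several tests need that behaviour, the boilerplate can be factored into a small reusable class. The sketch below is hypothetical (it is not part of Flink) and assumes nothing beyond the SplitReader contract already visible in the example.

import org.apache.flink.api.connector.source.SourceSplit;
import org.apache.flink.connector.base.source.reader.RecordsWithSplitIds;
import org.apache.flink.connector.base.source.reader.splitreader.SplitReader;
import org.apache.flink.connector.base.source.reader.splitreader.SplitsChange;

/** Hypothetical test helper: a SplitReader whose every fetch() throws. */
final class ThrowingSplitReader<E, SplitT extends SourceSplit> implements SplitReader<E, SplitT> {

    private final String errMsg;

    ThrowingSplitReader(String errMsg) {
        this.errMsg = errMsg;
    }

    @Override
    public RecordsWithSplitIds<E> fetch() {
        // Fail unconditionally, mirroring the anonymous class in the test above.
        throw new RuntimeException(errMsg);
    }

    @Override
    public void handleSplitsChanges(SplitsChange<SplitT> splitsChanges) {
        // No-op: split assignment is irrelevant to this failure scenario.
    }

    @Override
    public void wakeUp() {
        // No-op: there is no blocking fetch to interrupt.
    }

    @Override
    public void close() {
        // Nothing to release.
    }
}

With such a helper, the reader construction in the test shrinks to new MockSourceReader(elementsQueue, () -> new ThrowingSplitReader<int[], MockSourceSplit>(errMsg), getConfig(), new TestingReaderContext()).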

Example 7 with TestingReaderContext

Use of org.apache.flink.connector.testutils.source.reader.TestingReaderContext in project flink by apache.

The class KafkaPartitionSplitReaderTest, method createReader.

private KafkaPartitionSplitReader createReader(Properties additionalProperties, SourceReaderMetricGroup sourceReaderMetricGroup) {
    Properties props = new Properties();
    props.putAll(KafkaSourceTestEnv.getConsumerProperties(ByteArrayDeserializer.class));
    props.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
    if (!additionalProperties.isEmpty()) {
        props.putAll(additionalProperties);
    }
    KafkaSourceReaderMetrics kafkaSourceReaderMetrics = new KafkaSourceReaderMetrics(sourceReaderMetricGroup);
    return new KafkaPartitionSplitReader(props, new TestingReaderContext(new Configuration(), sourceReaderMetricGroup), kafkaSourceReaderMetrics);
}
Also used : Configuration(org.apache.flink.configuration.Configuration) KafkaSourceReaderMetrics(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics) TestingReaderContext(org.apache.flink.connector.testutils.source.reader.TestingReaderContext) Properties(java.util.Properties) ByteArrayDeserializer(org.apache.kafka.common.serialization.ByteArrayDeserializer)
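A call site for this factory might look like the sketch below. It is illustrative rather than code from the test: the extra property and method name are arbitrary, and UnregisteredMetricsGroup.createSourceReaderMetricGroup() (from org.apache.flink.metrics.groups) is assumed here as a convenient no-op metric group.

// Hedged usage sketch: build a reader with one extra Kafka property and a
// no-op metric group. Names and values are illustrative assumptions.
private KafkaPartitionSplitReader createReaderWithSmallPolls() {
    Properties extra = new Properties();
    extra.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");
    return createReader(extra, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
}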

Example 8 with TestingReaderContext

Use of org.apache.flink.connector.testutils.source.reader.TestingReaderContext in project flink by apache.

The class KafkaSourceReaderTest, method testDisableOffsetCommit.

@Test
void testDisableOffsetCommit() throws Exception {
    final Properties properties = new Properties();
    properties.setProperty(KafkaSourceOptions.COMMIT_OFFSETS_ON_CHECKPOINT.key(), "false");
    try (KafkaSourceReader<Integer> reader = (KafkaSourceReader<Integer>) createReader(Boundedness.CONTINUOUS_UNBOUNDED, new TestingReaderContext(), (ignore) -> {
    }, properties)) {
        reader.addSplits(getSplits(numSplits, NUM_RECORDS_PER_SPLIT, Boundedness.CONTINUOUS_UNBOUNDED));
        ValidatingSourceOutput output = new ValidatingSourceOutput();
        long checkpointId = 0;
        do {
            checkpointId++;
            reader.pollNext(output);
            // Create a checkpoint for each message consumption, but not complete them.
            reader.snapshotState(checkpointId);
            // Offsets to commit should be always empty because offset commit is disabled
            assertThat(reader.getOffsetsToCommit()).isEmpty();
        } while (output.count() < totalNumRecords);
    }
}
Also used : Arrays(java.util.Arrays) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) InputStatus(org.apache.flink.core.io.InputStatus) AdminClient(org.apache.kafka.clients.admin.AdminClient) AfterAll(org.junit.jupiter.api.AfterAll) CURRENT_OFFSET_METRIC_GAUGE(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics.CURRENT_OFFSET_METRIC_GAUGE) TOPIC_GROUP(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics.TOPIC_GROUP) TestingReaderContext(org.apache.flink.connector.testutils.source.reader.TestingReaderContext) BeforeAll(org.junit.jupiter.api.BeforeAll) KafkaSourceBuilder(org.apache.flink.connector.kafka.source.KafkaSourceBuilder) Duration(java.time.Duration) Map(java.util.Map) StringSerializer(org.apache.kafka.common.serialization.StringSerializer) PARTITION_GROUP(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics.PARTITION_GROUP) ReaderOutput(org.apache.flink.api.connector.source.ReaderOutput) SourceReaderContext(org.apache.flink.api.connector.source.SourceReaderContext) TopicPartition(org.apache.kafka.common.TopicPartition) Collection(java.util.Collection) SourceReaderTestBase(org.apache.flink.connector.testutils.source.reader.SourceReaderTestBase) KAFKA_CONSUMER_METRIC_GROUP(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics.KAFKA_CONSUMER_METRIC_GROUP) Set(java.util.Set) CommonTestUtils.waitUtil(org.apache.flink.core.testutils.CommonTestUtils.waitUtil) TestingReaderOutput(org.apache.flink.connector.testutils.source.reader.TestingReaderOutput) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) COMMITS_SUCCEEDED_METRIC_COUNTER(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics.COMMITS_SUCCEEDED_METRIC_COUNTER) INITIAL_OFFSET(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics.INITIAL_OFFSET) Test(org.junit.jupiter.api.Test) MetricGroup(org.apache.flink.metrics.MetricGroup) InternalSourceReaderMetricGroup(org.apache.flink.runtime.metrics.groups.InternalSourceReaderMetricGroup) List(java.util.List) MatcherAssert(org.hamcrest.MatcherAssert) MetricListener(org.apache.flink.metrics.testutils.MetricListener) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) Optional(java.util.Optional) KAFKA_SOURCE_READER_METRIC_GROUP(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics.KAFKA_SOURCE_READER_METRIC_GROUP) Boundedness(org.apache.flink.api.connector.source.Boundedness) Counter(org.apache.flink.metrics.Counter) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) OffsetsInitializer(org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer) Supplier(java.util.function.Supplier) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) Gauge(org.apache.flink.metrics.Gauge) IntegerSerializer(org.apache.kafka.common.serialization.IntegerSerializer) KafkaSourceOptions(org.apache.flink.connector.kafka.source.KafkaSourceOptions) Properties(java.util.Properties) SourceReader(org.apache.flink.api.connector.source.SourceReader) Configuration(org.apache.flink.configuration.Configuration) NewTopic(org.apache.kafka.clients.admin.NewTopic) Matchers(org.hamcrest.Matchers) NUM_PARTITIONS(org.apache.flink.connector.kafka.testutils.KafkaSourceTestEnv.NUM_PARTITIONS) KafkaSourceTestEnv(org.apache.flink.connector.kafka.testutils.KafkaSourceTestEnv) KafkaRecordDeserializationSchema(org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema) Consumer(java.util.function.Consumer) KafkaSource(org.apache.flink.connector.kafka.source.KafkaSource) KafkaSourceTestUtils(org.apache.flink.connector.kafka.source.KafkaSourceTestUtils) IntegerDeserializer(org.apache.kafka.common.serialization.IntegerDeserializer) KafkaPartitionSplit(org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit) Collections(java.util.Collections) COMMITTED_OFFSET_METRIC_GAUGE(org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics.COMMITTED_OFFSET_METRIC_GAUGE)
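For contrast: with COMMIT_OFFSETS_ON_CHECKPOINT left at its default of true, every snapshotState call records an entry in the reader's offsets-to-commit map. The sketch below reuses the harness of the test above, but it is an illustration of the expected behaviour, not code from the Flink repository.

@Test
void testOffsetsToCommitPopulatedByDefaultSketch() throws Exception {
    // No properties overridden, so commit.offsets.on.checkpoint stays true.
    try (KafkaSourceReader<Integer> reader = (KafkaSourceReader<Integer>) createReader(Boundedness.CONTINUOUS_UNBOUNDED, new TestingReaderContext(), (ignore) -> {
    }, new Properties())) {
        reader.addSplits(getSplits(numSplits, NUM_RECORDS_PER_SPLIT, Boundedness.CONTINUOUS_UNBOUNDED));
        reader.pollNext(new ValidatingSourceOutput());
        reader.snapshotState(1L);
        // With offset commit enabled, checkpoint 1 now has an entry to commit.
        assertThat(reader.getOffsetsToCommit()).containsKey(1L);
    }
}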

Example 9 with TestingReaderContext

Use of org.apache.flink.connector.testutils.source.reader.TestingReaderContext in project flink by apache.

The class KafkaSourceReaderTest, method testAssigningEmptySplits.

@Test
void testAssigningEmptySplits() throws Exception {
    // Normal split with NUM_RECORDS_PER_SPLIT records
    final KafkaPartitionSplit normalSplit = new KafkaPartitionSplit(new TopicPartition(TOPIC, 0), 0, KafkaPartitionSplit.LATEST_OFFSET);
    // Empty split with no record
    final KafkaPartitionSplit emptySplit = new KafkaPartitionSplit(new TopicPartition(TOPIC, 1), NUM_RECORDS_PER_SPLIT, NUM_RECORDS_PER_SPLIT);
    // Split finished hook for listening finished splits
    final Set<String> finishedSplits = new HashSet<>();
    final Consumer<Collection<String>> splitFinishedHook = finishedSplits::addAll;
    try (final KafkaSourceReader<Integer> reader = (KafkaSourceReader<Integer>) createReader(Boundedness.BOUNDED, "KafkaSourceReaderTestGroup", new TestingReaderContext(), splitFinishedHook)) {
        reader.addSplits(Arrays.asList(normalSplit, emptySplit));
        pollUntil(reader, new TestingReaderOutput<>(), () -> reader.getNumAliveFetchers() == 0, "The split fetcher did not exit before timeout.");
        MatcherAssert.assertThat(finishedSplits, Matchers.containsInAnyOrder(KafkaPartitionSplit.toSplitId(normalSplit.getTopicPartition()), KafkaPartitionSplit.toSplitId(emptySplit.getTopicPartition())));
    }
}
Also used : KafkaPartitionSplit(org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit) TopicPartition(org.apache.kafka.common.TopicPartition) Collection(java.util.Collection) TestingReaderContext(org.apache.flink.connector.testutils.source.reader.TestingReaderContext) HashSet(java.util.HashSet) Test(org.junit.jupiter.api.Test)
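The pollUntil helper is defined in the test class itself; its shape is inferred here from the call site. A plausible reconstruction on top of CommonTestUtils.waitUtil (which the class imports, see the list in Example 8), offered as an approximation rather than the exact Flink source:

// Approximate reconstruction of pollUntil, inferred from the call site above.
private void pollUntil(KafkaSourceReader<Integer> reader, ReaderOutput<Integer> output, Supplier<Boolean> condition, String errorMessage) throws Exception {
    waitUtil(() -> {
        try {
            // Keep polling so the fetcher threads make progress.
            reader.pollNext(output);
        } catch (Exception e) {
            throw new RuntimeException("Caught unexpected exception while polling", e);
        }
        return condition.get();
    }, Duration.ofSeconds(10), errorMessage);
}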

Example 10 with TestingReaderContext

Use of org.apache.flink.connector.testutils.source.reader.TestingReaderContext in project flink by apache.

The class PulsarSourceReaderTestBase, method sourceReader.

private PulsarSourceReaderBase<Integer> sourceReader(boolean autoAcknowledgementEnabled, SubscriptionType subscriptionType) {
    Configuration configuration = operator().config();
    configuration.set(PULSAR_MAX_FETCH_RECORDS, 1);
    configuration.set(PULSAR_MAX_FETCH_TIME, 1000L);
    configuration.set(PULSAR_SUBSCRIPTION_NAME, randomAlphabetic(10));
    configuration.set(PULSAR_SUBSCRIPTION_TYPE, subscriptionType);
    if (autoAcknowledgementEnabled || configuration.get(PULSAR_SUBSCRIPTION_TYPE) == SubscriptionType.Shared) {
        configuration.set(PULSAR_ENABLE_AUTO_ACKNOWLEDGE_MESSAGE, true);
    }
    PulsarDeserializationSchema<Integer> deserializationSchema = pulsarSchema(Schema.INT32);
    SourceReaderContext context = new TestingReaderContext();
    try {
        deserializationSchema.open(new PulsarDeserializationSchemaInitializationContext(context), mock(SourceConfiguration.class));
    } catch (Exception e) {
        fail("Error while opening deserializationSchema");
    }
    SourceConfiguration sourceConfiguration = new SourceConfiguration(configuration);
    return (PulsarSourceReaderBase<Integer>) PulsarSourceReaderFactory.create(context, deserializationSchema, sourceConfiguration);
}
Also used : SourceConfiguration(org.apache.flink.connector.pulsar.source.config.SourceConfiguration) Configuration(org.apache.flink.configuration.Configuration) PulsarDeserializationSchemaInitializationContext(org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchemaInitializationContext) SourceConfiguration(org.apache.flink.connector.pulsar.source.config.SourceConfiguration) TestingReaderContext(org.apache.flink.connector.testutils.source.reader.TestingReaderContext) SourceReaderContext(org.apache.flink.api.connector.source.SourceReaderContext) PulsarAdminException(org.apache.pulsar.client.admin.PulsarAdminException) ParameterResolutionException(org.junit.jupiter.api.extension.ParameterResolutionException)
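Across these examples, TestingReaderContext is constructed in two ways: the no-arg form (Examples 6, 8, 9 and 10) and the (Configuration, SourceReaderMetricGroup) form (Example 7). A minimal side-by-side sketch; the metric group shown is an assumed stand-in from Flink's metrics utilities, not something the examples themselves use.

import org.apache.flink.api.connector.source.SourceReaderContext;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.connector.testutils.source.reader.TestingReaderContext;
import org.apache.flink.metrics.groups.UnregisteredMetricsGroup;

class TestingReaderContextSketch {
    static void demo() {
        // No-arg form, as used by the SourceReaderBase and KafkaSourceReader tests.
        SourceReaderContext plain = new TestingReaderContext();

        // Explicit form, as used by the KafkaPartitionSplitReader test: pass
        // the Configuration and the SourceReaderMetricGroup yourself.
        SourceReaderContext withMetrics = new TestingReaderContext(
                new Configuration(), UnregisteredMetricsGroup.createSourceReaderMetricGroup());
    }
}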

Aggregations

TestingReaderContext (org.apache.flink.connector.testutils.source.reader.TestingReaderContext) 15
MockSourceSplit (org.apache.flink.api.connector.source.mocks.MockSourceSplit) 7
TestingRecordsWithSplitIds (org.apache.flink.connector.base.source.reader.mocks.TestingRecordsWithSplitIds) 6
FutureCompletingBlockingQueue (org.apache.flink.connector.base.source.reader.synchronization.FutureCompletingBlockingQueue) 6
Test (org.junit.jupiter.api.Test) 6
MockSourceReader (org.apache.flink.connector.base.source.reader.mocks.MockSourceReader) 5
InputStatus (org.apache.flink.core.io.InputStatus) 5
Test (org.junit.Test) 5
SourceReaderContext (org.apache.flink.api.connector.source.SourceReaderContext) 4
Configuration (org.apache.flink.configuration.Configuration) 4
MockSplitReader (org.apache.flink.connector.base.source.reader.mocks.MockSplitReader) 4
TestingReaderOutput (org.apache.flink.connector.testutils.source.reader.TestingReaderOutput) 4
MockBaseSource (org.apache.flink.connector.base.source.reader.mocks.MockBaseSource) 3
Collection (java.util.Collection) 2
HashSet (java.util.HashSet) 2
Map (java.util.Map) 2
Properties (java.util.Properties) 2
FileSourceSplit (org.apache.flink.connector.file.src.FileSourceSplit) 2
KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit) 2
TopicPartition (org.apache.kafka.common.TopicPartition) 2