
Example 6 with KafkaPartitionSplit

Use of org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit in project flink by apache.

From the class KafkaPartitionSplitReaderTest, method testUsingCommittedOffsetsWithEarliestOrLatestOffsetResetStrategy.

@ParameterizedTest
@CsvSource({ "earliest, 0", "latest, 10" })
public void testUsingCommittedOffsetsWithEarliestOrLatestOffsetResetStrategy(String offsetResetStrategy, Long expectedOffset) {
    final Properties props = new Properties();
    props.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offsetResetStrategy);
    props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset");
    KafkaPartitionSplitReader reader = createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
    // Add committed offset split
    final TopicPartition partition = new TopicPartition(TOPIC1, 0);
    reader.handleSplitsChanges(new SplitsAddition<>(Collections.singletonList(new KafkaPartitionSplit(partition, KafkaPartitionSplit.COMMITTED_OFFSET))));
    // Verify that the current offset of the consumer is the expected offset
    assertEquals(expectedOffset, reader.consumer().position(partition));
}
Also used: KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit), TopicPartition (org.apache.kafka.common.TopicPartition), Properties (java.util.Properties), CsvSource (org.junit.jupiter.params.provider.CsvSource), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)
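
The special starting offsets used above are sentinel constants on KafkaPartitionSplit. A minimal sketch of constructing splits with the different strategies; the topic name is a placeholder, COMMITTED_OFFSET and EARLIEST_OFFSET appear in the surrounding tests, and LATEST_OFFSET is assumed to be the analogous sentinel for starting from the log end:

import org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit;
import org.apache.kafka.common.TopicPartition;

TopicPartition partition = new TopicPartition("my-topic", 0);

// Start from the group's committed offset; if none exists, the consumer falls
// back to auto.offset.reset, which the parameterized test above exercises.
KafkaPartitionSplit fromCommitted =
        new KafkaPartitionSplit(partition, KafkaPartitionSplit.COMMITTED_OFFSET);

// Start explicitly from the earliest available offset.
KafkaPartitionSplit fromEarliest =
        new KafkaPartitionSplit(partition, KafkaPartitionSplit.EARLIEST_OFFSET);

// Bounded split: consume offsets 0 (inclusive) up to 10 (exclusive), then finish.
KafkaPartitionSplit bounded = new KafkaPartitionSplit(partition, 0, 10);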

Example 7 with KafkaPartitionSplit

Use of org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit in project flink by apache.

From the class KafkaSourceReaderTest, method testCommitOffsetsWithoutAliveFetchers.

@Test
void testCommitOffsetsWithoutAliveFetchers() throws Exception {
    final String groupId = "testCommitOffsetsWithoutAliveFetchers";
    try (KafkaSourceReader<Integer> reader = (KafkaSourceReader<Integer>) createReader(Boundedness.CONTINUOUS_UNBOUNDED, groupId)) {
        KafkaPartitionSplit split = new KafkaPartitionSplit(new TopicPartition(TOPIC, 0), 0, NUM_RECORDS_PER_SPLIT);
        reader.addSplits(Collections.singletonList(split));
        reader.notifyNoMoreSplits();
        ReaderOutput<Integer> output = new TestingReaderOutput<>();
        InputStatus status;
        do {
            status = reader.pollNext(output);
        } while (status != InputStatus.NOTHING_AVAILABLE);
        pollUntil(reader, output, () -> reader.getNumAliveFetchers() == 0, "The split fetcher did not exit before timeout.");
        reader.snapshotState(100L);
        reader.notifyCheckpointComplete(100L);
        // Due to a bug in KafkaConsumer, when the consumer closes, the offset commit callback
        // won't be fired, so the offsetsToCommit map won't be cleaned. To make the test
        // stable, we add a split whose starting offset is the log end offset, so the
        // split fetcher won't become idle and exit after commitOffsetAsync is invoked from
        // notifyCheckpointComplete().
        reader.addSplits(Collections.singletonList(new KafkaPartitionSplit(new TopicPartition(TOPIC, 0), NUM_RECORDS_PER_SPLIT)));
        pollUntil(reader, output, () -> reader.getOffsetsToCommit().isEmpty(), "The offset commit did not finish before timeout.");
    }
    // Verify the committed offsets.
    try (AdminClient adminClient = KafkaSourceTestEnv.getAdminClient()) {
        Map<TopicPartition, OffsetAndMetadata> committedOffsets = adminClient.listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();
        assertThat(committedOffsets).hasSize(1);
        assertThat(committedOffsets.values()).extracting(OffsetAndMetadata::offset).allMatch(offset -> offset == NUM_RECORDS_PER_SPLIT);
    }
}
Also used: TestingReaderOutput (org.apache.flink.connector.testutils.source.reader.TestingReaderOutput), KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), InputStatus (org.apache.flink.core.io.InputStatus), AdminClient (org.apache.kafka.clients.admin.AdminClient), Test (org.junit.jupiter.api.Test)
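
The pollUntil(...) helper used in this test is not shown here. A hypothetical reconstruction, assuming a simple poll-and-check loop with a 10-second timeout (the signature and timeout are guesses, not the actual Flink test utility):

import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;
import org.apache.flink.api.connector.source.ReaderOutput;
import org.apache.flink.connector.kafka.source.reader.KafkaSourceReader;

// Keep driving the reader until the condition holds or the timeout elapses.
// Calling pollNext() is what lets the reader process offset-commit callbacks.
private static <T> void pollUntil(
        KafkaSourceReader<T> reader,
        ReaderOutput<T> output,
        Supplier<Boolean> condition,
        String errorMessage)
        throws Exception {
    long deadline = System.currentTimeMillis() + 10_000L;
    while (!condition.get()) {
        if (System.currentTimeMillis() > deadline) {
            throw new TimeoutException(errorMessage);
        }
        reader.pollNext(output);
        Thread.sleep(10);
    }
}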

Example 8 with KafkaPartitionSplit

Use of org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit in project flink by apache.

From the class KafkaSourceReaderTest, method testNotCommitOffsetsForUninitializedSplits.

@Test
void testNotCommitOffsetsForUninitializedSplits() throws Exception {
    final long checkpointId = 1234L;
    try (KafkaSourceReader<Integer> reader = (KafkaSourceReader<Integer>) createReader()) {
        KafkaPartitionSplit split = new KafkaPartitionSplit(new TopicPartition(TOPIC, 0), KafkaPartitionSplit.EARLIEST_OFFSET);
        reader.addSplits(Collections.singletonList(split));
        reader.snapshotState(checkpointId);
        assertThat(reader.getOffsetsToCommit()).hasSize(1);
        assertThat(reader.getOffsetsToCommit().get(checkpointId)).isEmpty();
    }
}
Also used: KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit), TopicPartition (org.apache.kafka.common.TopicPartition), Test (org.junit.jupiter.api.Test)
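
The reader records an entry for checkpoint 1234 but has nothing to commit, because EARLIEST_OFFSET is a sentinel rather than a concrete position. A minimal sketch of that filtering idea, not Flink's actual implementation (the method name and the splitStates argument are illustrative):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Splits still carrying a negative sentinel offset (e.g. EARLIEST_OFFSET) have
// not been initialized by the fetcher yet and are skipped, which is why the
// test sees an empty offset map for the checkpoint.
static Map<TopicPartition, OffsetAndMetadata> offsetsToCommit(
        List<KafkaPartitionSplit> splitStates) {
    Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
    for (KafkaPartitionSplit split : splitStates) {
        long offset = split.getStartingOffset();
        if (offset >= 0) {
            toCommit.put(split.getTopicPartition(), new OffsetAndMetadata(offset));
        }
    }
    return toCommit;
}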

Example 9 with KafkaPartitionSplit

Use of org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit in project flink by apache.

From the class KafkaPartitionSplitReader, method maybeLogSplitChangesHandlingResult.

private void maybeLogSplitChangesHandlingResult(SplitsChange<KafkaPartitionSplit> splitsChange) {
    if (LOG.isDebugEnabled()) {
        StringJoiner splitsInfo = new StringJoiner(",");
        for (KafkaPartitionSplit split : splitsChange.splits()) {
            long startingOffset = consumer.position(split.getTopicPartition());
            long stoppingOffset = getStoppingOffset(split.getTopicPartition());
            splitsInfo.add(String.format("[%s, start: %d, stop: %d]", split.getTopicPartition(), startingOffset, stoppingOffset));
        }
        LOG.debug("SplitsChange handling result: {}", splitsInfo);
    }
}
Also used: KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit), StringJoiner (java.util.StringJoiner)
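
For context, a hedged usage sketch of the call that reaches this logging path; reader is assumed to be a KafkaPartitionSplitReader created as in Example 6, and the topic name and offsets are placeholders:

import java.util.Collections;
import org.apache.flink.connector.base.source.reader.splitreader.SplitsAddition;
import org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit;
import org.apache.kafka.common.TopicPartition;

// Adding a split makes the reader assign the partition, resolve its starting and
// stopping offsets, and (with DEBUG enabled) log the handling result.
KafkaPartitionSplit split =
        new KafkaPartitionSplit(new TopicPartition("my-topic", 0), 0, 100);
reader.handleSplitsChanges(new SplitsAddition<>(Collections.singletonList(split)));
// Expected debug line, following the format built above, roughly:
//   SplitsChange handling result: [my-topic-0, start: 0, stop: 100]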

Example 10 with KafkaPartitionSplit

Use of org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit in project flink by apache.

From the class KafkaPartitionSplitReader, method removeEmptySplits.

private void removeEmptySplits() {
    List<TopicPartition> emptyPartitions = new ArrayList<>();
    // A partition is empty when its current position has already reached its stopping offset
    for (TopicPartition tp : consumer.assignment()) {
        if (consumer.position(tp) >= getStoppingOffset(tp)) {
            emptyPartitions.add(tp);
        }
    }
    if (!emptyPartitions.isEmpty()) {
        LOG.debug("These assigning splits are empty and will be marked as finished in later fetch: {}", emptyPartitions);
        // Add empty partitions to empty split set for later cleanup in fetch()
        emptySplits.addAll(emptyPartitions.stream().map(KafkaPartitionSplit::toSplitId).collect(Collectors.toSet()));
        // Un-assign partitions from Kafka consumer
        unassignPartitions(emptyPartitions);
    }
}
Also used: KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit), TopicPartition (org.apache.kafka.common.TopicPartition), ArrayList (java.util.ArrayList)
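
The split IDs added to emptySplits come from KafkaPartitionSplit.toSplitId, which derives a stable ID from the TopicPartition. A small sketch; the topic name is a placeholder:

import org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit;
import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("my-topic", 3);
// The split ID mirrors the partition's identity, e.g. "my-topic-3".
String splitId = KafkaPartitionSplit.toSplitId(tp);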

Aggregations

KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit): 25 uses
TopicPartition (org.apache.kafka.common.TopicPartition): 20 uses
Properties (java.util.Properties): 10 uses
Test (org.junit.jupiter.api.Test): 9 uses
HashSet (java.util.HashSet): 7 uses
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 6 uses
HashMap (java.util.HashMap): 5 uses
Map (java.util.Map): 5 uses
AdminClient (org.apache.kafka.clients.admin.AdminClient): 5 uses
ArrayList (java.util.ArrayList): 4 uses
Collection (java.util.Collection): 4 uses
Test (org.junit.Test): 4 uses
Collections (java.util.Collections): 3 uses
List (java.util.List): 3 uses
Set (java.util.Set): 3 uses
Consumer (java.util.function.Consumer): 3 uses
MockSplitEnumeratorContext (org.apache.flink.api.connector.source.mocks.MockSplitEnumeratorContext): 3 uses
OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata): 3 uses
Arrays (java.util.Arrays): 2 uses
Optional (java.util.Optional): 2 uses