Search in sources:

Example 1 with KafkaSourceTestEnv

Use of org.apache.flink.connector.kafka.testutils.KafkaSourceTestEnv in project flink by apache.

From the class KafkaPartitionSplitReaderTest, method testPendingRecordsGauge:

@ParameterizedTest
@EmptySource
@ValueSource(strings = { "_underscore.period-minus" })
public void testPendingRecordsGauge(String topicSuffix) throws Throwable {
    final String topic1Name = TOPIC1 + topicSuffix;
    final String topic2Name = TOPIC2 + topicSuffix;
    if (!topicSuffix.isEmpty()) {
        KafkaSourceTestEnv.setupTopic(topic1Name, true, true, KafkaSourceTestEnv::getRecordsForTopic);
        KafkaSourceTestEnv.setupTopic(topic2Name, true, true, KafkaSourceTestEnv::getRecordsForTopic);
    }
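    // MetricListener exposes an in-memory metric group and lets the test
    // look up metrics registered against it by name.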
    MetricListener metricListener = new MetricListener();
    final Properties props = new Properties();
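    // Cap each poll at a single record so every fetch() consumes exactly one
    // record, letting the pendingRecords gauge be checked after each step.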
    props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
    KafkaPartitionSplitReader reader = createReader(props, InternalSourceReaderMetricGroup.mock(metricListener.getMetricGroup()));
    // Add a split
    reader.handleSplitsChanges(new SplitsAddition<>(Collections.singletonList(new KafkaPartitionSplit(new TopicPartition(topic1Name, 0), 0L))));
    // The pendingRecords gauge should not have been registered yet, since registration happens lazily on the first fetch
    assertFalse(metricListener.getGauge(MetricNames.PENDING_RECORDS).isPresent());
    // Trigger first fetch
    reader.fetch();
    final Optional<Gauge<Long>> pendingRecords = metricListener.getGauge(MetricNames.PENDING_RECORDS);
    assertTrue(pendingRecords.isPresent());
    // Validate pendingRecords
    assertNotNull(pendingRecords);
    assertEquals(NUM_RECORDS_PER_PARTITION - 1, (long) pendingRecords.get().getValue());
    for (int i = 1; i < NUM_RECORDS_PER_PARTITION; i++) {
        reader.fetch();
        assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());
    }
    // Add another split
    reader.handleSplitsChanges(new SplitsAddition<>(Collections.singletonList(new KafkaPartitionSplit(new TopicPartition(topic2Name, 0), 0L))));
    // Validate pendingRecords for the newly added split: the first partition is already drained, so the gauge now reflects the second partition's backlog
    for (int i = 0; i < NUM_RECORDS_PER_PARTITION; i++) {
        reader.fetch();
        assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());
    }
}
Also used:
KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit)
TopicPartition (org.apache.kafka.common.TopicPartition)
KafkaSourceTestEnv (org.apache.flink.connector.kafka.testutils.KafkaSourceTestEnv)
Properties (java.util.Properties)
MetricListener (org.apache.flink.metrics.testutils.MetricListener)
Gauge (org.apache.flink.metrics.Gauge)
EmptySource (org.junit.jupiter.params.provider.EmptySource)
ValueSource (org.junit.jupiter.params.provider.ValueSource)
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)
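
The gauge assertions above depend on two things: the pendingRecords metric is only registered lazily on the first fetch, and MetricListener can look up whatever gets registered against its metric group by name. The standalone sketch below (not taken from the Flink sources; the class name PendingRecordsGaugeSketch and the gauge identifier "pendingRecords" are illustrative) shows that pattern in isolation, assuming the Flink metrics API (MetricGroup.gauge, Gauge.getValue) and the MetricListener test utility behave as they are used in the test above.

import java.util.Optional;

import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.testutils.MetricListener;

public class PendingRecordsGaugeSketch {

    public static void main(String[] args) {
        MetricListener metricListener = new MetricListener();
        MetricGroup metricGroup = metricListener.getMetricGroup();

        // Nothing has registered the gauge yet, so the lookup is empty
        // (this mirrors the assertFalse(...) before the first fetch).
        Optional<Gauge<Long>> beforeRegistration = metricListener.getGauge("pendingRecords");
        System.out.println("present before registration: " + beforeRegistration.isPresent()); // false

        // The split reader registers its gauge lazily on the first fetch;
        // here we register one by hand against the listener's metric group.
        long[] backlog = {10L};
        Gauge<Long> pendingRecordsGauge = () -> backlog[0];
        metricGroup.gauge("pendingRecords", pendingRecordsGauge);

        // After registration the same lookup succeeds and reports the backlog.
        Optional<Gauge<Long>> afterRegistration = metricListener.getGauge("pendingRecords");
        System.out.println("present after registration: " + afterRegistration.isPresent()); // true
        System.out.println("pending: " + afterRegistration.get().getValue()); // 10

        // Consuming a record shrinks the backlog the gauge reports,
        // which is what the fetch loops above assert step by step.
        backlog[0]--;
        System.out.println("pending after one record: " + afterRegistration.get().getValue()); // 9
    }
}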

Aggregations

Properties (java.util.Properties): 1
KafkaPartitionSplit (org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit): 1
KafkaSourceTestEnv (org.apache.flink.connector.kafka.testutils.KafkaSourceTestEnv): 1
Gauge (org.apache.flink.metrics.Gauge): 1
MetricListener (org.apache.flink.metrics.testutils.MetricListener): 1
TopicPartition (org.apache.kafka.common.TopicPartition): 1
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 1
EmptySource (org.junit.jupiter.params.provider.EmptySource): 1
ValueSource (org.junit.jupiter.params.provider.ValueSource): 1