Example 1 with PollingRecordPublisherMetricsReporter

Use of org.apache.flink.streaming.connectors.kinesis.metrics.PollingRecordPublisherMetricsReporter in project flink by apache.

From the class PollingRecordPublisherTest, method testRunEmitsRunLoopTimeNanos:

@Test
public void testRunEmitsRunLoopTimeNanos() throws Exception {
    // Spy on the metrics reporter so the run loop time it receives can be verified.
    PollingRecordPublisherMetricsReporter metricsReporter =
            spy(new PollingRecordPublisherMetricsReporter(createFakeShardConsumerMetricGroup()));
    // Fake Kinesis behaviour that serves records across several GetRecords calls.
    KinesisProxyInterface fakeKinesis = totalNumOfRecordsAfterNumOfGetRecordsCalls(5, 5, 100);
    PollingRecordPublisher recordPublisher = createPollingRecordPublisher(fakeKinesis, metricsReporter);
    recordPublisher.run(new TestConsumer());
    // Expect that the run loop took at least FETCH_INTERVAL_MILLIS in nanos
    verify(metricsReporter).setRunLoopTimeNanos(geq(FETCH_INTERVAL_MILLIS * 1_000_000));
}
Also used: KinesisProxyInterface (org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxyInterface), PollingRecordPublisherMetricsReporter (org.apache.flink.streaming.connectors.kinesis.metrics.PollingRecordPublisherMetricsReporter), TestConsumer (org.apache.flink.streaming.connectors.kinesis.testutils.TestUtils.TestConsumer), Test (org.junit.Test)
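
The verification above relies on the publisher calling setRunLoopTimeNanos(long) on the reporter once per run loop iteration. A minimal sketch of that reporting pattern, assuming only the API visible in these examples; the metricGroup argument and the fetchOnce() helper are hypothetical stand-ins for the surrounding publisher code:

void reportRunLoopTime(MetricGroup metricGroup) {
    PollingRecordPublisherMetricsReporter reporter =
            new PollingRecordPublisherMetricsReporter(metricGroup);
    long start = System.nanoTime();
    // Hypothetical stand-in for one GetRecords call plus record emission.
    fetchOnce();
    // Report how long this iteration of the run loop took, in nanoseconds.
    reporter.setRunLoopTimeNanos(System.nanoTime() - start);
}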

Example 2 with PollingRecordPublisherMetricsReporter

Use of org.apache.flink.streaming.connectors.kinesis.metrics.PollingRecordPublisherMetricsReporter in project flink by apache.

From the class PollingRecordPublisherFactory, method create:

/**
 * Creates a {@link PollingRecordPublisher}. An {@link AdaptivePollingRecordPublisher} is
 * created instead when adaptive reads are enabled in the configuration.
 *
 * @param startingPosition the position in the shard to start consuming records from
 * @param consumerConfig the consumer configuration properties
 * @param metricGroup the metric group to report metrics to
 * @param streamShardHandle the shard this consumer is subscribed to
 * @return a {@link PollingRecordPublisher}
 */
@Override
public PollingRecordPublisher create(
        final StartingPosition startingPosition, final Properties consumerConfig,
        final MetricGroup metricGroup, final StreamShardHandle streamShardHandle)
        throws InterruptedException {
    Preconditions.checkNotNull(startingPosition);
    Preconditions.checkNotNull(consumerConfig);
    Preconditions.checkNotNull(metricGroup);
    Preconditions.checkNotNull(streamShardHandle);
    final PollingRecordPublisherConfiguration configuration = new PollingRecordPublisherConfiguration(consumerConfig);
    final PollingRecordPublisherMetricsReporter metricsReporter = new PollingRecordPublisherMetricsReporter(metricGroup);
    final KinesisProxyInterface kinesisProxy = kinesisProxyFactory.create(consumerConfig);
    if (configuration.isAdaptiveReads()) {
        return new AdaptivePollingRecordPublisher(
                startingPosition, streamShardHandle, metricsReporter, kinesisProxy,
                configuration.getMaxNumberOfRecordsPerFetch(), configuration.getFetchIntervalMillis());
    } else {
        return new PollingRecordPublisher(
                startingPosition, streamShardHandle, metricsReporter, kinesisProxy,
                configuration.getMaxNumberOfRecordsPerFetch(), configuration.getFetchIntervalMillis());
    }
}
Also used: KinesisProxyInterface (org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxyInterface), PollingRecordPublisherMetricsReporter (org.apache.flink.streaming.connectors.kinesis.metrics.PollingRecordPublisherMetricsReporter)
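
The branch on configuration.isAdaptiveReads() is what the Javadoc refers to: enabling adaptive reads in the consumer properties makes the factory return an AdaptivePollingRecordPublisher instead of the plain polling variant. A minimal sketch of driving that branch, assuming the factory, startingPosition, metricGroup, and shardHandle instances already exist in the caller, and assuming ConsumerConfigConstants.SHARD_USE_ADAPTIVE_READS is the property key read by PollingRecordPublisherConfiguration#isAdaptiveReads():

// Assumed: SHARD_USE_ADAPTIVE_READS is the key behind isAdaptiveReads();
// factory, startingPosition, metricGroup and shardHandle come from the caller.
Properties consumerConfig = new Properties();
consumerConfig.setProperty(ConsumerConfigConstants.SHARD_USE_ADAPTIVE_READS, "true");

PollingRecordPublisher publisher =
        factory.create(startingPosition, consumerConfig, metricGroup, shardHandle);
// With adaptive reads enabled, publisher is an AdaptivePollingRecordPublisher.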

Aggregations

PollingRecordPublisherMetricsReporter (org.apache.flink.streaming.connectors.kinesis.metrics.PollingRecordPublisherMetricsReporter): 2 uses
KinesisProxyInterface (org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxyInterface): 2 uses
TestConsumer (org.apache.flink.streaming.connectors.kinesis.testutils.TestUtils.TestConsumer): 1 use
Test (org.junit.Test): 1 use