
Example 1 with RecordPublisherRunResult

Use of org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher.RecordPublisherRunResult in project flink by apache.

From class FanOutRecordPublisherTest, method testInterruptedPublisherReturnsCancelled.

@Test
public void testInterruptedPublisherReturnsCancelled() throws Exception {
    KinesisProxyV2Interface kinesis = FakeKinesisFanOutBehavioursFactory.errorDuringSubscription(new SdkInterruptedException(null));
    RecordPublisher publisher = createRecordPublisher(kinesis, StartingPosition.continueFromSequenceNumber(SEQUENCE_NUMBER));
    RecordPublisherRunResult actual = publisher.run(new TestConsumer());
    assertEquals(CANCELLED, actual);
}
Also used: RecordPublisher (org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher), RecordPublisherRunResult (org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher.RecordPublisherRunResult), SdkInterruptedException (com.amazonaws.http.timers.client.SdkInterruptedException), KinesisProxyV2Interface (org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxyV2Interface), TestConsumer (org.apache.flink.streaming.connectors.kinesis.testutils.TestUtils.TestConsumer), Test (org.junit.Test)
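
For reference, the CANCELLED value asserted here, and the COMPLETE and INCOMPLETE values used in the other examples, come from the RecordPublisher contract. The following is a minimal sketch of that contract, assuming a consumer callback that accepts a RecordBatch and returns a SequenceNumber; it is illustrative, not copied from the Flink sources.

// Sketch of the publisher contract these examples exercise. Names follow the
// org.apache.flink.streaming.connectors.kinesis.internals.publisher package,
// but the exact consumer callback shape is an assumption.
public interface RecordPublisher {

    /** Runs the publisher, handing record batches to the consumer, and reports how the run ended. */
    RecordPublisherRunResult run(RecordBatchConsumer recordBatchConsumer) throws InterruptedException;

    /** Callback invoked per batch; returns the sequence number to resume from. */
    interface RecordBatchConsumer {
        SequenceNumber accept(RecordBatch recordBatch);
    }

    /** The three outcomes asserted across these examples. */
    enum RecordPublisherRunResult {
        /** The shard has been fully consumed; no further records will arrive. */
        COMPLETE,
        /** More records remain on the shard; run the publisher again. */
        INCOMPLETE,
        /** The publisher was interrupted or cancelled before finishing. */
        CANCELLED
    }
}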

Example 2 with RecordPublisherRunResult

Use of org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher.RecordPublisherRunResult in project flink by apache.

From class ShardConsumer, method run.

@Override
public void run() {
    try {
        while (isRunning()) {
            final RecordPublisherRunResult result = recordPublisher.run(batch -> {
                if (!batch.getDeaggregatedRecords().isEmpty()) {
                    LOG.debug("stream: {}, shard: {}, millis behind latest: {}, batch size: {}", subscribedShard.getStreamName(), subscribedShard.getShard().getShardId(), batch.getMillisBehindLatest(), batch.getDeaggregatedRecordSize());
                }
                for (UserRecord userRecord : batch.getDeaggregatedRecords()) {
                    if (filterDeaggregatedRecord(userRecord)) {
                        deserializeRecordForCollectionAndUpdateState(userRecord);
                    }
                }
                shardConsumerMetricsReporter.setAverageRecordSizeBytes(batch.getAverageRecordSizeBytes());
                shardConsumerMetricsReporter.setNumberOfAggregatedRecords(batch.getAggregatedRecordSize());
                shardConsumerMetricsReporter.setNumberOfDeaggregatedRecords(batch.getDeaggregatedRecordSize());
                ofNullable(batch.getMillisBehindLatest()).ifPresent(shardConsumerMetricsReporter::setMillisBehindLatest);
                return lastSequenceNum;
            });
            if (result == COMPLETE) {
                fetcherRef.updateState(subscribedShardStateIndex, SentinelSequenceNumber.SENTINEL_SHARD_ENDING_SEQUENCE_NUM.get());
                // we can close this consumer thread once we've reached the end of the subscribed shard
                break;
            } else if (result == CANCELLED) {
                final String errorMessage = "Shard consumer cancelled: " + subscribedShard.getShard().getShardId();
                LOG.info(errorMessage);
                throw new ShardConsumerCancelledException(errorMessage);
            }
        }
    } catch (Throwable t) {
        fetcherRef.stopWithError(t);
    } finally {
        this.shardConsumerMetricsReporter.unregister();
    }
}
Also used : RecordPublisherRunResult(org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher.RecordPublisherRunResult) UserRecord(com.amazonaws.services.kinesis.clientlibrary.types.UserRecord)
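
The lambda passed to recordPublisher.run above is the batch consumer: it filters and deserializes each deaggregated record, updates the shard metrics, and returns lastSequenceNum so the publisher knows where to resume. The sketch below condenses that loop to show only how the three RecordPublisherRunResult values drive control flow; the class and field names are hypothetical, not part of Flink.

// Condensed, illustrative version of the run loop above; fields stand in for ShardConsumer state.
class ShardRunLoopSketch {
    private volatile boolean running = true;
    private SequenceNumber lastSequenceNum; // last successfully processed position

    void consume(RecordPublisher publisher) throws Exception {
        while (running) {
            RecordPublisherRunResult result = publisher.run(batch -> {
                // process the batch, then report the position to resume from
                return lastSequenceNum;
            });
            if (result == COMPLETE) {
                // the shard has been fully read: stop this consumer
                break;
            } else if (result == CANCELLED) {
                // cancellation surfaces as an exception so the fetcher can stop with an error
                throw new RuntimeException("Shard consumer cancelled");
            }
            // INCOMPLETE: loop and run the publisher again from lastSequenceNum
        }
    }
}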

Example 3 with RecordPublisherRunResult

Use of org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher.RecordPublisherRunResult in project flink by apache.

From class FanOutRecordPublisherTest, method testShardConsumerRetriesIfLimitExceededExceptionThrownFromSubscription.

@Test
public void testShardConsumerRetriesIfLimitExceededExceptionThrownFromSubscription() throws Exception {
    LimitExceededException exception = LimitExceededException.builder().build();
    SubscriptionErrorKinesisV2 kinesis = FakeKinesisFanOutBehavioursFactory.errorDuringSubscription(exception);
    RecordPublisher recordPublisher = createRecordPublisher(kinesis);
    TestConsumer consumer = new TestConsumer();
    RecordPublisherRunResult result = recordPublisher.run(consumer);
    // An exception is thrown after the 5th record in each subscription, therefore we expect to
    // receive 5 records
    assertEquals(5, consumer.getRecordBatches().size());
    assertEquals(1, kinesis.getNumberOfSubscribeToShardInvocations());
    // INCOMPLETE is returned to indicate the shard is not complete
    assertEquals(INCOMPLETE, result);
}
Also used: RecordPublisher (org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher), RecordPublisherRunResult (org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher.RecordPublisherRunResult), LimitExceededException (software.amazon.awssdk.services.kinesis.model.LimitExceededException), SubscriptionErrorKinesisV2 (org.apache.flink.streaming.connectors.kinesis.testutils.FakeKinesisFanOutBehavioursFactory.SubscriptionErrorKinesisV2), TestConsumer (org.apache.flink.streaming.connectors.kinesis.testutils.TestUtils.TestConsumer), Test (org.junit.Test)
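
The TestConsumer used in these tests simply records every batch it receives so the test can count them afterwards. Below is a minimal sketch of such a collecting consumer, assuming the RecordBatchConsumer callback shape sketched earlier; the real TestUtils.TestConsumer may differ in detail.

import java.util.ArrayList;
import java.util.List;

// Illustrative batch-collecting consumer; not the actual TestUtils.TestConsumer.
class CollectingConsumer implements RecordPublisher.RecordBatchConsumer {
    private final List<RecordBatch> recordBatches = new ArrayList<>();
    private final SequenceNumber resumePosition;

    CollectingConsumer(SequenceNumber resumePosition) {
        this.resumePosition = resumePosition;
    }

    @Override
    public SequenceNumber accept(RecordBatch batch) {
        // remember every batch so a test can assert on how many were delivered
        recordBatches.add(batch);
        // tell the publisher where to resume on the next run
        return resumePosition;
    }

    List<RecordBatch> getRecordBatches() {
        return recordBatches;
    }
}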

Aggregations

RecordPublisherRunResult (org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher.RecordPublisherRunResult): 3 usages
RecordPublisher (org.apache.flink.streaming.connectors.kinesis.internals.publisher.RecordPublisher): 2 usages
TestConsumer (org.apache.flink.streaming.connectors.kinesis.testutils.TestUtils.TestConsumer): 2 usages
Test (org.junit.Test): 2 usages
SdkInterruptedException (com.amazonaws.http.timers.client.SdkInterruptedException): 1 usage
UserRecord (com.amazonaws.services.kinesis.clientlibrary.types.UserRecord): 1 usage
KinesisProxyV2Interface (org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxyV2Interface): 1 usage
SubscriptionErrorKinesisV2 (org.apache.flink.streaming.connectors.kinesis.testutils.FakeKinesisFanOutBehavioursFactory.SubscriptionErrorKinesisV2): 1 usage
LimitExceededException (software.amazon.awssdk.services.kinesis.model.LimitExceededException): 1 usage