Example 26 with OffsetCommitCallback

Use of org.apache.kafka.clients.consumer.OffsetCommitCallback in project kafka by apache.

The class WorkerSinkTaskTest, method testPreCommit.

@Test
public void testPreCommit() throws Exception {
    createTask(initialState);
    expectInitializeTask();
    expectTaskGetTopic(true);
    // iter 1
    expectPollInitialAssignment();
    // iter 2
    expectConsumerPoll(2);
    expectConversionAndTransformation(2);
    sinkTask.put(EasyMock.anyObject());
    EasyMock.expectLastCall();
    final Map<TopicPartition, OffsetAndMetadata> workerStartingOffsets = new HashMap<>();
    workerStartingOffsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET));
    workerStartingOffsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET));
    final Map<TopicPartition, OffsetAndMetadata> workerCurrentOffsets = new HashMap<>();
    workerCurrentOffsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET + 2));
    workerCurrentOffsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET));
    final Map<TopicPartition, OffsetAndMetadata> taskOffsets = new HashMap<>();
    // act like FIRST_OFFSET+2 has not yet been flushed by the task
    taskOffsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET + 1));
    // should be ignored because > current offset
    taskOffsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET + 1));
    // should be ignored because this partition is not assigned
    taskOffsets.put(new TopicPartition(TOPIC, 3), new OffsetAndMetadata(FIRST_OFFSET));
    final Map<TopicPartition, OffsetAndMetadata> committableOffsets = new HashMap<>();
    committableOffsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET + 1));
    committableOffsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET));
    sinkTask.preCommit(workerCurrentOffsets);
    EasyMock.expectLastCall().andReturn(taskOffsets);
    // Expect extra invalid topic partition to be filtered, which causes the consumer assignment to be logged
    EasyMock.expect(consumer.assignment()).andReturn(INITIAL_ASSIGNMENT).times(2);
    final Capture<OffsetCommitCallback> callback = EasyMock.newCapture();
    consumer.commitAsync(EasyMock.eq(committableOffsets), EasyMock.capture(callback));
    EasyMock.expectLastCall().andAnswer(() -> {
        callback.getValue().onComplete(committableOffsets, null);
        return null;
    });
    expectConsumerPoll(0);
    sinkTask.put(EasyMock.anyObject());
    EasyMock.expectLastCall();
    PowerMock.replayAll();
    workerTask.initialize(TASK_CONFIG);
    workerTask.initializeAndStart();
    // iter 1 -- initial assignment
    workerTask.iteration();
    assertEquals(workerStartingOffsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "currentOffsets"));
    // iter 2 -- deliver 2 records
    workerTask.iteration();
    assertEquals(workerCurrentOffsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "currentOffsets"));
    assertEquals(workerStartingOffsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "lastCommittedOffsets"));
    sinkTaskContext.getValue().requestCommit();
    // iter 3 -- commit
    workerTask.iteration();
    assertEquals(committableOffsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "lastCommittedOffsets"));
    PowerMock.verifyAll();
}
Also used : HashMap(java.util.HashMap) TopicPartition(org.apache.kafka.common.TopicPartition) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) OffsetCommitCallback(org.apache.kafka.clients.consumer.OffsetCommitCallback) RetryWithToleranceOperatorTest(org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest) PrepareForTest(org.powermock.core.classloader.annotations.PrepareForTest) Test(org.junit.Test)
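
The assertions above hinge on how the worker narrows the offsets returned by preCommit down to what it actually hands to commitAsync: offsets for unassigned partitions are dropped, and offsets beyond what the worker has delivered fall back to the current offset. The following is a minimal standalone sketch of that rule, written only to mirror the expectations asserted in testPreCommit; the class and method names are hypothetical and this is not the actual WorkerSinkTask implementation.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittableOffsetsSketch {

    // Drop offsets for partitions the consumer does not own, and cap any offset the task
    // reports beyond the worker's current position back to the current offset.
    static Map<TopicPartition, OffsetAndMetadata> committableOffsets(
            Map<TopicPartition, OffsetAndMetadata> taskOffsets,
            Map<TopicPartition, OffsetAndMetadata> currentOffsets,
            Set<TopicPartition> assignment) {
        Map<TopicPartition, OffsetAndMetadata> committable = new HashMap<>();
        for (Map.Entry<TopicPartition, OffsetAndMetadata> entry : taskOffsets.entrySet()) {
            TopicPartition tp = entry.getKey();
            OffsetAndMetadata current = currentOffsets.get(tp);
            if (!assignment.contains(tp) || current == null) {
                continue; // unassigned partition, like new TopicPartition(TOPIC, 3) in the test
            }
            // an offset past the current position falls back to the current offset, like TOPIC_PARTITION2
            committable.put(tp, entry.getValue().offset() <= current.offset() ? entry.getValue() : current);
        }
        return committable;
    }
}

With the test's values this yields FIRST_OFFSET + 1 for TOPIC_PARTITION, FIRST_OFFSET for TOPIC_PARTITION2, and nothing for partition 3, which is exactly the committableOffsets map the test expects commitAsync to receive.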

Example 27 with OffsetCommitCallback

Use of org.apache.kafka.clients.consumer.OffsetCommitCallback in project kafka by apache.

The class WorkerSinkTaskThreadedTest, method expectOffsetCommit.

private Capture<OffsetCommitCallback> expectOffsetCommit(final long expectedMessages, final RuntimeException error, final Exception consumerCommitError, final long consumerCommitDelayMs, final boolean invokeCallback) {
    final long finalOffset = FIRST_OFFSET + expectedMessages;
    // All assigned partitions will have offsets committed, but we've only processed messages/updated offsets for one
    final Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>();
    offsetsToCommit.put(TOPIC_PARTITION, new OffsetAndMetadata(finalOffset));
    offsetsToCommit.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET));
    offsetsToCommit.put(TOPIC_PARTITION3, new OffsetAndMetadata(FIRST_OFFSET));
    sinkTask.preCommit(offsetsToCommit);
    IExpectationSetters<Object> expectation = PowerMock.expectLastCall();
    if (error != null) {
        // preCommit throws, so no commit will be attempted and there is no callback to capture
        expectation.andThrow(error).once();
        return null;
    } else {
        expectation.andReturn(offsetsToCommit);
    }
    final Capture<OffsetCommitCallback> capturedCallback = EasyMock.newCapture();
    consumer.commitAsync(EasyMock.eq(offsetsToCommit), EasyMock.capture(capturedCallback));
    PowerMock.expectLastCall().andAnswer(() -> {
        time.sleep(consumerCommitDelayMs);
        if (invokeCallback)
            capturedCallback.getValue().onComplete(offsetsToCommit, consumerCommitError);
        return null;
    });
    return capturedCallback;
}
Also used : HashMap(java.util.HashMap) TopicPartition(org.apache.kafka.common.TopicPartition) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) OffsetCommitCallback(org.apache.kafka.clients.consumer.OffsetCommitCallback)
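
The capture-and-answer pattern used above can also be exercised outside the WorkerSinkTaskThreadedTest harness. Below is a minimal, self-contained sketch of the same idea against a mocked Consumer, assuming EasyMock is on the classpath; the topic name and offset are placeholders.

import java.util.Collections;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;
import org.easymock.Capture;
import org.easymock.EasyMock;

public class CommitCallbackCaptureSketch {

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        Consumer<byte[], byte[]> consumer = EasyMock.createMock(Consumer.class);
        Map<TopicPartition, OffsetAndMetadata> offsets =
                Collections.singletonMap(new TopicPartition("sketch-topic", 0), new OffsetAndMetadata(1L));

        // Record the expectation: capture whatever callback is passed to commitAsync and
        // complete it successfully inside the answer, just like the helper above.
        Capture<OffsetCommitCallback> callback = EasyMock.newCapture();
        consumer.commitAsync(EasyMock.eq(offsets), EasyMock.capture(callback));
        EasyMock.expectLastCall().andAnswer(() -> {
            callback.getValue().onComplete(offsets, null);
            return null;
        });
        EasyMock.replay(consumer);

        // Exercise the mock; the lambda is the callback that gets captured and invoked.
        consumer.commitAsync(offsets, (committed, exception) ->
                System.out.println("Commit completed: " + committed + ", error: " + exception));
        EasyMock.verify(consumer);
    }
}

The helper above extends this idea with time.sleep(consumerCommitDelayMs) and an optional skip of the callback, so tests can simulate slow commits or commits whose completion never arrives.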

Example 28 with OffsetCommitCallback

Use of org.apache.kafka.clients.consumer.OffsetCommitCallback in project incubator-gobblin by apache.

The class Kafka09ConsumerClient, method commitOffsetsAsync.

/**
 * Commit offsets to Kafka asynchronously
 */
@Override
public void commitOffsetsAsync(Map<KafkaPartition, Long> partitionOffsets) {
    Map<TopicPartition, OffsetAndMetadata> offsets = partitionOffsets.entrySet().stream()
            .collect(Collectors.toMap(
                    e -> new TopicPartition(e.getKey().getTopicName(), e.getKey().getId()),
                    e -> new OffsetAndMetadata(e.getValue())));
    consumer.commitAsync(offsets, new OffsetCommitCallback() {

        @Override
        public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
            if (exception != null) {
                log.error("Exception while committing offsets " + offsets, exception);
                return;
            }
        }
    });
}
Also used : HashMap(java.util.HashMap) ConfigUtils(org.apache.gobblin.util.ConfigUtils) ConsumerRecords(org.apache.kafka.clients.consumer.ConsumerRecords) Iterators(com.google.common.collect.Iterators) HashSet(java.util.HashSet) KafkaOffsetRetrievalFailureException(org.apache.gobblin.source.extractor.extract.kafka.KafkaOffsetRetrievalFailureException) Lists(com.google.common.collect.Lists) FluentIterable(com.google.common.collect.FluentIterable) Map(java.util.Map) OffsetCommitCallback(org.apache.kafka.clients.consumer.OffsetCommitCallback) MetricName(org.apache.kafka.common.MetricName) ToString(lombok.ToString) ConfigFactory(com.typesafe.config.ConfigFactory) Nonnull(javax.annotation.Nonnull) Consumer(org.apache.kafka.clients.consumer.Consumer) TopicPartition(org.apache.kafka.common.TopicPartition) Properties(java.util.Properties) Function(com.google.common.base.Function) Iterator(java.util.Iterator) ImmutableMap(com.google.common.collect.ImmutableMap) Config(com.typesafe.config.Config) Collection(java.util.Collection) NoOpConsumerRebalanceListener(org.apache.kafka.clients.consumer.internals.NoOpConsumerRebalanceListener) Metric(com.codahale.metrics.Metric) Throwables(com.google.common.base.Throwables) IOException(java.io.IOException) PartitionInfo(org.apache.kafka.common.PartitionInfo) ConfigurationKeys(org.apache.gobblin.configuration.ConfigurationKeys) EqualsAndHashCode(lombok.EqualsAndHashCode) Collectors(java.util.stream.Collectors) ConsumerRebalanceListener(org.apache.kafka.clients.consumer.ConsumerRebalanceListener) List(java.util.List) Slf4j(lombok.extern.slf4j.Slf4j) KafkaPartition(org.apache.gobblin.source.extractor.extract.kafka.KafkaPartition) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) Entry(java.util.Map.Entry) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) KafkaTopic(org.apache.gobblin.source.extractor.extract.kafka.KafkaTopic) KafkaMetric(org.apache.kafka.common.metrics.KafkaMetric) Preconditions(com.google.common.base.Preconditions) Gauge(com.codahale.metrics.Gauge) Collections(java.util.Collections) LongWatermark(org.apache.gobblin.source.extractor.extract.LongWatermark) KafkaConsumer(org.apache.kafka.clients.consumer.KafkaConsumer) Joiner(com.google.common.base.Joiner)
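
Since OffsetCommitCallback declares a single onComplete method, the anonymous class in commitOffsetsAsync can also be written as a lambda. The sketch below is a hypothetical, self-contained variant with the same log-and-ignore error handling; the class and method names are placeholders, and this is not how the Gobblin source is written.

import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AsyncOffsetCommitter {

    private static final Logger log = LoggerFactory.getLogger(AsyncOffsetCommitter.class);

    // Commit the given offsets asynchronously, logging (but not rethrowing) any commit failure.
    static void commitOffsetsAsync(Consumer<?, ?> consumer, Map<TopicPartition, OffsetAndMetadata> offsets) {
        consumer.commitAsync(offsets, (committed, exception) -> {
            if (exception != null) {
                log.error("Exception while committing offsets " + committed, exception);
            }
        });
    }
}

Because the callback only logs, a failed asynchronous commit is silently tolerated; callers that need stronger guarantees would have to fall back to a synchronous commitSync or track the failure themselves.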

Aggregations

OffsetCommitCallback (org.apache.kafka.clients.consumer.OffsetCommitCallback)28 OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata)24 TopicPartition (org.apache.kafka.common.TopicPartition)24 HashMap (java.util.HashMap)18 Test (org.junit.Test)15 PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest)12 Map (java.util.Map)11 WakeupException (org.apache.kafka.common.errors.WakeupException)11 KafkaException (org.apache.kafka.common.KafkaException)10 CommitFailedException (org.apache.kafka.clients.consumer.CommitFailedException)7 RetriableCommitFailedException (org.apache.kafka.clients.consumer.RetriableCommitFailedException)7 GroupAuthorizationException (org.apache.kafka.common.errors.GroupAuthorizationException)7 ConsumerRecords (org.apache.kafka.clients.consumer.ConsumerRecords)6 ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord)5 TopicAuthorizationException (org.apache.kafka.common.errors.TopicAuthorizationException)5 RetryWithToleranceOperatorTest (org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest)5 SinkRecord (org.apache.kafka.connect.sink.SinkRecord)5 AtomicReference (java.util.concurrent.atomic.AtomicReference)4 ArrayList (java.util.ArrayList)3 List (java.util.List)3