Search in sources:

Example 1 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

From the class FetcherTest, method testUpdateFetchPositionToCommitted:

@Test
public void testUpdateFetchPositionToCommitted() {
    // unless a specific reset is expected, the default behavior is to reset to the committed
    // position if one is present
    subscriptions.assignFromUser(singleton(tp));
    subscriptions.committed(tp, new OffsetAndMetadata(5));
    fetcher.updateFetchPositions(singleton(tp));
    assertTrue(subscriptions.isFetchable(tp));
    assertEquals(5, subscriptions.position(tp).longValue());
}
Also used: OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), Test (org.junit.Test)
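
For orientation, here is a minimal sketch of the same idea against a plain KafkaConsumer: commit an offset, then read it back as the position the consumer would resume from. The broker address, group id, topic name, and offset value are illustrative assumptions, not values taken from the test above.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommittedOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");           // assumed group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("example-topic", 0); // assumed topic
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp));

            // Commit offset 5 synchronously; this is the committed position the
            // test expects updateFetchPositions to fall back to.
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(5)));

            // Read the committed offset back; with no explicit seek, consumption
            // would resume from here.
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println("committed offset: " + committed.offset()); // prints 5
        }
    }
}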

Example 2 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

From the class FetcherTest, method testUpdateFetchPositionOfPausedPartitionsRequiringOffsetReset:

@Test
public void testUpdateFetchPositionOfPausedPartitionsRequiringOffsetReset() {
    subscriptions.assignFromUser(singleton(tp));
    subscriptions.committed(tp, new OffsetAndMetadata(0));
    // paused partition does not have a valid position
    subscriptions.pause(tp);
    subscriptions.needOffsetReset(tp, OffsetResetStrategy.LATEST);
    client.prepareResponse(listOffsetRequestMatcher(ListOffsetRequest.LATEST_TIMESTAMP), listOffsetResponse(Errors.NONE, 1L, 10L));
    fetcher.updateFetchPositions(singleton(tp));
    assertFalse(subscriptions.isOffsetResetNeeded(tp));
    // because tp is paused
    assertFalse(subscriptions.isFetchable(tp));
    assertTrue(subscriptions.hasValidPosition(tp));
    assertEquals(10, subscriptions.position(tp).longValue());
}
Also used: OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), Test (org.junit.Test)
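
The reset-to-latest path that the test drives through needOffsetReset can be approximated on a plain KafkaConsumer with pause() and seekToEnd(); a minimal sketch, with broker address, group id, and topic name assumed:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PausedResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");           // assumed group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("example-topic", 0); // assumed topic
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp));
            consumer.pause(Collections.singleton(tp));     // no records are returned for tp while paused
            consumer.seekToEnd(Collections.singleton(tp)); // lazy reset to the latest offset

            // position() forces the offset lookup (the ListOffsets request mocked in the
            // test above), even though the partition stays paused and unfetchable.
            long position = consumer.position(tp);
            System.out.println("position after reset: " + position);
        }
    }
}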

Example 3 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

From the class FetcherTest, method testUpdateFetchPositionOfPausedPartitionsWithoutAValidPosition:

@Test
public void testUpdateFetchPositionOfPausedPartitionsWithoutAValidPosition() {
    subscriptions.assignFromUser(singleton(tp));
    subscriptions.committed(tp, new OffsetAndMetadata(0));
    // paused partition does not have a valid position
    subscriptions.pause(tp);
    subscriptions.seek(tp, 10);
    fetcher.updateFetchPositions(singleton(tp));
    assertFalse(subscriptions.isOffsetResetNeeded(tp));
    // because tp is paused
    assertFalse(subscriptions.isFetchable(tp));
    assertTrue(subscriptions.hasValidPosition(tp));
    assertEquals(10, subscriptions.position(tp).longValue());
}
Also used: OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), Test (org.junit.Test)
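
Similarly, an explicit seek() gives a paused partition a valid position without making it fetchable until resume() is called; a minimal sketch under the same assumed connection settings:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PausedSeekSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");           // assumed group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("example-topic", 0); // assumed topic
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp));
            consumer.pause(Collections.singleton(tp)); // tp is not fetchable while paused
            consumer.seek(tp, 10L);                    // but it can still be given a valid position

            System.out.println("position while paused: " + consumer.position(tp)); // prints 10

            // Fetching only restarts once the partition is resumed.
            consumer.resume(Collections.singleton(tp));
        }
    }
}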

Example 4 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

From the class ConsumerCoordinatorTest, method testCommitOffsetOnly:

@Test
public void testCommitOffsetOnly() {
    subscriptions.assignFromUser(singleton(t1p));
    client.prepareResponse(groupCoordinatorResponse(node, Errors.NONE));
    coordinator.ensureCoordinatorReady();
    client.prepareResponse(offsetCommitResponse(Collections.singletonMap(t1p, Errors.NONE)));
    AtomicBoolean success = new AtomicBoolean(false);
    coordinator.commitOffsetsAsync(Collections.singletonMap(t1p, new OffsetAndMetadata(100L)), callback(success));
    coordinator.invokeCompletedOffsetCommitCallbacks();
    assertTrue(success.get());
    assertEquals(100L, subscriptions.committed(t1p).offset());
}
Also used: AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), Test (org.junit.Test)
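
Outside the coordinator test harness, the equivalent asynchronous commit on a plain KafkaConsumer uses commitAsync with an OffsetCommitCallback; a minimal sketch, with broker address, group id, and topic name assumed:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AsyncCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");           // assumed group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        TopicPartition t1p = new TopicPartition("example-topic", 0); // assumed topic
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(t1p));

            // Asynchronous commit; the callback plays the role of callback(success) in the test.
            consumer.commitAsync(Collections.singletonMap(t1p, new OffsetAndMetadata(100L)),
                new OffsetCommitCallback() {
                    @Override
                    public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
                        if (exception == null)
                            System.out.println("committed " + offsets);
                        else
                            exception.printStackTrace();
                    }
                });

            // The callback runs once the broker responds; pending callbacks are serviced by
            // later poll()/commitSync() calls and on close(), mirroring what
            // invokeCompletedOffsetCommitCallbacks() does explicitly in the test.
        }
    }
}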

Example 5 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

From the class ConsumerInterceptorsTest, method testOnCommitChain:

@Test
public void testOnCommitChain() {
    List<ConsumerInterceptor<Integer, Integer>> interceptorList = new ArrayList<>();
    // we are testing two different interceptors by configuring the same interceptor differently, which is not
    // how it would be done in KafkaConsumer, but ok for testing interceptor callbacks
    FilterConsumerInterceptor<Integer, Integer> interceptor1 = new FilterConsumerInterceptor<>(filterPartition1);
    FilterConsumerInterceptor<Integer, Integer> interceptor2 = new FilterConsumerInterceptor<>(filterPartition2);
    interceptorList.add(interceptor1);
    interceptorList.add(interceptor2);
    ConsumerInterceptors<Integer, Integer> interceptors = new ConsumerInterceptors<>(interceptorList);
    // verify that onCommit is called for all interceptors in the chain
    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    offsets.put(tp, new OffsetAndMetadata(0));
    interceptors.onCommit(offsets);
    assertEquals(2, onCommitCount);
    // verify that even if one of the interceptors throws an exception, all interceptors' onCommit are called
    interceptor1.injectOnCommitError(true);
    interceptors.onCommit(offsets);
    assertEquals(4, onCommitCount);
    interceptors.close();
}
Also used: HashMap (java.util.HashMap), TopicPartition (org.apache.kafka.common.TopicPartition), ConsumerInterceptor (org.apache.kafka.clients.consumer.ConsumerInterceptor), ArrayList (java.util.ArrayList), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), Test (org.junit.Test)
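
For reference, a custom interceptor along the lines of the FilterConsumerInterceptor used above only needs to implement the ConsumerInterceptor callbacks; a minimal logging sketch (the class name and behaviour are illustrative, and a real application would register it via the interceptor.classes consumer property rather than constructing ConsumerInterceptors directly):

import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class LoggingConsumerInterceptor<K, V> implements ConsumerInterceptor<K, V> {

    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed for this sketch
    }

    @Override
    public ConsumerRecords<K, V> onConsume(ConsumerRecords<K, V> records) {
        return records; // pass records through unchanged
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // called once per successful commit, as exercised by interceptors.onCommit(offsets) above
        offsets.forEach((tp, om) -> System.out.println("committed " + tp + " at " + om.offset()));
    }

    @Override
    public void close() {
        // release any resources held by the interceptor
    }
}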

Aggregations

OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata): 40 usages
TopicPartition (org.apache.kafka.common.TopicPartition): 28 usages
Test (org.junit.Test): 22 usages
HashMap (java.util.HashMap): 21 usages
Map (java.util.Map): 11 usages
OffsetCommitCallback (org.apache.kafka.clients.consumer.OffsetCommitCallback): 7 usages
SinkRecord (org.apache.kafka.connect.sink.SinkRecord): 6 usages
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest): 6 usages
ArrayList (java.util.ArrayList): 4 usages
PartitionInfo (org.apache.kafka.common.PartitionInfo): 4 usages
WakeupException (org.apache.kafka.common.errors.WakeupException): 4 usages
Properties (java.util.Properties): 3 usages
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 3 usages
KafkaTopicPartition (org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition): 3 usages
CommitFailedException (org.apache.kafka.clients.consumer.CommitFailedException): 3 usages
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 3 usages
ConsumerRecords (org.apache.kafka.clients.consumer.ConsumerRecords): 3 usages
StreamsConfig (org.apache.kafka.streams.StreamsConfig): 3 usages
File (java.io.File): 2 usages
LinkedBlockingQueue (java.util.concurrent.LinkedBlockingQueue): 2 usages