
Example 11 with OffsetAndMetadata

use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project storm by apache.

the class OffsetManager method findNextCommitOffset.

/**
 * An offset is only committed when all records with a lower offset have been
 * acked. This guarantees that all offsets smaller than the committedOffset
 * have been delivered.
 *
 * @return the next OffsetAndMetadata to commit, or null if no offset is
 * ready to commit.
 */
public OffsetAndMetadata findNextCommitOffset() {
    boolean found = false;
    long currOffset;
    long nextCommitOffset = committedOffset;
    // this is a convenience variable to make it faster to create OffsetAndMetadata
    KafkaSpoutMessageId nextCommitMsg = null;
    for (KafkaSpoutMessageId currAckedMsg : ackedMsgs) {
        // complexity is that of a linear scan on a TreeMap
        if ((currOffset = currAckedMsg.offset()) == nextCommitOffset + 1) {
            // found the next offset to commit
            found = true;
            nextCommitMsg = currAckedMsg;
            nextCommitOffset = currOffset;
        } else if (currAckedMsg.offset() > nextCommitOffset + 1) {
            // the offset found is not contiguous with the offsets selected for the next commit, so stop the search
            LOG.debug("topic-partition [{}] has non-continuous offset [{}]. It will be processed in a subsequent batch.", tp, currOffset);
            break;
        } else {
            // Received a redundant ack. Ignore and continue processing.
            LOG.warn("topic-partition [{}] has unexpected offset [{}]. Current committed Offset [{}]", tp, currOffset, committedOffset);
        }
    }
    OffsetAndMetadata nextCommitOffsetAndMetadata = null;
    if (found) {
        nextCommitOffsetAndMetadata = new OffsetAndMetadata(nextCommitOffset, nextCommitMsg.getMetadata(Thread.currentThread()));
        LOG.debug("topic-partition [{}] has offsets [{}-{}] ready to be committed", tp, committedOffset + 1, nextCommitOffsetAndMetadata.offset());
    } else {
        LOG.debug("topic-partition [{}] has NO offsets ready to be committed", tp);
    }
    LOG.trace("{}", this);
    return nextCommitOffsetAndMetadata;
}
Also used : KafkaSpoutMessageId(org.apache.storm.kafka.spout.KafkaSpoutMessageId) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata)
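
For reference, the OffsetAndMetadata returned by findNextCommitOffset is what the spout ultimately hands to the Kafka consumer's commit call. The sketch below is not Storm source; it assumes hypothetical consumer and offsetManagers variables (a KafkaConsumer and a map of per-partition OffsetManager instances) purely to illustrate the flow:

// Hypothetical sketch: gather the next committable offset per partition and commit them.
// 'consumer' (KafkaConsumer) and 'offsetManagers' (Map<TopicPartition, OffsetManager>) are assumed to exist.
Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
for (Map.Entry<TopicPartition, OffsetManager> entry : offsetManagers.entrySet()) {
    OffsetAndMetadata nextCommit = entry.getValue().findNextCommitOffset();
    if (nextCommit != null) {
        toCommit.put(entry.getKey(), nextCommit);
    }
}
if (!toCommit.isEmpty()) {
    // only offsets whose predecessors have all been acked are committed
    consumer.commitSync(toCommit);
}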

Example 12 with OffsetAndMetadata

use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

the class FetcherTest method testUpdateFetchPositionToCommitted.

@Test
public void testUpdateFetchPositionToCommitted() {
    // unless a specific reset is expected, the default behavior is to reset to the committed
    // position if one is present
    subscriptions.assignFromUser(singleton(tp));
    subscriptions.committed(tp, new OffsetAndMetadata(5));
    fetcher.updateFetchPositions(singleton(tp));
    assertTrue(subscriptions.isFetchable(tp));
    assertEquals(5, subscriptions.position(tp).longValue());
}
Also used : OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) Test(org.junit.Test)
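
The same default is visible through the public consumer API. The snippet below is a hypothetical illustration, not part of FetcherTest; it assumes a props object already configured with bootstrap.servers, group.id and deserializers:

// Hypothetical sketch: with no seek and no reset triggered, the consumer
// resumes from the offset committed for its group.
TopicPartition tp = new TopicPartition("my-topic", 0);
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.assign(Collections.singletonList(tp));
consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(5)));
long position = consumer.position(tp); // resolves to 5, the committed offset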

Example 13 with OffsetAndMetadata

use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

the class FetcherTest method testUpdateFetchPositionOfPausedPartitionsRequiringOffsetReset.

@Test
public void testUpdateFetchPositionOfPausedPartitionsRequiringOffsetReset() {
    subscriptions.assignFromUser(singleton(tp));
    subscriptions.committed(tp, new OffsetAndMetadata(0));
    // paused partition does not have a valid position
    subscriptions.pause(tp);
    subscriptions.needOffsetReset(tp, OffsetResetStrategy.LATEST);
    client.prepareResponse(listOffsetRequestMatcher(ListOffsetRequest.LATEST_TIMESTAMP), listOffsetResponse(Errors.NONE, 1L, 10L));
    fetcher.updateFetchPositions(singleton(tp));
    assertFalse(subscriptions.isOffsetResetNeeded(tp));
    // because tp is paused
    assertFalse(subscriptions.isFetchable(tp));
    assertTrue(subscriptions.hasValidPosition(tp));
    assertEquals(10, subscriptions.position(tp).longValue());
}
Also used : OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) Test(org.junit.Test)
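
The same behavior can be observed at the application level: pausing a partition stops poll() from returning its records, but the partition can still be reset and keeps a valid, queryable position. A hypothetical sketch, reusing the consumer and tp from the sketch above:

// Hypothetical sketch: reset a paused partition to the log end offset.
consumer.pause(Collections.singletonList(tp));     // poll() now returns no records for tp
consumer.seekToEnd(Collections.singletonList(tp)); // analogous to an OffsetResetStrategy.LATEST reset
long endPosition = consumer.position(tp);          // the position is still valid while tp is paused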

Example 14 with OffsetAndMetadata

use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

the class FetcherTest method testUpdateFetchPositionOfPausedPartitionsWithoutAValidPosition.

@Test
public void testUpdateFetchPositionOfPausedPartitionsWithoutAValidPosition() {
    subscriptions.assignFromUser(singleton(tp));
    subscriptions.committed(tp, new OffsetAndMetadata(0));
    // pause the partition; the seek below then gives it a valid position
    subscriptions.pause(tp);
    subscriptions.seek(tp, 10);
    fetcher.updateFetchPositions(singleton(tp));
    assertFalse(subscriptions.isOffsetResetNeeded(tp));
    // because tp is paused
    assertFalse(subscriptions.isFetchable(tp));
    assertTrue(subscriptions.hasValidPosition(tp));
    assertEquals(10, subscriptions.position(tp).longValue());
}
Also used : OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) Test(org.junit.Test)
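
Unlike the previous test, which establishes the position through an offset reset, this one sets it directly with seek(tp, 10); in both cases the paused partition ends up non-fetchable but with a valid position.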

Example 15 with OffsetAndMetadata

use of org.apache.kafka.clients.consumer.OffsetAndMetadata in project kafka by apache.

the class ConsumerCoordinatorTest method testCommitOffsetOnly.

@Test
public void testCommitOffsetOnly() {
    subscriptions.assignFromUser(singleton(t1p));
    client.prepareResponse(groupCoordinatorResponse(node, Errors.NONE));
    coordinator.ensureCoordinatorReady();
    client.prepareResponse(offsetCommitResponse(Collections.singletonMap(t1p, Errors.NONE)));
    AtomicBoolean success = new AtomicBoolean(false);
    coordinator.commitOffsetsAsync(Collections.singletonMap(t1p, new OffsetAndMetadata(100L)), callback(success));
    coordinator.invokeCompletedOffsetCommitCallbacks();
    assertTrue(success.get());
    assertEquals(100L, subscriptions.committed(t1p).offset());
}
Also used : AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) Test(org.junit.Test)
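
The application-facing counterpart of this coordinator test is KafkaConsumer.commitAsync with an OffsetCommitCallback. A hypothetical sketch, assuming the consumer from the earlier sketches and a t1p TopicPartition:

// Hypothetical sketch: commit a specific offset asynchronously and inspect the result.
TopicPartition t1p = new TopicPartition("topic1", 0); // placeholder partition name
Map<TopicPartition, OffsetAndMetadata> offsets =
        Collections.singletonMap(t1p, new OffsetAndMetadata(100L));
consumer.commitAsync(offsets, new OffsetCommitCallback() {
    @Override
    public void onComplete(Map<TopicPartition, OffsetAndMetadata> committedOffsets, Exception exception) {
        if (exception != null) {
            // the commit failed; a common fallback is to retry with commitSync(offsets)
        }
    }
});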

Aggregations

OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata) 41
TopicPartition (org.apache.kafka.common.TopicPartition) 29
HashMap (java.util.HashMap) 22
Test (org.junit.Test) 22
Map (java.util.Map) 12
OffsetCommitCallback (org.apache.kafka.clients.consumer.OffsetCommitCallback) 7
SinkRecord (org.apache.kafka.connect.sink.SinkRecord) 6
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest) 6
ArrayList (java.util.ArrayList) 4
PartitionInfo (org.apache.kafka.common.PartitionInfo) 4
WakeupException (org.apache.kafka.common.errors.WakeupException) 4
Properties (java.util.Properties) 3
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean) 3
KafkaTopicPartition (org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition) 3
CommitFailedException (org.apache.kafka.clients.consumer.CommitFailedException) 3
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord) 3
ConsumerRecords (org.apache.kafka.clients.consumer.ConsumerRecords) 3
StreamsConfig (org.apache.kafka.streams.StreamsConfig) 3
File (java.io.File) 2
LinkedBlockingQueue (java.util.concurrent.LinkedBlockingQueue) 2