Example 36 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in the apache/kafka project.

From class ConsumerCoordinator, method refreshCommittedOffsetsIfNeeded:

/**
 * Refresh the committed offsets for provided partitions.
 */
public void refreshCommittedOffsetsIfNeeded() {
    if (subscriptions.refreshCommitsNeeded()) {
        Map<TopicPartition, OffsetAndMetadata> offsets = fetchCommittedOffsets(subscriptions.assignedPartitions());
        for (Map.Entry<TopicPartition, OffsetAndMetadata> entry : offsets.entrySet()) {
            TopicPartition tp = entry.getKey();
            // verify assignment is still active
            if (subscriptions.isAssigned(tp))
                this.subscriptions.committed(tp, entry.getValue());
        }
        this.subscriptions.commitsRefreshed();
    }
}
Also used: TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), HashMap (java.util.HashMap), Map (java.util.Map)
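
The coordinator above is internal to the consumer, but the same refresh can be approximated with the public API: KafkaConsumer.committed(TopicPartition) returns the group's last committed OffsetAndMetadata, or null if nothing was ever committed. A minimal sketch, assuming a broker at localhost:9092 and hypothetical topic and group names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ResumeFromCommitted {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group"); // hypothetical group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("example-topic", 0); // hypothetical topic
            consumer.assign(Collections.singletonList(tp));
            // committed() fetches the group's last committed offset, or null if none exists
            OffsetAndMetadata committed = consumer.committed(tp);
            consumer.seek(tp, committed != null ? committed.offset() : 0L);
        }
    }
}

The seek() at the end positions the consumer at the committed offset, which is the effect the coordinator's refresh ultimately produces for the subscription state.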

Example 37 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in the apache/kafka project.

From class StandbyTaskTest, method shouldCheckpointStoreOffsetsOnCommit:

@Test
public void shouldCheckpointStoreOffsetsOnCommit() throws Exception {
    consumer.assign(Utils.mkList(ktable));
    final Map<TopicPartition, OffsetAndMetadata> committedOffsets = new HashMap<>();
    committedOffsets.put(new TopicPartition(ktable.topic(), ktable.partition()), new OffsetAndMetadata(100L));
    consumer.commitSync(committedOffsets);
    restoreStateConsumer.updatePartitions("ktable1", Utils.mkList(new PartitionInfo("ktable1", 0, Node.noNode(), new Node[0], new Node[0])));
    final TaskId taskId = new TaskId(0, 0);
    final MockTime time = new MockTime();
    final StreamsConfig config = createConfig(baseDir);
    final StandbyTask task = new StandbyTask(taskId, applicationId, ktablePartitions, ktableTopology, consumer, changelogReader, config, null, stateDirectory);
    restoreStateConsumer.assign(new ArrayList<>(task.changeLogPartitions()));
    final byte[] serializedValue = Serdes.Integer().serializer().serialize("", 1);
    task.update(ktable, Collections.singletonList(new ConsumerRecord<>(ktable.topic(), ktable.partition(), 50L, serializedValue, serializedValue)));
    time.sleep(config.getLong(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG));
    task.commit();
    final Map<TopicPartition, Long> checkpoint = new OffsetCheckpoint(new File(stateDirectory.directoryForTask(taskId), ProcessorStateManager.CHECKPOINT_FILE_NAME)).read();
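    // the consumed record had offset 50; the checkpoint stores the next offset to read, hence 51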
    assertThat(checkpoint, equalTo(Collections.singletonMap(ktable, 51L)));
}
Also used: OffsetCheckpoint (org.apache.kafka.streams.state.internals.OffsetCheckpoint), TaskId (org.apache.kafka.streams.processor.TaskId), HashMap (java.util.HashMap), ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), PartitionInfo (org.apache.kafka.common.PartitionInfo), File (java.io.File), MockTime (org.apache.kafka.common.utils.MockTime), StreamsConfig (org.apache.kafka.streams.StreamsConfig), Test (org.junit.Test)
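
The assertion of 51L follows Kafka's offset convention: the task consumed the record at offset 50, and checkpoints, like commits, store the next offset to read. The commitSync(Map) call the test makes is public API; a minimal standalone sketch of that pattern, with hypothetical broker and group names:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ExplicitCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "explicit-commit-group"); // hypothetical group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("ktable1", 0);
            consumer.assign(Collections.singletonList(tp));
            // committing 100 means "records up to offset 99 are processed; resume at 100"
            Map<TopicPartition, OffsetAndMetadata> offsets =
                    Collections.singletonMap(tp, new OffsetAndMetadata(100L));
            consumer.commitSync(offsets);
        }
    }
}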

Example 38 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in the apache/kafka project.

From class StandbyTaskTest, method shouldNotThrowUnsupportedOperationExceptionWhenInitializingStateStores:

@Test
public void shouldNotThrowUnsupportedOperationExceptionWhenInitializingStateStores() throws Exception {
    final String changelogName = "test-application-my-store-changelog";
    final List<TopicPartition> partitions = Utils.mkList(new TopicPartition(changelogName, 0));
    consumer.assign(partitions);
    final Map<TopicPartition, OffsetAndMetadata> committedOffsets = new HashMap<>();
    committedOffsets.put(new TopicPartition(changelogName, 0), new OffsetAndMetadata(0L));
    consumer.commitSync(committedOffsets);
    restoreStateConsumer.updatePartitions(changelogName, Utils.mkList(new PartitionInfo(changelogName, 0, Node.noNode(), new Node[0], new Node[0])));
    final KStreamBuilder builder = new KStreamBuilder();
    builder.stream("topic").groupByKey().count("my-store");
    final ProcessorTopology topology = builder.setApplicationId(applicationId).build(0);
    StreamsConfig config = createConfig(baseDir);
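    // construction alone should initialize the state stores without throwing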
    new StandbyTask(taskId, applicationId, partitions, topology, consumer, changelogReader, config, new MockStreamsMetrics(new Metrics()), stateDirectory);
}
Also used: KStreamBuilder (org.apache.kafka.streams.kstream.KStreamBuilder), HashMap (java.util.HashMap), Metrics (org.apache.kafka.common.metrics.Metrics), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), PartitionInfo (org.apache.kafka.common.PartitionInfo), StreamsConfig (org.apache.kafka.streams.StreamsConfig), Test (org.junit.Test)
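
Examples 37 and 38 both commit synchronously; the consumer also offers a non-blocking commitAsync overload that takes the offsets map plus an OffsetCommitCallback (which also appears in the aggregation counts below). A minimal sketch, again with hypothetical broker and group names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class AsyncCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "async-commit-group"); // hypothetical group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("test-application-my-store-changelog", 0);
            consumer.assign(Collections.singletonList(tp));
            consumer.commitAsync(
                    Collections.singletonMap(tp, new OffsetAndMetadata(0L)),
                    (offsets, exception) -> {
                        // invoked once the commit completes, typically during a later
                        // poll() or when the consumer is closed; exception is null on success
                        if (exception != null)
                            exception.printStackTrace();
                    });
        }
    }
}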

Example 39 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in the apache/kafka project.

From class StreamTask, method commitOffsets:

/**
 * commit consumed offsets if needed
 */
@Override
public void commitOffsets() {
    if (commitOffsetNeeded) {
        Map<TopicPartition, OffsetAndMetadata> consumedOffsetsAndMetadata = new HashMap<>(consumedOffsets.size());
        for (Map.Entry<TopicPartition, Long> entry : consumedOffsets.entrySet()) {
            TopicPartition partition = entry.getKey();
            long offset = entry.getValue() + 1;
            consumedOffsetsAndMetadata.put(partition, new OffsetAndMetadata(offset));
            stateMgr.putOffsetLimit(partition, offset);
        }
        try {
            consumer.commitSync(consumedOffsetsAndMetadata);
        } catch (final CommitFailedException cfe) {
            log.warn("{} Failed offset commits: {} ", logPrefix, consumedOffsetsAndMetadata);
            throw cfe;
        }
        commitOffsetNeeded = false;
    }
    commitRequested = false;
}
Also used: HashMap (java.util.HashMap), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), Map (java.util.Map), CommitFailedException (org.apache.kafka.clients.consumer.CommitFailedException)
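
Outside of Streams, the same last-consumed-plus-one bookkeeping looks like this with a plain consumer loop. A minimal sketch: broker, topic, and group names are hypothetical, and poll(long) is the variant from the Kafka version these examples come from:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommitAfterProcessing {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "commit-after-processing"); // hypothetical group id
        props.put("enable.auto.commit", "false"); // commit manually, as StreamTask does
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic")); // hypothetical topic
            Map<TopicPartition, Long> consumedOffsets = new HashMap<>();
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                // ... process the record, then remember its offset per partition
                consumedOffsets.put(new TopicPartition(record.topic(), record.partition()),
                        record.offset());
            }
            // commit last-processed offset + 1 per partition, mirroring commitOffsets()
            Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
            for (Map.Entry<TopicPartition, Long> e : consumedOffsets.entrySet())
                toCommit.put(e.getKey(), new OffsetAndMetadata(e.getValue() + 1));
            if (!toCommit.isEmpty())
                consumer.commitSync(toCommit);
        }
    }
}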

Example 40 with OffsetAndMetadata

Use of org.apache.kafka.clients.consumer.OffsetAndMetadata in the apache/kafka project.

From class AbstractTask, method initializeOffsetLimits:

protected void initializeOffsetLimits() {
    for (TopicPartition partition : partitions) {
        try {
            // TODO: batch API?
            OffsetAndMetadata metadata = consumer.committed(partition);
            stateMgr.putOffsetLimit(partition, metadata != null ? metadata.offset() : 0L);
        } catch (AuthorizationException e) {
            throw new ProcessorStateException(String.format("task [%s] AuthorizationException when initializing offsets for %s", id, partition), e);
        } catch (WakeupException e) {
            throw e;
        } catch (KafkaException e) {
            throw new ProcessorStateException(String.format("task [%s] Failed to initialize offsets for %s", id, partition), e);
        }
    }
}
Also used: AuthorizationException (org.apache.kafka.common.errors.AuthorizationException), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), KafkaException (org.apache.kafka.common.KafkaException), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException), WakeupException (org.apache.kafka.common.errors.WakeupException)
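
A hedged, standalone version of the same lookup against the public Consumer interface, with the identical exception triage. The RuntimeException here merely stands in for Streams' ProcessorStateException:

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.WakeupException;

public class OffsetLimits {
    static Map<TopicPartition, Long> fetchOffsetLimits(Consumer<?, ?> consumer,
                                                       Collection<TopicPartition> partitions) {
        Map<TopicPartition, Long> limits = new HashMap<>();
        for (TopicPartition partition : partitions) {
            try {
                // null means the group has never committed an offset for this partition
                OffsetAndMetadata metadata = consumer.committed(partition);
                limits.put(partition, metadata != null ? metadata.offset() : 0L);
            } catch (AuthorizationException e) {
                throw new RuntimeException("not authorized to read offsets for " + partition, e);
            } catch (WakeupException e) {
                throw e; // let a deliberate wakeup propagate so the caller can shut down
            } catch (KafkaException e) {
                throw new RuntimeException("failed to fetch committed offset for " + partition, e);
            }
        }
        return limits;
    }
}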

Aggregations

OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata): 40 usages
TopicPartition (org.apache.kafka.common.TopicPartition): 28 usages
Test (org.junit.Test): 22 usages
HashMap (java.util.HashMap): 21 usages
Map (java.util.Map): 11 usages
OffsetCommitCallback (org.apache.kafka.clients.consumer.OffsetCommitCallback): 7 usages
SinkRecord (org.apache.kafka.connect.sink.SinkRecord): 6 usages
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest): 6 usages
ArrayList (java.util.ArrayList): 4 usages
PartitionInfo (org.apache.kafka.common.PartitionInfo): 4 usages
WakeupException (org.apache.kafka.common.errors.WakeupException): 4 usages
Properties (java.util.Properties): 3 usages
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 3 usages
KafkaTopicPartition (org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition): 3 usages
CommitFailedException (org.apache.kafka.clients.consumer.CommitFailedException): 3 usages
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 3 usages
ConsumerRecords (org.apache.kafka.clients.consumer.ConsumerRecords): 3 usages
StreamsConfig (org.apache.kafka.streams.StreamsConfig): 3 usages
File (java.io.File): 2 usages
LinkedBlockingQueue (java.util.concurrent.LinkedBlockingQueue): 2 usages