
Example 1 with OffsetAndTimestamp

Use of org.apache.kafka.clients.consumer.OffsetAndTimestamp in project kafka by apache.

Class FetcherTest, method testGetOffsetsForTimesWithError.

private void testGetOffsetsForTimesWithError(Errors errorForTp0, Errors errorForTp1, long offsetForTp0, long offsetForTp1, Long expectedOffsetForTp0, Long expectedOffsetForTp1) {
    client.reset();
    TopicPartition tp0 = tp;
    TopicPartition tp1 = new TopicPartition(topicName, 1);
    // Ensure metadata has both partitions.
    Cluster cluster = TestUtils.clusterWith(2, topicName, 2);
    metadata.update(cluster, Collections.<String>emptySet(), time.milliseconds());
    // First try should fail due to metadata error.
    client.prepareResponseFrom(listOffsetResponse(tp0, errorForTp0, offsetForTp0, offsetForTp0), cluster.leaderFor(tp0));
    client.prepareResponseFrom(listOffsetResponse(tp1, errorForTp1, offsetForTp1, offsetForTp1), cluster.leaderFor(tp1));
    // Second try should succeed.
    client.prepareResponseFrom(listOffsetResponse(tp0, Errors.NONE, offsetForTp0, offsetForTp0), cluster.leaderFor(tp0));
    client.prepareResponseFrom(listOffsetResponse(tp1, Errors.NONE, offsetForTp1, offsetForTp1), cluster.leaderFor(tp1));
    Map<TopicPartition, Long> timestampToSearch = new HashMap<>();
    timestampToSearch.put(tp0, 0L);
    timestampToSearch.put(tp1, 0L);
    Map<TopicPartition, OffsetAndTimestamp> offsetAndTimestampMap = fetcher.getOffsetsByTimes(timestampToSearch, Long.MAX_VALUE);
    if (expectedOffsetForTp0 == null)
        assertNull(offsetAndTimestampMap.get(tp0));
    else {
        assertEquals(expectedOffsetForTp0.longValue(), offsetAndTimestampMap.get(tp0).timestamp());
        assertEquals(expectedOffsetForTp0.longValue(), offsetAndTimestampMap.get(tp0).offset());
    }
    if (expectedOffsetForTp1 == null)
        assertNull(offsetAndTimestampMap.get(tp1));
    else {
        assertEquals(expectedOffsetForTp1.longValue(), offsetAndTimestampMap.get(tp1).timestamp());
        assertEquals(expectedOffsetForTp1.longValue(), offsetAndTimestampMap.get(tp1).offset());
    }
}
Also used: HashMap (java.util.HashMap), LinkedHashMap (java.util.LinkedHashMap), TopicPartition (org.apache.kafka.common.TopicPartition), Cluster (org.apache.kafka.common.Cluster), OffsetAndTimestamp (org.apache.kafka.clients.consumer.OffsetAndTimestamp)
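
The helper above is parameterized over the error each partition returns on the first attempt, so individual test cases can feed it different Errors values and expected results. The sketch below shows how such a caller might look; the test name, error codes, and offsets are illustrative assumptions, not the actual cases from FetcherTest.

@Test
public void testGetOffsetsForTimesExample() {
    // Baseline (hypothetical values): no errors, both partitions resolve to the prepared offsets/timestamps.
    testGetOffsetsForTimesWithError(Errors.NONE, Errors.NONE, 10L, 100L, 10L, 100L);
    // Retriable error on tp0's first attempt: the fetcher retries, so the second
    // prepared response (Errors.NONE) is used and both partitions still resolve.
    testGetOffsetsForTimesWithError(Errors.NOT_LEADER_FOR_PARTITION, Errors.NONE, 10L, 100L, 10L, 100L);
}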

Example 2 with OffsetAndTimestamp

Use of org.apache.kafka.clients.consumer.OffsetAndTimestamp in project streamsx.kafka by IBMStreams.

Class KafkaConsumerClient, method seekToTimestamp.

private void seekToTimestamp(Map<TopicPartition, Long> topicPartitionTimestampMap) {
    Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes = consumer.offsetsForTimes(topicPartitionTimestampMap);
    logger.debug("offsetsForTimes=" + offsetsForTimes);
    topicPartitionTimestampMap.forEach((tp, timestamp) -> {
        OffsetAndTimestamp ot = offsetsForTimes.get(tp);
        if (ot != null) {
            logger.debug("Seeking consumer for tp=" + tp + " to offsetAndTimestamp=" + ot);
            consumer.seek(tp, ot.offset());
        } else {
            // nothing...consumer will move to the offset as determined by the 'auto.offset.reset' config
        }
    });
}
Also used: TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndTimestamp (org.apache.kafka.clients.consumer.OffsetAndTimestamp)
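
For context, a caller of seekToTimestamp(...) typically maps every assigned partition to the same starting timestamp. The helper below is a minimal sketch of such a caller, assuming the same consumer field and the seekToTimestamp(...) method shown above; it is not part of KafkaConsumerClient.

private void seekAllToTimestamp(long startTimestampMs) {
    // Hypothetical helper: rewind every currently assigned partition to the first
    // record whose timestamp is >= startTimestampMs.
    Map<TopicPartition, Long> topicPartitionTimestampMap = new HashMap<>();
    for (TopicPartition tp : consumer.assignment()) {
        topicPartitionTimestampMap.put(tp, startTimestampMs);
    }
    seekToTimestamp(topicPartitionTimestampMap);
}

Called with, say, System.currentTimeMillis() minus one hour, this replays roughly the last hour of records on every assigned partition; partitions without a matching record keep the behavior described in the comment above, i.e. they fall back to the 'auto.offset.reset' config.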

Aggregations

OffsetAndTimestamp (org.apache.kafka.clients.consumer.OffsetAndTimestamp): 2 uses
TopicPartition (org.apache.kafka.common.TopicPartition): 2 uses
HashMap (java.util.HashMap): 1 use
LinkedHashMap (java.util.LinkedHashMap): 1 use
Cluster (org.apache.kafka.common.Cluster): 1 use