Search in sources:

Example 1 with OffsetOutOfRangeException

Use of org.apache.kafka.common.errors.OffsetOutOfRangeException in project cdap by caskdata.

From the class KafkaLogProcessorPipeline, method fetchMessages.

/**
 * Fetch messages from Kafka.
 *
 * @param partition the partition to fetch from
 * @param offset the Kafka offset to fetch from
 * @return An {@link Iterable} of {@link MessageAndOffset}.
 *
 * @throws LeaderNotAvailableException if there is no Kafka broker to talk to.
 * @throws OffsetOutOfRangeException if the given offset is out of range.
 * @throws NotLeaderForPartitionException if the broker that the consumer is talking to is not the leader
 *                                        for the given topic and partition.
 * @throws UnknownTopicOrPartitionException if the topic or partition is not known by the Kafka server.
 * @throws UnknownServerException if the Kafka server responded with an error.
 */
private Iterable<MessageAndOffset> fetchMessages(int partition, long offset) throws KafkaException {
    String topic = config.getTopic();
    KafkaSimpleConsumer consumer = getKafkaConsumer(topic, partition);
    if (consumer == null) {
        throw new LeaderNotAvailableException("No broker to fetch messages for " + topic + ":" + partition);
    }
    LOG.trace("Fetching messages from Kafka on {}:{} for pipeline {} with offset {}", topic, partition, name, offset);
    try {
        ByteBufferMessageSet result = KafkaUtil.fetchMessages(consumer, topic, partition, config.getKafkaFetchBufferSize(), offset);
        LOG.trace("Fetched {} bytes from Kafka on {}:{} for pipeline {}", result.sizeInBytes(), topic, partition, name);
        return result;
    } catch (OffsetOutOfRangeException e) {
        // The requested offset is outside the range retained by the broker.
        // Evict the cached consumer before rethrowing so that the next attempt
        // re-resolves the leader; the caller is expected to recover, typically
        // by resetting to a valid offset.
        kafkaConsumers.remove(consumer.getBrokerInfo());
        throw e;
    }
}
Also used: LeaderNotAvailableException (org.apache.kafka.common.errors.LeaderNotAvailableException), ByteBufferMessageSet (kafka.javaapi.message.ByteBufferMessageSet), OffsetOutOfRangeException (org.apache.kafka.common.errors.OffsetOutOfRangeException)
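
For contrast with this SimpleConsumer-era recovery, here is a minimal sketch of the same recovery with the modern KafkaConsumer. One caveat worth flagging: the new consumer reports this condition as org.apache.kafka.clients.consumer.OffsetOutOfRangeException, a separate class from the org.apache.kafka.common.errors exception used throughout these examples. The one-second poll timeout and the reset-to-beginning policy are illustrative assumptions, not anything CDAP does.

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetOutOfRangeException;

final class PollWithReset {
    // Sketch: with auto.offset.reset=none, poll() surfaces an out-of-range
    // position as an exception instead of resetting silently. Recover by
    // seeking the affected partitions to their earliest offsets and retrying.
    static ConsumerRecords<byte[], byte[]> pollOnce(KafkaConsumer<byte[], byte[]> consumer) {
        try {
            return consumer.poll(Duration.ofSeconds(1));
        } catch (OffsetOutOfRangeException e) {
            // e.partitions() reports exactly which partitions fell out of range.
            consumer.seekToBeginning(e.partitions());
            return consumer.poll(Duration.ofSeconds(1));
        }
    }
}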

Example 2 with OffsetOutOfRangeException

Use of org.apache.kafka.common.errors.OffsetOutOfRangeException in project cdap by caskdata.

From the class KafkaLogProcessorPipeline, method processMessages.

/**
 * Process messages fetched from a given partition.
 */
private boolean processMessages(String topic, int partition, Future<Iterable<MessageAndOffset>> future) throws InterruptedException, KafkaException, IOException {
    Iterable<MessageAndOffset> messages;
    try {
        messages = future.get();
    } catch (ExecutionException e) {
        try {
            throw e.getCause();
        } catch (OffsetOutOfRangeException cause) {
            // This shouldn't happen under normal circumstances. When it does, it is
            // usually caused by a race between Kafka log retention and this fetch,
            // so simply restarting from the earliest available offset is fine
            offsets.put(partition, getLastOffset(partition, kafka.api.OffsetRequest.EarliestTime()));
            return false;
        } catch (KafkaException | IOException cause) {
            throw cause;
        } catch (Throwable t) {
            // For other types of exceptions, wrap them in an IOException. It will be handled by the caller.
            throw new IOException(t);
        }
    }
    boolean processed = false;
    for (MessageAndOffset message : messages) {
        if (eventQueue.getEventSize() >= config.getMaxBufferSize()) {
            // Log a message. If this happens too often, it indicates that more memory is needed for log processing
            OUTAGE_LOG.info("Maximum queue size {} reached for pipeline {}.", config.getMaxBufferSize(), name);
            // If nothing was appended (due to an error), break the loop so that no new events are appended.
            // Since the offset is not updated, the same set of messages will be fetched again in the next iteration.
            int eventsAppended = appendEvents(System.currentTimeMillis(), true);
            if (eventsAppended <= 0) {
                break;
            }
            unSyncedEvents += eventsAppended;
        }
        try {
            metricsContext.increment("kafka.bytes.read", message.message().payloadSize());
            ILoggingEvent loggingEvent = serializer.fromBytes(message.message().payload());
            // Use the message payload size as the size estimate of the logging event.
            // Although it's not the same as the in-memory object size, the two should differ
            // by roughly a constant factor, so it is proportional to the actual object size.
            eventQueue.add(loggingEvent, loggingEvent.getTimeStamp(), message.message().payloadSize(), partition, new OffsetTime(message.nextOffset(), loggingEvent.getTimeStamp()));
        } catch (IOException e) {
            // This shouldn't happen. If it does (e.g. someone published garbage), just skip the message.
            LOG.trace("Failed to decode logging event from {}:{} at offset {}. Skipping it.", topic, partition, message.offset(), e);
        }
        processed = true;
        offsets.put(partition, message.nextOffset());
    }
    return processed;
}
Also used: MessageAndOffset (kafka.message.MessageAndOffset), IOException (java.io.IOException), ExecutionException (java.util.concurrent.ExecutionException), OffsetOutOfRangeException (org.apache.kafka.common.errors.OffsetOutOfRangeException), ILoggingEvent (ch.qos.logback.classic.spi.ILoggingEvent), Checkpoint (co.cask.cdap.logging.meta.Checkpoint)
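
The nested try with throw e.getCause() above is a general idiom: rethrowing the cause inside its own try lets ordinary catch clauses dispatch on the cause's type instead of an instanceof chain. A stripped-down sketch of just that idiom; the class and method names are illustrative, not from the CDAP source.

import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

final class Futures {
    // Unwrap a Future's result, rethrowing known cause types directly and
    // wrapping anything unexpected in an IOException, as processMessages does.
    static <T> T getUnwrapped(Future<T> future) throws IOException, InterruptedException {
        try {
            return future.get();
        } catch (ExecutionException e) {
            try {
                // Throwing the cause lets the catch clauses below dispatch on its type.
                throw e.getCause();
            } catch (IOException | RuntimeException cause) {
                throw cause;
            } catch (Throwable t) {
                throw new IOException(t);
            }
        }
    }
}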

Example 3 with OffsetOutOfRangeException

Use of org.apache.kafka.common.errors.OffsetOutOfRangeException in project cdap by caskdata.

From the class KafkaOffsetResolver, method getStartOffset.

/**
 * Check whether the message fetched with the offset {@code checkpoint.getNextOffset() - 1} contains the
 * same timestamp as in the given checkpoint. If they match, directly return {@code checkpoint.getNextOffset()}.
 * If they don't, search for the smallest offset of a message with the same log event time
 * as {@code checkpoint.getNextEventTime()}.
 *
 * @param checkpoint A {@link Checkpoint} containing the next offset of a message and its log event timestamp.
 *                   {@link Checkpoint#getNextOffset()}, {@link Checkpoint#getNextEventTime()}
 *                   and {@link Checkpoint#getMaxEventTime()} all must return a non-negative long
 * @param partition  the partition in the topic for searching matching offset
 * @return the next offset of the message with the smallest offset whose log event time equals
 * {@code checkpoint.getNextEventTime()}, or {@code -1} if no such offset can be found or
 * {@code checkpoint.getNextOffset()} is negative.
 *
 * @throws LeaderNotAvailableException if there is no Kafka broker to talk to.
 * @throws OffsetOutOfRangeException if the given offset is out of range.
 * @throws NotLeaderForPartitionException if the broker that the consumer is talking to is not the leader
 *                                        for the given topic and partition.
 * @throws UnknownTopicOrPartitionException if the topic or partition is not known by the Kafka server.
 * @throws UnknownServerException if the Kafka server responded with an error.
 */
long getStartOffset(final Checkpoint checkpoint, final int partition) {
    // A checkpoint with a non-positive next offset is invalid; this should never happen
    Preconditions.checkArgument(checkpoint.getNextOffset() > 0, "Invalid checkpoint offset");
    // Get BrokerInfo for constructing SimpleConsumer
    String topic = config.getTopic();
    BrokerInfo brokerInfo = brokerService.getLeader(topic, partition);
    if (brokerInfo == null) {
        throw new LeaderNotAvailableException(String.format("BrokerInfo from BrokerService is null for topic %s partition %d. Will retry in next run.", topic, partition));
    }
    SimpleConsumer consumer = new SimpleConsumer(brokerInfo.getHost(), brokerInfo.getPort(), SO_TIMEOUT_MILLIS, BUFFER_SIZE, "offset-finder-" + topic + "-" + partition);
    // Fetch the message at offset checkpoint.getNextOffset() - 1 and compare its
    // timestamp with the one recorded in the checkpoint
    long offset = checkpoint.getNextOffset() - 1;
    try {
        long timestamp = getEventTimeByOffset(consumer, partition, offset);
        if (timestamp == checkpoint.getNextEventTime()) {
            return checkpoint.getNextOffset();
        }
        // This can happen in a replicated cluster
        LOG.debug("Event timestamp in {}:{} at offset {} is {}. It doesn't match the checkpoint timestamp {}", topic, partition, offset, timestamp, checkpoint.getNextEventTime());
    } catch (NotFoundException | OffsetOutOfRangeException e) {
        // The timestamp could not be found; this can happen in a replicated cluster
        LOG.debug("Cannot get valid log event in {}:{} at offset {}", topic, partition, offset);
    }
    // Find offset that has an event that matches the timestamp
    long nextOffset = findStartOffset(consumer, partition, checkpoint.getNextEventTime());
    LOG.debug("Found new nextOffset {} for topic {} partition {} with existing checkpoint {}.", nextOffset, topic, partition, checkpoint);
    return nextOffset;
}
Also used: NotFoundException (co.cask.cdap.common.NotFoundException), LeaderNotAvailableException (org.apache.kafka.common.errors.LeaderNotAvailableException), OffsetOutOfRangeException (org.apache.kafka.common.errors.OffsetOutOfRangeException), SimpleConsumer (kafka.javaapi.consumer.SimpleConsumer), BrokerInfo (org.apache.twill.kafka.client.BrokerInfo)
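
The underlying idea is that a raw offset is not stable across leader changes in a replicated cluster, so the checkpoint pairs the offset with an event timestamp and re-verifies it on resume. A minimal sketch of that decision in isolation, where fetchTimestampAt and findSmallestOffsetForTime are hypothetical stand-ins for the getEventTimeByOffset and findStartOffset helpers above:

import java.util.function.LongUnaryOperator;

final class CheckpointVerify {
    // Trust the checkpointed offset only if the event stored just before it
    // still carries the checkpointed timestamp; otherwise rescan by time.
    static long resolveStartOffset(long nextOffset, long checkpointTime,
                                   LongUnaryOperator fetchTimestampAt,
                                   LongUnaryOperator findSmallestOffsetForTime) {
        if (nextOffset > 0 && fetchTimestampAt.applyAsLong(nextOffset - 1) == checkpointTime) {
            return nextOffset;  // checkpoint still valid on this replica
        }
        return findSmallestOffsetForTime.applyAsLong(checkpointTime);
    }
}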

Example 4 with OffsetOutOfRangeException

Use of org.apache.kafka.common.errors.OffsetOutOfRangeException in project apache-kafka-on-k8s by banzaicloud.

From the class KafkaAdminClientTest, method testDeleteRecords.

@Test
public void testDeleteRecords() throws Exception {
    HashMap<Integer, Node> nodes = new HashMap<>();
    nodes.put(0, new Node(0, "localhost", 8121));
    List<PartitionInfo> partitionInfos = new ArrayList<>();
    partitionInfos.add(new PartitionInfo("my_topic", 0, nodes.get(0), new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    partitionInfos.add(new PartitionInfo("my_topic", 1, nodes.get(0), new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    partitionInfos.add(new PartitionInfo("my_topic", 2, null, new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    partitionInfos.add(new PartitionInfo("my_topic", 3, nodes.get(0), new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    partitionInfos.add(new PartitionInfo("my_topic", 4, nodes.get(0), new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    Cluster cluster = new Cluster("mockClusterId", nodes.values(), partitionInfos, Collections.<String>emptySet(), Collections.<String>emptySet(), nodes.get(0));
    TopicPartition myTopicPartition0 = new TopicPartition("my_topic", 0);
    TopicPartition myTopicPartition1 = new TopicPartition("my_topic", 1);
    TopicPartition myTopicPartition2 = new TopicPartition("my_topic", 2);
    TopicPartition myTopicPartition3 = new TopicPartition("my_topic", 3);
    TopicPartition myTopicPartition4 = new TopicPartition("my_topic", 4);
    try (AdminClientUnitTestEnv env = new AdminClientUnitTestEnv(cluster)) {
        env.kafkaClient().setNodeApiVersions(NodeApiVersions.create());
        env.kafkaClient().prepareMetadataUpdate(env.cluster(), Collections.<String>emptySet());
        env.kafkaClient().setNode(env.cluster().nodes().get(0));
        Map<TopicPartition, DeleteRecordsResponse.PartitionResponse> m = new HashMap<>();
        m.put(myTopicPartition0, new DeleteRecordsResponse.PartitionResponse(3, Errors.NONE));
        m.put(myTopicPartition1, new DeleteRecordsResponse.PartitionResponse(DeleteRecordsResponse.INVALID_LOW_WATERMARK, Errors.OFFSET_OUT_OF_RANGE));
        m.put(myTopicPartition3, new DeleteRecordsResponse.PartitionResponse(DeleteRecordsResponse.INVALID_LOW_WATERMARK, Errors.NOT_LEADER_FOR_PARTITION));
        m.put(myTopicPartition4, new DeleteRecordsResponse.PartitionResponse(DeleteRecordsResponse.INVALID_LOW_WATERMARK, Errors.UNKNOWN_TOPIC_OR_PARTITION));
        List<MetadataResponse.TopicMetadata> t = new ArrayList<>();
        List<MetadataResponse.PartitionMetadata> p = new ArrayList<>();
        p.add(new MetadataResponse.PartitionMetadata(Errors.NONE, 0, nodes.get(0), Collections.singletonList(nodes.get(0)), Collections.singletonList(nodes.get(0)), Collections.<Node>emptyList()));
        p.add(new MetadataResponse.PartitionMetadata(Errors.NONE, 1, nodes.get(0), Collections.singletonList(nodes.get(0)), Collections.singletonList(nodes.get(0)), Collections.<Node>emptyList()));
        p.add(new MetadataResponse.PartitionMetadata(Errors.LEADER_NOT_AVAILABLE, 2, null, Collections.singletonList(nodes.get(0)), Collections.singletonList(nodes.get(0)), Collections.<Node>emptyList()));
        p.add(new MetadataResponse.PartitionMetadata(Errors.NONE, 3, nodes.get(0), Collections.singletonList(nodes.get(0)), Collections.singletonList(nodes.get(0)), Collections.<Node>emptyList()));
        p.add(new MetadataResponse.PartitionMetadata(Errors.NONE, 4, nodes.get(0), Collections.singletonList(nodes.get(0)), Collections.singletonList(nodes.get(0)), Collections.<Node>emptyList()));
        t.add(new MetadataResponse.TopicMetadata(Errors.NONE, "my_topic", false, p));
        env.kafkaClient().prepareResponse(new MetadataResponse(cluster.nodes(), cluster.clusterResource().clusterId(), cluster.controller().id(), t));
        env.kafkaClient().prepareResponse(new DeleteRecordsResponse(0, m));
        Map<TopicPartition, RecordsToDelete> recordsToDelete = new HashMap<>();
        recordsToDelete.put(myTopicPartition0, RecordsToDelete.beforeOffset(3L));
        recordsToDelete.put(myTopicPartition1, RecordsToDelete.beforeOffset(10L));
        recordsToDelete.put(myTopicPartition2, RecordsToDelete.beforeOffset(10L));
        recordsToDelete.put(myTopicPartition3, RecordsToDelete.beforeOffset(10L));
        recordsToDelete.put(myTopicPartition4, RecordsToDelete.beforeOffset(10L));
        DeleteRecordsResult results = env.adminClient().deleteRecords(recordsToDelete);
        // success on records deletion for partition 0
        Map<TopicPartition, KafkaFuture<DeletedRecords>> values = results.lowWatermarks();
        KafkaFuture<DeletedRecords> myTopicPartition0Result = values.get(myTopicPartition0);
        long lowWatermark = myTopicPartition0Result.get().lowWatermark();
        assertEquals(3, lowWatermark);
        // "offset out of range" failure on records deletion for partition 1
        KafkaFuture<DeletedRecords> myTopicPartition1Result = values.get(myTopicPartition1);
        try {
            myTopicPartition1Result.get();
            fail("get() should throw ExecutionException");
        } catch (ExecutionException e0) {
            assertTrue(e0.getCause() instanceof OffsetOutOfRangeException);
        }
        // "leader not available" failure on metadata request for partition 2
        KafkaFuture<DeletedRecords> myTopicPartition2Result = values.get(myTopicPartition2);
        try {
            myTopicPartition2Result.get();
            fail("get() should throw ExecutionException");
        } catch (ExecutionException e1) {
            assertTrue(e1.getCause() instanceof LeaderNotAvailableException);
        }
        // "not leader for partition" failure on records deletion for partition 3
        KafkaFuture<DeletedRecords> myTopicPartition3Result = values.get(myTopicPartition3);
        try {
            myTopicPartition3Result.get();
            fail("get() should throw ExecutionException");
        } catch (ExecutionException e1) {
            assertTrue(e1.getCause() instanceof NotLeaderForPartitionException);
        }
        // "unknown topic or partition" failure on records deletion for partition 4
        KafkaFuture<DeletedRecords> myTopicPartition4Result = values.get(myTopicPartition4);
        try {
            myTopicPartition4Result.get();
            fail("get() should throw ExecutionException");
        } catch (ExecutionException e1) {
            assertTrue(e1.getCause() instanceof UnknownTopicOrPartitionException);
        }
    }
}
Also used: HashMap (java.util.HashMap), Node (org.apache.kafka.common.Node), UnknownTopicOrPartitionException (org.apache.kafka.common.errors.UnknownTopicOrPartitionException), ArrayList (java.util.ArrayList), LeaderNotAvailableException (org.apache.kafka.common.errors.LeaderNotAvailableException), MetadataResponse (org.apache.kafka.common.requests.MetadataResponse), DeleteRecordsResponse (org.apache.kafka.common.requests.DeleteRecordsResponse), PartitionInfo (org.apache.kafka.common.PartitionInfo), ExecutionException (java.util.concurrent.ExecutionException), NotLeaderForPartitionException (org.apache.kafka.common.errors.NotLeaderForPartitionException), KafkaFuture (org.apache.kafka.common.KafkaFuture), Cluster (org.apache.kafka.common.Cluster), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetOutOfRangeException (org.apache.kafka.common.errors.OffsetOutOfRangeException), Test (org.junit.Test)
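
Outside of tests, application code consumes the same per-partition futures this test asserts on. A hedged sketch of such a caller, assuming an already-configured AdminClient; the reporting format is an illustrative choice:

import java.util.Map;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DeletedRecords;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.OffsetOutOfRangeException;

final class DeleteRecordsReport {
    // Issue deleteRecords and report each partition's outcome individually,
    // treating OFFSET_OUT_OF_RANGE as a per-partition failure rather than
    // aborting the whole batch.
    static void deleteAndReport(AdminClient admin, Map<TopicPartition, RecordsToDelete> plan)
            throws InterruptedException {
        Map<TopicPartition, KafkaFuture<DeletedRecords>> futures =
                admin.deleteRecords(plan).lowWatermarks();
        for (Map.Entry<TopicPartition, KafkaFuture<DeletedRecords>> entry : futures.entrySet()) {
            try {
                System.out.printf("%s: low watermark now %d%n",
                        entry.getKey(), entry.getValue().get().lowWatermark());
            } catch (ExecutionException e) {
                if (e.getCause() instanceof OffsetOutOfRangeException) {
                    System.out.printf("%s: requested offset out of range%n", entry.getKey());
                } else {
                    System.out.printf("%s: delete failed: %s%n", entry.getKey(), e.getCause());
                }
            }
        }
    }
}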

Example 5 with OffsetOutOfRangeException

Use of org.apache.kafka.common.errors.OffsetOutOfRangeException in project kafka by apache.

From the class KafkaAdminClientTest, method testDeleteRecords.

@Test
public void testDeleteRecords() throws Exception {
    HashMap<Integer, Node> nodes = new HashMap<>();
    nodes.put(0, new Node(0, "localhost", 8121));
    List<PartitionInfo> partitionInfos = new ArrayList<>();
    partitionInfos.add(new PartitionInfo("my_topic", 0, nodes.get(0), new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    partitionInfos.add(new PartitionInfo("my_topic", 1, nodes.get(0), new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    partitionInfos.add(new PartitionInfo("my_topic", 2, null, new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    partitionInfos.add(new PartitionInfo("my_topic", 3, nodes.get(0), new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    partitionInfos.add(new PartitionInfo("my_topic", 4, nodes.get(0), new Node[] { nodes.get(0) }, new Node[] { nodes.get(0) }));
    Cluster cluster = new Cluster("mockClusterId", nodes.values(), partitionInfos, Collections.<String>emptySet(), Collections.<String>emptySet(), nodes.get(0));
    TopicPartition myTopicPartition0 = new TopicPartition("my_topic", 0);
    TopicPartition myTopicPartition1 = new TopicPartition("my_topic", 1);
    TopicPartition myTopicPartition2 = new TopicPartition("my_topic", 2);
    TopicPartition myTopicPartition3 = new TopicPartition("my_topic", 3);
    TopicPartition myTopicPartition4 = new TopicPartition("my_topic", 4);
    try (AdminClientUnitTestEnv env = new AdminClientUnitTestEnv(cluster)) {
        env.kafkaClient().setNodeApiVersions(NodeApiVersions.create());
        DeleteRecordsResponseData m = new DeleteRecordsResponseData();
        m.topics().add(new DeleteRecordsResponseData.DeleteRecordsTopicResult()
            .setName(myTopicPartition0.topic())
            .setPartitions(new DeleteRecordsResponseData.DeleteRecordsPartitionResultCollection(asList(
                new DeleteRecordsResponseData.DeleteRecordsPartitionResult()
                    .setPartitionIndex(myTopicPartition0.partition())
                    .setLowWatermark(3)
                    .setErrorCode(Errors.NONE.code()),
                new DeleteRecordsResponseData.DeleteRecordsPartitionResult()
                    .setPartitionIndex(myTopicPartition1.partition())
                    .setLowWatermark(DeleteRecordsResponse.INVALID_LOW_WATERMARK)
                    .setErrorCode(Errors.OFFSET_OUT_OF_RANGE.code()),
                new DeleteRecordsResponseData.DeleteRecordsPartitionResult()
                    .setPartitionIndex(myTopicPartition3.partition())
                    .setLowWatermark(DeleteRecordsResponse.INVALID_LOW_WATERMARK)
                    .setErrorCode(Errors.NOT_LEADER_OR_FOLLOWER.code()),
                new DeleteRecordsResponseData.DeleteRecordsPartitionResult()
                    .setPartitionIndex(myTopicPartition4.partition())
                    .setLowWatermark(DeleteRecordsResponse.INVALID_LOW_WATERMARK)
                    .setErrorCode(Errors.UNKNOWN_TOPIC_OR_PARTITION.code()))
                .iterator())));
        List<MetadataResponse.TopicMetadata> t = new ArrayList<>();
        List<MetadataResponse.PartitionMetadata> p = new ArrayList<>();
        p.add(new MetadataResponse.PartitionMetadata(Errors.NONE, myTopicPartition0, Optional.of(nodes.get(0).id()), Optional.of(5), singletonList(nodes.get(0).id()), singletonList(nodes.get(0).id()), Collections.emptyList()));
        p.add(new MetadataResponse.PartitionMetadata(Errors.NONE, myTopicPartition1, Optional.of(nodes.get(0).id()), Optional.of(5), singletonList(nodes.get(0).id()), singletonList(nodes.get(0).id()), Collections.emptyList()));
        p.add(new MetadataResponse.PartitionMetadata(Errors.LEADER_NOT_AVAILABLE, myTopicPartition2, Optional.empty(), Optional.empty(), singletonList(nodes.get(0).id()), singletonList(nodes.get(0).id()), Collections.emptyList()));
        p.add(new MetadataResponse.PartitionMetadata(Errors.NONE, myTopicPartition3, Optional.of(nodes.get(0).id()), Optional.of(5), singletonList(nodes.get(0).id()), singletonList(nodes.get(0).id()), Collections.emptyList()));
        p.add(new MetadataResponse.PartitionMetadata(Errors.NONE, myTopicPartition4, Optional.of(nodes.get(0).id()), Optional.of(5), singletonList(nodes.get(0).id()), singletonList(nodes.get(0).id()), Collections.emptyList()));
        t.add(new MetadataResponse.TopicMetadata(Errors.NONE, "my_topic", false, p));
        env.kafkaClient().prepareResponse(RequestTestUtils.metadataResponse(cluster.nodes(), cluster.clusterResource().clusterId(), cluster.controller().id(), t));
        env.kafkaClient().prepareResponse(new DeleteRecordsResponse(m));
        Map<TopicPartition, RecordsToDelete> recordsToDelete = new HashMap<>();
        recordsToDelete.put(myTopicPartition0, RecordsToDelete.beforeOffset(3L));
        recordsToDelete.put(myTopicPartition1, RecordsToDelete.beforeOffset(10L));
        recordsToDelete.put(myTopicPartition2, RecordsToDelete.beforeOffset(10L));
        recordsToDelete.put(myTopicPartition3, RecordsToDelete.beforeOffset(10L));
        recordsToDelete.put(myTopicPartition4, RecordsToDelete.beforeOffset(10L));
        DeleteRecordsResult results = env.adminClient().deleteRecords(recordsToDelete);
        // success on records deletion for partition 0
        Map<TopicPartition, KafkaFuture<DeletedRecords>> values = results.lowWatermarks();
        KafkaFuture<DeletedRecords> myTopicPartition0Result = values.get(myTopicPartition0);
        long lowWatermark = myTopicPartition0Result.get().lowWatermark();
        assertEquals(3, lowWatermark);
        // "offset out of range" failure on records deletion for partition 1
        KafkaFuture<DeletedRecords> myTopicPartition1Result = values.get(myTopicPartition1);
        try {
            myTopicPartition1Result.get();
            fail("get() should throw ExecutionException");
        } catch (ExecutionException e0) {
            assertTrue(e0.getCause() instanceof OffsetOutOfRangeException);
        }
        // "leader not available" failure on metadata request for partition 2
        KafkaFuture<DeletedRecords> myTopicPartition2Result = values.get(myTopicPartition2);
        try {
            myTopicPartition2Result.get();
            fail("get() should throw ExecutionException");
        } catch (ExecutionException e1) {
            assertTrue(e1.getCause() instanceof LeaderNotAvailableException);
        }
        // "not leader for partition" failure on records deletion for partition 3
        KafkaFuture<DeletedRecords> myTopicPartition3Result = values.get(myTopicPartition3);
        try {
            myTopicPartition3Result.get();
            fail("get() should throw ExecutionException");
        } catch (ExecutionException e1) {
            assertTrue(e1.getCause() instanceof NotLeaderOrFollowerException);
        }
        // "unknown topic or partition" failure on records deletion for partition 4
        KafkaFuture<DeletedRecords> myTopicPartition4Result = values.get(myTopicPartition4);
        try {
            myTopicPartition4Result.get();
            fail("get() should throw ExecutionException");
        } catch (ExecutionException e1) {
            assertTrue(e1.getCause() instanceof UnknownTopicOrPartitionException);
        }
    }
}
Also used: HashMap (java.util.HashMap), Node (org.apache.kafka.common.Node), UnknownTopicOrPartitionException (org.apache.kafka.common.errors.UnknownTopicOrPartitionException), ArrayList (java.util.ArrayList), LeaderNotAvailableException (org.apache.kafka.common.errors.LeaderNotAvailableException), MetadataResponse (org.apache.kafka.common.requests.MetadataResponse), DeleteRecordsResponse (org.apache.kafka.common.requests.DeleteRecordsResponse), PartitionInfo (org.apache.kafka.common.PartitionInfo), NotLeaderOrFollowerException (org.apache.kafka.common.errors.NotLeaderOrFollowerException), ExecutionException (java.util.concurrent.ExecutionException), KafkaFuture (org.apache.kafka.common.KafkaFuture), Cluster (org.apache.kafka.common.Cluster), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetOutOfRangeException (org.apache.kafka.common.errors.OffsetOutOfRangeException), DeleteRecordsResponseData (org.apache.kafka.common.message.DeleteRecordsResponseData), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest), Test (org.junit.jupiter.api.Test)
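
The substantive difference from Example 4 is the partition-3 error: newer brokers return NOT_LEADER_OR_FOLLOWER, which clients surface as NotLeaderOrFollowerException. That class was introduced as a subclass of the now-deprecated NotLeaderForPartitionException, so code that must run against both client generations can catch the older type. A minimal sketch; the helper name is illustrative:

import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.DeletedRecords;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.NotLeaderForPartitionException;

final class LeaderErrorCompat {
    // Catching the deprecated parent type also matches the newer
    // NotLeaderOrFollowerException subclass.
    static boolean failedOnLeadership(KafkaFuture<DeletedRecords> partitionResult)
            throws InterruptedException {
        try {
            partitionResult.get();
            return false;
        } catch (ExecutionException e) {
            return e.getCause() instanceof NotLeaderForPartitionException;
        }
    }
}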

Aggregations

OffsetOutOfRangeException (org.apache.kafka.common.errors.OffsetOutOfRangeException): 5 uses
LeaderNotAvailableException (org.apache.kafka.common.errors.LeaderNotAvailableException): 4 uses
ExecutionException (java.util.concurrent.ExecutionException): 3 uses
ArrayList (java.util.ArrayList): 2 uses
HashMap (java.util.HashMap): 2 uses
Cluster (org.apache.kafka.common.Cluster): 2 uses
KafkaFuture (org.apache.kafka.common.KafkaFuture): 2 uses
Node (org.apache.kafka.common.Node): 2 uses
PartitionInfo (org.apache.kafka.common.PartitionInfo): 2 uses
TopicPartition (org.apache.kafka.common.TopicPartition): 2 uses
UnknownTopicOrPartitionException (org.apache.kafka.common.errors.UnknownTopicOrPartitionException): 2 uses
DeleteRecordsResponse (org.apache.kafka.common.requests.DeleteRecordsResponse): 2 uses
MetadataResponse (org.apache.kafka.common.requests.MetadataResponse): 2 uses
ILoggingEvent (ch.qos.logback.classic.spi.ILoggingEvent): 1 use
NotFoundException (co.cask.cdap.common.NotFoundException): 1 use
Checkpoint (co.cask.cdap.logging.meta.Checkpoint): 1 use
IOException (java.io.IOException): 1 use
SimpleConsumer (kafka.javaapi.consumer.SimpleConsumer): 1 use
ByteBufferMessageSet (kafka.javaapi.message.ByteBufferMessageSet): 1 use
MessageAndOffset (kafka.message.MessageAndOffset): 1 use