
Example 1 with TimeoutException

Use of org.apache.kafka.common.errors.TimeoutException in project kafka by apache.

From class StoreChangelogReaderTest, method shouldThrowStreamsExceptionIfTimeoutOccursDuringPartitionsFor.

@Test
public void shouldThrowStreamsExceptionIfTimeoutOccursDuringPartitionsFor() throws Exception {
    final MockConsumer<byte[], byte[]> consumer = new MockConsumer<byte[], byte[]>(OffsetResetStrategy.EARLIEST) {

        @Override
        public List<PartitionInfo> partitionsFor(final String topic) {
            throw new TimeoutException("KABOOM!");
        }
    };
    final StoreChangelogReader changelogReader = new StoreChangelogReader(consumer, new MockTime(), 5);
    try {
        changelogReader.validatePartitionExists(topicPartition, "store");
        fail("Should have thrown streams exception");
    } catch (final StreamsException e) {
    // pass
    }
}
Also used: StreamsException(org.apache.kafka.streams.errors.StreamsException) PartitionInfo(org.apache.kafka.common.PartitionInfo) MockConsumer(org.apache.kafka.clients.consumer.MockConsumer) MockTime(org.apache.kafka.common.utils.MockTime) TimeoutException(org.apache.kafka.common.errors.TimeoutException) Test(org.junit.Test)
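
The test above only passes if the changelog reader wraps the consumer's TimeoutException in a StreamsException. Below is a minimal sketch of that wrap-and-rethrow pattern; the class name and method body are illustrative assumptions, not the actual StoreChangelogReader source.

import java.util.List;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.streams.errors.StreamsException;

// Illustrative sketch only: shows the wrap-and-rethrow behaviour the test verifies.
class ChangelogPartitionValidator {

    private final Consumer<byte[], byte[]> consumer;

    ChangelogPartitionValidator(final Consumer<byte[], byte[]> consumer) {
        this.consumer = consumer;
    }

    void validatePartitionExists(final TopicPartition partition, final String storeName) {
        final List<PartitionInfo> partitions;
        try {
            partitions = consumer.partitionsFor(partition.topic());
        } catch (final TimeoutException e) {
            // Surface the client-level timeout as a Streams-level error, as the test expects.
            throw new StreamsException(
                String.format("Could not fetch partition info for topic %s of store %s",
                    partition.topic(), storeName), e);
        }
        // ... verify that partition.partition() appears in `partitions` ...
    }
}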

Example 2 with TimeoutException

Use of org.apache.kafka.common.errors.TimeoutException in project kafka by apache.

From class KafkaProducer, method waitOnMetadata.

/**
     * Wait for cluster metadata including partitions for the given topic to be available.
     * @param topic The topic we want metadata for
     * @param partition A specific partition expected to exist in metadata, or null if there's no preference
     * @param maxWaitMs The maximum time in ms for waiting on the metadata
     * @return The cluster containing topic metadata and the amount of time we waited in ms
     */
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long maxWaitMs) throws InterruptedException {
    // add topic to metadata topic list if it is not there already and reset expiry
    metadata.add(topic);
    Cluster cluster = metadata.fetch();
    Integer partitionsCount = cluster.partitionCountForTopic(topic);
    // Return cached metadata if we have it, and if the record's partition is either undefined or within the known partition range
    if (partitionsCount != null && (partition == null || partition < partitionsCount))
        return new ClusterAndWaitTime(cluster, 0);
    long begin = time.milliseconds();
    long remainingWaitMs = maxWaitMs;
    long elapsed;
    // Issue metadata requests until we have metadata for the topic or maxWaitMs expires.
    // We also get here when cached metadata knows the topic but not the requested partition,
    // in case the cached metadata is stale and the number of partitions has increased in the meantime.
    do {
        log.trace("Requesting metadata update for topic {}.", topic);
        int version = metadata.requestUpdate();
        sender.wakeup();
        try {
            metadata.awaitUpdate(version, remainingWaitMs);
        } catch (TimeoutException ex) {
            // Rethrow with original maxWaitMs to prevent logging exception with remainingWaitMs
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        }
        cluster = metadata.fetch();
        elapsed = time.milliseconds() - begin;
        if (elapsed >= maxWaitMs)
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        if (cluster.unauthorizedTopics().contains(topic))
            throw new TopicAuthorizationException(topic);
        remainingWaitMs = maxWaitMs - elapsed;
        partitionsCount = cluster.partitionCountForTopic(topic);
    } while (partitionsCount == null);
    if (partition != null && partition >= partitionsCount) {
        throw new KafkaException(String.format("Invalid partition given with record: %d is not in the range [0...%d).", partition, partitionsCount));
    }
    return new ClusterAndWaitTime(cluster, elapsed);
}
Also used: AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Cluster(org.apache.kafka.common.Cluster) KafkaException(org.apache.kafka.common.KafkaException) TopicAuthorizationException(org.apache.kafka.common.errors.TopicAuthorizationException) TimeoutException(org.apache.kafka.common.errors.TimeoutException)
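
From the application side, this metadata wait is bounded by the producer's max.block.ms setting, so a send() to a topic whose metadata never becomes available surfaces the TimeoutException constructed above. The following is a hedged usage sketch; the broker address, topic name, and 5-second bound are illustrative values.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative demo: the metadata wait in waitOnMetadata is capped by max.block.ms.
public class MetadataTimeoutDemo {

    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                producer.send(new ProducerRecord<>("some-topic", "key", "value"));
            } catch (final TimeoutException e) {
                // Metadata for the topic did not become available within max.block.ms.
                System.err.println("Metadata wait timed out: " + e.getMessage());
            }
        }
    }
}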

Example 3 with TimeoutException

Use of org.apache.kafka.common.errors.TimeoutException in project kafka by apache.

From class ConsumerNetworkClient, method failExpiredRequests.

private void failExpiredRequests(long now) {
    // clear all expired unsent requests and fail their corresponding futures
    Collection<ClientRequest> expiredRequests = unsent.removeExpiredRequests(now, unsentExpiryMs);
    for (ClientRequest request : expiredRequests) {
        RequestFutureCompletionHandler handler = (RequestFutureCompletionHandler) request.callback();
        handler.onFailure(new TimeoutException("Failed to send request after " + unsentExpiryMs + " ms."));
    }
}
Also used: ClientRequest(org.apache.kafka.clients.ClientRequest) TimeoutException(org.apache.kafka.common.errors.TimeoutException)
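
The TimeoutException used here extends RetriableException, so callers that see it from a blocking client call (for example KafkaConsumer.partitionsFor, which is documented to throw TimeoutException when metadata cannot be fetched in time) usually retry with a backoff. The helper below is an illustrative sketch; the class name, attempt count, and backoff are assumptions, not part of the Kafka API.

import java.util.List;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.errors.TimeoutException;

// Illustrative helper, not part of the Kafka client API.
class PartitionMetadataRetry {

    static List<PartitionInfo> partitionsForWithRetry(final Consumer<?, ?> consumer,
                                                      final String topic) throws InterruptedException {
        TimeoutException lastTimeout = null;
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                return consumer.partitionsFor(topic);
            } catch (final TimeoutException e) {
                lastTimeout = e;         // the request expired before a broker responded
                Thread.sleep(1000L);     // simple fixed backoff between attempts
            }
        }
        throw lastTimeout;
    }
}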

Example 4 with TimeoutException

Use of org.apache.kafka.common.errors.TimeoutException in project kafka by apache.

From class RecordAccumulatorTest, method testAppendInExpiryCallback.

@Test
public void testAppendInExpiryCallback() throws InterruptedException {
    long retryBackoffMs = 100L;
    long lingerMs = 3000L;
    int requestTimeout = 60;
    int messagesPerBatch = 1024 / msgSize;
    final RecordAccumulator accum = new RecordAccumulator(1024, 10 * 1024, CompressionType.NONE, lingerMs, retryBackoffMs, metrics, time);
    final AtomicInteger expiryCallbackCount = new AtomicInteger();
    final AtomicReference<Exception> unexpectedException = new AtomicReference<Exception>();
    Callback callback = new Callback() {

        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception instanceof TimeoutException) {
                expiryCallbackCount.incrementAndGet();
                try {
                    accum.append(tp1, 0L, key, value, null, maxBlockTimeMs);
                } catch (InterruptedException e) {
                    throw new RuntimeException("Unexpected interruption", e);
                }
            } else if (exception != null)
                unexpectedException.compareAndSet(null, exception);
        }
    };
    for (int i = 0; i < messagesPerBatch + 1; i++) accum.append(tp1, 0L, key, value, callback, maxBlockTimeMs);
    assertEquals(2, accum.batches().get(tp1).size());
    assertTrue("First batch not full", accum.batches().get(tp1).peekFirst().isFull());
    // Advance the clock to expire the first batch.
    time.sleep(requestTimeout + 1);
    List<ProducerBatch> expiredBatches = accum.abortExpiredBatches(requestTimeout, time.milliseconds());
    assertEquals("The batch was not expired", 1, expiredBatches.size());
    assertEquals("Callbacks not invoked for expiry", messagesPerBatch, expiryCallbackCount.get());
    assertNull("Unexpected exception", unexpectedException.get());
    assertEquals("Some messages not appended from expiry callbacks", 2, accum.batches().get(tp1).size());
    assertTrue("First batch not full after expiry callbacks with appends", accum.batches().get(tp1).peekFirst().isFull());
}
Also used: AtomicReference(java.util.concurrent.atomic.AtomicReference) TimeoutException(org.apache.kafka.common.errors.TimeoutException) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) Callback(org.apache.kafka.clients.producer.Callback) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Test(org.junit.Test)
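
An application-level analogue of what this test exercises is a producer Callback that re-submits a record whose batch expired with a TimeoutException. The sketch below is illustrative only: re-sending from a callback can reorder records, and real code should cap the number of retries.

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;

// Illustrative sketch: retry a send once when its batch expired with a TimeoutException.
class ExpiryRetrySender {

    static void sendWithExpiryRetry(final Producer<String, String> producer,
                                    final ProducerRecord<String, String> record) {
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(final RecordMetadata metadata, final Exception exception) {
                if (exception instanceof TimeoutException) {
                    producer.send(record);   // single retry; real code should cap and log retries
                } else if (exception != null) {
                    exception.printStackTrace();
                }
            }
        });
    }
}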

Example 5 with TimeoutException

Use of org.apache.kafka.common.errors.TimeoutException in project kafka by apache.

From class ErrorsTest, method testForExceptionInheritance.

@Test
public void testForExceptionInheritance() {
    class ExtendedTimeoutException extends TimeoutException {
    }
    Errors expectedError = Errors.forException(new TimeoutException());
    Errors actualError = Errors.forException(new ExtendedTimeoutException());
    assertEquals("forException should match super classes", expectedError, actualError);
}
Also used: TimeoutException(org.apache.kafka.common.errors.TimeoutException) Test(org.junit.Test)
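
The inheritance check matters because request-handling code maps exceptions to protocol error codes through Errors.forException, so a subclass of TimeoutException must resolve to the same error as TimeoutException itself. A small sketch of that mapping follows; the exception message and class name are illustrative.

import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.protocol.Errors;

// Illustrative demo: map an exception to its protocol-level error code and back.
public class ErrorsMappingDemo {

    public static void main(final String[] args) {
        final Errors error = Errors.forException(new TimeoutException("metadata wait expired"));
        final short code = error.code();               // numeric error code carried in responses
        final Exception canonical = error.exception(); // canonical exception instance for that error
        System.out.println(error + " maps to code " + code
            + " (" + canonical.getClass().getSimpleName() + ")");
    }
}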

Aggregations

TimeoutException (org.apache.kafka.common.errors.TimeoutException): 18
Test (org.junit.Test): 8
Callback (org.apache.kafka.clients.producer.Callback): 5
StreamsException (org.apache.kafka.streams.errors.StreamsException): 4
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 3
ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord): 3
PartitionInfo (org.apache.kafka.common.PartitionInfo): 3
MockTime (org.apache.kafka.common.utils.MockTime): 3
IOException (java.io.IOException): 2
ByteBuffer (java.nio.ByteBuffer): 2
List (java.util.List): 2
Future (java.util.concurrent.Future): 2
MockConsumer (org.apache.kafka.clients.consumer.MockConsumer): 2
MockProducer (org.apache.kafka.clients.producer.MockProducer): 2
RecordMetadata (org.apache.kafka.clients.producer.RecordMetadata): 2
DefaultPartitioner (org.apache.kafka.clients.producer.internals.DefaultPartitioner): 2
Cluster (org.apache.kafka.common.Cluster): 2
KafkaException (org.apache.kafka.common.KafkaException): 2
TopicPartition (org.apache.kafka.common.TopicPartition): 2
TopicAuthorizationException (org.apache.kafka.common.errors.TopicAuthorizationException): 2