Example 66 with StreamsException

Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.

The class RepartitionTopicsTest, method shouldReturnMissingSourceTopics.

@Test
public void shouldReturnMissingSourceTopics() {
    final Set<String> missingSourceTopics = mkSet(SOURCE_TOPIC_NAME1);
    expect(internalTopologyBuilder.subtopologyToTopicsInfo()).andReturn(mkMap(mkEntry(SUBTOPOLOGY_0, TOPICS_INFO1), mkEntry(SUBTOPOLOGY_1, TOPICS_INFO2)));
    expect(internalTopologyBuilder.copartitionGroups()).andReturn(Collections.emptyList());
    copartitionedTopicsEnforcer.enforce(eq(Collections.emptySet()), anyObject(), eq(clusterMetadata));
    expect(internalTopicManager.makeReady(mkMap(mkEntry(REPARTITION_TOPIC_NAME1, REPARTITION_TOPIC_CONFIG1)))).andReturn(Collections.emptySet());
    setupClusterWithMissingTopics(missingSourceTopics);
    replay(internalTopicManager, internalTopologyBuilder, clusterMetadata);
    final RepartitionTopics repartitionTopics = new RepartitionTopics(new TopologyMetadata(internalTopologyBuilder, config), internalTopicManager, copartitionedTopicsEnforcer, clusterMetadata, "[test] ");
    repartitionTopics.setup();
    assertThat(repartitionTopics.topologiesWithMissingInputTopics(), equalTo(Collections.singleton(UNNAMED_TOPOLOGY)));
    final StreamsException exception = repartitionTopics.missingSourceTopicExceptions().poll();
    assertThat(exception, notNullValue());
    assertThat(exception.taskId().isPresent(), is(true));
    assertThat(exception.taskId().get(), equalTo(new TaskId(0, 0)));
}
Also used : TaskId (org.apache.kafka.streams.processor.TaskId), StreamsException (org.apache.kafka.streams.errors.StreamsException), PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest), Test (org.junit.Test)
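
The last two assertions exercise StreamsException#taskId(), which carries the failing task's id when the error is task-scoped, as it is for a missing source topic. As a minimal sketch of how an application might consume that id (the topology, props, and log objects are assumed to exist elsewhere), the uncaught exception handler below inspects it before deciding how to react:

import java.util.Optional;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsException;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;
import org.apache.kafka.streams.processor.TaskId;

final KafkaStreams streams = new KafkaStreams(topology, props);
streams.setUncaughtExceptionHandler(throwable -> {
    if (throwable instanceof StreamsException) {
        // taskId() is Optional: it is only populated for task-scoped failures
        final Optional<TaskId> taskId = ((StreamsException) throwable).taskId();
        taskId.ifPresent(id -> log.error("Task {} failed; check its source topics", id));
    }
    // shut the client down; REPLACE_THREAD is an alternative for transient errors
    return StreamThreadExceptionResponse.SHUTDOWN_CLIENT;
});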

Example 67 with StreamsException

Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.

The class RecordCollectorImpl, method recordSendError.

private void recordSendError(final String topic, final Exception exception, final ProducerRecord<byte[], byte[]> serializedRecord) {
    String errorMessage = String.format(SEND_EXCEPTION_MESSAGE, topic, taskId, exception.toString());
    if (isFatalException(exception)) {
        errorMessage += "\nWritten offsets would not be recorded and no more records would be sent since this is a fatal error.";
        sendException.set(new StreamsException(errorMessage, exception));
    } else if (exception instanceof ProducerFencedException || exception instanceof InvalidProducerEpochException || exception instanceof OutOfOrderSequenceException) {
        errorMessage += "\nWritten offsets would not be recorded and no more records would be sent since the producer is fenced, " + "indicating the task may be migrated out";
        sendException.set(new TaskMigratedException(errorMessage, exception));
    } else {
        if (exception instanceof RetriableException) {
            errorMessage += "\nThe broker is either slow or in bad state (like not having enough replicas) in responding the request, " + "or the connection to broker was interrupted sending the request or receiving the response. " + "\nConsider overwriting `max.block.ms` and /or " + "`delivery.timeout.ms` to a larger value to wait longer for such scenarios and avoid timeout errors";
            sendException.set(new TaskCorruptedException(Collections.singleton(taskId)));
        } else {
            if (productionExceptionHandler.handle(serializedRecord, exception) == ProductionExceptionHandlerResponse.FAIL) {
                errorMessage += "\nException handler choose to FAIL the processing, no more records would be sent.";
                sendException.set(new StreamsException(errorMessage, exception));
            } else {
                errorMessage += "\nException handler choose to CONTINUE processing in spite of this error but written offsets would not be recorded.";
                droppedRecordsSensor.record();
            }
        }
    }
    log.error(errorMessage, exception);
}
Also used : InvalidProducerEpochException (org.apache.kafka.common.errors.InvalidProducerEpochException), TaskCorruptedException (org.apache.kafka.streams.errors.TaskCorruptedException), StreamsException (org.apache.kafka.streams.errors.StreamsException), ProducerFencedException (org.apache.kafka.common.errors.ProducerFencedException), OutOfOrderSequenceException (org.apache.kafka.common.errors.OutOfOrderSequenceException), TaskMigratedException (org.apache.kafka.streams.errors.TaskMigratedException), RetriableException (org.apache.kafka.common.errors.RetriableException)
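
recordSendError() delegates non-fatal, non-retriable errors to the configured ProductionExceptionHandler. Below is a hedged sketch of a custom handler; the class name and the policy (tolerate oversized records, fail on everything else) are illustrative, but the interface itself is the public Kafka Streams API consulted above:

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

public class DropOversizedRecordsHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        if (exception instanceof RecordTooLargeException) {
            // CONTINUE makes recordSendError() bump the dropped-records sensor and move on
            return ProductionExceptionHandlerResponse.CONTINUE;
        }
        // FAIL makes recordSendError() set a StreamsException, killing the thread
        return ProductionExceptionHandlerResponse.FAIL;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // nothing to configure in this sketch
    }
}

It would be registered via props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG, DropOversizedRecordsHandler.class).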

Example 68 with StreamsException

Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.

The class RecordCollectorImpl, method send.

@Override
public <K, V> void send(final String topic, final K key, final V value, final Headers headers, final Integer partition, final Long timestamp, final Serializer<K> keySerializer, final Serializer<V> valueSerializer) {
    checkForException();
    final byte[] keyBytes;
    final byte[] valBytes;
    try {
        keyBytes = keySerializer.serialize(topic, headers, key);
        valBytes = valueSerializer.serialize(topic, headers, value);
    } catch (final ClassCastException exception) {
        final String keyClass = key == null ? "unknown because key is null" : key.getClass().getName();
        final String valueClass = value == null ? "unknown because value is null" : value.getClass().getName();
        throw new StreamsException(String.format("ClassCastException while producing data to topic %s. " + "A serializer (key: %s / value: %s) is not compatible to the actual key or value type " + "(key type: %s / value type: %s). " + "Change the default Serdes in StreamConfig or provide correct Serdes via method parameters " + "(for example if using the DSL, `#to(String topic, Produced<K, V> produced)` with " + "`Produced.keySerde(WindowedSerdes.timeWindowedSerdeFrom(String.class))`).", topic, keySerializer.getClass().getName(), valueSerializer.getClass().getName(), keyClass, valueClass), exception);
    } catch (final RuntimeException exception) {
        final String errorMessage = String.format(SEND_EXCEPTION_MESSAGE, topic, taskId, exception.toString());
        throw new StreamsException(errorMessage, exception);
    }
    final ProducerRecord<byte[], byte[]> serializedRecord = new ProducerRecord<>(topic, partition, timestamp, keyBytes, valBytes, headers);
    streamsProducer.send(serializedRecord, (metadata, exception) -> {
        // if there's already an exception record, skip logging offsets or new exceptions
        if (sendException.get() != null) {
            return;
        }
        if (exception == null) {
            final TopicPartition tp = new TopicPartition(metadata.topic(), metadata.partition());
            if (metadata.offset() >= 0L) {
                offsets.put(tp, metadata.offset());
            } else {
                log.warn("Received offset={} in produce response for {}", metadata.offset(), tp);
            }
        } else {
            recordSendError(topic, exception, serializedRecord);
            // KAFKA-7510 only put message key and value in TRACE level log so we don't leak data by default
            log.trace("Failed record: (key {} value {} timestamp {}) topic=[{}] partition=[{}]", key, value, timestamp, topic, partition);
        }
    });
}
Also used : TopicPartition (org.apache.kafka.common.TopicPartition), StreamsException (org.apache.kafka.streams.errors.StreamsException), ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord)
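
The ClassCastException branch above fires when the configured default Serdes do not match the runtime key and value types. A minimal sketch of the fix the error message suggests, assuming a KStream<String, Long> named counts and an illustrative output topic name:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.Produced;

// explicit Serdes at the sink override mismatched defaults from StreamsConfig
counts.to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));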

Example 69 with StreamsException

Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.

The class RecordCollectorImpl, method send.

/**
 * @throws StreamsException fatal error that should cause the thread to die
 * @throws TaskMigratedException recoverable error that would cause the task to be removed
 */
@Override
public <K, V> void send(final String topic, final K key, final V value, final Headers headers, final Long timestamp, final Serializer<K> keySerializer, final Serializer<V> valueSerializer, final StreamPartitioner<? super K, ? super V> partitioner) {
    final Integer partition;
    if (partitioner != null) {
        final List<PartitionInfo> partitions;
        try {
            partitions = streamsProducer.partitionsFor(topic);
        } catch (final TimeoutException timeoutException) {
            log.warn("Could not get partitions for topic {}, will retry", topic);
            // re-throw to trigger `task.timeout.ms`
            throw timeoutException;
        } catch (final KafkaException fatal) {
            // here we cannot drop the message on the floor even if it is a transient timeout exception,
            // so we treat everything the same as a fatal exception
            throw new StreamsException("Could not determine the number of partitions for topic '" + topic + "' for task " + taskId + " due to " + fatal.toString(), fatal);
        }
        if (partitions.size() > 0) {
            partition = partitioner.partition(topic, key, value, partitions.size());
        } else {
            throw new StreamsException("Could not get partition information for topic " + topic + " for task " + taskId + ". This can happen if the topic does not exist.");
        }
    } else {
        partition = null;
    }
    send(topic, key, value, headers, partition, timestamp, keySerializer, valueSerializer);
}
Also used : StreamsException (org.apache.kafka.streams.errors.StreamsException), KafkaException (org.apache.kafka.common.KafkaException), PartitionInfo (org.apache.kafka.common.PartitionInfo), TimeoutException (org.apache.kafka.common.errors.TimeoutException)
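
The partitioner consulted by send() above is the public StreamPartitioner interface. A hedged sketch of an implementation follows; the key-hash policy and the class name are illustrative only:

import org.apache.kafka.streams.processor.StreamPartitioner;

public class KeyHashPartitioner implements StreamPartitioner<String, String> {

    @Override
    public Integer partition(final String topic, final String key, final String value, final int numPartitions) {
        // returning null hands partitioning back to the producer's default partitioner
        if (key == null) {
            return null;
        }
        // mask the sign bit so an Integer.MIN_VALUE hash code cannot yield a negative index
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}

In the DSL such a partitioner can be attached with Produced.streamPartitioner(new KeyHashPartitioner()).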

Example 70 with StreamsException

Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.

The class InternalTopicManager, method processCreateTopicResults.

private void processCreateTopicResults(final CreateTopicsResult createTopicsResult, final Set<String> topicStillToCreate, final Set<String> createdTopics, final long deadline) {
    final Map<String, Throwable> lastErrorsSeenForTopic = new HashMap<>();
    final Map<String, KafkaFuture<Void>> createResultForTopic = createTopicsResult.values();
    while (!createResultForTopic.isEmpty()) {
        for (final String topicName : new HashSet<>(topicStillToCreate)) {
            if (!createResultForTopic.containsKey(topicName)) {
                cleanUpCreatedTopics(createdTopics);
                throw new IllegalStateException("Create topic results do not contain internal topic " + topicName + " to setup. " + BUG_ERROR_MESSAGE);
            }
            final KafkaFuture<Void> createResult = createResultForTopic.get(topicName);
            if (createResult.isDone()) {
                try {
                    createResult.get();
                    createdTopics.add(topicName);
                    topicStillToCreate.remove(topicName);
                } catch (final ExecutionException executionException) {
                    final Throwable cause = executionException.getCause();
                    if (cause instanceof TopicExistsException) {
                        lastErrorsSeenForTopic.put(topicName, cause);
                        log.info("Internal topic {} already exists. Topic is probably marked for deletion. " + "Will retry to create this topic later (to let broker complete async delete operation first)", topicName);
                    } else if (cause instanceof TimeoutException) {
                        lastErrorsSeenForTopic.put(topicName, cause);
                        log.info("Creating internal topic {} timed out.", topicName);
                    } else {
                        cleanUpCreatedTopics(createdTopics);
                        log.error("Unexpected error during creation of internal topic: ", cause);
                        throw new StreamsException(String.format("Could not create internal topic %s for the following reason: ", topicName), cause);
                    }
                } catch (final InterruptedException interruptedException) {
                    throw new InterruptException(interruptedException);
                } finally {
                    createResultForTopic.remove(topicName);
                }
            }
        }
        maybeThrowTimeoutExceptionDuringSetup(topicStillToCreate, createdTopics, lastErrorsSeenForTopic, deadline);
        if (!createResultForTopic.isEmpty()) {
            Utils.sleep(100);
        }
    }
}
Also used : KafkaFuture (org.apache.kafka.common.KafkaFuture), HashMap (java.util.HashMap), StreamsException (org.apache.kafka.streams.errors.StreamsException), InterruptException (org.apache.kafka.common.errors.InterruptException), TopicExistsException (org.apache.kafka.common.errors.TopicExistsException), ExecutionException (java.util.concurrent.ExecutionException), HashSet (java.util.HashSet), TimeoutException (org.apache.kafka.common.errors.TimeoutException)
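
The TopicExistsException and TimeoutException branches make the loop retry, while any other cause is rethrown as a StreamsException. A reduced sketch of the same pattern against the plain Admin client, with placeholder topic name, partition count, and replication factor:

import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.InterruptException;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.streams.errors.StreamsException;

try (final Admin admin = Admin.create(props)) {
    final NewTopic topic = new NewTopic("my-repartition-topic", 4, (short) 3);
    try {
        admin.createTopics(Collections.singleton(topic)).all().get();
    } catch (final ExecutionException e) {
        if (e.getCause() instanceof TopicExistsException) {
            // likely a pending broker-side delete: safe to retry later, as the loop above does
        } else {
            throw new StreamsException("Could not create topic " + topic.name(), e.getCause());
        }
    } catch (final InterruptedException e) {
        throw new InterruptException(e);
    }
}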

Aggregations

StreamsException (org.apache.kafka.streams.errors.StreamsException): 186
Test (org.junit.Test): 90
KafkaException (org.apache.kafka.common.KafkaException): 41
TopicPartition (org.apache.kafka.common.TopicPartition): 38
TimeoutException (org.apache.kafka.common.errors.TimeoutException): 36
HashMap (java.util.HashMap): 27
Map (java.util.Map): 25
HashSet (java.util.HashSet): 18
Properties (java.util.Properties): 17
TaskId (org.apache.kafka.streams.processor.TaskId): 14
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 13
StreamsConfig (org.apache.kafka.streams.StreamsConfig): 12
ArrayList (java.util.ArrayList): 11
ExecutionException (java.util.concurrent.ExecutionException): 11
TaskMigratedException (org.apache.kafka.streams.errors.TaskMigratedException): 11
IOException (java.io.IOException): 10
Set (java.util.Set): 10
LogContext (org.apache.kafka.common.utils.LogContext): 10
MockTime (org.apache.kafka.common.utils.MockTime): 10
StateStore (org.apache.kafka.streams.processor.StateStore): 10