
Example 6 with TaskMigratedException

Use of org.apache.kafka.streams.errors.TaskMigratedException in project apache-kafka-on-k8s by banzaicloud.

In class StreamTask, method initializeTopology.

/**
 * <pre>
 * - (re-)initialize the topology of the task
 * </pre>
 * @throws TaskMigratedException if the task producer got fenced (EOS only)
 */
@Override
public void initializeTopology() {
    initTopology();
    if (eosEnabled) {
        try {
            // begin a new transaction on this task's producer (exactly-once processing)
            this.producer.beginTransaction();
        } catch (final ProducerFencedException fatal) {
            // a producer with the same transactional id was started elsewhere,
            // so this task has been migrated to another instance
            throw new TaskMigratedException(this, fatal);
        }
        transactionInFlight = true;
    }
    processorContext.initialized();
    taskInitialized = true;
}
Also used: ProducerFencedException (org.apache.kafka.common.errors.ProducerFencedException), TaskMigratedException (org.apache.kafka.streams.errors.TaskMigratedException)
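
The conversion above only raises the migration signal; recovery is the caller's job. The sketch below shows one plausible shape of that caller. It is an assumption for illustration: the nested Task interface and closeDirty() are hypothetical stand-ins for Kafka's internal task API, not part of its public surface.

import org.apache.kafka.streams.errors.TaskMigratedException;

final class InitializeExample {

    // Hypothetical stand-in for the internal task type.
    interface Task {
        void initializeTopology();
        void closeDirty(); // close without committing anything
    }

    static void initialize(final Task task) {
        try {
            task.initializeTopology();
        } catch (final TaskMigratedException fenced) {
            // Another instance owns this task now: close without committing
            // and rethrow so the stream thread can rejoin the group.
            task.closeDirty();
            throw fenced;
        }
    }
}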

Example 7 with TaskMigratedException

Use of org.apache.kafka.streams.errors.TaskMigratedException in project apache-kafka-on-k8s by banzaicloud.

In class StreamTask, method process.

/**
 * Process one record.
 *
 * @return true if this method processed a record, false otherwise
 * @throws TaskMigratedException if the task producer got fenced (EOS only)
 */
@SuppressWarnings("unchecked")
public boolean process() {
    // get the next record to process
    final StampedRecord record = partitionGroup.nextRecord(recordInfo);
    // if there is no record to process, return immediately
    if (record == null) {
        return false;
    }
    try {
        // process the record by passing to the source node of the topology
        final ProcessorNode currNode = recordInfo.node();
        final TopicPartition partition = recordInfo.partition();
        log.trace("Start processing one record [{}]", record);
        updateProcessorContext(record, currNode);
        currNode.process(record.key(), record.value());
        log.trace("Completed processing one record [{}]", record);
        // update the consumed offset map after processing is done
        consumedOffsets.put(partition, record.offset());
        commitOffsetNeeded = true;
        // after processing this record, if its partition queue's buffered size has been
        // decreased to the threshold, we can then resume the consumption on this partition
        if (recordInfo.queue().size() == maxBufferedSize) {
            consumer.resume(singleton(partition));
        }
    } catch (final ProducerFencedException fatal) {
        throw new TaskMigratedException(this, fatal);
    } catch (final KafkaException e) {
        throw new StreamsException(format("Exception caught in process. taskId=%s, processor=%s, topic=%s, partition=%d, offset=%d", id(), processorContext.currentNode().name(), record.topic(), record.partition(), record.offset()), e);
    } finally {
        processorContext.setCurrentNode(null);
    }
    return true;
}
Also used: TopicPartition (org.apache.kafka.common.TopicPartition), StreamsException (org.apache.kafka.streams.errors.StreamsException), KafkaException (org.apache.kafka.common.KafkaException), ProducerFencedException (org.apache.kafka.common.errors.ProducerFencedException), TaskMigratedException (org.apache.kafka.streams.errors.TaskMigratedException)
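
Because process() returns false once the partition group is drained, a caller can use the boolean to bound a processing pass. A minimal sketch of such a loop, again with a hypothetical Task interface standing in for the internal type; TaskMigratedException and StreamsException simply propagate to the stream thread:

final class ProcessExample {

    // Hypothetical stand-in for the internal task type.
    interface Task {
        boolean process(); // false once no buffered record is available
    }

    // Process at most maxRecords records and report how many were handled.
    static int drain(final Task task, final int maxRecords) {
        int processed = 0;
        while (processed < maxRecords && task.process()) {
            processed++;
        }
        return processed;
    }
}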

Example 8 with TaskMigratedException

Use of org.apache.kafka.streams.errors.TaskMigratedException in project apache-kafka-on-k8s by banzaicloud.

In class StreamTask, method flushState.

@Override
protected void flushState() {
    log.trace("Flushing state and producer");
    super.flushState();
    try {
        recordCollector.flush();
    } catch (final ProducerFencedException fatal) {
        throw new TaskMigratedException(this, fatal);
    }
}
Also used: ProducerFencedException (org.apache.kafka.common.errors.ProducerFencedException), TaskMigratedException (org.apache.kafka.streams.errors.TaskMigratedException)
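
flushState() rethrows the fencing error as TaskMigratedException because a commit must never proceed once the producer is fenced. A hedged sketch of that ordering, with a hypothetical Task interface; the point is simply that the flush sits before the offset commit, so the exception aborts the commit early:

final class CommitExample {

    // Hypothetical stand-in for the internal task type.
    interface Task {
        void flushState();    // may throw TaskMigratedException under EOS
        void commitOffsets(); // must only run after a successful flush
    }

    static void commit(final Task task) {
        // If the producer was fenced during the flush, the exception
        // surfaces here and no offsets are ever committed.
        task.flushState();
        task.commitOffsets();
    }
}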

Example 9 with TaskMigratedException

Use of org.apache.kafka.streams.errors.TaskMigratedException in project apache-kafka-on-k8s by banzaicloud.

In class StreamThread, method addRecordsToTasks.

/**
 * Take records and add them to each respective task.
 *
 * @param records records returned by the last consumer poll; must not be null
 */
private void addRecordsToTasks(final ConsumerRecords<byte[], byte[]> records) {
    int numAddedRecords = 0;
    for (final TopicPartition partition : records.partitions()) {
        final StreamTask task = taskManager.activeTask(partition);
        if (task.isClosed()) {
            log.warn("Stream task {} is already closed, probably because it got unexpectedly migrated to another thread already. " + "Notifying the thread to trigger a new rebalance immediately.", task.id());
            throw new TaskMigratedException(task);
        }
        numAddedRecords += task.addRecords(partition, records.records(partition));
    }
    streamsMetrics.skippedRecordsSensor.record(records.count() - numAddedRecords, timerStartedMs);
}
Also used: TopicPartition (org.apache.kafka.common.TopicPartition), TaskMigratedException (org.apache.kafka.streams.errors.TaskMigratedException)
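
The warning above says the thread should "trigger a new rebalance immediately". One plausible way to express that at the poll-loop level is to let the exception unwind to the loop, drop the subscription, and re-subscribe, which forces the group coordinator to hand out a fresh assignment. This is a sketch, not the project's actual run loop; the topic name is hypothetical and only public KafkaConsumer API is used:

import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.streams.errors.TaskMigratedException;

final class RebalanceExample {

    static void pollLoop(final KafkaConsumer<byte[], byte[]> consumer) {
        consumer.subscribe(Collections.singleton("input-topic")); // hypothetical topic
        while (true) {
            try {
                final ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                // addRecordsToTasks(records) would run here and may throw
                // TaskMigratedException for a task that is already closed.
            } catch (final TaskMigratedException migrated) {
                // Re-subscribing makes the coordinator run a new rebalance.
                consumer.unsubscribe();
                consumer.subscribe(Collections.singleton("input-topic"));
            }
        }
    }
}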

Example 10 with TaskMigratedException

Use of org.apache.kafka.streams.errors.TaskMigratedException in project apache-kafka-on-k8s by banzaicloud.

In class StreamThread, method maybeUpdateStandbyTasks.

private void maybeUpdateStandbyTasks(final long now) {
    if (state == State.RUNNING && taskManager.hasStandbyRunningTasks()) {
        if (processStandbyRecords) {
            // first pass: retry records buffered from an earlier poll for paused partitions
            if (!standbyRecords.isEmpty()) {
                final Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> remainingStandbyRecords = new HashMap<>();
                for (final Map.Entry<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> entry : standbyRecords.entrySet()) {
                    final TopicPartition partition = entry.getKey();
                    List<ConsumerRecord<byte[], byte[]>> remaining = entry.getValue();
                    if (remaining != null) {
                        final StandbyTask task = taskManager.standbyTask(partition);
                        if (task.isClosed()) {
                            log.warn("Standby task {} is already closed, probably because it got unexpectly migrated to another thread already. " + "Notifying the thread to trigger a new rebalance immediately.", task.id());
                            throw new TaskMigratedException(task);
                        }
                        remaining = task.update(partition, remaining);
                        if (remaining != null) {
                            remainingStandbyRecords.put(partition, remaining);
                        } else {
                            restoreConsumer.resume(singleton(partition));
                        }
                    }
                }
                standbyRecords = remainingStandbyRecords;
                log.debug("Updated standby tasks {} in {}ms", taskManager.standbyTaskIds(), time.milliseconds() - now);
            }
            processStandbyRecords = false;
        }
        try {
            // second pass: poll for new restore records and hand them to their standby tasks
            final ConsumerRecords<byte[], byte[]> records = restoreConsumer.poll(0);
            if (!records.isEmpty()) {
                for (final TopicPartition partition : records.partitions()) {
                    final StandbyTask task = taskManager.standbyTask(partition);
                    if (task == null) {
                        throw new StreamsException(logPrefix + "Missing standby task for partition " + partition);
                    }
                    if (task.isClosed()) {
                        log.warn("Standby task {} is already closed, probably because it got unexpectedly migrated to another thread already. " + "Notifying the thread to trigger a new rebalance immediately.", task.id());
                        throw new TaskMigratedException(task);
                    }
                    final List<ConsumerRecord<byte[], byte[]>> remaining = task.update(partition, records.records(partition));
                    if (remaining != null) {
                        // not everything could be applied yet: pause the partition and buffer the rest
                        restoreConsumer.pause(singleton(partition));
                        standbyRecords.put(partition, remaining);
                    }
                }
            }
        } catch (final InvalidOffsetException recoverableException) {
            log.warn("Updating StandbyTasks failed. Deleting StandbyTasks stores to recreate from scratch.", recoverableException);
            final Set<TopicPartition> partitions = recoverableException.partitions();
            for (final TopicPartition partition : partitions) {
                final StandbyTask task = taskManager.standbyTask(partition);
                if (task.isClosed()) {
                    log.warn("Standby task {} is already closed, probably because it got unexpectly migrated to another thread already. " + "Notifying the thread to trigger a new rebalance immediately.", task.id());
                    throw new TaskMigratedException(task);
                }
                log.info("Reinitializing StandbyTask {}", task);
                task.reinitializeStateStoresForPartitions(recoverableException.partitions());
            }
            restoreConsumer.seekToBeginning(partitions);
        }
    }
}
Also used: HashSet (java.util.HashSet), Set (java.util.Set), HashMap (java.util.HashMap), StreamsException (org.apache.kafka.streams.errors.StreamsException), InvalidOffsetException (org.apache.kafka.clients.consumer.InvalidOffsetException), ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord), TopicPartition (org.apache.kafka.common.TopicPartition), ArrayList (java.util.ArrayList), List (java.util.List), Map (java.util.Map), TaskMigratedException (org.apache.kafka.streams.errors.TaskMigratedException)
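
maybeUpdateStandbyTasks implements a small pause/buffer/resume protocol: whatever a standby task cannot apply yet is buffered, its partition is paused, and a later pass retries the buffer and resumes the partition once it drains. The class below reduces the method to just that protocol; the Standby interface is a hypothetical stand-in for StandbyTask#update, and everything else uses public consumer API:

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;

final class StandbyBufferExample {

    // Hypothetical stand-in for StandbyTask#update: returns the records it
    // could not apply yet, or null once everything was consumed.
    interface Standby {
        List<ConsumerRecord<byte[], byte[]>> update(TopicPartition partition,
                                                    List<ConsumerRecord<byte[], byte[]>> records);
    }

    private final Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> buffered = new HashMap<>();

    void retryBuffered(final Standby task, final Consumer<byte[], byte[]> restoreConsumer) {
        final Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> stillRemaining = new HashMap<>();
        for (final Map.Entry<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> entry : buffered.entrySet()) {
            final List<ConsumerRecord<byte[], byte[]>> remaining = task.update(entry.getKey(), entry.getValue());
            if (remaining != null) {
                stillRemaining.put(entry.getKey(), remaining); // still not drained: keep buffering
            } else {
                restoreConsumer.resume(Collections.singleton(entry.getKey())); // drained: resume the partition
            }
        }
        buffered.clear();
        buffered.putAll(stillRemaining);
    }
}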

Aggregations

TaskMigratedException (org.apache.kafka.streams.errors.TaskMigratedException): 16
Test (org.junit.Test): 9
TopicPartition (org.apache.kafka.common.TopicPartition): 6
HashMap (java.util.HashMap): 4
ProducerFencedException (org.apache.kafka.common.errors.ProducerFencedException): 4
ArrayList (java.util.ArrayList): 3
HashSet (java.util.HashSet): 3
Set (java.util.Set): 3
StreamsException (org.apache.kafka.streams.errors.StreamsException): 3
Map (java.util.Map): 2
MockConsumer (org.apache.kafka.clients.consumer.MockConsumer): 2
StreamsConfig (org.apache.kafka.streams.StreamsConfig): 2
InternalStreamsBuilderTest (org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest): 2
TaskId (org.apache.kafka.streams.processor.TaskId): 2
List (java.util.List): 1
AtomicReference (java.util.concurrent.atomic.AtomicReference): 1
CommitFailedException (org.apache.kafka.clients.consumer.CommitFailedException): 1
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 1
InvalidOffsetException (org.apache.kafka.clients.consumer.InvalidOffsetException): 1
OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata): 1