
Example 31 with KafkaFuture

use of org.apache.kafka.common.KafkaFuture in project kafka by apache.

the class TopicAdmin method describeTopics.

/**
 * Attempt to fetch the descriptions of the given topics
 * Apache Kafka added support for describing topics in 0.10.0.0, so this method works as expected with that and later versions.
 * With brokers older than 0.10.0.0, this method is unable to describe topics and always returns an empty map of topic descriptions.
 *
 * @param topics the topics to describe
 * @return a map of topic names to topic descriptions of the topics that were requested; never null but possibly empty
 * @throws RetriableException if a retriable error occurs, the operation takes too long, or the
 * thread is interrupted while attempting to perform this operation
 * @throws ConnectException if a non-retriable error occurs
 */
public Map<String, TopicDescription> describeTopics(String... topics) {
    if (topics == null) {
        return Collections.emptyMap();
    }
    String bootstrapServers = bootstrapServers();
    String topicNameList = String.join(", ", topics);
    Map<String, KafkaFuture<TopicDescription>> newResults = admin.describeTopics(Arrays.asList(topics), new DescribeTopicsOptions()).topicNameValues();
    // Iterate over each future so that we can handle individual failures like when some topics don't exist
    Map<String, TopicDescription> existingTopics = new HashMap<>();
    newResults.forEach((topic, desc) -> {
        try {
            existingTopics.put(topic, desc.get());
        } catch (ExecutionException e) {
            Throwable cause = e.getCause();
            if (cause instanceof UnknownTopicOrPartitionException) {
                log.debug("Topic '{}' does not exist on the brokers at {}", topic, bootstrapServers);
                return;
            }
            if (cause instanceof ClusterAuthorizationException || cause instanceof TopicAuthorizationException) {
                String msg = String.format("Not authorized to describe topic(s) '%s' on the brokers %s", topicNameList, bootstrapServers);
                throw new ConnectException(msg, cause);
            }
            if (cause instanceof UnsupportedVersionException) {
                String msg = String.format("Unable to describe topic(s) '%s' since the brokers " + "at %s do not support the DescribeTopics API.", topicNameList, bootstrapServers);
                throw new ConnectException(msg, cause);
            }
            if (cause instanceof TimeoutException) {
                // Timed out waiting for the operation to complete
                throw new RetriableException("Timed out while describing topics '" + topicNameList + "'", cause);
            }
            throw new ConnectException("Error while attempting to describe topics '" + topicNameList + "'", e);
        } catch (InterruptedException e) {
            // Restore the interrupt status before surfacing a retriable error
            Thread.currentThread().interrupt();
            throw new RetriableException("Interrupted while attempting to describe topics '" + topicNameList + "'", e);
        }
    });
    return existingTopics;
}
Also used : KafkaFuture(org.apache.kafka.common.KafkaFuture) HashMap(java.util.HashMap) UnknownTopicOrPartitionException(org.apache.kafka.common.errors.UnknownTopicOrPartitionException) DescribeTopicsOptions(org.apache.kafka.clients.admin.DescribeTopicsOptions) TopicDescription(org.apache.kafka.clients.admin.TopicDescription) ExecutionException(java.util.concurrent.ExecutionException) ClusterAuthorizationException(org.apache.kafka.common.errors.ClusterAuthorizationException) TopicAuthorizationException(org.apache.kafka.common.errors.TopicAuthorizationException) ConnectException(org.apache.kafka.connect.errors.ConnectException) UnsupportedVersionException(org.apache.kafka.common.errors.UnsupportedVersionException) TimeoutException(org.apache.kafka.common.errors.TimeoutException) RetriableException(org.apache.kafka.connect.errors.RetriableException)
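A minimal sketch of how this helper might be invoked, assuming the TopicAdmin(Map&lt;String, Object&gt;) constructor and try-with-resources support; the bootstrap address and topic names below are hypothetical:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.connect.util.TopicAdmin;

public class DescribeTopicsSketch {
    public static void main(String[] args) {
        // Hypothetical admin configuration; a real worker derives this from its config.
        Map<String, Object> adminConfig = new HashMap<>();
        adminConfig.put("bootstrap.servers", "localhost:9092");
        try (TopicAdmin admin = new TopicAdmin(adminConfig)) {
            // Missing topics are silently omitted; authorization or version problems surface
            // as ConnectException, and timeouts as RetriableException.
            Map<String, TopicDescription> descriptions = admin.describeTopics("connect-offsets", "connect-configs");
            descriptions.forEach((name, desc) ->
                    System.out.printf("%s has %d partition(s)%n", name, desc.partitions().size()));
        }
    }
}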

Example 32 with KafkaFuture

use of org.apache.kafka.common.KafkaFuture in project kafka by apache.

the class TopicAdminTest method topicDescription.

protected TopicDescription topicDescription(MockAdminClient admin, String topicName) throws ExecutionException, InterruptedException {
    DescribeTopicsResult result = admin.describeTopics(Collections.singleton(topicName));
    Map<String, KafkaFuture<TopicDescription>> byName = result.topicNameValues();
    return byName.get(topicName).get();
}
Also used : KafkaFuture(org.apache.kafka.common.KafkaFuture) DescribeTopicsResult(org.apache.kafka.clients.admin.DescribeTopicsResult)
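The helper above blocks indefinitely on KafkaFuture#get(), which is acceptable in a test against MockAdminClient. Outside of tests a bounded wait is usually preferable; a minimal sketch of the same lookup with a timeout (the 30-second bound is an arbitrary choice for illustration):

import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;

public class BoundedTopicDescription {
    // Same lookup as the test helper above, but with a bounded wait instead of an indefinite get().
    static TopicDescription describeWithTimeout(Admin admin, String topicName)
            throws InterruptedException, ExecutionException, TimeoutException {
        Map<String, KafkaFuture<TopicDescription>> byName =
                admin.describeTopics(Collections.singleton(topicName)).topicNameValues();
        // get(timeout, unit) throws TimeoutException if the broker does not respond in time.
        return byName.get(topicName).get(30, TimeUnit.SECONDS);
    }
}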

Example 33 with KafkaFuture

use of org.apache.kafka.common.KafkaFuture in project kafka by apache.

the class StreamsPartitionAssignor method populateClientStatesMap.

/**
 * Builds a map from client to state, and readies each ClientState for assignment by adding any missing prev tasks
 * and computing the per-task overall lag based on the fetched end offsets for each changelog.
 *
 * @param clientStates a map from each client to its state, including offset lags. Populated by this method.
 * @param clientMetadataMap a map from each client to its full metadata
 * @param taskForPartition map from topic partition to its corresponding task
 * @param changelogTopics object that manages changelog topics
 *
 * @return whether we were able to successfully fetch the changelog end offsets and compute each client's lag
 */
private boolean populateClientStatesMap(final Map<UUID, ClientState> clientStates, final Map<UUID, ClientMetadata> clientMetadataMap, final Map<TopicPartition, TaskId> taskForPartition, final ChangelogTopics changelogTopics) {
    boolean fetchEndOffsetsSuccessful;
    Map<TaskId, Long> allTaskEndOffsetSums;
    try {
        // Make the listOffsets request first so it can fetch the offsets for non-source changelogs
        // asynchronously while we use the blocking Consumer#committed call to fetch source-changelog offsets
        final KafkaFuture<Map<TopicPartition, ListOffsetsResultInfo>> endOffsetsFuture = fetchEndOffsetsFuture(changelogTopics.preExistingNonSourceTopicBasedPartitions(), adminClient);
        final Map<TopicPartition, Long> sourceChangelogEndOffsets = fetchCommittedOffsets(changelogTopics.preExistingSourceTopicBasedPartitions(), mainConsumerSupplier.get());
        final Map<TopicPartition, ListOffsetsResultInfo> endOffsets = ClientUtils.getEndOffsets(endOffsetsFuture);
        allTaskEndOffsetSums = computeEndOffsetSumsByTask(endOffsets, sourceChangelogEndOffsets, changelogTopics);
        fetchEndOffsetsSuccessful = true;
    } catch (final StreamsException | TimeoutException e) {
        allTaskEndOffsetSums = changelogTopics.statefulTaskIds().stream().collect(Collectors.toMap(t -> t, t -> UNKNOWN_OFFSET_SUM));
        fetchEndOffsetsSuccessful = false;
    }
    for (final Map.Entry<UUID, ClientMetadata> entry : clientMetadataMap.entrySet()) {
        final UUID uuid = entry.getKey();
        final ClientState state = entry.getValue().state;
        state.initializePrevTasks(taskForPartition, taskManager.topologyMetadata().hasNamedTopologies());
        state.computeTaskLags(uuid, allTaskEndOffsetSums);
        clientStates.put(uuid, state);
    }
    return fetchEndOffsetsSuccessful;
}
Also used : ClientUtils.fetchEndOffsetsFuture(org.apache.kafka.streams.processor.internals.ClientUtils.fetchEndOffsetsFuture) FallbackPriorTaskAssignor(org.apache.kafka.streams.processor.internals.assignment.FallbackPriorTaskAssignor) SortedSet(java.util.SortedSet) ConsumerGroupMetadata(org.apache.kafka.clients.consumer.ConsumerGroupMetadata) PriorityQueue(java.util.PriorityQueue) KafkaException(org.apache.kafka.common.KafkaException) StreamsException(org.apache.kafka.streams.errors.StreamsException) ClientUtils.fetchCommittedOffsets(org.apache.kafka.streams.processor.internals.ClientUtils.fetchCommittedOffsets) ByteBuffer(java.nio.ByteBuffer) UNKNOWN_OFFSET_SUM(org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.UNKNOWN_OFFSET_SUM) Cluster(org.apache.kafka.common.Cluster) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) LogContext(org.apache.kafka.common.utils.LogContext) Map(java.util.Map) MissingSourceTopicException(org.apache.kafka.streams.errors.MissingSourceTopicException) Consumer(org.apache.kafka.clients.consumer.Consumer) TopicPartition(org.apache.kafka.common.TopicPartition) Configurable(org.apache.kafka.common.Configurable) Time(org.apache.kafka.common.utils.Time) ReferenceContainer(org.apache.kafka.streams.processor.internals.assignment.ReferenceContainer) LATEST_SUPPORTED_VERSION(org.apache.kafka.streams.processor.internals.assignment.StreamsAssignmentProtocolVersions.LATEST_SUPPORTED_VERSION) Collection(java.util.Collection) Set(java.util.Set) KafkaFuture(org.apache.kafka.common.KafkaFuture) PartitionInfo(org.apache.kafka.common.PartitionInfo) UUID(java.util.UUID) Collectors(java.util.stream.Collectors) AssignorConfiguration(org.apache.kafka.streams.processor.internals.assignment.AssignorConfiguration) Objects(java.util.Objects) ListOffsetsResultInfo(org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo) Utils.filterMap(org.apache.kafka.common.utils.Utils.filterMap) List(java.util.List) Node(org.apache.kafka.common.Node) Queue(java.util.Queue) SubscriptionInfo(org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo) TaskId(org.apache.kafka.streams.processor.TaskId) AssignmentConfigs(org.apache.kafka.streams.processor.internals.assignment.AssignorConfiguration.AssignmentConfigs) AssignmentInfo(org.apache.kafka.streams.processor.internals.assignment.AssignmentInfo) HostInfo(org.apache.kafka.streams.state.HostInfo) HashMap(java.util.HashMap) AssignmentListener(org.apache.kafka.streams.processor.internals.assignment.AssignorConfiguration.AssignmentListener) Supplier(java.util.function.Supplier) TreeSet(java.util.TreeSet) ArrayList(java.util.ArrayList) CopartitionedTopicsEnforcer(org.apache.kafka.streams.processor.internals.assignment.CopartitionedTopicsEnforcer) HashSet(java.util.HashSet) UNKNOWN(org.apache.kafka.streams.processor.internals.assignment.StreamsAssignmentProtocolVersions.UNKNOWN) StickyTaskAssignor(org.apache.kafka.streams.processor.internals.assignment.StickyTaskAssignor) Admin(org.apache.kafka.clients.admin.Admin) LinkedList(java.util.LinkedList) ConsumerPartitionAssignor(org.apache.kafka.clients.consumer.ConsumerPartitionAssignor) Utils(org.apache.kafka.common.utils.Utils) EARLIEST_PROBEABLE_VERSION(org.apache.kafka.streams.processor.internals.assignment.StreamsAssignmentProtocolVersions.EARLIEST_PROBEABLE_VERSION) TimeoutException(org.apache.kafka.common.errors.TimeoutException) Logger(org.slf4j.Logger) Iterator(java.util.Iterator) TaskAssignmentException(org.apache.kafka.streams.errors.TaskAssignmentException) TopicsInfo(org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.TopicsInfo) AssignorError(org.apache.kafka.streams.processor.internals.assignment.AssignorError) AtomicLong(java.util.concurrent.atomic.AtomicLong) UUID.randomUUID(java.util.UUID.randomUUID) TreeMap(java.util.TreeMap) ClientState(org.apache.kafka.streams.processor.internals.assignment.ClientState) TaskAssignor(org.apache.kafka.streams.processor.internals.assignment.TaskAssignor) Comparator(java.util.Comparator) Subtopology(org.apache.kafka.streams.processor.internals.TopologyMetadata.Subtopology) Collections(java.util.Collections)
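For context, the end-offset future above comes from a listOffsets request on the admin client. The sketch below shows how such a non-blocking request can be issued with a plain Admin client and resolved later; it is only an illustration of the pattern, not the actual ClientUtils.fetchEndOffsetsFuture implementation, and the partition set is assumed to be supplied by the caller:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.TopicPartition;

public class EndOffsetsSketch {
    // Kicks off a listOffsets request for the given changelog partitions and returns the pending
    // future, so other blocking work (e.g. Consumer#committed) can run before the result is needed.
    static KafkaFuture<Map<TopicPartition, ListOffsetsResultInfo>> endOffsetsFuture(
            Admin admin, Iterable<TopicPartition> changelogPartitions) {
        Map<TopicPartition, OffsetSpec> request = new HashMap<>();
        for (TopicPartition partition : changelogPartitions) {
            request.put(partition, OffsetSpec.latest());
        }
        // all() completes only once end offsets for every requested partition are available.
        return admin.listOffsets(request).all();
    }
}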

Example 34 with KafkaFuture

use of org.apache.kafka.common.KafkaFuture in project kafka by apache.

the class InternalTopicManager method getNumPartitions.

/**
 * Try to get the number of partitions for the given topics; return the number of partitions for topics that already exist.
 *
 * Topics whose descriptions could not be fetched are simply not included in the returned map.
 */
// visible for testing
protected Map<String, Integer> getNumPartitions(final Set<String> topics, final Set<String> tempUnknownTopics) {
    log.debug("Trying to check if topics {} have been created with expected number of partitions.", topics);
    final DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(topics);
    final Map<String, KafkaFuture<TopicDescription>> futures = describeTopicsResult.topicNameValues();
    final Map<String, Integer> existedTopicPartition = new HashMap<>();
    for (final Map.Entry<String, KafkaFuture<TopicDescription>> topicFuture : futures.entrySet()) {
        final String topicName = topicFuture.getKey();
        try {
            final TopicDescription topicDescription = topicFuture.getValue().get();
            existedTopicPartition.put(topicName, topicDescription.partitions().size());
        } catch (final InterruptedException fatalException) {
            // this should not happen; if it ever happens it indicates a bug
            Thread.currentThread().interrupt();
            log.error(INTERRUPTED_ERROR_MESSAGE, fatalException);
            throw new IllegalStateException(INTERRUPTED_ERROR_MESSAGE, fatalException);
        } catch (final ExecutionException couldNotDescribeTopicException) {
            final Throwable cause = couldNotDescribeTopicException.getCause();
            if (cause instanceof UnknownTopicOrPartitionException) {
                // The topic doesn't exist yet; proceed to try to create it
                log.debug("Topic {} is unknown or not found, hence not existed yet.\n" + "Error message was: {}", topicName, cause.toString());
            } else if (cause instanceof LeaderNotAvailableException) {
                tempUnknownTopics.add(topicName);
                log.debug("The leader of topic {} is not available.\n" + "Error message was: {}", topicName, cause.toString());
            } else {
                log.error("Unexpected error during topic description for {}.\n" + "Error message was: {}", topicName, cause.toString());
                throw new StreamsException(String.format("Could not create topic %s.", topicName), cause);
            }
        } catch (final TimeoutException retriableException) {
            tempUnknownTopics.add(topicName);
            log.debug("Describing topic {} (to get number of partitions) timed out.\n" + "Error message was: {}", topicName, retriableException.toString());
        }
    }
    return existedTopicPartition;
}
Also used : KafkaFuture(org.apache.kafka.common.KafkaFuture) HashMap(java.util.HashMap) UnknownTopicOrPartitionException(org.apache.kafka.common.errors.UnknownTopicOrPartitionException) StreamsException(org.apache.kafka.streams.errors.StreamsException) LeaderNotAvailableException(org.apache.kafka.common.errors.LeaderNotAvailableException) DescribeTopicsResult(org.apache.kafka.clients.admin.DescribeTopicsResult) TopicDescription(org.apache.kafka.clients.admin.TopicDescription) ExecutionException(java.util.concurrent.ExecutionException) HashMap(java.util.HashMap) Map(java.util.Map) TimeoutException(org.apache.kafka.common.errors.TimeoutException)
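The loop above blocks on each description future and then unwraps the partition count from the TopicDescription. KafkaFuture also supports deriving a value directly on the future with thenApply, which keeps the blocking call separate from the transformation. A minimal sketch, not the Streams implementation; unknown topics are skipped just as in the method above:

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class PartitionCountSketch {
    // Returns partition counts for topics that exist; unknown topics are skipped,
    // and any other failure aborts the whole call.
    static Map<String, Integer> partitionCounts(Admin admin, Set<String> topics) throws InterruptedException {
        Map<String, KafkaFuture<TopicDescription>> futures = admin.describeTopics(topics).topicNameValues();
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, KafkaFuture<TopicDescription>> entry : futures.entrySet()) {
            // Derive the partition count on the future itself instead of unwrapping the description by hand.
            KafkaFuture<Integer> count = entry.getValue().thenApply(description -> description.partitions().size());
            try {
                counts.put(entry.getKey(), count.get());
            } catch (ExecutionException e) {
                if (!(e.getCause() instanceof UnknownTopicOrPartitionException)) {
                    throw new RuntimeException("Could not describe topic " + entry.getKey(), e.getCause());
                }
                // Unknown topic: leave it out of the result.
            }
        }
        return counts;
    }
}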

Example 35 with KafkaFuture

use of org.apache.kafka.common.KafkaFuture in project kafka by apache.

the class InternalTopicManager method processCreateTopicResults.

private void processCreateTopicResults(final CreateTopicsResult createTopicsResult, final Set<String> topicStillToCreate, final Set<String> createdTopics, final long deadline) {
    final Map<String, Throwable> lastErrorsSeenForTopic = new HashMap<>();
    final Map<String, KafkaFuture<Void>> createResultForTopic = createTopicsResult.values();
    while (!createResultForTopic.isEmpty()) {
        for (final String topicName : new HashSet<>(topicStillToCreate)) {
            if (!createResultForTopic.containsKey(topicName)) {
                cleanUpCreatedTopics(createdTopics);
                throw new IllegalStateException("Create topic results do not contain internal topic " + topicName + " to setup. " + BUG_ERROR_MESSAGE);
            }
            final KafkaFuture<Void> createResult = createResultForTopic.get(topicName);
            if (createResult.isDone()) {
                try {
                    createResult.get();
                    createdTopics.add(topicName);
                    topicStillToCreate.remove(topicName);
                } catch (final ExecutionException executionException) {
                    final Throwable cause = executionException.getCause();
                    if (cause instanceof TopicExistsException) {
                        lastErrorsSeenForTopic.put(topicName, cause);
                        log.info("Internal topic {} already exists. Topic is probably marked for deletion. " + "Will retry to create this topic later (to let broker complete async delete operation first)", topicName);
                    } else if (cause instanceof TimeoutException) {
                        lastErrorsSeenForTopic.put(topicName, cause);
                        log.info("Creating internal topic {} timed out.", topicName);
                    } else {
                        cleanUpCreatedTopics(createdTopics);
                        log.error("Unexpected error during creation of internal topic: ", cause);
                        throw new StreamsException(String.format("Could not create internal topic %s for the following reason: ", topicName), cause);
                    }
                } catch (final InterruptedException interruptedException) {
                    throw new InterruptException(interruptedException);
                } finally {
                    createResultForTopic.remove(topicName);
                }
            }
        }
        maybeThrowTimeoutExceptionDuringSetup(topicStillToCreate, createdTopics, lastErrorsSeenForTopic, deadline);
        if (!createResultForTopic.isEmpty()) {
            Utils.sleep(100);
        }
    }
}
Also used : KafkaFuture(org.apache.kafka.common.KafkaFuture) HashMap(java.util.HashMap) StreamsException(org.apache.kafka.streams.errors.StreamsException) InterruptException(org.apache.kafka.common.errors.InterruptException) TopicExistsException(org.apache.kafka.common.errors.TopicExistsException) ExecutionException(java.util.concurrent.ExecutionException) HashSet(java.util.HashSet) TimeoutException(org.apache.kafka.common.errors.TimeoutException)
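The polling loop above exists because individual creations must be retried on TopicExistsException and on timeouts. When that per-topic retry behavior is not needed, the aggregate future returned by CreateTopicsResult#all is enough. A minimal sketch, with an arbitrary 60-second bound and the NewTopic list assumed to be built elsewhere:

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicsSketch {
    // Creates all topics in one request and fails fast on the first error instead of retrying per topic.
    static void createAll(Admin admin, List<NewTopic> newTopics)
            throws InterruptedException, ExecutionException, TimeoutException {
        // all() is a KafkaFuture<Void> that completes only when every per-topic future has completed.
        admin.createTopics(newTopics).all().get(60, TimeUnit.SECONDS);
    }
}

The per-future map used in processCreateTopicResults (createTopicsResult.values()) remains the right tool when some failures, such as TopicExistsException, should be tolerated or retried individually.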

Aggregations

KafkaFuture (org.apache.kafka.common.KafkaFuture)70 HashMap (java.util.HashMap)51 Map (java.util.Map)37 KafkaFutureImpl (org.apache.kafka.common.internals.KafkaFutureImpl)31 ExecutionException (java.util.concurrent.ExecutionException)22 TimeoutException (org.apache.kafka.common.errors.TimeoutException)21 ArrayList (java.util.ArrayList)15 UnknownTopicOrPartitionException (org.apache.kafka.common.errors.UnknownTopicOrPartitionException)15 Test (org.junit.jupiter.api.Test)15 ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)14 TopicPartition (org.apache.kafka.common.TopicPartition)13 ConfigResource (org.apache.kafka.common.config.ConfigResource)12 HashSet (java.util.HashSet)11 TopicExistsException (org.apache.kafka.common.errors.TopicExistsException)10 AbstractResponse (org.apache.kafka.common.requests.AbstractResponse)8 UnsupportedVersionException (org.apache.kafka.common.errors.UnsupportedVersionException)7 ChannelBuilder (org.apache.kafka.common.network.ChannelBuilder)7 DescribeTopicsResult (org.apache.kafka.clients.admin.DescribeTopicsResult)6 Node (org.apache.kafka.common.Node)6 TopicPartitionReplica (org.apache.kafka.common.TopicPartitionReplica)6