
Example 26 with AbstractResponse

Use of org.apache.kafka.common.requests.AbstractResponse in project kafka by apache.

From the class DeleteConsumerGroupOffsetsHandler, method handleResponse:

@Override
public ApiResult<CoordinatorKey, Map<TopicPartition, Errors>> handleResponse(Node coordinator, Set<CoordinatorKey> groupIds, AbstractResponse abstractResponse) {
    validateKeys(groupIds);
    final OffsetDeleteResponse response = (OffsetDeleteResponse) abstractResponse;
    final Errors error = Errors.forCode(response.data().errorCode());
    if (error != Errors.NONE) {
        final Map<CoordinatorKey, Throwable> failed = new HashMap<>();
        final Set<CoordinatorKey> groupsToUnmap = new HashSet<>();
        // groupId is the single CoordinatorKey this handler was constructed with
        handleGroupError(groupId, error, failed, groupsToUnmap);
        return new ApiResult<>(Collections.emptyMap(), failed, new ArrayList<>(groupsToUnmap));
    } else {
        final Map<TopicPartition, Errors> partitionResults = new HashMap<>();
        response.data().topics().forEach(topic ->
            topic.partitions().forEach(partition ->
                partitionResults.put(
                    new TopicPartition(topic.name(), partition.partitionIndex()),
                    Errors.forCode(partition.errorCode()))));
        return ApiResult.completed(groupId, partitionResults);
    }
}
Also used : TopicPartition(org.apache.kafka.common.TopicPartition) Logger(org.slf4j.Logger) AbstractResponse(org.apache.kafka.common.requests.AbstractResponse) OffsetDeleteRequestTopic(org.apache.kafka.common.message.OffsetDeleteRequestData.OffsetDeleteRequestTopic) Set(java.util.Set) HashMap(java.util.HashMap) Collectors(java.util.stream.Collectors) ArrayList(java.util.ArrayList) OffsetDeleteRequest(org.apache.kafka.common.requests.OffsetDeleteRequest) HashSet(java.util.HashSet) OffsetDeleteRequestData(org.apache.kafka.common.message.OffsetDeleteRequestData) OffsetDeleteRequestTopicCollection(org.apache.kafka.common.message.OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection) CoordinatorType(org.apache.kafka.common.requests.FindCoordinatorRequest.CoordinatorType) Map(java.util.Map) LogContext(org.apache.kafka.common.utils.LogContext) Errors(org.apache.kafka.common.protocol.Errors) Node(org.apache.kafka.common.Node) OffsetDeleteResponse(org.apache.kafka.common.requests.OffsetDeleteResponse) OffsetDeleteRequestPartition(org.apache.kafka.common.message.OffsetDeleteRequestData.OffsetDeleteRequestPartition) Collections(java.util.Collections)
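Every handler in these examples shares one shape: downcast the generic AbstractResponse to the concrete response type implied by the request that was sent, read the top-level error code, and map it to a completed or failed result. A minimal self-contained sketch of that shape, using toy stand-in types rather than Kafka's real classes (all names below are illustrative only):

```java
// Toy stand-ins for the Kafka types; only the control flow mirrors handleResponse.
public class HandlerSketch {
    interface AbstractResponse {}

    record OffsetDeleteResponse(short errorCode) implements AbstractResponse {}

    // Downcast, inspect the top-level error code, and dispatch.
    static String handle(AbstractResponse abstractResponse) {
        // The cast is safe because the caller matched this response to its request.
        OffsetDeleteResponse response = (OffsetDeleteResponse) abstractResponse;
        if (response.errorCode() != 0) {       // 0 plays the role of Errors.NONE
            return "failed:" + response.errorCode();
        }
        return "completed";
    }

    public static void main(String[] args) {
        System.out.println(handle(new OffsetDeleteResponse((short) 0)));   // completed
        System.out.println(handle(new OffsetDeleteResponse((short) 16)));  // failed:16
    }
}
```

In the real handlers the "failed" branch additionally decides whether the coordinator mapping should be invalidated (the groupsToUnmap set), which this sketch omits.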

Example 27 with AbstractResponse

Use of org.apache.kafka.common.requests.AbstractResponse in project kafka by apache.

From the class ListTransactionsHandler, method handleResponse:

@Override
public ApiResult<AllBrokersStrategy.BrokerKey, Collection<TransactionListing>> handleResponse(Node broker, Set<AllBrokersStrategy.BrokerKey> keys, AbstractResponse abstractResponse) {
    int brokerId = broker.id();
    AllBrokersStrategy.BrokerKey key = requireSingleton(keys, brokerId);
    ListTransactionsResponse response = (ListTransactionsResponse) abstractResponse;
    Errors error = Errors.forCode(response.data().errorCode());
    if (error == Errors.COORDINATOR_LOAD_IN_PROGRESS) {
        log.debug("The `ListTransactions` request sent to broker {} failed because the " + "coordinator is still loading state. Will try again after backing off", brokerId);
        return ApiResult.empty();
    } else if (error == Errors.COORDINATOR_NOT_AVAILABLE) {
        log.debug("The `ListTransactions` request sent to broker {} failed because the " + "coordinator is shutting down", brokerId);
        return ApiResult.failed(key, new CoordinatorNotAvailableException("ListTransactions " + "request sent to broker " + brokerId + " failed because the coordinator is shutting down"));
    } else if (error != Errors.NONE) {
        log.error("The `ListTransactions` request sent to broker {} failed because of an " + "unexpected error {}", brokerId, error);
        return ApiResult.failed(key, error.exception("ListTransactions request " + "sent to broker " + brokerId + " failed with an unexpected exception"));
    } else {
        List<TransactionListing> listings = response.data().transactionStates().stream()
            .map(transactionState -> new TransactionListing(
                transactionState.transactionalId(),
                transactionState.producerId(),
                TransactionState.parse(transactionState.transactionState())))
            .collect(Collectors.toList());
        return ApiResult.completed(key, listings);
    }
}
Also used : CoordinatorNotAvailableException(org.apache.kafka.common.errors.CoordinatorNotAvailableException) Logger(org.slf4j.Logger) AbstractResponse(org.apache.kafka.common.requests.AbstractResponse) Collection(java.util.Collection) Set(java.util.Set) ListTransactionsRequest(org.apache.kafka.common.requests.ListTransactionsRequest) Collectors(java.util.stream.Collectors) ListTransactionsRequestData(org.apache.kafka.common.message.ListTransactionsRequestData) ArrayList(java.util.ArrayList) TransactionState(org.apache.kafka.clients.admin.TransactionState) ListTransactionsResponse(org.apache.kafka.common.requests.ListTransactionsResponse) List(java.util.List) LogContext(org.apache.kafka.common.utils.LogContext) ListTransactionsOptions(org.apache.kafka.clients.admin.ListTransactionsOptions) Errors(org.apache.kafka.common.protocol.Errors) TransactionListing(org.apache.kafka.clients.admin.TransactionListing) Node(org.apache.kafka.common.Node)
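The four-way branch in ListTransactionsHandler is worth seeing in isolation: a retriable load-in-progress error yields an empty result (retried after backoff), a missing coordinator and any other error yield a failed result, and only NONE completes. A hedged sketch with a toy enum standing in for Kafka's Errors:

```java
public class TriageSketch {
    // Toy stand-in for org.apache.kafka.common.protocol.Errors.
    enum Err { NONE, COORDINATOR_LOAD_IN_PROGRESS, COORDINATOR_NOT_AVAILABLE, UNKNOWN_SERVER_ERROR }

    // Mirrors the branch order in ListTransactionsHandler.handleResponse:
    // an empty result means "retry after backoff"; failed results carry an exception.
    static String triage(Err error) {
        if (error == Err.COORDINATOR_LOAD_IN_PROGRESS) {
            return "empty";                            // ApiResult.empty() -> retried later
        } else if (error == Err.COORDINATOR_NOT_AVAILABLE) {
            return "failed:coordinator-shutting-down"; // ApiResult.failed(key, ...)
        } else if (error != Err.NONE) {
            return "failed:unexpected";                // ApiResult.failed(key, ...)
        } else {
            return "completed";                        // ApiResult.completed(key, listings)
        }
    }

    public static void main(String[] args) {
        for (Err e : Err.values()) {
            System.out.println(e + " -> " + triage(e));
        }
    }
}
```

The ordering matters: the retriable case must be checked before the generic non-NONE case, or a transient loading error would be reported to the caller as a failure.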

Example 28 with AbstractResponse

Use of org.apache.kafka.common.requests.AbstractResponse in project kafka by apache.

From the class DescribeProducersHandler, method handleResponse:

@Override
public ApiResult<TopicPartition, PartitionProducerState> handleResponse(Node broker, Set<TopicPartition> keys, AbstractResponse abstractResponse) {
    DescribeProducersResponse response = (DescribeProducersResponse) abstractResponse;
    Map<TopicPartition, PartitionProducerState> completed = new HashMap<>();
    Map<TopicPartition, Throwable> failed = new HashMap<>();
    List<TopicPartition> unmapped = new ArrayList<>();
    for (DescribeProducersResponseData.TopicResponse topicResponse : response.data().topics()) {
        for (DescribeProducersResponseData.PartitionResponse partitionResponse : topicResponse.partitions()) {
            TopicPartition topicPartition = new TopicPartition(topicResponse.name(), partitionResponse.partitionIndex());
            Errors error = Errors.forCode(partitionResponse.errorCode());
            if (error != Errors.NONE) {
                ApiError apiError = new ApiError(error, partitionResponse.errorMessage());
                handlePartitionError(topicPartition, apiError, failed, unmapped);
                continue;
            }
            List<ProducerState> activeProducers = partitionResponse.activeProducers().stream().map(activeProducer -> {
                OptionalLong currentTransactionFirstOffset = activeProducer.currentTxnStartOffset() < 0
                    ? OptionalLong.empty()
                    : OptionalLong.of(activeProducer.currentTxnStartOffset());
                OptionalInt coordinatorEpoch = activeProducer.coordinatorEpoch() < 0
                    ? OptionalInt.empty()
                    : OptionalInt.of(activeProducer.coordinatorEpoch());
                return new ProducerState(
                    activeProducer.producerId(),
                    activeProducer.producerEpoch(),
                    activeProducer.lastSequence(),
                    activeProducer.lastTimestamp(),
                    coordinatorEpoch,
                    currentTransactionFirstOffset);
            }).collect(Collectors.toList());
            completed.put(topicPartition, new PartitionProducerState(activeProducers));
        }
    }
    return new ApiResult<>(completed, failed, unmapped);
}
Also used : DescribeProducersOptions(org.apache.kafka.clients.admin.DescribeProducersOptions) ProducerState(org.apache.kafka.clients.admin.ProducerState) AbstractResponse(org.apache.kafka.common.requests.AbstractResponse) HashMap(java.util.HashMap) DescribeProducersRequest(org.apache.kafka.common.requests.DescribeProducersRequest) OptionalInt(java.util.OptionalInt) ApiError(org.apache.kafka.common.requests.ApiError) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) OptionalLong(java.util.OptionalLong) DescribeProducersResponse(org.apache.kafka.common.requests.DescribeProducersResponse) LogContext(org.apache.kafka.common.utils.LogContext) Map(java.util.Map) TopicPartition(org.apache.kafka.common.TopicPartition) PartitionProducerState(org.apache.kafka.clients.admin.DescribeProducersResult.PartitionProducerState) Logger(org.slf4j.Logger) DescribeProducersRequestData(org.apache.kafka.common.message.DescribeProducersRequestData) Collection(java.util.Collection) InvalidTopicException(org.apache.kafka.common.errors.InvalidTopicException) Set(java.util.Set) Collectors(java.util.stream.Collectors) CollectionUtils(org.apache.kafka.common.utils.CollectionUtils) List(java.util.List) DescribeProducersResponseData(org.apache.kafka.common.message.DescribeProducersResponseData) TopicAuthorizationException(org.apache.kafka.common.errors.TopicAuthorizationException) Errors(org.apache.kafka.common.protocol.Errors) Node(org.apache.kafka.common.Node) Collections(java.util.Collections)
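The sentinel-to-Optional conversion inside the lambda is a recurring wire-protocol idiom: the protocol encodes "absent" as a negative number, and the admin API surfaces it as an empty OptionalInt/OptionalLong. Extracted as a standalone sketch (the helper names are ours, not Kafka's):

```java
import java.util.OptionalInt;
import java.util.OptionalLong;

public class SentinelSketch {
    // currentTxnStartOffset() is negative on the wire when the producer has
    // no transaction in progress.
    static OptionalLong currentTransactionFirstOffset(long rawOffset) {
        return rawOffset < 0 ? OptionalLong.empty() : OptionalLong.of(rawOffset);
    }

    // coordinatorEpoch() is negative on the wire when no epoch is known.
    static OptionalInt coordinatorEpoch(int rawEpoch) {
        return rawEpoch < 0 ? OptionalInt.empty() : OptionalInt.of(rawEpoch);
    }

    public static void main(String[] args) {
        System.out.println(currentTransactionFirstOffset(-1)); // OptionalLong.empty
        System.out.println(currentTransactionFirstOffset(42)); // OptionalLong[42]
    }
}
```

Mapping the sentinel at the response boundary means no caller of the admin API ever has to know the magic value.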

Example 29 with AbstractResponse

Use of org.apache.kafka.common.requests.AbstractResponse in project kafka by apache.

From the class AllBrokersStrategy, method handleResponse:

@Override
public LookupResult<BrokerKey> handleResponse(Set<BrokerKey> keys, AbstractResponse abstractResponse) {
    validateLookupKeys(keys);
    MetadataResponse response = (MetadataResponse) abstractResponse;
    MetadataResponseData.MetadataResponseBrokerCollection brokers = response.data().brokers();
    if (brokers.isEmpty()) {
        log.debug("Metadata response contained no brokers. Will backoff and retry");
        return LookupResult.empty();
    } else {
        log.debug("Discovered all brokers {} to send requests to", brokers);
    }
    Map<BrokerKey, Integer> brokerKeys = brokers.stream().collect(Collectors.toMap(
        broker -> new BrokerKey(OptionalInt.of(broker.nodeId())),
        MetadataResponseData.MetadataResponseBroker::nodeId));
    return new LookupResult<>(Collections.singletonList(ANY_BROKER), Collections.emptyMap(), brokerKeys);
}
Also used : Logger(org.slf4j.Logger) AbstractResponse(org.apache.kafka.common.requests.AbstractResponse) Set(java.util.Set) HashMap(java.util.HashMap) OptionalInt(java.util.OptionalInt) Collectors(java.util.stream.Collectors) Objects(java.util.Objects) MetadataRequest(org.apache.kafka.common.requests.MetadataRequest) LogContext(org.apache.kafka.common.utils.LogContext) Map(java.util.Map) MetadataRequestData(org.apache.kafka.common.message.MetadataRequestData) MetadataResponseData(org.apache.kafka.common.message.MetadataResponseData) KafkaFutureImpl(org.apache.kafka.common.internals.KafkaFutureImpl) MetadataResponse(org.apache.kafka.common.requests.MetadataResponse) Collections(java.util.Collections)
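The Collectors.toMap call folds every broker from the metadata response into a lookup map keyed by a per-broker BrokerKey. The same shape, sketched with plain node ids and a toy BrokerKey rather than Kafka's nested class:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class BrokerMapSketch {
    // Toy stand-in for AllBrokersStrategy.BrokerKey; a record gives us the
    // value-based equals/hashCode a map key needs.
    record BrokerKey(int nodeId) {}

    // Build the BrokerKey -> nodeId map, as AllBrokersStrategy.handleResponse does.
    static Map<BrokerKey, Integer> keyByBroker(List<Integer> nodeIds) {
        return nodeIds.stream()
            .collect(Collectors.toMap(BrokerKey::new, Function.identity()));
    }

    public static void main(String[] args) {
        System.out.println(keyByBroker(List.of(0, 1, 2)));
    }
}
```

Note that Collectors.toMap throws IllegalStateException on duplicate keys, which is the desired behavior here: a metadata response should never list the same node id twice.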

Example 30 with AbstractResponse

Use of org.apache.kafka.common.requests.AbstractResponse in project kafka by apache.

From the class SaslClientAuthenticator, method receiveKafkaResponse:

private AbstractResponse receiveKafkaResponse() throws IOException {
    if (netInBuffer == null)
        netInBuffer = new NetworkReceive(node);
    NetworkReceive receive = netInBuffer;
    try {
        byte[] responseBytes = receiveResponseOrToken();
        if (responseBytes == null)
            return null;
        else {
            AbstractResponse response = NetworkClient.parseResponse(ByteBuffer.wrap(responseBytes), currentRequestHeader);
            currentRequestHeader = null;
            return response;
        }
    } catch (BufferUnderflowException | SchemaException | IllegalArgumentException e) {
        /*
         * Account for the fact that during re-authentication there may be responses
         * arriving for requests that were sent in the past.
         */
        if (reauthInfo.reauthenticating()) {
            /*
             * It didn't match the current request header, so it must be unrelated to
             * re-authentication. Save it so it can be processed later.
             */
            receive.payload().rewind();
            reauthInfo.pendingAuthenticatedReceives.add(receive);
            return null;
        }
        log.debug("Invalid SASL mechanism response, server may be expecting only GSSAPI tokens");
        setSaslState(SaslState.FAILED);
        throw new IllegalSaslStateException("Invalid SASL mechanism response, server may be expecting a different protocol", e);
    }
}
Also used : SchemaException(org.apache.kafka.common.protocol.types.SchemaException) AbstractResponse(org.apache.kafka.common.requests.AbstractResponse) NetworkReceive(org.apache.kafka.common.network.NetworkReceive) IllegalSaslStateException(org.apache.kafka.common.errors.IllegalSaslStateException) BufferUnderflowException(java.nio.BufferUnderflowException)
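The subtle step above is receive.payload().rewind(): the failed parse already consumed the buffer, leaving its position at the limit, and rewinding resets the position to 0 so the stashed NetworkReceive can be parsed again later. A stdlib-only demonstration of why the rewind matters:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RewindSketch {
    // Read the buffer once (as a failed parse attempt would), rewind, read again.
    static String readAfterRewind(ByteBuffer payload) {
        payload.get(new byte[payload.remaining()]); // first read consumes everything
        // Without this rewind, remaining() would be 0 and the second read empty.
        payload.rewind();
        byte[] again = new byte[payload.remaining()];
        payload.get(again);
        return new String(again, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("response-bytes".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAfterRewind(buf)); // prints "response-bytes"
    }
}
```

rewind() resets only the position, not the limit, so the stashed receive exposes exactly the bytes that arrived on the wire.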

Aggregations

AbstractResponse (org.apache.kafka.common.requests.AbstractResponse): 49
HashMap (java.util.HashMap): 38
ChannelBuilder (org.apache.kafka.common.network.ChannelBuilder): 38
KafkaFutureImpl (org.apache.kafka.common.internals.KafkaFutureImpl): 36
ArrayList (java.util.ArrayList): 28
Map (java.util.Map): 26
Errors (org.apache.kafka.common.protocol.Errors): 21
ApiError (org.apache.kafka.common.requests.ApiError): 18
KafkaFuture (org.apache.kafka.common.KafkaFuture): 16
List (java.util.List): 15
TreeMap (java.util.TreeMap): 15
MetadataResponse (org.apache.kafka.common.requests.MetadataResponse): 15
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 14
TopicPartition (org.apache.kafka.common.TopicPartition): 14
InvalidTopicException (org.apache.kafka.common.errors.InvalidTopicException): 14
LinkedList (java.util.LinkedList): 13
MetadataRequest (org.apache.kafka.common.requests.MetadataRequest): 13
Set (java.util.Set): 12
ApiException (org.apache.kafka.common.errors.ApiException): 12
HashSet (java.util.HashSet): 11