
Example 1 with PartitionReplica

Uses io.confluent.kafkarest.entities.PartitionReplica in the kafka-rest project by confluentinc.

From the class TopicManagerImpl, method toPartition:

private static Partition toPartition(String clusterId, String topicName, TopicPartitionInfo partitionInfo) {
    Set<Node> inSyncReplicas = new HashSet<>(partitionInfo.isr());
    List<PartitionReplica> replicas = new ArrayList<>();
    for (Node replica : partitionInfo.replicas()) {
        replicas.add(
            PartitionReplica.create(
                clusterId,
                topicName,
                partitionInfo.partition(),
                replica.id(),
                replica.equals(partitionInfo.leader()),
                inSyncReplicas.contains(replica)));
    }
    return Partition.create(clusterId, topicName, partitionInfo.partition(), replicas);
}
Also used: PartitionReplica (io.confluent.kafkarest.entities.PartitionReplica), Node (org.apache.kafka.common.Node), ArrayList (java.util.ArrayList), HashSet (java.util.HashSet)
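The core of toPartition is the leader/ISR classification: each replica node is marked as leader if it equals the partition's leader, and as in-sync if it appears in the ISR set. The following is a minimal, self-contained sketch of that pattern; plain broker ids and a local Replica record stand in for the real Kafka Node and PartitionReplica types, so the names here are illustrative only.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ReplicaClassificationSketch {

    // Hypothetical stand-in for io.confluent.kafkarest.entities.PartitionReplica.
    record Replica(int brokerId, boolean isLeader, boolean isInSync) {}

    // Mirrors the loop in toPartition: for each assigned replica, record
    // whether it is the leader and whether it is in the in-sync replica set.
    static List<Replica> classify(List<Integer> replicas, List<Integer> isr, int leaderId) {
        // Copy the ISR into a HashSet for O(1) membership tests, as the example does.
        Set<Integer> inSync = new HashSet<>(isr);
        List<Replica> out = new ArrayList<>();
        for (int brokerId : replicas) {
            out.add(new Replica(brokerId, brokerId == leaderId, inSync.contains(brokerId)));
        }
        return out;
    }

    public static void main(String[] args) {
        // Replicas on brokers 1, 2, 3; broker 1 leads; broker 3 fell out of the ISR.
        for (Replica r : classify(List.of(1, 2, 3), List.of(1, 2), 1)) {
            System.out.println(r.brokerId() + " leader=" + r.isLeader() + " inSync=" + r.isInSync());
        }
        // prints:
        // 1 leader=true inSync=true
        // 2 leader=false inSync=true
        // 3 leader=false inSync=false
    }
}
```

Building the ISR set up front keeps the per-replica check constant-time, which is why the original method converts partitionInfo.isr() to a HashSet before the loop.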

Example 2 with PartitionReplica

Uses io.confluent.kafkarest.entities.PartitionReplica in the kafka-rest project by confluentinc.

From the class ReplicaManagerImplTest, method searchByBrokerId_existingBroker_returnsReplicas:

@Test
public void searchByBrokerId_existingBroker_returnsReplicas() throws Exception {
    HashMap<TopicPartition, ReplicaInfo> partitions = new HashMap<>();
    partitions.put(new TopicPartition(TOPIC_NAME, PARTITION_ID_1), null);
    partitions.put(new TopicPartition(TOPIC_NAME, PARTITION_ID_2), null);
    expect(brokerManager.getBroker(CLUSTER_ID, BROKER_ID_1))
        .andReturn(completedFuture(Optional.of(BROKER_1)));
    expect(adminClient.describeLogDirs(eq(singletonList(BROKER_ID_1)), anyObject()))
        .andReturn(describeLogDirsResult);
    expect(describeLogDirsResult.values())
        .andReturn(
            singletonMap(
                BROKER_ID_1,
                KafkaFuture.completedFuture(
                    singletonMap(TOPIC_NAME, new LogDirInfo(null, partitions)))));
    expect(partitionManager.getPartition(CLUSTER_ID, TOPIC_NAME, PARTITION_ID_1))
        .andReturn(completedFuture(Optional.of(PARTITION_1)));
    expect(partitionManager.getPartition(CLUSTER_ID, TOPIC_NAME, PARTITION_ID_2))
        .andReturn(completedFuture(Optional.of(PARTITION_2)));
    replay(adminClient, describeLogDirsResult, brokerManager, partitionManager);
    List<PartitionReplica> replicas = replicaManager.searchReplicasByBrokerId(CLUSTER_ID, BROKER_ID_1).get();
    assertEquals(new HashSet<>(Arrays.asList(REPLICA_1_1, REPLICA_2_1)), new HashSet<>(replicas));
}
Also used: ReplicaInfo (org.apache.kafka.common.requests.DescribeLogDirsResponse.ReplicaInfo), HashMap (java.util.HashMap), PartitionReplica (io.confluent.kafkarest.entities.PartitionReplica), TopicPartition (org.apache.kafka.common.TopicPartition), LogDirInfo (org.apache.kafka.common.requests.DescribeLogDirsResponse.LogDirInfo), Test (org.junit.jupiter.api.Test)
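The test above mocks describeLogDirs so that the broker reports which topic-partitions it hosts, then asserts that searchReplicasByBrokerId returns exactly that broker's replicas. The sketch below shows the filtering pattern the test exercises, in self-contained form; the Replica record and the searchByBroker helper are illustrative stand-ins, not the actual ReplicaManagerImpl internals.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SearchByBrokerSketch {

    // Hypothetical stand-in for PartitionReplica, keyed by topic/partition/broker.
    record Replica(String topic, int partition, int brokerId) {}

    // Given the topics reported in a broker's log dirs, keep only the replica
    // entries that live on that broker, mirroring the search-by-broker flow.
    static List<Replica> searchByBroker(
            Map<String, List<Replica>> replicasByTopic, Set<String> topicsOnBroker, int brokerId) {
        List<Replica> found = new ArrayList<>();
        for (String topic : topicsOnBroker) {
            for (Replica r : replicasByTopic.getOrDefault(topic, List.of())) {
                if (r.brokerId() == brokerId) {
                    found.add(r);  // keep only this broker's replicas
                }
            }
        }
        return found;
    }

    public static void main(String[] args) {
        // topic-1 has two partitions; broker 1 holds a replica of each,
        // broker 2 holds a replica of partition 1 only.
        Map<String, List<Replica>> all = Map.of(
            "topic-1",
            List.of(
                new Replica("topic-1", 1, 1),
                new Replica("topic-1", 1, 2),
                new Replica("topic-1", 2, 1)));
        List<Replica> result = searchByBroker(all, Set.of("topic-1"), 1);
        System.out.println(result.size());  // prints 2: partitions 1 and 2 on broker 1
    }
}
```

This mirrors why the test expects REPLICA_1_1 and REPLICA_2_1 for BROKER_ID_1: the log-dir response names the partitions the broker hosts, and each partition's replica list is then filtered down to that broker.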

Aggregations

PartitionReplica (io.confluent.kafkarest.entities.PartitionReplica): 2
ArrayList (java.util.ArrayList): 1
HashMap (java.util.HashMap): 1
HashSet (java.util.HashSet): 1
Node (org.apache.kafka.common.Node): 1
TopicPartition (org.apache.kafka.common.TopicPartition): 1
LogDirInfo (org.apache.kafka.common.requests.DescribeLogDirsResponse.LogDirInfo): 1
ReplicaInfo (org.apache.kafka.common.requests.DescribeLogDirsResponse.ReplicaInfo): 1
Test (org.junit.jupiter.api.Test): 1