Usage examples for org.apache.kafka.clients.admin.Config collected from open-source projects.

Example 1 with Config

Use of org.apache.kafka.clients.admin.Config in project ksql by confluentinc.

The class KafkaTopicClientImpl, method isTopicDeleteEnabled.

private static boolean isTopicDeleteEnabled(final AdminClient adminClient) {
    try {
        DescribeClusterResult describeClusterResult = adminClient.describeCluster();
        Collection<Node> nodes = describeClusterResult.nodes().get();
        if (nodes.isEmpty()) {
            log.warn("No available broker found to fetch config info.");
            throw new KsqlException("Could not fetch broker information. KSQL cannot initialize");
        }
        ConfigResource resource = new ConfigResource(
                ConfigResource.Type.BROKER,
                String.valueOf(nodes.iterator().next().id()));
        Map<ConfigResource, Config> config = executeWithRetries(
                () -> adminClient.describeConfigs(Collections.singleton(resource)).all());
        return config.get(resource).entries().stream()
                .anyMatch(configEntry -> configEntry.name().equalsIgnoreCase("delete.topic.enable")
                        && configEntry.value().equalsIgnoreCase("true"));
    } catch (final Exception e) {
        log.error("Failed to initialize TopicClient: {}", e.getMessage());
        throw new KsqlException("Could not fetch broker information. KSQL cannot initialize", e);
    }
}
Also used : DescribeClusterResult(org.apache.kafka.clients.admin.DescribeClusterResult) Config(org.apache.kafka.clients.admin.Config) TopicConfig(org.apache.kafka.common.config.TopicConfig) Node(org.apache.kafka.common.Node) ConfigResource(org.apache.kafka.common.config.ConfigResource) KafkaTopicException(io.confluent.ksql.exception.KafkaTopicException) RetriableException(org.apache.kafka.common.errors.RetriableException) KafkaResponseGetFailedException(io.confluent.ksql.exception.KafkaResponseGetFailedException) ExecutionException(java.util.concurrent.ExecutionException) TopicExistsException(org.apache.kafka.common.errors.TopicExistsException)
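
A standalone version of the same check needs nothing beyond the Kafka admin API. Below is a minimal sketch, assuming a broker reachable at localhost:9092 and omitting the executeWithRetries wrapper used in the original; the class and variable names are illustrative only.

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.config.ConfigResource;

public class DeleteTopicEnableCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; adjust for your cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient adminClient = AdminClient.create(props)) {
            // Ask any live broker for its configuration.
            Collection<Node> nodes = adminClient.describeCluster().nodes().get();
            Node broker = nodes.iterator().next();
            ConfigResource resource =
                    new ConfigResource(ConfigResource.Type.BROKER, String.valueOf(broker.id()));
            Map<ConfigResource, Config> configs =
                    adminClient.describeConfigs(Collections.singleton(resource)).all().get();
            boolean deleteEnabled = configs.get(resource).entries().stream()
                    .anyMatch(e -> e.name().equalsIgnoreCase("delete.topic.enable")
                            && e.value().equalsIgnoreCase("true"));
            System.out.println("delete.topic.enable = " + deleteEnabled);
        }
    }
}

Kafka 1.0+ brokers default delete.topic.enable to true, so the check mainly catches clusters where topic deletion has been explicitly disabled.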

Example 2 with Config

Use of org.apache.kafka.clients.admin.Config in project strimzi by strimzi.

The class TopicSerializationTest, method testFromTopicMetadata.

@Test
public void testFromTopicMetadata() {
    List<ConfigEntry> entries = new ArrayList<>();
    entries.add(new ConfigEntry("foo", "bar"));
    Config topicConfig = new Config(entries);
    TopicMetadata meta = Utils.getTopicMetadata("test-topic", topicConfig);
    Topic topic = TopicSerialization.fromTopicMetadata(meta);
    assertEquals(new TopicName("test-topic"), topic.getTopicName());
    // Null map name because Kafka doesn't know about the map
    assertNull(topic.getMapName());
    assertEquals(singletonMap("foo", "bar"), topic.getConfig());
    assertEquals(2, topic.getNumPartitions());
    assertEquals(3, topic.getNumReplicas());
}
Also used : ConfigEntry(org.apache.kafka.clients.admin.ConfigEntry) Config(org.apache.kafka.clients.admin.Config) ArrayList(java.util.ArrayList) NewTopic(org.apache.kafka.clients.admin.NewTopic) Test(org.junit.Test)
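
Config and ConfigEntry are plain Kafka admin API types, so a fixture like the one above can be built and inspected without any Strimzi code. A small sketch (the entry names and values are arbitrary):

import java.util.Arrays;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;

public class ConfigRoundTrip {
    public static void main(String[] args) {
        // Build a Config from explicit entries, as the test above does.
        Config topicConfig = new Config(Arrays.asList(
                new ConfigEntry("foo", "bar"),
                new ConfigEntry("cleanup.policy", "compact")));
        // Entries can be looked up by name or iterated.
        System.out.println(topicConfig.get("foo").value()); // prints "bar"
        topicConfig.entries().forEach(e ->
                System.out.println(e.name() + " = " + e.value()));
    }
}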

Example 3 with Config

Use of org.apache.kafka.clients.admin.Config in project strimzi by strimzi.

The class Utils, method getTopicMetadata.

public static TopicMetadata getTopicMetadata(Topic kubeTopic) {
    List<Node> nodes = new ArrayList<>();
    for (int nodeId = 0; nodeId < kubeTopic.getNumReplicas(); nodeId++) {
        nodes.add(new Node(nodeId, "localhost", 9092 + nodeId));
    }
    List<TopicPartitionInfo> partitions = new ArrayList<>();
    for (int partitionId = 0; partitionId < kubeTopic.getNumPartitions(); partitionId++) {
        partitions.add(new TopicPartitionInfo(partitionId, nodes.get(0), nodes, nodes));
    }
    List<ConfigEntry> configs = new ArrayList<>();
    for (Map.Entry<String, String> entry : kubeTopic.getConfig().entrySet()) {
        configs.add(new ConfigEntry(entry.getKey(), entry.getValue()));
    }
    return new TopicMetadata(new TopicDescription(kubeTopic.getTopicName().toString(), false, partitions), new Config(configs));
}
Also used : Config(org.apache.kafka.clients.admin.Config) Node(org.apache.kafka.common.Node) ArrayList(java.util.ArrayList) ConfigEntry(org.apache.kafka.clients.admin.ConfigEntry) TopicPartitionInfo(org.apache.kafka.common.TopicPartitionInfo) TopicDescription(org.apache.kafka.clients.admin.TopicDescription) Map(java.util.Map)
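
The Kafka-side pieces assembled here (Node, TopicPartitionInfo, TopicDescription, Config) all have public constructors, which is what allows a fake topic description to be built for tests. A stripped-down sketch of the same construction for a single-broker, single-partition topic (the class name and values are illustrative):

import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartitionInfo;

public class FakeTopicDescription {
    public static void main(String[] args) {
        Node broker = new Node(0, "localhost", 9092);
        List<Node> replicas = Collections.singletonList(broker);
        // Partition 0, led by the only broker, with every replica in sync.
        TopicPartitionInfo p0 = new TopicPartitionInfo(0, broker, replicas, replicas);
        TopicDescription description =
                new TopicDescription("test-topic", false, Collections.singletonList(p0));
        Config config = new Config(
                Collections.singleton(new ConfigEntry("cleanup.policy", "compact")));
        System.out.println(description.name() + ": "
                + description.partitions().size() + " partition(s), "
                + config.entries().size() + " config entry");
    }
}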

Example 4 with Config

Use of org.apache.kafka.clients.admin.Config in project strimzi by strimzi.

The class TopicSerializationTest, method testToTopicConfig.

@Test
public void testToTopicConfig() {
    Topic topic = new Topic.Builder()
            .withTopicName("test-topic")
            .withConfigEntry("foo", "bar")
            .withNumPartitions(3)
            .withNumReplicas((short) 2)
            .withMapName("gee")
            .build();
    Map<ConfigResource, Config> config = TopicSerialization.toTopicConfig(topic);
    assertEquals(1, config.size());
    Map.Entry<ConfigResource, Config> c = config.entrySet().iterator().next();
    assertEquals(ConfigResource.Type.TOPIC, c.getKey().type());
    assertEquals("test-topic", c.getKey().name());
    assertEquals(1, c.getValue().entries().size());
    assertEquals("foo", c.getValue().get("foo").name());
    assertEquals("bar", c.getValue().get("foo").value());
}
Also used : Config(org.apache.kafka.clients.admin.Config) ConfigMapBuilder(io.fabric8.kubernetes.api.model.ConfigMapBuilder) NewTopic(org.apache.kafka.clients.admin.NewTopic) HashMap(java.util.HashMap) ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) Map(java.util.Map) Collections.singletonMap(java.util.Collections.singletonMap) ConfigResource(org.apache.kafka.common.config.ConfigResource) Test(org.junit.Test)
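
Topic-level settings can also be supplied at creation time through NewTopic (already present in the imports above) rather than via a ConfigResource map. A hedged sketch, assuming an existing AdminClient instance is passed in; the topic name and partition/replica counts mirror the test, while the config entry uses a real broker-recognized key:

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateTopicWithConfig {
    public static void create(AdminClient adminClient) throws Exception {
        Map<String, String> topicConfig = Collections.singletonMap(
                TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT);
        // 3 partitions, replication factor 2, plus the per-topic config map.
        NewTopic newTopic = new NewTopic("test-topic", 3, (short) 2).configs(topicConfig);
        adminClient.createTopics(Collections.singleton(newTopic)).all().get();
    }
}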

Example 5 with Config

Use of org.apache.kafka.clients.admin.Config in project strimzi by strimzi.

The class TopicSerialization, method toTopicConfig.

/**
 * Return a singleton map from the topic {@link ConfigResource} for the given topic,
 * to the {@link Config} of the given topic.
 */
public static Map<ConfigResource, Config> toTopicConfig(Topic topic) {
    Set<ConfigEntry> configEntries = new HashSet<>();
    for (Map.Entry<String, String> entry : topic.getConfig().entrySet()) {
        configEntries.add(new ConfigEntry(entry.getKey(), entry.getValue()));
    }
    Config config = new Config(configEntries);
    return Collections.singletonMap(new ConfigResource(ConfigResource.Type.TOPIC, topic.getTopicName().toString()), config);
}
Also used : ConfigEntry(org.apache.kafka.clients.admin.ConfigEntry) Config(org.apache.kafka.clients.admin.Config) LogConfig(kafka.log.LogConfig) HashMap(java.util.HashMap) ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) Map(java.util.Map) ConfigResource(org.apache.kafka.common.config.ConfigResource) HashSet(java.util.HashSet)
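
The Map<ConfigResource, Config> returned by toTopicConfig has exactly the shape accepted by AdminClient.alterConfigs (newer Kafka releases prefer incrementalAlterConfigs, but the classic call still takes this map). A minimal sketch of pushing such a map to the cluster, assuming an AdminClient instance is available:

import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ApplyTopicConfig {
    // Applies a toTopicConfig()-style map, e.g. TopicSerialization.toTopicConfig(topic).
    public static void apply(AdminClient adminClient,
                             Map<ConfigResource, Config> topicConfig) throws Exception {
        adminClient.alterConfigs(topicConfig).all().get();
    }
}

Because toTopicConfig returns a singleton map keyed by a TOPIC-typed ConfigResource, such a call alters the configuration of that one topic only.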

Aggregations

Config (org.apache.kafka.clients.admin.Config): 13
ConfigResource (org.apache.kafka.common.config.ConfigResource): 10
Map (java.util.Map): 7
ConfigEntry (org.apache.kafka.clients.admin.ConfigEntry): 7
TopicConfig (org.apache.kafka.common.config.TopicConfig): 6
ExecutionException (java.util.concurrent.ExecutionException): 5
NewTopic (org.apache.kafka.clients.admin.NewTopic): 5
Node (org.apache.kafka.common.Node): 5
AdminClient (org.apache.kafka.clients.admin.AdminClient): 4
TopicDescription (org.apache.kafka.clients.admin.TopicDescription): 4
KafkaResponseGetFailedException (io.confluent.ksql.exception.KafkaResponseGetFailedException): 3
KafkaTopicException (io.confluent.ksql.exception.KafkaTopicException): 3
Collections (java.util.Collections): 3
HashMap (java.util.HashMap): 3
Set (java.util.Set): 3
DescribeClusterResult (org.apache.kafka.clients.admin.DescribeClusterResult): 3
KafkaFuture (org.apache.kafka.common.KafkaFuture): 3
RetriableException (org.apache.kafka.common.errors.RetriableException): 3
TopicExistsException (org.apache.kafka.common.errors.TopicExistsException): 3
Logger (org.slf4j.Logger): 3