
Example 1 with ZkUtils

Use of kafka.utils.ZkUtils in project kafka by apache.

Class InternalTopicIntegrationTest, method getTopicConfigProperties.

private Properties getTopicConfigProperties(final String changelog) {
    // Note: You must initialize the ZkClient with ZKStringSerializer.  If you don't, then
    // createTopic() will only seem to work (it will return without error): the topic will exist
    // only in ZooKeeper and will be returned when listing topics, but Kafka itself does not
    // create the topic.
    final ZkClient zkClient = new ZkClient(CLUSTER.zKConnectString(), DEFAULT_ZK_SESSION_TIMEOUT_MS, DEFAULT_ZK_CONNECTION_TIMEOUT_MS, ZKStringSerializer$.MODULE$);
    try {
        final boolean isSecure = false;
        final ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(CLUSTER.zKConnectString()), isSecure);
        // fetchAllTopicConfigs returns a scala.collection.Map, hence the scala.collection.Iterator
        // and scala.Tuple2 handling below (see the "Also used" imports).
        final Map<String, Properties> topicConfigs = AdminUtils.fetchAllTopicConfigs(zkUtils);
        final Iterator it = topicConfigs.iterator();
        while (it.hasNext()) {
            final Tuple2<String, Properties> topicConfig = (Tuple2<String, Properties>) it.next();
            final String topic = topicConfig._1;
            final Properties prop = topicConfig._2;
            if (topic.equals(changelog)) {
                return prop;
            }
        }
        return new Properties();
    } finally {
        zkClient.close();
    }
}
Also used: ZkClient (org.I0Itec.zkclient.ZkClient), Tuple2 (scala.Tuple2), Iterator (scala.collection.Iterator), ZkUtils (kafka.utils.ZkUtils), Properties (java.util.Properties), ZkConnection (org.I0Itec.zkclient.ZkConnection)
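
For comparison, Examples 4 and 5 below build their ZkUtils through the ZkUtils.apply(...) factory (connect string, session timeout, connection timeout, security flag) instead of wiring ZkClient and ZkConnection by hand. A minimal sketch of the same config lookup in that style, reusing CLUSTER.zKConnectString() and the 30-second timeouts those examples use; the iteration itself is unchanged and elided here:

// Assumed imports: kafka.utils.ZkUtils, kafka.admin.AdminUtils, org.apache.kafka.common.security.JaasUtils
final ZkUtils zkUtils = ZkUtils.apply(CLUSTER.zKConnectString(), 30000, 30000, JaasUtils.isZkSecurityEnabled());
try {
    // Same call as above; the returned Map is a scala.collection.Map.
    final Map<String, Properties> topicConfigs = AdminUtils.fetchAllTopicConfigs(zkUtils);
    // ... iterate over topicConfigs exactly as in getTopicConfigProperties() above ...
} finally {
    // zkUtils.close() also closes the client it created, so no separate zkClient.close() is needed.
    zkUtils.close();
}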

Example 2 with ZkUtils

Use of kafka.utils.ZkUtils in project kafka by apache.

Class KafkaEmbedded, method deleteTopic.

public void deleteTopic(final String topic) {
    log.debug("Deleting topic { name: {} }", topic);
    final ZkClient zkClient = new ZkClient(zookeeperConnect(), DEFAULT_ZK_SESSION_TIMEOUT_MS, DEFAULT_ZK_CONNECTION_TIMEOUT_MS, ZKStringSerializer$.MODULE$);
    final boolean isSecure = false;
    final ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperConnect()), isSecure);
    // Marks the topic for deletion in ZooKeeper; the broker removes it asynchronously
    // (Examples 4 and 5 poll until the topic actually disappears).
    AdminUtils.deleteTopic(zkUtils, topic);
    zkClient.close();
}
Also used: ZkClient (org.I0Itec.zkclient.ZkClient), ZkUtils (kafka.utils.ZkUtils), ZkConnection (org.I0Itec.zkclient.ZkConnection)
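
A hypothetical call site pairing this with createTopic from Example 3. CLUSTER and the topic name are assumptions for illustration, but the single-argument createTopic and deleteTopic helpers match the calls Example 5 makes against the embedded cluster:

// Hypothetical sketch; the embedded cluster is assumed to expose the KafkaEmbedded helpers shown on this page.
@Test
public void shouldCreateAndDeleteTopic() throws Exception {
    CLUSTER.createTopic("zkutils-example-topic");    // Example 3: AdminUtils.createTopic via ZkUtils
    // ... produce to and consume from the topic here ...
    CLUSTER.deleteTopic("zkutils-example-topic");    // this example: AdminUtils.deleteTopic via ZkUtils
}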

Example 3 with ZkUtils

Use of kafka.utils.ZkUtils in project kafka by apache.

Class KafkaEmbedded, method createTopic.

/**
     * Create a Kafka topic with the given parameters.
     *
     * @param topic       The name of the topic.
     * @param partitions  The number of partitions for this topic.
     * @param replication The replication factor for (partitions of) this topic.
     * @param topicConfig Additional topic-level configuration settings.
     */
public void createTopic(final String topic, final int partitions, final int replication, final Properties topicConfig) {
    log.debug("Creating topic { name: {}, partitions: {}, replication: {}, config: {} }", topic, partitions, replication, topicConfig);
    // Note: You must initialize the ZkClient with ZKStringSerializer.  If you don't, then
    // createTopic() will only seem to work (it will return without error): the topic will exist
    // only in ZooKeeper and will be returned when listing topics, but Kafka itself does not
    // create the topic.
    final ZkClient zkClient = new ZkClient(zookeeperConnect(), DEFAULT_ZK_SESSION_TIMEOUT_MS, DEFAULT_ZK_CONNECTION_TIMEOUT_MS, ZKStringSerializer$.MODULE$);
    final boolean isSecure = false;
    final ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperConnect()), isSecure);
    AdminUtils.createTopic(zkUtils, topic, partitions, replication, topicConfig, RackAwareMode.Enforced$.MODULE$);
    zkClient.close();
}
Also used: ZkClient (org.I0Itec.zkclient.ZkClient), ZkUtils (kafka.utils.ZkUtils), ZkConnection (org.I0Itec.zkclient.ZkConnection)
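
A hypothetical call site showing what the topicConfig parameter carries. cleanup.policy is a standard Kafka topic-level configuration key; the variable name, topic name, and counts below are assumptions for illustration:

// Hypothetical usage of createTopic(topic, partitions, replication, topicConfig).
final Properties changelogConfig = new Properties();
changelogConfig.put("cleanup.policy", "compact");    // keep the topic log-compacted, as for a changelog
kafkaEmbedded.createTopic("example-store-changelog", 3, 1, changelogConfig);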

Example 4 with ZkUtils

Use of kafka.utils.ZkUtils in project kafka by apache.

Class ResetIntegrationTest, method assertInternalTopicsGotDeleted.

private void assertInternalTopicsGotDeleted(final String intermediateUserTopic) {
    final Set<String> expectedRemainingTopicsAfterCleanup = new HashSet<>();
    expectedRemainingTopicsAfterCleanup.add(INPUT_TOPIC);
    if (intermediateUserTopic != null) {
        expectedRemainingTopicsAfterCleanup.add(intermediateUserTopic);
    }
    expectedRemainingTopicsAfterCleanup.add(OUTPUT_TOPIC);
    expectedRemainingTopicsAfterCleanup.add(OUTPUT_TOPIC_2);
    expectedRemainingTopicsAfterCleanup.add(OUTPUT_TOPIC_2_RERUN);
    expectedRemainingTopicsAfterCleanup.add("__consumer_offsets");
    Set<String> allTopics;
    ZkUtils zkUtils = null;
    try {
        zkUtils = ZkUtils.apply(CLUSTER.zKConnectString(), 30000, 30000, JaasUtils.isZkSecurityEnabled());
        do {
            Utils.sleep(100);
            allTopics = new HashSet<>();
            allTopics.addAll(scala.collection.JavaConversions.seqAsJavaList(zkUtils.getAllTopics()));
        } while (allTopics.size() != expectedRemainingTopicsAfterCleanup.size());
    } finally {
        if (zkUtils != null) {
            zkUtils.close();
        }
    }
    assertThat(allTopics, equalTo(expectedRemainingTopicsAfterCleanup));
}
Also used: ZkUtils (kafka.utils.ZkUtils), HashSet (java.util.HashSet)
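
The do/while loop exists because topic deletion is asynchronous (see the note in Example 2): the test re-reads the full topic list from ZooKeeper every 100 ms until only the expected topics remain. Below is a sketch of the same loop with a bounded number of attempts; the retry limit and the AssertionError are assumptions, not part of the original test:

// Sketch: the same polling loop, but giving up after roughly 30 seconds instead of waiting forever.
Set<String> allTopics;
int attempts = 0;
do {
    if (attempts++ >= 300) {
        throw new AssertionError("internal topics were not deleted in time");
    }
    Utils.sleep(100);
    allTopics = new HashSet<>(scala.collection.JavaConversions.seqAsJavaList(zkUtils.getAllTopics()));
} while (allTopics.size() != expectedRemainingTopicsAfterCleanup.size());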

Example 5 with ZkUtils

Use of kafka.utils.ZkUtils in project kafka by apache.

Class ResetIntegrationTest, method testReprocessingFromScratchAfterResetWithIntermediateUserTopic.

@Test
public void testReprocessingFromScratchAfterResetWithIntermediateUserTopic() throws Exception {
    CLUSTER.createTopic(INTERMEDIATE_USER_TOPIC);
    final Properties streamsConfiguration = prepareTest();
    final Properties resultTopicConsumerConfig = TestUtils.consumerConfig(CLUSTER.bootstrapServers(), APP_ID + "-standard-consumer-" + OUTPUT_TOPIC, LongDeserializer.class, LongDeserializer.class);
    // RUN
    KafkaStreams streams = new KafkaStreams(setupTopologyWithIntermediateUserTopic(OUTPUT_TOPIC_2), streamsConfiguration);
    streams.start();
    final List<KeyValue<Long, Long>> result = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(resultTopicConsumerConfig, OUTPUT_TOPIC, 10, 60000);
    // receive only the first values to make sure the intermediate user topic is not consumed completely;
    // this is required to test "seekToEnd" for intermediate topics
    final List<KeyValue<Long, Long>> result2 = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(resultTopicConsumerConfig, OUTPUT_TOPIC_2, 10);
    streams.close();
    TestUtils.waitForCondition(consumerGroupInactive, TIMEOUT_MULTIPLIER * STREAMS_CONSUMER_TIMEOUT, "Streams Application consumer group did not time out after " + (TIMEOUT_MULTIPLIER * STREAMS_CONSUMER_TIMEOUT) + " ms.");
    // insert a bad record to make sure the intermediate user topic gets seekToEnd()
    mockTime.sleep(1);
    IntegrationTestUtils.produceKeyValuesSynchronouslyWithTimestamp(INTERMEDIATE_USER_TOPIC, Collections.singleton(new KeyValue<>(-1L, "badRecord-ShouldBeSkipped")), TestUtils.producerConfig(CLUSTER.bootstrapServers(), LongSerializer.class, StringSerializer.class), mockTime.milliseconds());
    // RESET
    streams = new KafkaStreams(setupTopologyWithIntermediateUserTopic(OUTPUT_TOPIC_2_RERUN), streamsConfiguration);
    streams.cleanUp();
    cleanGlobal(INTERMEDIATE_USER_TOPIC);
    TestUtils.waitForCondition(consumerGroupInactive, TIMEOUT_MULTIPLIER * CLEANUP_CONSUMER_TIMEOUT, "Reset Tool consumer group did not time out after " + (TIMEOUT_MULTIPLIER * CLEANUP_CONSUMER_TIMEOUT) + " ms.");
    assertInternalTopicsGotDeleted(INTERMEDIATE_USER_TOPIC);
    // RE-RUN
    streams.start();
    final List<KeyValue<Long, Long>> resultRerun = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(resultTopicConsumerConfig, OUTPUT_TOPIC, 10, 60000);
    final List<KeyValue<Long, Long>> resultRerun2 = IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(resultTopicConsumerConfig, OUTPUT_TOPIC_2_RERUN, 10);
    streams.close();
    assertThat(resultRerun, equalTo(result));
    assertThat(resultRerun2, equalTo(result2));
    TestUtils.waitForCondition(consumerGroupInactive, TIMEOUT_MULTIPLIER * CLEANUP_CONSUMER_TIMEOUT, "Reset Tool consumer group did not time out after " + (TIMEOUT_MULTIPLIER * CLEANUP_CONSUMER_TIMEOUT) + " ms.");
    cleanGlobal(INTERMEDIATE_USER_TOPIC);
    CLUSTER.deleteTopic(INTERMEDIATE_USER_TOPIC);
    Set<String> allTopics;
    ZkUtils zkUtils = null;
    try {
        zkUtils = ZkUtils.apply(CLUSTER.zKConnectString(), 30000, 30000, JaasUtils.isZkSecurityEnabled());
        do {
            Utils.sleep(100);
            allTopics = new HashSet<>();
            allTopics.addAll(scala.collection.JavaConversions.seqAsJavaList(zkUtils.getAllTopics()));
        } while (allTopics.contains(INTERMEDIATE_USER_TOPIC));
    } finally {
        if (zkUtils != null) {
            zkUtils.close();
        }
    }
}
Also used: KafkaStreams (org.apache.kafka.streams.KafkaStreams), KeyValue (org.apache.kafka.streams.KeyValue), LongSerializer (org.apache.kafka.common.serialization.LongSerializer), ZkUtils (kafka.utils.ZkUtils), Properties (java.util.Properties), StringSerializer (org.apache.kafka.common.serialization.StringSerializer), Test (org.junit.Test)
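
The try/finally block at the end of this test repeats the polling idiom from Example 4, this time waiting for a single topic to disappear. A hypothetical helper that factors it out; the method name is an assumption, while every call inside it appears in the examples above:

// Hypothetical helper mirroring the do/while blocks in Examples 4 and 5.
private void waitForTopicDeletion(final String topic) {
    ZkUtils zkUtils = null;
    try {
        zkUtils = ZkUtils.apply(CLUSTER.zKConnectString(), 30000, 30000, JaasUtils.isZkSecurityEnabled());
        Set<String> allTopics;
        do {
            Utils.sleep(100);
            allTopics = new HashSet<>(scala.collection.JavaConversions.seqAsJavaList(zkUtils.getAllTopics()));
        } while (allTopics.contains(topic));
    } finally {
        if (zkUtils != null) {
            zkUtils.close();
        }
    }
}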

Aggregations

ZkUtils (kafka.utils.ZkUtils): 61
ZkClient (org.I0Itec.zkclient.ZkClient): 26
Properties (java.util.Properties): 25
ZkConnection (org.I0Itec.zkclient.ZkConnection): 22
Test (org.testng.annotations.Test): 18
Configuration (org.apache.commons.configuration.Configuration): 16
HashMap (java.util.HashMap): 8
KafkaConfig (kafka.server.KafkaConfig): 8
TestingServer (org.apache.curator.test.TestingServer): 8
KafkaServerStartable (kafka.server.KafkaServerStartable): 7
ServerSocket (java.net.ServerSocket): 6
Test (org.junit.Test): 4
File (java.io.File): 3
Path (java.nio.file.Path): 2
Level (java.util.logging.Level): 2
TopicMetadata (kafka.api.TopicMetadata): 2
TopicExistsException (kafka.common.TopicExistsException): 2
InstanceSpec (org.apache.curator.test.InstanceSpec): 2
TopicDescription (org.apache.kafka.clients.admin.TopicDescription): 2
Before (org.junit.Before): 2