Example 1 with IsolatedTest

Use of io.strimzi.systemtest.annotations.IsolatedTest in project strimzi by strimzi.

From the class CruiseControlST, method testCruiseControlWithApiSecurityDisabled:

@IsolatedTest
void testCruiseControlWithApiSecurityDisabled(ExtensionContext extensionContext) {
    String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaWithCruiseControl(clusterName, 3, 3)
        .editMetadata()
            .withNamespace(namespace)
        .endMetadata()
        .editOrNewSpec()
            .editCruiseControl()
                .addToConfig("webserver.security.enable", "false")
                .addToConfig("webserver.ssl.enable", "false")
            .endCruiseControl()
        .endSpec()
        .build());
    resourceManager.createResource(extensionContext, KafkaRebalanceTemplates.kafkaRebalance(clusterName).editMetadata().withNamespace(namespace).endMetadata().build());
    KafkaRebalanceUtils.waitForKafkaRebalanceCustomResourceState(namespace, clusterName, KafkaRebalanceState.ProposalReady);
}
Also used: Matchers.containsString (org.hamcrest.Matchers.containsString), IsolatedTest (io.strimzi.systemtest.annotations.IsolatedTest)
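
The @IsolatedTest annotation marks tests that must not run in parallel with other system tests (the reasons quoted in later examples mention the shared Cluster Operator and multiple Kafka clusters per namespace). As a hedged sketch only, a method-level composed annotation with this behavior can be built from JUnit Jupiter's parallel-execution API (Jupiter 5.8+ for Resources.GLOBAL); the name IsolatedTestSketch and the global read-write lock are illustrative assumptions, not Strimzi's actual definition:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.ResourceAccessMode;
import org.junit.jupiter.api.parallel.ResourceLock;
import org.junit.jupiter.api.parallel.Resources;

// Composed @Test that takes a read-write lock on the global resource, so the
// JUnit platform schedules it alone, never concurrently with other tests.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Test
@ResourceLock(value = Resources.GLOBAL, mode = ResourceAccessMode.READ_WRITE)
public @interface IsolatedTestSketch {
    // Optional reason, mirroring usages such as @IsolatedTest("...") in the examples.
    String value() default "";
}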

Example 2 with IsolatedTest

Use of io.strimzi.systemtest.annotations.IsolatedTest in project strimzi by strimzi.

From the class CruiseControlST, method testCruiseControlWithRebalanceResourceAndRefreshAnnotation:

@IsolatedTest("Using more tha one Kafka cluster in one namespace")
@Tag(ACCEPTANCE)
void testCruiseControlWithRebalanceResourceAndRefreshAnnotation(ExtensionContext extensionContext) {
    String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaWithCruiseControl(clusterName, 3, 3).editMetadata().withNamespace(namespace).endMetadata().build());
    resourceManager.createResource(extensionContext, KafkaRebalanceTemplates.kafkaRebalance(clusterName).editMetadata().withNamespace(namespace).endMetadata().build());
    KafkaRebalanceUtils.doRebalancingProcess(new Reconciliation("test", KafkaRebalance.RESOURCE_KIND, namespace, clusterName), namespace, clusterName);
    LOGGER.info("Annotating KafkaRebalance: {} with 'refresh' anno", clusterName);
    KafkaRebalanceUtils.annotateKafkaRebalanceResource(new Reconciliation("test", KafkaRebalance.RESOURCE_KIND, namespace, clusterName), namespace, clusterName, KafkaRebalanceAnnotation.refresh);
    KafkaRebalanceUtils.waitForKafkaRebalanceCustomResourceState(namespace, clusterName, KafkaRebalanceState.ProposalReady);
    LOGGER.info("Trying rebalancing process again");
    KafkaRebalanceUtils.doRebalancingProcess(new Reconciliation("test", KafkaRebalance.RESOURCE_KIND, namespace, clusterName), namespace, clusterName);
}
Also used: Reconciliation (io.strimzi.operator.common.Reconciliation), Matchers.containsString (org.hamcrest.Matchers.containsString), IsolatedTest (io.strimzi.systemtest.annotations.IsolatedTest), Tag (org.junit.jupiter.api.Tag)
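
The refresh value applied by annotateKafkaRebalanceResource corresponds to the documented strimzi.io/rebalance annotation on the KafkaRebalance resource (alongside approve and stop). Outside the test helpers, a minimal sketch of setting it with fabric8's generic-resource API could look like this; the KubernetesClientBuilder wiring assumes fabric8 kubernetes-client 6.x, and the namespace, resource name, and apiVersion string should be verified against your Strimzi version:

import java.util.HashMap;
import java.util.Map;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class RebalanceRefreshSketch {
    public static void main(String[] args) {
        // Namespace and KafkaRebalance name are placeholders for illustration.
        String namespace = "my-namespace";
        String rebalanceName = "my-cluster";
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.genericKubernetesResources("kafka.strimzi.io/v1beta2", "KafkaRebalance")
                .inNamespace(namespace)
                .withName(rebalanceName)
                .edit(kr -> {
                    // Equivalent of: kubectl annotate kafkarebalance my-cluster strimzi.io/rebalance=refresh
                    Map<String, String> annotations = kr.getMetadata().getAnnotations();
                    if (annotations == null) {
                        annotations = new HashMap<>();
                        kr.getMetadata().setAnnotations(annotations);
                    }
                    annotations.put("strimzi.io/rebalance", "refresh");
                    return kr;
                });
        }
    }
}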

Example 3 with IsolatedTest

Use of io.strimzi.systemtest.annotations.IsolatedTest in project strimzi by strimzi.

From the class CruiseControlST, method testAutoCreationOfCruiseControlTopics:

@IsolatedTest
void testAutoCreationOfCruiseControlTopics(ExtensionContext extensionContext) {
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaWithCruiseControl(clusterName, 3, 3)
        .editMetadata()
            .withNamespace(namespace)
        .endMetadata()
        .editOrNewSpec()
            .editKafka()
                .addToConfig("auto.create.topics.enable", "false")
            .endKafka()
        .endSpec()
        .build());
    KafkaTopicUtils.waitForKafkaTopicReady(namespace, CRUISE_CONTROL_METRICS_TOPIC);
    KafkaTopicSpec metricsTopic = KafkaTopicResource.kafkaTopicClient().inNamespace(namespace).withName(CRUISE_CONTROL_METRICS_TOPIC).get().getSpec();
    KafkaTopicUtils.waitForKafkaTopicReady(namespace, CRUISE_CONTROL_MODEL_TRAINING_SAMPLES_TOPIC);
    KafkaTopicSpec modelTrainingTopic = KafkaTopicResource.kafkaTopicClient().inNamespace(namespace).withName(CRUISE_CONTROL_MODEL_TRAINING_SAMPLES_TOPIC).get().getSpec();
    KafkaTopicUtils.waitForKafkaTopicReady(namespace, CRUISE_CONTROL_PARTITION_METRICS_SAMPLES_TOPIC);
    KafkaTopicSpec partitionMetricsTopic = KafkaTopicResource.kafkaTopicClient().inNamespace(namespace).withName(CRUISE_CONTROL_PARTITION_METRICS_SAMPLES_TOPIC).get().getSpec();
    LOGGER.info("Checking partitions and replicas for {}", CRUISE_CONTROL_METRICS_TOPIC);
    assertThat(metricsTopic.getPartitions(), is(1));
    assertThat(metricsTopic.getReplicas(), is(3));
    LOGGER.info("Checking partitions and replicas for {}", CRUISE_CONTROL_MODEL_TRAINING_SAMPLES_TOPIC);
    assertThat(modelTrainingTopic.getPartitions(), is(32));
    assertThat(modelTrainingTopic.getReplicas(), is(2));
    LOGGER.info("Checking partitions and replicas for {}", CRUISE_CONTROL_PARTITION_METRICS_SAMPLES_TOPIC);
    assertThat(partitionMetricsTopic.getPartitions(), is(32));
    assertThat(partitionMetricsTopic.getReplicas(), is(2));
}
Also used: Matchers.containsString (org.hamcrest.Matchers.containsString), KafkaTopicSpec (io.strimzi.api.kafka.model.KafkaTopicSpec), IsolatedTest (io.strimzi.systemtest.annotations.IsolatedTest)
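
The same partition and replica counts can also be checked directly against the brokers with the standard Kafka Admin API. A minimal sketch, assuming Kafka clients 3.x and a reachable bootstrap address; the literal topic names stand in for the CRUISE_CONTROL_* constants used above and should be verified against the deployed Strimzi version:

import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class CruiseControlTopicCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Bootstrap address is a placeholder; point it at your cluster's listener.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        try (Admin admin = Admin.create(props)) {
            // Assumed default names of the Cruise Control topics created by Strimzi.
            Set<String> topics = Set.of(
                "strimzi.cruisecontrol.metrics",
                "strimzi.cruisecontrol.modeltrainingsamples",
                "strimzi.cruisecontrol.partitionmetricsamples");
            Map<String, TopicDescription> descriptions =
                admin.describeTopics(topics).allTopicNames().get();
            descriptions.forEach((name, desc) ->
                System.out.printf("%s: partitions=%d, replicas=%d%n",
                    name,
                    desc.partitions().size(),
                    desc.partitions().get(0).replicas().size()));
        }
    }
}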

Example 4 with IsolatedTest

Use of io.strimzi.systemtest.annotations.IsolatedTest in project strimzi by strimzi.

From the class LoggingChangeST, method testDynamicallySetClusterOperatorLoggingLevels:

@IsolatedTest("Scraping log from shared Cluster Operator")
@Tag(ROLLING_UPDATE)
void testDynamicallySetClusterOperatorLoggingLevels(ExtensionContext extensionContext) throws InterruptedException {
    final Map<String, String> coPod = DeploymentUtils.depSnapshot(INFRA_NAMESPACE, STRIMZI_DEPLOYMENT_NAME);
    final String coPodName = PodUtils.getPodsByPrefixInNameWithDynamicWait(INFRA_NAMESPACE, STRIMZI_DEPLOYMENT_NAME).get(0).getMetadata().getName();
    final String command = "cat /opt/strimzi/custom-config/log4j2.properties";
    String log4jConfig = "name = COConfig\n" + "monitorInterval = 30\n" + "\n" + "    appender.console.type = Console\n" + "    appender.console.name = STDOUT\n" + "    appender.console.layout.type = PatternLayout\n" + "    appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n\n" + "\n" + "    rootLogger.level = OFF\n" + "    rootLogger.appenderRefs = stdout\n" + "    rootLogger.appenderRef.console.ref = STDOUT\n" + "    rootLogger.additivity = false\n" + "\n" + "    # Kafka AdminClient logging is a bit noisy at INFO level\n" + "    logger.kafka.name = org.apache.kafka\n" + "    logger.kafka.level = OFF\n" + "    logger.kafka.additivity = false\n" + "\n" + "    # Zookeeper is very verbose even on INFO level -> We set it to WARN by default\n" + "    logger.zookeepertrustmanager.name = org.apache.zookeeper\n" + "    logger.zookeepertrustmanager.level = OFF\n" + "    logger.zookeepertrustmanager.additivity = false";
    ConfigMap coMap = new ConfigMapBuilder().withNewMetadata().addToLabels("app", "strimzi").withName(STRIMZI_DEPLOYMENT_NAME).withNamespace(INFRA_NAMESPACE).endMetadata().withData(Collections.singletonMap("log4j2.properties", log4jConfig)).build();
    LOGGER.info("Checking that original logging config is different from the new one");
    assertThat(log4jConfig, not(equalTo(cmdKubeClient().namespace(INFRA_NAMESPACE).execInPod(coPodName, "/bin/bash", "-c", command).out().trim())));
    LOGGER.info("Changing logging for cluster-operator");
    kubeClient().getClient().configMaps().inNamespace(INFRA_NAMESPACE).createOrReplace(coMap);
    LOGGER.info("Waiting for log4j2.properties will contain desired settings");
    TestUtils.waitFor("Logger change", Constants.GLOBAL_POLL_INTERVAL, Constants.GLOBAL_TIMEOUT, () -> cmdKubeClient().namespace(INFRA_NAMESPACE).execInPod(coPodName, "/bin/bash", "-c", command).out().contains("rootLogger.level = OFF"));
    LOGGER.info("Checking log4j2.properties in CO pod");
    String podLogConfig = cmdKubeClient().namespace(INFRA_NAMESPACE).execInPod(coPodName, "/bin/bash", "-c", command).out().trim();
    assertThat(podLogConfig, equalTo(log4jConfig));
    LOGGER.info("Checking if CO rolled its pod");
    assertThat(coPod, equalTo(DeploymentUtils.depSnapshot(INFRA_NAMESPACE, STRIMZI_DEPLOYMENT_NAME)));
    TestUtils.waitFor("log to be empty", Duration.ofMillis(100).toMillis(), Constants.SAFETY_RECONCILIATION_INTERVAL, () -> {
        String coLog = StUtils.getLogFromPodByTime(INFRA_NAMESPACE, coPodName, STRIMZI_DEPLOYMENT_NAME, "30s");
        LOGGER.warn(coLog);
        return coLog != null && coLog.isEmpty() && !DEFAULT_LOG4J_PATTERN.matcher(coLog).find();
    });
    LOGGER.info("Changing all levels from OFF to INFO/WARN");
    log4jConfig = log4jConfig.replaceAll("OFF", "INFO");
    coMap.setData(Collections.singletonMap("log4j2.properties", log4jConfig));
    LOGGER.info("Changing logging for cluster-operator");
    kubeClient().getClient().configMaps().inNamespace(INFRA_NAMESPACE).createOrReplace(coMap);
    LOGGER.info("Waiting for log4j2.properties will contain desired settings");
    TestUtils.waitFor("Logger change", Constants.GLOBAL_POLL_INTERVAL, Constants.GLOBAL_TIMEOUT, () -> cmdKubeClient().namespace(INFRA_NAMESPACE).execInPod(coPodName, "/bin/bash", "-c", command).out().contains("rootLogger.level = INFO"));
    LOGGER.info("Checking log4j2.properties in CO pod");
    podLogConfig = cmdKubeClient().namespace(INFRA_NAMESPACE).execInPod(coPodName, "/bin/bash", "-c", command).out().trim();
    assertThat(podLogConfig, equalTo(log4jConfig));
    LOGGER.info("Checking if CO rolled its pod");
    assertThat(coPod, equalTo(DeploymentUtils.depSnapshot(INFRA_NAMESPACE, STRIMZI_DEPLOYMENT_NAME)));
    TestUtils.waitFor("log to not be empty", Duration.ofMillis(100).toMillis(), Constants.SAFETY_RECONCILIATION_INTERVAL, () -> {
        String coLog = StUtils.getLogFromPodByTime(INFRA_NAMESPACE, coPodName, STRIMZI_DEPLOYMENT_NAME, "30s");
        return coLog != null && !coLog.isEmpty() && DEFAULT_LOG4J_PATTERN.matcher(coLog).find();
    });
}
Also used: ConfigMap (io.fabric8.kubernetes.api.model.ConfigMap), ConfigMapBuilder (io.fabric8.kubernetes.api.model.ConfigMapBuilder), IsolatedTest (io.strimzi.systemtest.annotations.IsolatedTest), Tag (org.junit.jupiter.api.Tag)
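
The logging change takes effect without rolling the operator pod because the mounted log4j2.properties sets monitorInterval, so Log4j 2 re-reads the file periodically; that is why the test asserts the deployment snapshot is unchanged. Below is a stripped-down sketch of the same ConfigMap replacement with a plain fabric8 client (assuming kubernetes-client 6.x); the namespace, ConfigMap name, and the truncated logging config are placeholders for illustration:

import java.util.Map;

import io.fabric8.kubernetes.api.model.ConfigMap;
import io.fabric8.kubernetes.api.model.ConfigMapBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class OperatorLoggingUpdate {
    public static void main(String[] args) {
        // Namespace and ConfigMap name are assumptions; they depend on how the
        // Cluster Operator was installed.
        String namespace = "strimzi-infra";
        String name = "strimzi-cluster-operator";
        String log4jConfig = "name = COConfig\n"
            // monitorInterval makes Log4j 2 re-read the file, so no pod restart is needed.
            + "monitorInterval = 30\n"
            + "rootLogger.level = INFO\n";   // truncated: appenders omitted for brevity
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            ConfigMap cm = new ConfigMapBuilder()
                .withNewMetadata()
                    .withName(name)
                    .withNamespace(namespace)
                    .addToLabels("app", "strimzi")
                .endMetadata()
                .withData(Map.of("log4j2.properties", log4jConfig))
                .build();
            client.configMaps().inNamespace(namespace).createOrReplace(cm);
        }
    }
}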

Example 5 with IsolatedTest

Use of io.strimzi.systemtest.annotations.IsolatedTest in project strimzi by strimzi.

From the class MetricsIsolatedST, method testKafkaExporterDataAfterExchange:

@IsolatedTest
@Tag(ACCEPTANCE)
@Tag(INTERNAL_CLIENTS_USED)
void testKafkaExporterDataAfterExchange(ExtensionContext extensionContext) {
    InternalKafkaClient internalKafkaClient = new InternalKafkaClient.Builder()
        .withUsingPodName(kafkaClientsPodName)
        .withTopicName(kafkaExporterTopic)
        .withNamespaceName(INFRA_NAMESPACE)
        .withClusterName(metricsClusterName)
        .withMessageCount(5000)
        .withListenerName(Constants.PLAIN_LISTENER_DEFAULT_NAME)
        .build();
    internalKafkaClient.checkProducedAndConsumedMessages(internalKafkaClient.sendMessagesPlain(), internalKafkaClient.receiveMessagesPlain());
    kafkaExporterMetricsData = collector.toBuilder().withNamespaceName(INFRA_NAMESPACE).withComponentType(ComponentType.KafkaExporter).build().collectMetricsFromPods();
    TestUtils.waitFor("Kafka Exporter will contain correct metrics", Constants.GLOBAL_POLL_INTERVAL, GLOBAL_TIMEOUT, () -> {
        try {
            assertThat("Kafka Exporter metrics should be non-empty", kafkaExporterMetricsData.size() > 0);
            kafkaExporterMetricsData.forEach((key, value) -> {
                assertThat("Value from collected metric should be non-empty", !value.isEmpty());
                assertThat(value, CoreMatchers.containsString("kafka_consumergroup_current_offset"));
                assertThat(value, CoreMatchers.containsString("kafka_consumergroup_lag"));
                assertThat(value, CoreMatchers.containsString("kafka_topic_partitions{topic=\"" + kafkaExporterTopic + "\"} 7"));
            });
            return true;
        } catch (Exception e) {
            return false;
        }
    });
}
Also used: InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient), IOException (java.io.IOException), ExecutionException (java.util.concurrent.ExecutionException), IsolatedTest (io.strimzi.systemtest.annotations.IsolatedTest), Tag (org.junit.jupiter.api.Tag)
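
The collector above scrapes the Kafka Exporter pods for Prometheus-format metrics text. The same substring checks can be reproduced against any reachable metrics endpoint with plain Java; in this sketch the URL (for example, via a local port-forward) is an assumption, while the metric names are the ones asserted in the test:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KafkaExporterScrapeSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: adjust host/port to wherever the Kafka Exporter
        // metrics are exposed (e.g. kubectl port-forward to the exporter pod).
        URI metricsUri = URI.create("http://localhost:9404/metrics");
        HttpClient client = HttpClient.newHttpClient();
        String body = client.send(
                HttpRequest.newBuilder(metricsUri).GET().build(),
                HttpResponse.BodyHandlers.ofString())
            .body();
        // Same substrings the test asserts on the collected metrics text.
        System.out.println("current offset metric present: "
            + body.contains("kafka_consumergroup_current_offset"));
        System.out.println("consumer group lag metric present: "
            + body.contains("kafka_consumergroup_lag"));
    }
}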

Aggregations

IsolatedTest (io.strimzi.systemtest.annotations.IsolatedTest): 92
Tag (org.junit.jupiter.api.Tag): 64
GenericKafkaListenerBuilder (io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListenerBuilder): 30
HashMap (java.util.HashMap): 22
LabelSelector (io.fabric8.kubernetes.api.model.LabelSelector): 20
KafkaTemplates (io.strimzi.systemtest.templates.crd.KafkaTemplates): 20
Collectors (java.util.stream.Collectors): 20
ExtensionContext (org.junit.jupiter.api.extension.ExtensionContext): 20
AbstractST (io.strimzi.systemtest.AbstractST): 18
REGRESSION (io.strimzi.systemtest.Constants.REGRESSION): 18
InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient): 18
KafkaTopicTemplates (io.strimzi.systemtest.templates.crd.KafkaTopicTemplates): 18
Map (java.util.Map): 18
KafkaResources (io.strimzi.api.kafka.model.KafkaResources): 16
KafkaClientsBuilder (io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder): 16
LogManager (org.apache.logging.log4j.LogManager): 16
Logger (org.apache.logging.log4j.Logger): 16
Constants (io.strimzi.systemtest.Constants): 14
SetupClusterOperator (io.strimzi.systemtest.resources.operator.SetupClusterOperator): 14
RollingUpdateUtils (io.strimzi.systemtest.utils.RollingUpdateUtils): 14