Example 16 with ParallelNamespaceTest

Use of io.strimzi.systemtest.annotations.ParallelNamespaceTest in project strimzi by strimzi.

Class ConnectIsolatedST, method testKafkaConnectWithScramShaAuthenticationRolledAfterPasswordChanged.

@ParallelNamespaceTest
@Tag(INTERNAL_CLIENTS_USED)
// changing the password in the Secret should trigger a rolling update (RU) of the Connect pods
void testKafkaConnectWithScramShaAuthenticationRolledAfterPasswordChanged(ExtensionContext extensionContext) {
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(INFRA_NAMESPACE, extensionContext);
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String userName = mapWithTestUsers.get(extensionContext.getDisplayName());
    final String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    final String kafkaClientsName = mapWithKafkaClientNames.get(extensionContext.getDisplayName());
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3)
        .editSpec()
            .editKafka()
                .withListeners(new GenericKafkaListenerBuilder()
                    .withName(Constants.PLAIN_LISTENER_DEFAULT_NAME)
                    .withPort(9092)
                    .withType(KafkaListenerType.INTERNAL)
                    .withTls(false)
                    .withAuth(new KafkaListenerAuthenticationScramSha512())
                    .build())
            .endKafka()
        .endSpec()
        .build());
    Secret passwordSecret = new SecretBuilder().withNewMetadata().withName("custom-pwd-secret").endMetadata().addToData("pwd", "MTIzNDU2Nzg5").build();
    kubeClient(namespaceName).createSecret(passwordSecret);
    KafkaUser kafkaUser = KafkaUserTemplates.scramShaUser(clusterName, userName)
        .editSpec()
            .withNewKafkaUserScramSha512ClientAuthentication()
                .withNewPassword()
                    .withNewValueFrom()
                        .withNewSecretKeyRef("pwd", "custom-pwd-secret", false)
                    .endValueFrom()
                .endPassword()
            .endKafkaUserScramSha512ClientAuthentication()
        .endSpec()
        .build();
    resourceManager.createResource(extensionContext, kafkaUser);
    resourceManager.createResource(extensionContext, KafkaClientsTemplates.kafkaClients(false, kafkaClientsName).build());
    resourceManager.createResource(extensionContext, KafkaUserTemplates.scramShaUser(clusterName, userName).build());
    resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(clusterName, topicName).build());
    resourceManager.createResource(extensionContext, KafkaConnectTemplates.kafkaConnect(extensionContext, clusterName, 1)
        .withNewSpec()
            .withBootstrapServers(KafkaResources.plainBootstrapAddress(clusterName))
            .withNewKafkaClientAuthenticationScramSha512()
                .withUsername(userName)
                .withPasswordSecret(new PasswordSecretSourceBuilder()
                    .withSecretName(userName)
                    .withPassword("password")
                    .build())
            .endKafkaClientAuthenticationScramSha512()
            .addToConfig("key.converter.schemas.enable", false)
            .addToConfig("value.converter.schemas.enable", false)
            .addToConfig("key.converter", "org.apache.kafka.connect.storage.StringConverter")
            .addToConfig("value.converter", "org.apache.kafka.connect.storage.StringConverter")
            .withVersion(Environment.ST_KAFKA_VERSION)
            .withReplicas(1)
        .endSpec()
        .build());
    final String kafkaConnectPodName = kubeClient(namespaceName).listPodsByPrefixInName(KafkaConnectResources.deploymentName(clusterName)).get(0).getMetadata().getName();
    KafkaConnectUtils.waitUntilKafkaConnectRestApiIsAvailable(namespaceName, kafkaConnectPodName);
    Map<String, String> connectSnapshot = DeploymentUtils.depSnapshot(namespaceName, KafkaConnectResources.deploymentName(clusterName));
    String newPassword = "bmVjb0ppbmVob05lelNwcmF2bnlQYXNzd29yZA==";
    Secret newPasswordSecret = new SecretBuilder().withNewMetadata().withName("new-custom-pwd-secret").endMetadata().addToData("pwd", newPassword).build();
    kubeClient(namespaceName).createSecret(newPasswordSecret);
    kafkaUser = KafkaUserTemplates.scramShaUser(clusterName, userName)
        .editSpec()
            .withNewKafkaUserScramSha512ClientAuthentication()
                .withNewPassword()
                    .withNewValueFrom()
                        .withNewSecretKeyRef("pwd", "new-custom-pwd-secret", false)
                    .endValueFrom()
                .endPassword()
            .endKafkaUserScramSha512ClientAuthentication()
        .endSpec()
        .build();
    resourceManager.createResource(extensionContext, kafkaUser);
    DeploymentUtils.waitTillDepHasRolled(namespaceName, KafkaConnectResources.deploymentName(clusterName), 1, connectSnapshot);
    final String kafkaConnectPodNameAfterRU = kubeClient(namespaceName).listPodsByPrefixInName(KafkaConnectResources.deploymentName(clusterName)).get(0).getMetadata().getName();
    KafkaConnectUtils.waitUntilKafkaConnectRestApiIsAvailable(namespaceName, kafkaConnectPodNameAfterRU);
}
Also used : KafkaListenerAuthenticationScramSha512(io.strimzi.api.kafka.model.listener.KafkaListenerAuthenticationScramSha512) Secret(io.fabric8.kubernetes.api.model.Secret) SecretBuilder(io.fabric8.kubernetes.api.model.SecretBuilder) GenericKafkaListenerBuilder(io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListenerBuilder) Matchers.containsString(org.hamcrest.Matchers.containsString) KafkaUser(io.strimzi.api.kafka.model.KafkaUser) PasswordSecretSourceBuilder(io.strimzi.api.kafka.model.PasswordSecretSourceBuilder) ParallelNamespaceTest(io.strimzi.systemtest.annotations.ParallelNamespaceTest) Tag(org.junit.jupiter.api.Tag)
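
For reference (not part of the test), the two Secret data values above are plain base64 strings; a minimal standalone sketch decoding them with the JDK shows the passwords the test switches between (class name is illustrative):

import java.util.Base64;

public class DecodeScramPasswords {
    public static void main(String[] args) {
        // value stored under "pwd" in custom-pwd-secret
        String initial = new String(Base64.getDecoder().decode("MTIzNDU2Nzg5"));
        // value stored under "pwd" in new-custom-pwd-secret
        String updated = new String(Base64.getDecoder().decode("bmVjb0ppbmVob05lelNwcmF2bnlQYXNzd29yZA=="));
        System.out.println(initial); // 123456789
        System.out.println(updated); // necoJinehoNezSpravnyPassword
    }
}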

Example 17 with ParallelNamespaceTest

Use of io.strimzi.systemtest.annotations.ParallelNamespaceTest in project strimzi by strimzi.

Class ConnectIsolatedST, method testMountingSecretAndConfigMapAsVolumesAndEnvVars.

@ParallelNamespaceTest
@SuppressWarnings({ "checkstyle:MethodLength" })
void testMountingSecretAndConfigMapAsVolumesAndEnvVars(ExtensionContext extensionContext) {
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(INFRA_NAMESPACE, extensionContext);
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String kafkaClientsName = mapWithKafkaClientNames.get(extensionContext.getDisplayName());
    final String secretPassword = "password";
    final String encodedPassword = Base64.getEncoder().encodeToString(secretPassword.getBytes());
    final String secretEnv = "MY_CONNECT_SECRET";
    final String configMapEnv = "MY_CONNECT_CONFIG_MAP";
    final String dotedSecretEnv = "MY_DOTED_CONNECT_SECRET";
    final String dotedConfigMapEnv = "MY_DOTED_CONNECT_CONFIG_MAP";
    final String configMapName = "connect-config-map";
    final String secretName = "connect-secret";
    final String dotedConfigMapName = "connect.config.map";
    final String dotedSecretName = "connect.secret";
    final String configMapKey = "my-key";
    final String secretKey = "my-secret-key";
    final String configMapValue = "my-value";
    Secret connectSecret = new SecretBuilder().withNewMetadata().withName(secretName).endMetadata().withType("Opaque").addToData(secretKey, encodedPassword).build();
    ConfigMap configMap = new ConfigMapBuilder().editOrNewMetadata().withName(configMapName).endMetadata().addToData(configMapKey, configMapValue).build();
    Secret dotedConnectSecret = new SecretBuilder().withNewMetadata().withName(dotedSecretName).endMetadata().withType("Opaque").addToData(secretKey, encodedPassword).build();
    ConfigMap dotedConfigMap = new ConfigMapBuilder().editOrNewMetadata().withName(dotedConfigMapName).endMetadata().addToData(configMapKey, configMapValue).build();
    kubeClient(namespaceName).createSecret(connectSecret);
    kubeClient(namespaceName).createSecret(dotedConnectSecret);
    kubeClient(namespaceName).getClient().configMaps().inNamespace(namespaceName).createOrReplace(configMap);
    kubeClient(namespaceName).getClient().configMaps().inNamespace(namespaceName).createOrReplace(dotedConfigMap);
    resourceManager.createResource(extensionContext, KafkaClientsTemplates.kafkaClients(false, kafkaClientsName).build());
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3).build());
    resourceManager.createResource(extensionContext, KafkaConnectTemplates.kafkaConnect(extensionContext, clusterName, 1)
        .editMetadata()
            .addToAnnotations(Annotations.STRIMZI_IO_USE_CONNECTOR_RESOURCES, "true")
        .endMetadata()
        .editSpec()
            .withNewExternalConfiguration()
                .addNewVolume().withName(secretName).withSecret(new SecretVolumeSourceBuilder().withSecretName(secretName).build()).endVolume()
                .addNewVolume().withName(configMapName).withConfigMap(new ConfigMapVolumeSourceBuilder().withName(configMapName).build()).endVolume()
                .addNewVolume().withName(dotedSecretName).withSecret(new SecretVolumeSourceBuilder().withSecretName(dotedSecretName).build()).endVolume()
                .addNewVolume().withName(dotedConfigMapName).withConfigMap(new ConfigMapVolumeSourceBuilder().withName(dotedConfigMapName).build()).endVolume()
                .addNewEnv().withName(secretEnv).withNewValueFrom().withSecretKeyRef(new SecretKeySelectorBuilder().withKey(secretKey).withName(connectSecret.getMetadata().getName()).withOptional(false).build()).endValueFrom().endEnv()
                .addNewEnv().withName(configMapEnv).withNewValueFrom().withConfigMapKeyRef(new ConfigMapKeySelectorBuilder().withKey(configMapKey).withName(configMap.getMetadata().getName()).withOptional(false).build()).endValueFrom().endEnv()
                .addNewEnv().withName(dotedSecretEnv).withNewValueFrom().withSecretKeyRef(new SecretKeySelectorBuilder().withKey(secretKey).withName(dotedConnectSecret.getMetadata().getName()).withOptional(false).build()).endValueFrom().endEnv()
                .addNewEnv().withName(dotedConfigMapEnv).withNewValueFrom().withConfigMapKeyRef(new ConfigMapKeySelectorBuilder().withKey(configMapKey).withName(dotedConfigMap.getMetadata().getName()).withOptional(false).build()).endValueFrom().endEnv()
            .endExternalConfiguration()
        .endSpec()
        .build());
    String connectPodName = kubeClient(namespaceName).listPodsByPrefixInName(KafkaConnectResources.deploymentName(clusterName)).get(0).getMetadata().getName();
    LOGGER.info("Check if the ENVs contains desired values");
    assertThat(cmdKubeClient(namespaceName).execInPod(connectPodName, "/bin/bash", "-c", "printenv " + secretEnv).out().trim(), equalTo(secretPassword));
    assertThat(cmdKubeClient(namespaceName).execInPod(connectPodName, "/bin/bash", "-c", "printenv " + configMapEnv).out().trim(), equalTo(configMapValue));
    assertThat(cmdKubeClient(namespaceName).execInPod(connectPodName, "/bin/bash", "-c", "printenv " + dotedSecretEnv).out().trim(), equalTo(secretPassword));
    assertThat(cmdKubeClient(namespaceName).execInPod(connectPodName, "/bin/bash", "-c", "printenv " + dotedConfigMapEnv).out().trim(), equalTo(configMapValue));
    LOGGER.info("Check if volumes contains desired values");
    assertThat(cmdKubeClient(namespaceName).execInPod(connectPodName, "/bin/bash", "-c", "cat external-configuration/" + configMapName + "/" + configMapKey).out().trim(), equalTo(configMapValue));
    assertThat(cmdKubeClient(namespaceName).execInPod(connectPodName, "/bin/bash", "-c", "cat external-configuration/" + secretName + "/" + secretKey).out().trim(), equalTo(secretPassword));
    assertThat(cmdKubeClient(namespaceName).execInPod(connectPodName, "/bin/bash", "-c", "cat external-configuration/" + dotedConfigMapName + "/" + configMapKey).out().trim(), equalTo(configMapValue));
    assertThat(cmdKubeClient(namespaceName).execInPod(connectPodName, "/bin/bash", "-c", "cat external-configuration/" + dotedSecretName + "/" + secretKey).out().trim(), equalTo(secretPassword));
}
Also used : Secret(io.fabric8.kubernetes.api.model.Secret) SecretBuilder(io.fabric8.kubernetes.api.model.SecretBuilder) SecretVolumeSourceBuilder(io.fabric8.kubernetes.api.model.SecretVolumeSourceBuilder) ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) ConfigMapKeySelectorBuilder(io.fabric8.kubernetes.api.model.ConfigMapKeySelectorBuilder) ConfigMapBuilder(io.fabric8.kubernetes.api.model.ConfigMapBuilder) ConfigMapVolumeSourceBuilder(io.fabric8.kubernetes.api.model.ConfigMapVolumeSourceBuilder) Matchers.containsString(org.hamcrest.Matchers.containsString) SecretKeySelectorBuilder(io.fabric8.kubernetes.api.model.SecretKeySelectorBuilder) ParallelNamespaceTest(io.strimzi.systemtest.annotations.ParallelNamespaceTest)
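
Note that the assertions compare against the plain secretPassword rather than encodedPassword: Secret data is stored base64-encoded, but Kubernetes presents the decoded bytes to the container, both as environment variables and as mounted files. A minimal sketch of that encode/decode round trip (plain JDK, illustrative class name):

import java.util.Base64;

public class SecretValueRoundTrip {
    public static void main(String[] args) {
        String secretPassword = "password";
        // what goes into the Secret's data field
        String encoded = Base64.getEncoder().encodeToString(secretPassword.getBytes());
        System.out.println(encoded); // cGFzc3dvcmQ=
        // what the container sees via printenv or the mounted file
        System.out.println(new String(Base64.getDecoder().decode(encoded))); // password
    }
}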

Example 18 with ParallelNamespaceTest

Use of io.strimzi.systemtest.annotations.ParallelNamespaceTest in project strimzi by strimzi.

Class ConnectIsolatedST, method testScaleConnectAndConnectorSubresource.

@ParallelNamespaceTest
@Tag(SCALABILITY)
@Tag(CONNECTOR_OPERATOR)
void testScaleConnectAndConnectorSubresource(ExtensionContext extensionContext) {
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(INFRA_NAMESPACE, extensionContext);
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    final String kafkaClientsName = mapWithKafkaClientNames.get(extensionContext.getDisplayName());
    resourceManager.createResource(extensionContext, KafkaClientsTemplates.kafkaClients(false, kafkaClientsName).build());
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3).build());
    resourceManager.createResource(extensionContext, KafkaConnectTemplates.kafkaConnect(extensionContext, clusterName, 1)
        .editMetadata()
            .addToAnnotations(Annotations.STRIMZI_IO_USE_CONNECTOR_RESOURCES, "true")
        .endMetadata()
        .build());
    resourceManager.createResource(extensionContext, KafkaConnectorTemplates.kafkaConnector(clusterName)
        .editSpec()
            .withClassName("org.apache.kafka.connect.file.FileStreamSinkConnector")
            .addToConfig("file", Constants.DEFAULT_SINK_FILE_PATH)
            .addToConfig("key.converter", "org.apache.kafka.connect.storage.StringConverter")
            .addToConfig("value.converter", "org.apache.kafka.connect.storage.StringConverter")
            .addToConfig("topics", topicName)
        .endSpec()
        .build());
    final int scaleTo = 4;
    final long connectObsGen = KafkaConnectResource.kafkaConnectClient().inNamespace(namespaceName).withName(clusterName).get().getStatus().getObservedGeneration();
    final String connectGenName = kubeClient(namespaceName).listPodsByPrefixInName(KafkaConnectResources.deploymentName(clusterName)).get(0).getMetadata().getGenerateName();
    LOGGER.info("-------> Scaling KafkaConnect subresource <-------");
    LOGGER.info("Scaling subresource replicas to {}", scaleTo);
    cmdKubeClient(namespaceName).scaleByName(KafkaConnect.RESOURCE_KIND, clusterName, scaleTo);
    DeploymentUtils.waitForDeploymentAndPodsReady(namespaceName, KafkaConnectResources.deploymentName(clusterName), scaleTo);
    LOGGER.info("Check if replicas is set to {}, observed generation is higher - for spec and status - naming prefix should be same", scaleTo);
    List<Pod> connectPods = kubeClient(namespaceName).listPodsByPrefixInName(KafkaConnectResources.deploymentName(clusterName));
    assertThat(connectPods.size(), is(4));
    assertThat(KafkaConnectResource.kafkaConnectClient().inNamespace(namespaceName).withName(clusterName).get().getSpec().getReplicas(), is(4));
    assertThat(KafkaConnectResource.kafkaConnectClient().inNamespace(namespaceName).withName(clusterName).get().getStatus().getReplicas(), is(4));
    /*
     * The observed generation should be higher than before scaling: after a spec change
     * and a successful reconciliation, the observed generation is incremented.
     */
    assertThat(connectObsGen < KafkaConnectResource.kafkaConnectClient().inNamespace(namespaceName).withName(clusterName).get().getStatus().getObservedGeneration(), is(true));
    for (Pod pod : connectPods) {
        assertThat(pod.getMetadata().getName().contains(connectGenName), is(true));
    }
    LOGGER.info("-------> Scaling KafkaConnector subresource <-------");
    LOGGER.info("Scaling subresource task max to {}", scaleTo);
    cmdKubeClient(namespaceName).scaleByName(KafkaConnector.RESOURCE_KIND, clusterName, scaleTo);
    KafkaConnectorUtils.waitForConnectorsTaskMaxChange(namespaceName, clusterName, scaleTo);
    LOGGER.info("Check if taskMax is set to {}", scaleTo);
    assertThat(KafkaConnectorResource.kafkaConnectorClient().inNamespace(namespaceName).withName(clusterName).get().getSpec().getTasksMax(), is(scaleTo));
    assertThat(KafkaConnectorResource.kafkaConnectorClient().inNamespace(namespaceName).withName(clusterName).get().getStatus().getTasksMax(), is(scaleTo));
    LOGGER.info("Check taskMax on Connect pods API");
    for (Pod pod : connectPods) {
        JsonObject json = new JsonObject(KafkaConnectorUtils.getConnectorSpecFromConnectAPI(namespaceName, pod.getMetadata().getName(), clusterName));
        assertThat(Integer.parseInt(json.getJsonObject("config").getString("tasks.max")), is(scaleTo));
    }
}
Also used : Pod(io.fabric8.kubernetes.api.model.Pod) JsonObject(io.vertx.core.json.JsonObject) Matchers.containsString(org.hamcrest.Matchers.containsString) ParallelNamespaceTest(io.strimzi.systemtest.annotations.ParallelNamespaceTest) Tag(org.junit.jupiter.api.Tag)
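
The final loop reads the connector configuration back from the Connect REST API and checks tasks.max; a minimal standalone sketch of just that parsing step (Vert.x core JsonObject, with an illustrative payload shaped like the API response):

import io.vertx.core.json.JsonObject;

public class TasksMaxParsingSketch {
    public static void main(String[] args) {
        // illustrative response body; in the test it comes from getConnectorSpecFromConnectAPI
        String response = "{\"name\":\"my-connector\",\"config\":{\"tasks.max\":\"4\",\"topics\":\"my-topic\"}}";
        JsonObject json = new JsonObject(response);
        int tasksMax = Integer.parseInt(json.getJsonObject("config").getString("tasks.max"));
        System.out.println(tasksMax); // 4
    }
}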

Example 19 with ParallelNamespaceTest

Use of io.strimzi.systemtest.annotations.ParallelNamespaceTest in project strimzi by strimzi.

Class CruiseControlConfigurationST, method testDeployAndUnDeployCruiseControl.

@ParallelNamespaceTest
void testDeployAndUnDeployCruiseControl(ExtensionContext extensionContext) throws IOException {
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext);
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final LabelSelector kafkaSelector = KafkaResource.getLabelSelector(clusterName, KafkaResources.kafkaStatefulSetName(clusterName));
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaWithCruiseControl(clusterName, 3, 3).build());
    Map<String, String> kafkaPods = PodUtils.podSnapshot(namespaceName, kafkaSelector);
    KafkaResource.replaceKafkaResourceInSpecificNamespace(clusterName, kafka -> {
        LOGGER.info("Removing Cruise Control to the classic Kafka.");
        kafka.getSpec().setCruiseControl(null);
    }, namespaceName);
    kafkaPods = RollingUpdateUtils.waitTillComponentHasRolled(namespaceName, kafkaSelector, 3, kafkaPods);
    LOGGER.info("Verifying that in {} is not present in the Kafka cluster", Constants.CRUISE_CONTROL_NAME);
    assertThat(KafkaResource.kafkaClient().inNamespace(namespaceName).withName(clusterName).get().getSpec().getCruiseControl(), nullValue());
    LOGGER.info("Verifying that {} pod is not present", clusterName + "-cruise-control-");
    PodUtils.waitUntilPodStabilityReplicasCount(namespaceName, clusterName + "-cruise-control-", 0);
    LOGGER.info("Verifying that in Kafka config map there is no configuration to cruise control metric reporter");
    assertThrows(WaitException.class, () -> CruiseControlUtils.verifyCruiseControlMetricReporterConfigurationInKafkaConfigMapIsPresent(CruiseControlUtils.getKafkaCruiseControlMetricsReporterConfiguration(namespaceName, clusterName)));
    LOGGER.info("Cruise Control topics will not be deleted and will stay in the Kafka cluster");
    CruiseControlUtils.verifyThatCruiseControlTopicsArePresent(namespaceName);
    KafkaResource.replaceKafkaResourceInSpecificNamespace(clusterName, kafka -> {
        LOGGER.info("Adding Cruise Control to the classic Kafka.");
        kafka.getSpec().setCruiseControl(new CruiseControlSpec());
    }, namespaceName);
    RollingUpdateUtils.waitTillComponentHasRolled(namespaceName, kafkaSelector, 3, kafkaPods);
    LOGGER.info("Verifying that in Kafka config map there is configuration to cruise control metric reporter");
    CruiseControlUtils.verifyCruiseControlMetricReporterConfigurationInKafkaConfigMapIsPresent(CruiseControlUtils.getKafkaCruiseControlMetricsReporterConfiguration(namespaceName, clusterName));
    LOGGER.info("Verifying that {} topics are created after CC is instantiated.", Constants.CRUISE_CONTROL_NAME);
    CruiseControlUtils.verifyThatCruiseControlTopicsArePresent(namespaceName);
}
Also used : CruiseControlSpec(io.strimzi.api.kafka.model.CruiseControlSpec) LabelSelector(io.fabric8.kubernetes.api.model.LabelSelector) ParallelNamespaceTest(io.strimzi.systemtest.annotations.ParallelNamespaceTest)
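
The two replaceKafkaResourceInSpecificNamespace calls boil down to toggling spec.cruiseControl on the Kafka custom resource; a minimal sketch of that toggle in isolation (hypothetical helper class, using the same setter calls as the test):

import io.strimzi.api.kafka.model.CruiseControlSpec;
import io.strimzi.api.kafka.model.Kafka;

public class CruiseControlToggleSketch {
    // Enable or disable Cruise Control on an in-memory Kafka custom resource;
    // applying the change to the cluster is left to the caller (e.g. the resource manager).
    static void setCruiseControlEnabled(Kafka kafka, boolean enabled) {
        kafka.getSpec().setCruiseControl(enabled ? new CruiseControlSpec() : null);
    }
}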

Example 20 with ParallelNamespaceTest

Use of io.strimzi.systemtest.annotations.ParallelNamespaceTest in project strimzi by strimzi.

Class CruiseControlApiST, method testCruiseControlBasicAPIRequests.

@ParallelNamespaceTest
void testCruiseControlBasicAPIRequests(ExtensionContext extensionContext) {
    final TestStorage testStorage = new TestStorage(extensionContext);
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaWithCruiseControl(testStorage.getClusterName(), 3, 3).build());
    LOGGER.info("----> CRUISE CONTROL DEPLOYMENT STATE ENDPOINT <----");
    String response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.POST, CruiseControlEndpoints.STATE, CruiseControlUtils.SupportedSchemes.HTTPS, true);
    assertThat(response, is("Unrecognized endpoint in request '/state'\n" + "Supported POST endpoints: [ADD_BROKER, REMOVE_BROKER, FIX_OFFLINE_REPLICAS, REBALANCE, STOP_PROPOSAL_EXECUTION, PAUSE_SAMPLING, " + "RESUME_SAMPLING, DEMOTE_BROKER, ADMIN, REVIEW, TOPIC_CONFIGURATION, RIGHTSIZE]\n"));
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.GET, CruiseControlEndpoints.STATE, CruiseControlUtils.SupportedSchemes.HTTPS, true);
    LOGGER.info("Verifying that {} REST API is available", CRUISE_CONTROL_NAME);
    assertThat(response, not(containsString("404")));
    assertThat(response, containsString("RUNNING"));
    assertThat(response, containsString("NO_TASK_IN_PROGRESS"));
    CruiseControlUtils.verifyThatCruiseControlTopicsArePresent(testStorage.getNamespaceName());
    LOGGER.info("----> KAFKA REBALANCE <----");
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.GET, CruiseControlEndpoints.REBALANCE, CruiseControlUtils.SupportedSchemes.HTTPS, true);
    assertThat(response, is("Unrecognized endpoint in request '/rebalance'\n" + "Supported GET endpoints: [BOOTSTRAP, TRAIN, LOAD, PARTITION_LOAD, PROPOSALS, STATE, KAFKA_CLUSTER_STATE, USER_TASKS, REVIEW_BOARD]\n"));
    LOGGER.info("Waiting for CC will have for enough metrics to be recorded to make a proposal ");
    CruiseControlUtils.waitForRebalanceEndpointIsReady(testStorage.getNamespaceName());
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.POST, CruiseControlEndpoints.REBALANCE, CruiseControlUtils.SupportedSchemes.HTTPS, true);
    // the rebalance response should contain stats for all of these goals
    assertThat(response, containsString("RackAwareGoal"));
    assertThat(response, containsString("ReplicaCapacityGoal"));
    assertThat(response, containsString("DiskCapacityGoal"));
    assertThat(response, containsString("NetworkInboundCapacityGoal"));
    assertThat(response, containsString("NetworkOutboundCapacityGoal"));
    assertThat(response, containsString("CpuCapacityGoal"));
    assertThat(response, containsString("ReplicaDistributionGoal"));
    assertThat(response, containsString("DiskUsageDistributionGoal"));
    assertThat(response, containsString("NetworkInboundUsageDistributionGoal"));
    assertThat(response, containsString("NetworkOutboundUsageDistributionGoal"));
    assertThat(response, containsString("CpuUsageDistributionGoal"));
    assertThat(response, containsString("TopicReplicaDistributionGoal"));
    assertThat(response, containsString("LeaderReplicaDistributionGoal"));
    assertThat(response, containsString("LeaderBytesInDistributionGoal"));
    assertThat(response, containsString("PreferredLeaderElectionGoal"));
    assertThat(response, containsString("Cluster load after rebalance"));
    LOGGER.info("----> EXECUTION OF STOP PROPOSAL <----");
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.GET, CruiseControlEndpoints.STOP, CruiseControlUtils.SupportedSchemes.HTTPS, true);
    assertThat(response, is("Unrecognized endpoint in request '/stop_proposal_execution'\n" + "Supported GET endpoints: [BOOTSTRAP, TRAIN, LOAD, PARTITION_LOAD, PROPOSALS, STATE, KAFKA_CLUSTER_STATE, USER_TASKS, REVIEW_BOARD]\n"));
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.POST, CruiseControlEndpoints.STOP, CruiseControlUtils.SupportedSchemes.HTTPS, true);
    assertThat(response, containsString("Proposal execution stopped."));
    LOGGER.info("----> USER TASKS <----");
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.POST, CruiseControlEndpoints.USER_TASKS, CruiseControlUtils.SupportedSchemes.HTTPS, true);
    assertThat(response, is("Unrecognized endpoint in request '/user_tasks'\n" + "Supported POST endpoints: [ADD_BROKER, REMOVE_BROKER, FIX_OFFLINE_REPLICAS, REBALANCE, STOP_PROPOSAL_EXECUTION, PAUSE_SAMPLING, " + "RESUME_SAMPLING, DEMOTE_BROKER, ADMIN, REVIEW, TOPIC_CONFIGURATION, RIGHTSIZE]\n"));
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.GET, CruiseControlEndpoints.USER_TASKS, CruiseControlUtils.SupportedSchemes.HTTPS, true);
    assertThat(response, containsString("GET"));
    assertThat(response, containsString(CruiseControlEndpoints.STATE.toString()));
    assertThat(response, containsString("POST"));
    assertThat(response, containsString(CruiseControlEndpoints.REBALANCE.toString()));
    assertThat(response, containsString(CruiseControlEndpoints.STOP.toString()));
    assertThat(response, containsString(CruiseControlUserTaskStatus.COMPLETED.toString()));
    LOGGER.info("Verifying that {} REST API doesn't allow HTTP requests", CRUISE_CONTROL_NAME);
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.GET, CruiseControlEndpoints.STATE, CruiseControlUtils.SupportedSchemes.HTTP, false);
    assertThat(response, not(containsString("RUNNING")));
    assertThat(response, not(containsString("NO_TASK_IN_PROGRESS")));
    LOGGER.info("Verifying that {} REST API doesn't allow unauthenticated requests", CRUISE_CONTROL_NAME);
    response = CruiseControlUtils.callApi(testStorage.getNamespaceName(), CruiseControlUtils.SupportedHttpMethods.GET, CruiseControlEndpoints.STATE, CruiseControlUtils.SupportedSchemes.HTTPS, false);
    assertThat(response, containsString("401 Unauthorized"));
}
Also used : TestStorage(io.strimzi.systemtest.storage.TestStorage) CoreMatchers.containsString(org.hamcrest.CoreMatchers.containsString) ParallelNamespaceTest(io.strimzi.systemtest.annotations.ParallelNamespaceTest)
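
For quick reference, the two "Unrecognized endpoint" responses asserted above enumerate which Cruise Control endpoints accept which HTTP verb; a small sketch collecting them (illustrative class, lists copied verbatim from those responses):

import java.util.List;
import java.util.Map;

public class CruiseControlEndpointVerbs {
    // Copied from the "Supported ... endpoints" lists in the responses asserted above
    static final Map<String, List<String>> SUPPORTED = Map.of(
        "GET", List.of("BOOTSTRAP", "TRAIN", "LOAD", "PARTITION_LOAD", "PROPOSALS", "STATE",
                "KAFKA_CLUSTER_STATE", "USER_TASKS", "REVIEW_BOARD"),
        "POST", List.of("ADD_BROKER", "REMOVE_BROKER", "FIX_OFFLINE_REPLICAS", "REBALANCE",
                "STOP_PROPOSAL_EXECUTION", "PAUSE_SAMPLING", "RESUME_SAMPLING", "DEMOTE_BROKER",
                "ADMIN", "REVIEW", "TOPIC_CONFIGURATION", "RIGHTSIZE"));

    public static void main(String[] args) {
        SUPPORTED.forEach((verb, endpoints) -> System.out.println(verb + " -> " + endpoints));
    }
}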

Aggregations

ParallelNamespaceTest (io.strimzi.systemtest.annotations.ParallelNamespaceTest) 302
Tag (org.junit.jupiter.api.Tag) 166
Matchers.containsString (org.hamcrest.Matchers.containsString) 148
GenericKafkaListenerBuilder (io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListenerBuilder) 114
InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient) 112
LabelSelector (io.fabric8.kubernetes.api.model.LabelSelector) 94
KafkaUser (io.strimzi.api.kafka.model.KafkaUser) 86
CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString) 76
SecretBuilder (io.fabric8.kubernetes.api.model.SecretBuilder) 66
HashMap (java.util.HashMap) 52
ResourceRequirementsBuilder (io.fabric8.kubernetes.api.model.ResourceRequirementsBuilder) 50
TestUtils.fromYamlString (io.strimzi.test.TestUtils.fromYamlString) 48
Matchers.emptyOrNullString (org.hamcrest.Matchers.emptyOrNullString) 48
ExternalKafkaClient (io.strimzi.systemtest.kafkaclients.externalClients.ExternalKafkaClient) 42
ConfigMap (io.fabric8.kubernetes.api.model.ConfigMap) 40
ConfigMapBuilder (io.fabric8.kubernetes.api.model.ConfigMapBuilder) 40
Secret (io.fabric8.kubernetes.api.model.Secret) 40
ConfigMapKeySelectorBuilder (io.fabric8.kubernetes.api.model.ConfigMapKeySelectorBuilder) 38
KafkaListenerAuthenticationTls (io.strimzi.api.kafka.model.listener.KafkaListenerAuthenticationTls) 36
Pod (io.fabric8.kubernetes.api.model.Pod) 32