Search in sources:

Example 1 with KafkaClients

Use of io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients in project strimzi by strimzi.

From the class ConfigProviderST, method testConnectWithConnectorUsingConfigAndEnvProvider.

@ParallelNamespaceTest
void testConnectWithConnectorUsingConfigAndEnvProvider(ExtensionContext extensionContext) {
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext);
    final String producerName = "producer-" + ClientUtils.generateRandomConsumerGroup();
    final String customFileSinkPath = "/tmp/my-own-path.txt";
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaEphemeral(clusterName, 3).build());
    Map<String, String> configData = new HashMap<>();
    configData.put("topics", topicName);
    configData.put("file", customFileSinkPath);
    configData.put("key", "org.apache.kafka.connect.storage.StringConverter");
    configData.put("value", "org.apache.kafka.connect.storage.StringConverter");
    String cmName = "connector-config";
    String configRoleName = "connector-config-role";
    ConfigMap connectorConfig = new ConfigMapBuilder().editOrNewMetadata().withName(cmName).endMetadata().withData(configData).build();
    kubeClient().getClient().configMaps().inNamespace(namespaceName).create(connectorConfig);
    resourceManager.createResource(extensionContext, KafkaConnectTemplates.kafkaConnect(extensionContext, clusterName, 1, false)
        .editOrNewMetadata()
            .addToAnnotations(Annotations.STRIMZI_IO_USE_CONNECTOR_RESOURCES, "true")
        .endMetadata()
        .editOrNewSpec()
            .addToConfig("key.converter.schemas.enable", false)
            .addToConfig("value.converter.schemas.enable", false)
            .addToConfig("key.converter", "org.apache.kafka.connect.storage.StringConverter")
            .addToConfig("value.converter", "org.apache.kafka.connect.storage.StringConverter")
            .addToConfig("config.providers", "configmaps,env")
            .addToConfig("config.providers.configmaps.class", "io.strimzi.kafka.KubernetesConfigMapConfigProvider")
            .addToConfig("config.providers.env.class", "io.strimzi.kafka.EnvVarConfigProvider")
            .editOrNewExternalConfiguration()
                .addNewEnv()
                    .withName("FILE_SINK_FILE")
                    .withNewValueFrom()
                        .withNewConfigMapKeyRef("file", cmName, false)
                    .endValueFrom()
                .endEnv()
            .endExternalConfiguration()
        .endSpec()
        .build());
    LOGGER.info("Creating needed RoleBinding and Role for Kubernetes Config Provider");
    ResourceManager.getInstance().createResource(extensionContext, new RoleBindingBuilder()
        .editOrNewMetadata()
            .withName("connector-config-rb")
            .withNamespace(namespaceName)
        .endMetadata()
        .withSubjects(new SubjectBuilder()
            .withKind("ServiceAccount")
            .withName(clusterName + "-connect")
            .withNamespace(namespaceName)
            .build())
        .withRoleRef(new RoleRefBuilder()
            .withKind("Role")
            .withName(configRoleName)
            .withApiGroup("rbac.authorization.k8s.io")
            .build())
        .build());
    // create a Role that allows the Connect service account to read the ConfigMap
    Role configRole = new RoleBuilder()
        .editOrNewMetadata()
            .withName(configRoleName)
            .withNamespace(namespaceName)
        .endMetadata()
        .addNewRule()
            .withApiGroups("")
            .withResources("configmaps")
            .withResourceNames(cmName)
            .withVerbs("get")
        .endRule()
        .build();
    kubeClient().getClient().resource(configRole).createOrReplace();
    String configPrefix = "configmaps:" + namespaceName + "/connector-config:";
    resourceManager.createResource(extensionContext, KafkaConnectorTemplates.kafkaConnector(clusterName)
        .editSpec()
            .withClassName("org.apache.kafka.connect.file.FileStreamSinkConnector")
            .addToConfig("file", "${env:FILE_SINK_FILE}")
            .addToConfig("key.converter", "${" + configPrefix + "key}")
            .addToConfig("value.converter", "${" + configPrefix + "value}")
            .addToConfig("topics", "${" + configPrefix + "topics}")
        .endSpec()
        .build());
    KafkaClients kafkaBasicClientJob = new KafkaClientsBuilder()
        .withProducerName(producerName)
        .withBootstrapAddress(KafkaResources.plainBootstrapAddress(clusterName))
        .withTopicName(topicName)
        .withMessageCount(MESSAGE_COUNT)
        .withDelayMs(0)
        .withNamespaceName(namespaceName)
        .build();
    resourceManager.createResource(extensionContext, kafkaBasicClientJob.producerStrimzi());
    String kafkaConnectPodName = kubeClient().listPods(namespaceName, clusterName, Labels.STRIMZI_KIND_LABEL, KafkaConnect.RESOURCE_KIND).get(0).getMetadata().getName();
    KafkaConnectUtils.waitForMessagesInKafkaConnectFileSink(namespaceName, kafkaConnectPodName, customFileSinkPath, "Hello-world - 99");
}
Also used : Role(io.fabric8.kubernetes.api.model.rbac.Role) KafkaClientsBuilder(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder) RoleBindingBuilder(io.fabric8.kubernetes.api.model.rbac.RoleBindingBuilder) ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) KafkaClients(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients) HashMap(java.util.HashMap) ConfigMapBuilder(io.fabric8.kubernetes.api.model.ConfigMapBuilder) SubjectBuilder(io.fabric8.kubernetes.api.model.rbac.SubjectBuilder) RoleBuilder(io.fabric8.kubernetes.api.model.rbac.RoleBuilder) RoleRefBuilder(io.fabric8.kubernetes.api.model.rbac.RoleRefBuilder) ParallelNamespaceTest(io.strimzi.systemtest.annotations.ParallelNamespaceTest)
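
The load-bearing detail in this example is the placeholder syntax that the two config providers resolve at connector startup. A minimal sketch of how the placeholders above are assembled, reusing the test's names (the resolution noted in the comments follows from the provider classes registered under "config.providers"; the namespace value is illustrative):

    // Resolved by io.strimzi.kafka.KubernetesConfigMapConfigProvider:
    //   ${configmaps:<namespace>/<ConfigMap name>:<key>}
    String namespaceName = "my-namespace";  // illustrative value
    String configPrefix = "configmaps:" + namespaceName + "/connector-config:";
    String topicsValue = "${" + configPrefix + "topics}";  // -> the "topics" key of ConfigMap "connector-config"
    // Resolved by io.strimzi.kafka.EnvVarConfigProvider:
    //   ${env:<ENV_VAR_NAME>}
    String fileValue = "${env:FILE_SINK_FILE}";            // -> the FILE_SINK_FILE env var injected via externalConfiguration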

Example 2 with KafkaClients

Use of io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients in project strimzi by strimzi.

From the class MirrorMaker2IsolatedST, method testRestoreOffsetsInConsumerGroup.

@ParallelNamespaceTest
@SuppressWarnings({ "checkstyle:MethodLength" })
void testRestoreOffsetsInConsumerGroup(ExtensionContext extensionContext) {
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(INFRA_NAMESPACE, extensionContext);
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String kafkaClusterSourceName = clusterName + "-source";
    final String kafkaClusterTargetName = clusterName + "-target";
    final String syncGroupOffsetsIntervalSeconds = "1";
    final String topicSourceNameMirrored = "test-sync-offset-" + new Random().nextInt(Integer.MAX_VALUE);
    final String topicTargetNameMirrored = kafkaClusterSourceName + "." + topicSourceNameMirrored;
    final String consumerGroup = "mm2-test-consumer-group";
    final String sourceProducerName = "mm2-producer-source-" + ClientUtils.generateRandomConsumerGroup();
    final String sourceConsumerName = "mm2-consumer-source-" + ClientUtils.generateRandomConsumerGroup();
    final String targetProducerName = "mm2-producer-target-" + ClientUtils.generateRandomConsumerGroup();
    final String targetConsumerName = "mm2-consumer-target-" + ClientUtils.generateRandomConsumerGroup();
    final String mm2SrcTrgName = clusterName + "-src-trg";
    final String mm2TrgSrcName = clusterName + "-trg-src";
    resourceManager.createResource(extensionContext, false,
        // Deploy source kafka
        KafkaTemplates.kafkaPersistent(kafkaClusterSourceName, 1, 1).build(),
        // Deploy target kafka
        KafkaTemplates.kafkaPersistent(kafkaClusterTargetName, 1, 1).build());
    // Wait for Kafka clusters readiness
    KafkaUtils.waitForKafkaReady(namespaceName, kafkaClusterSourceName);
    KafkaUtils.waitForKafkaReady(namespaceName, kafkaClusterTargetName);
    resourceManager.createResource(extensionContext,
        // *.replication.factor(s) to 1 are added just to speed up test by using only 1 ZK and 1 Kafka
        KafkaMirrorMaker2Templates.kafkaMirrorMaker2(mm2TrgSrcName, kafkaClusterTargetName, kafkaClusterSourceName, 1, false)
            .editSpec()
                .editFirstMirror()
                    .editSourceConnector()
                        .addToConfig("refresh.topics.interval.seconds", "1")
                        .addToConfig("replication.factor", "1")
                        .addToConfig("offset-syncs.topic.replication.factor", "1")
                    .endSourceConnector()
                    .editCheckpointConnector()
                        .addToConfig("refresh.groups.interval.seconds", "1")
                        .addToConfig("sync.group.offsets.enabled", "true")
                        .addToConfig("sync.group.offsets.interval.seconds", syncGroupOffsetsIntervalSeconds)
                        .addToConfig("emit.checkpoints.enabled", "true")
                        .addToConfig("emit.checkpoints.interval.seconds", "1")
                        .addToConfig("checkpoints.topic.replication.factor", "1")
                    .endCheckpointConnector()
                    .editHeartbeatConnector()
                        .addToConfig("heartbeats.topic.replication.factor", "1")
                    .endHeartbeatConnector()
                    .withTopicsPattern(".*")
                    .withGroupsPattern(".*")
                .endMirror()
            .endSpec()
            .build(),
        // MM2 Active (S) <-> Active (T) // direction S <- T mirroring
        KafkaMirrorMaker2Templates.kafkaMirrorMaker2(mm2SrcTrgName, kafkaClusterSourceName, kafkaClusterTargetName, 1, false)
            .editSpec()
                .editFirstMirror()
                    .editSourceConnector()
                        .addToConfig("refresh.topics.interval.seconds", "1")
                        .addToConfig("replication.factor", "1")
                        .addToConfig("offset-syncs.topic.replication.factor", "1")
                    .endSourceConnector()
                    .editCheckpointConnector()
                        .addToConfig("refresh.groups.interval.seconds", "1")
                        .addToConfig("sync.group.offsets.enabled", "true")
                        .addToConfig("sync.group.offsets.interval.seconds", syncGroupOffsetsIntervalSeconds)
                        .addToConfig("emit.checkpoints.enabled", "true")
                        .addToConfig("emit.checkpoints.interval.seconds", "1")
                        .addToConfig("checkpoints.topic.replication.factor", "1")
                    .endCheckpointConnector()
                    .editHeartbeatConnector()
                        .addToConfig("heartbeats.topic.replication.factor", "1")
                    .endHeartbeatConnector()
                    .withTopicsPattern(".*")
                    .withGroupsPattern(".*")
                .endMirror()
            .endSpec()
            .build(),
        // deploy topic
        KafkaTopicTemplates.topic(kafkaClusterSourceName, topicSourceNameMirrored, 3).build());
    KafkaClients initialInternalClientSourceJob = new KafkaClientsBuilder()
        .withProducerName(sourceProducerName)
        .withConsumerName(sourceConsumerName)
        .withBootstrapAddress(KafkaResources.plainBootstrapAddress(kafkaClusterSourceName))
        .withTopicName(topicSourceNameMirrored)
        .withMessageCount(MESSAGE_COUNT)
        .withMessage("Producer A")
        .withConsumerGroup(consumerGroup)
        .withNamespaceName(namespaceName)
        .build();
    KafkaClients initialInternalClientTargetJob = new KafkaClientsBuilder()
        .withProducerName(targetProducerName)
        .withConsumerName(targetConsumerName)
        .withBootstrapAddress(KafkaResources.plainBootstrapAddress(kafkaClusterTargetName))
        .withTopicName(topicTargetNameMirrored)
        .withMessageCount(MESSAGE_COUNT)
        .withConsumerGroup(consumerGroup)
        .withNamespaceName(namespaceName)
        .build();
    LOGGER.info("Send & receive {} messages to/from Source cluster.", MESSAGE_COUNT);
    resourceManager.createResource(extensionContext, initialInternalClientSourceJob.producerStrimzi(), initialInternalClientSourceJob.consumerStrimzi());
    ClientUtils.waitForClientSuccess(sourceProducerName, namespaceName, MESSAGE_COUNT);
    ClientUtils.waitForClientSuccess(sourceConsumerName, namespaceName, MESSAGE_COUNT);
    JobUtils.deleteJobWithWait(namespaceName, sourceProducerName);
    JobUtils.deleteJobWithWait(namespaceName, sourceConsumerName);
    LOGGER.info("Send {} messages to Source cluster.", MESSAGE_COUNT);
    KafkaClients internalClientSourceJob = new KafkaClientsBuilder(initialInternalClientSourceJob).withMessage("Producer B").build();
    resourceManager.createResource(extensionContext, internalClientSourceJob.producerStrimzi());
    ClientUtils.waitForClientSuccess(sourceProducerName, namespaceName, MESSAGE_COUNT);
    LOGGER.info("Wait 1 second as 'sync.group.offsets.interval.seconds=1'. As this is insignificant wait, we're skipping it");
    LOGGER.info("Receive {} messages from mirrored topic on Target cluster.", MESSAGE_COUNT);
    resourceManager.createResource(extensionContext, initialInternalClientTargetJob.consumerStrimzi());
    ClientUtils.waitForClientSuccess(targetConsumerName, namespaceName, MESSAGE_COUNT);
    JobUtils.deleteJobWithWait(namespaceName, sourceProducerName);
    JobUtils.deleteJobWithWait(namespaceName, targetConsumerName);
    LOGGER.info("Send 50 messages to Source cluster");
    internalClientSourceJob = new KafkaClientsBuilder(internalClientSourceJob).withMessageCount(50).withMessage("Producer C").build();
    resourceManager.createResource(extensionContext, internalClientSourceJob.producerStrimzi());
    ClientUtils.waitForClientSuccess(sourceProducerName, namespaceName, 50);
    JobUtils.deleteJobWithWait(namespaceName, sourceProducerName);
    LOGGER.info("Wait 1 second as 'sync.group.offsets.interval.seconds=1'. As this is insignificant wait, we're skipping it");
    LOGGER.info("Receive 10 msgs from source cluster");
    internalClientSourceJob = new KafkaClientsBuilder(internalClientSourceJob).withMessageCount(10).withAdditionalConfig("max.poll.records=10").build();
    resourceManager.createResource(extensionContext, internalClientSourceJob.consumerStrimzi());
    ClientUtils.waitForClientSuccess(sourceConsumerName, namespaceName, 10);
    JobUtils.deleteJobWithWait(namespaceName, sourceConsumerName);
    LOGGER.info("Wait 1 second as 'sync.group.offsets.interval.seconds=1'. As this is insignificant wait, we're skipping it");
    LOGGER.info("Receive 40 msgs from mirrored topic on Target cluster");
    KafkaClients internalClientTargetJob = new KafkaClientsBuilder(initialInternalClientTargetJob).withMessageCount(40).build();
    resourceManager.createResource(extensionContext, internalClientTargetJob.consumerStrimzi());
    ClientUtils.waitForClientSuccess(targetConsumerName, namespaceName, 40);
    JobUtils.deleteJobWithWait(namespaceName, targetConsumerName);
    LOGGER.info("There should be no more messages to read. Try to consume at least 1 message. " + "This client job should fail on timeout.");
    resourceManager.createResource(extensionContext, initialInternalClientTargetJob.consumerStrimzi());
    assertDoesNotThrow(() -> ClientUtils.waitForClientTimeout(targetConsumerName, namespaceName, 1));
    LOGGER.info("As it's Active-Active MM2 mode, there should be no more messages to read from Source cluster" + " topic. This client job should fail on timeout.");
    resourceManager.createResource(extensionContext, initialInternalClientSourceJob.consumerStrimzi());
    assertDoesNotThrow(() -> ClientUtils.waitForClientTimeout(sourceConsumerName, namespaceName, 1));
}
Also used : KafkaClientsBuilder(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder) Random(java.util.Random) KafkaClients(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients) Matchers.containsString(org.hamcrest.Matchers.containsString) ParallelNamespaceTest(io.strimzi.systemtest.annotations.ParallelNamespaceTest)
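
The message counts in this test encode the offset-sync check. A worked tally, assuming MESSAGE_COUNT is 100 (consistent with the "Hello-world - 99" assertions elsewhere in these tests) and that the checkpoint connector keeps the committed offsets of "mm2-test-consumer-group" in sync across both clusters:

    int produced = 100 + 100 + 50;       // Producer A + Producer B + Producer C, all into the source topic
    int readOnSource = 100 + 10;         // initial consumer + the max.poll.records=10 consumer
    int readOnTarget = 100;              // consumer of the mirrored topic after Producer B
    int remaining = produced - readOnSource - readOnTarget;  // = 40, consumed from the target in the final step
    // Once those 40 are read, the group is fully caught up on both clusters,
    // which is why the last two consumer jobs are expected to time out.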

Example 3 with KafkaClients

Use of io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients in project strimzi by strimzi.

From the class MultipleClusterOperatorsIsolatedST, method testMultipleCOsInDifferentNamespaces.

@IsolatedTest
@Tag(CONNECT)
@Tag(CONNECT_COMPONENTS)
void testMultipleCOsInDifferentNamespaces(ExtensionContext extensionContext) {
    // Strimzi is deployed with cluster-wide access in this class, so STRIMZI_RBAC_SCOPE=NAMESPACE won't work
    assumeFalse(Environment.isNamespaceRbacScope());
    String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    String producerName = "hello-world-producer";
    String consumerName = "hello-world-consumer";
    deployCOInNamespace(extensionContext, FIRST_CO_NAME, FIRST_NAMESPACE, FIRST_CO_SELECTOR_ENV, true);
    deployCOInNamespace(extensionContext, SECOND_CO_NAME, SECOND_NAMESPACE, SECOND_CO_SELECTOR_ENV, true);
    cluster.createNamespace(DEFAULT_NAMESPACE);
    cluster.setNamespace(DEFAULT_NAMESPACE);
    LOGGER.info("Deploying Kafka without CR selector");
    resourceManager.createResource(extensionContext, false,
        KafkaTemplates.kafkaEphemeral(clusterName, 3, 3)
            .editMetadata()
                .withNamespace(DEFAULT_NAMESPACE)
            .endMetadata()
            .build());
    // check that no pods with the cluster name prefix are created for some time
    PodUtils.waitUntilPodStabilityReplicasCount(DEFAULT_NAMESPACE, clusterName, 0);
    LOGGER.info("Adding {} selector of {} into Kafka CR", FIRST_CO_SELECTOR, FIRST_CO_NAME);
    KafkaResource.replaceKafkaResourceInSpecificNamespace(clusterName, kafka -> kafka.getMetadata().setLabels(FIRST_CO_SELECTOR), DEFAULT_NAMESPACE);
    KafkaUtils.waitForKafkaReady(DEFAULT_NAMESPACE, clusterName);
    resourceManager.createResource(extensionContext,
        KafkaTopicTemplates.topic(clusterName, topicName)
            .editMetadata()
                .withNamespace(DEFAULT_NAMESPACE)
            .endMetadata()
            .build(),
        KafkaConnectTemplates.kafkaConnect(extensionContext, clusterName, 1, false)
            .editOrNewMetadata()
                .addToLabels(FIRST_CO_SELECTOR)
                .addToAnnotations(Annotations.STRIMZI_IO_USE_CONNECTOR_RESOURCES, "true")
                .withNamespace(DEFAULT_NAMESPACE)
            .endMetadata()
            .build());
    String kafkaConnectPodName = kubeClient(DEFAULT_NAMESPACE).listPods(DEFAULT_NAMESPACE, clusterName, Labels.STRIMZI_KIND_LABEL, KafkaConnect.RESOURCE_KIND).get(0).getMetadata().getName();
    LOGGER.info("Deploying KafkaConnector with file sink and CR selector - {} - different than selector in Kafka", SECOND_CO_SELECTOR);
    resourceManager.createResource(extensionContext, KafkaConnectorTemplates.kafkaConnector(clusterName)
        .editOrNewMetadata()
            .addToLabels(SECOND_CO_SELECTOR)
            .withNamespace(DEFAULT_NAMESPACE)
        .endMetadata()
        .editSpec()
            .withClassName("org.apache.kafka.connect.file.FileStreamSinkConnector")
            .addToConfig("file", Constants.DEFAULT_SINK_FILE_PATH)
            .addToConfig("key.converter", "org.apache.kafka.connect.storage.StringConverter")
            .addToConfig("value.converter", "org.apache.kafka.connect.storage.StringConverter")
            .addToConfig("topics", topicName)
        .endSpec()
        .build());
    KafkaClients basicClients = new KafkaClientsBuilder()
        .withNamespaceName(DEFAULT_NAMESPACE)
        .withProducerName(producerName)
        .withConsumerName(consumerName)
        .withBootstrapAddress(KafkaResources.plainBootstrapAddress(clusterName))
        .withTopicName(topicName)
        .withMessageCount(MESSAGE_COUNT)
        .build();
    resourceManager.createResource(extensionContext, basicClients.producerStrimzi());
    ClientUtils.waitForClientSuccess(producerName, DEFAULT_NAMESPACE, MESSAGE_COUNT);
    KafkaConnectUtils.waitForMessagesInKafkaConnectFileSink(kafkaConnectPodName, Constants.DEFAULT_SINK_FILE_PATH, "Hello-world - 99");
}
Also used : KafkaClientsBuilder(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder) KafkaClients(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients) IsolatedTest(io.strimzi.systemtest.annotations.IsolatedTest) Tag(org.junit.jupiter.api.Tag)
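
The mechanism under test is that each Cluster Operator only reconciles custom resources whose labels match its configured selector. A hedged sketch of the selector plumbing behind FIRST_CO_SELECTOR and FIRST_CO_SELECTOR_ENV (the label key and env var name follow Strimzi's custom-resource-selector support; the concrete values here are illustrative, not necessarily the test's):

    // Hypothetical selector label, mirroring the shape of FIRST_CO_SELECTOR in the test class:
    Map<String, String> firstCoSelector = Map.of("app.kubernetes.io/operator", "first-co");
    // Passed into the operator Deployment so that it ignores CRs without a matching label:
    EnvVar selectorEnv = new EnvVar("STRIMZI_CUSTOM_RESOURCE_SELECTOR", "app.kubernetes.io/operator=first-co", null);
    // A Kafka CR created without the label is picked up by neither operator (hence the
    // zero-pods stability check above); adding the label hands it to exactly one operator.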

Example 4 with KafkaClients

Use of io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients in project strimzi by strimzi.

From the class OauthScopeIsolatedST, method testClientScopeKafkaSetIncorrectly.

@IsolatedTest("Modification of shared Kafka cluster")
void testClientScopeKafkaSetIncorrectly(ExtensionContext extensionContext) throws UnexpectedException {
    final String kafkaClientsName = mapWithKafkaClientNames.get(extensionContext.getDisplayName());
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String producerName = OAUTH_PRODUCER_NAME + "-" + clusterName;
    final String consumerName = OAUTH_CONSUMER_NAME + "-" + clusterName;
    final String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    final LabelSelector kafkaSelector = KafkaResource.getLabelSelector(oauthClusterName, KafkaResources.kafkaStatefulSetName(oauthClusterName));
    KafkaClients oauthInternalClientChecksJob = new KafkaClientsBuilder()
        .withNamespaceName(INFRA_NAMESPACE)
        .withProducerName(producerName)
        .withConsumerName(consumerName)
        .withBootstrapAddress(KafkaResources.bootstrapServiceName(oauthClusterName) + ":" + scopeListenerPort)
        .withTopicName(topicName)
        .withMessageCount(MESSAGE_COUNT)
        .withAdditionalConfig(additionalOauthConfig)
        .build();
    // re-configuring Kafka listener to have client scope assigned to null
    KafkaResource.replaceKafkaResourceInSpecificNamespace(oauthClusterName, kafka -> {
        List<GenericKafkaListener> scopeListeners = kafka.getSpec().getKafka().getListeners().stream().filter(listener -> listener.getName().equals(scopeListener)).collect(Collectors.toList());
        ((KafkaListenerAuthenticationOAuth) scopeListeners.get(0).getAuth()).setClientScope(null);
        kafka.getSpec().getKafka().getListeners().set(0, scopeListeners.get(0));
    }, INFRA_NAMESPACE);
    RollingUpdateUtils.waitForComponentAndPodsReady(INFRA_NAMESPACE, kafkaSelector, 1);
    // verification phase: the client should fail here because clientScope is set to null
    resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(oauthClusterName, topicName, INFRA_NAMESPACE).build());
    resourceManager.createResource(extensionContext, oauthInternalClientChecksJob.producerStrimzi());
    // The client should fail because the listener requires scope 'test' in the JWT token, but the
    // listener was temporarily configured without clientScope. The access token obtained in the
    // client's name via SASL/PLAIN (using the clientId and secret) therefore lacks the scope claim.
    ClientUtils.waitForClientTimeout(producerName, INFRA_NAMESPACE, MESSAGE_COUNT);
    JobUtils.deleteJobWithWait(INFRA_NAMESPACE, producerName);
    // rollback previous configuration
    // re-configuring Kafka listener to have client scope assigned to 'test'
    KafkaResource.replaceKafkaResourceInSpecificNamespace(oauthClusterName, kafka -> {
        List<GenericKafkaListener> scopeListeners = kafka.getSpec().getKafka().getListeners().stream().filter(listener -> listener.getName().equals(scopeListener)).collect(Collectors.toList());
        ((KafkaListenerAuthenticationOAuth) scopeListeners.get(0).getAuth()).setClientScope("test");
        kafka.getSpec().getKafka().getListeners().set(0, scopeListeners.get(0));
    }, INFRA_NAMESPACE);
    RollingUpdateUtils.waitForComponentAndPodsReady(INFRA_NAMESPACE, kafkaSelector, 1);
}
Also used : KafkaClientsBuilder(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder) ParallelTest(io.strimzi.systemtest.annotations.ParallelTest) CoreMatchers(org.hamcrest.CoreMatchers) GenericKafkaListener(io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListener) KafkaClientsTemplates(io.strimzi.systemtest.templates.crd.KafkaClientsTemplates) LabelSelector(io.fabric8.kubernetes.api.model.LabelSelector) KafkaConnectTemplates(io.strimzi.systemtest.templates.crd.KafkaConnectTemplates) CONNECT(io.strimzi.systemtest.Constants.CONNECT) KafkaResource(io.strimzi.systemtest.resources.crd.KafkaResource) Level(org.apache.logging.log4j.Level) ResourceManager.kubeClient(io.strimzi.systemtest.resources.ResourceManager.kubeClient) ExtensionContext(org.junit.jupiter.api.extension.ExtensionContext) INFRA_NAMESPACE(io.strimzi.systemtest.Constants.INFRA_NAMESPACE) AfterAll(org.junit.jupiter.api.AfterAll) PodUtils(io.strimzi.systemtest.utils.kubeUtils.objects.PodUtils) KafkaResources(io.strimzi.api.kafka.model.KafkaResources) KubeClusterResource(io.strimzi.test.k8s.KubeClusterResource) BeforeAll(org.junit.jupiter.api.BeforeAll) Tag(org.junit.jupiter.api.Tag) MatcherAssert.assertThat(org.hamcrest.MatcherAssert.assertThat) StUtils(io.strimzi.systemtest.utils.StUtils) KafkaTemplates(io.strimzi.systemtest.templates.crd.KafkaTemplates) RollingUpdateUtils(io.strimzi.systemtest.utils.RollingUpdateUtils) IsolatedSuite(io.strimzi.systemtest.annotations.IsolatedSuite) KafkaClients(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients) JobUtils(io.strimzi.systemtest.utils.kubeUtils.controllers.JobUtils) GenericKafkaListenerBuilder(io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListenerBuilder) UnexpectedException(java.rmi.UnexpectedException) OAUTH(io.strimzi.systemtest.Constants.OAUTH) Collectors(java.util.stream.Collectors) KafkaListenerAuthenticationOAuth(io.strimzi.api.kafka.model.listener.KafkaListenerAuthenticationOAuth) ClientUtils(io.strimzi.systemtest.utils.ClientUtils) IsolatedTest(io.strimzi.systemtest.annotations.IsolatedTest) KeycloakUtils(io.strimzi.systemtest.utils.specific.KeycloakUtils) List(java.util.List) KafkaListenerType(io.strimzi.api.kafka.model.listener.arraylistener.KafkaListenerType) KafkaTopicTemplates(io.strimzi.systemtest.templates.crd.KafkaTopicTemplates) REGRESSION(io.strimzi.systemtest.Constants.REGRESSION) KafkaConnectResources(io.strimzi.api.kafka.model.KafkaConnectResources)
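
additionalOauthConfig comes from the surrounding test class and is what lets the client job authenticate over the listener at scopeListenerPort. A hedged sketch of what such client-side configuration looks like when the access token is obtained in the client's name via SASL/PLAIN (standard Kafka client property names; the credentials are illustrative, not the test's actual values):

    // Illustrative only: the real values live in the OauthScopeIsolatedST class.
    String additionalOauthConfig =
        "sasl.mechanism=PLAIN\n" +
        "security.protocol=SASL_PLAINTEXT\n" +
        "sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"kafka-client\" password=\"kafka-client-secret\";";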

Example 5 with KafkaClients

Use of io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients in project strimzi by strimzi.

From the class FeatureGatesIsolatedST, method testStrimziPodSetsFeatureGate.

/**
 * UseStrimziPodSets feature gate
 * https://github.com/strimzi/proposals/blob/main/031-statefulset-removal.md
 */
@IsolatedTest("Feature Gates test for enabled UseStrimziPodSets gate")
@Tag(INTERNAL_CLIENTS_USED)
public void testStrimziPodSetsFeatureGate(ExtensionContext extensionContext) {
    assumeFalse(Environment.isOlmInstall() || Environment.isHelmInstall());
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String producerName = "producer-test-" + new Random().nextInt(Integer.MAX_VALUE);
    final String consumerName = "consumer-test-" + new Random().nextInt(Integer.MAX_VALUE);
    final String topicName = KafkaTopicUtils.generateRandomNameOfTopic();
    final LabelSelector zooSelector = KafkaResource.getLabelSelector(clusterName, KafkaResources.zookeeperStatefulSetName(clusterName));
    final LabelSelector kafkaSelector = KafkaResource.getLabelSelector(clusterName, KafkaResources.kafkaStatefulSetName(clusterName));
    int messageCount = 600;
    List<EnvVar> testEnvVars = new ArrayList<>();
    int zooReplicas = 3;
    int kafkaReplicas = 3;
    testEnvVars.add(new EnvVar(Environment.STRIMZI_FEATURE_GATES_ENV, "+UseStrimziPodSets", null));
    clusterOperator.unInstall();
    clusterOperator = new SetupClusterOperator.SetupClusterOperatorBuilder()
        .withExtensionContext(BeforeAllOnce.getSharedExtensionContext())
        .withNamespace(INFRA_NAMESPACE)
        .withWatchingNamespaces(Constants.WATCH_ALL_NAMESPACES)
        .withExtraEnvVars(testEnvVars)
        .createInstallation()
        .runInstallation();
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaPersistent(clusterName, kafkaReplicas).build());
    LOGGER.info("Try to send some messages to Kafka over next few minutes.");
    KafkaTopic kafkaTopic = KafkaTopicTemplates.topic(clusterName, topicName).editSpec().withReplicas(kafkaReplicas).withPartitions(kafkaReplicas).endSpec().build();
    resourceManager.createResource(extensionContext, kafkaTopic);
    KafkaClients kafkaBasicClientJob = new KafkaClientsBuilder()
        .withProducerName(producerName)
        .withConsumerName(consumerName)
        .withBootstrapAddress(KafkaResources.plainBootstrapAddress(clusterName))
        .withTopicName(topicName)
        .withMessageCount(messageCount)
        .withDelayMs(500)
        .withNamespaceName(INFRA_NAMESPACE)
        .build();
    resourceManager.createResource(extensionContext, kafkaBasicClientJob.producerStrimzi());
    resourceManager.createResource(extensionContext, kafkaBasicClientJob.consumerStrimzi());
    JobUtils.waitForJobRunning(consumerName, INFRA_NAMESPACE);
    // Delete one Zoo Pod
    Pod zooPod = PodUtils.getPodsByPrefixInNameWithDynamicWait(INFRA_NAMESPACE, KafkaResources.zookeeperStatefulSetName(clusterName) + "-").get(0);
    LOGGER.info("Delete first found ZooKeeper pod {}", zooPod.getMetadata().getName());
    kubeClient(INFRA_NAMESPACE).deletePod(INFRA_NAMESPACE, zooPod);
    RollingUpdateUtils.waitForComponentAndPodsReady(zooSelector, zooReplicas);
    // Delete one Kafka Pod
    Pod kafkaPod = PodUtils.getPodsByPrefixInNameWithDynamicWait(INFRA_NAMESPACE, KafkaResources.kafkaStatefulSetName(clusterName) + "-").get(0);
    LOGGER.info("Delete first found Kafka broker pod {}", kafkaPod.getMetadata().getName());
    kubeClient(INFRA_NAMESPACE).deletePod(INFRA_NAMESPACE, kafkaPod);
    RollingUpdateUtils.waitForComponentAndPodsReady(kafkaSelector, kafkaReplicas);
    // Roll Zoo
    LOGGER.info("Force Rolling Update of ZooKeeper via annotation.");
    Map<String, String> zooPods = PodUtils.podSnapshot(INFRA_NAMESPACE, zooSelector);
    zooPods.keySet().forEach(podName -> {
        kubeClient(INFRA_NAMESPACE).editPod(podName).edit(pod -> new PodBuilder(pod)
            .editMetadata()
                .addToAnnotations(Annotations.ANNO_STRIMZI_IO_MANUAL_ROLLING_UPDATE, "true")
            .endMetadata()
            .build());
    });
    LOGGER.info("Wait for next reconciliation to happen.");
    RollingUpdateUtils.waitTillComponentHasRolled(INFRA_NAMESPACE, zooSelector, zooReplicas, zooPods);
    // Roll Kafka
    LOGGER.info("Force Rolling Update of Kafka via annotation.");
    Map<String, String> kafkaPods = PodUtils.podSnapshot(INFRA_NAMESPACE, kafkaSelector);
    kafkaPods.keySet().forEach(podName -> {
        kubeClient(INFRA_NAMESPACE).editPod(podName).edit(pod -> new PodBuilder(pod)
            .editMetadata()
                .addToAnnotations(Annotations.ANNO_STRIMZI_IO_MANUAL_ROLLING_UPDATE, "true")
            .endMetadata()
            .build());
    });
    LOGGER.info("Wait for next reconciliation to happen.");
    RollingUpdateUtils.waitTillComponentHasRolled(INFRA_NAMESPACE, kafkaSelector, kafkaReplicas, kafkaPods);
    LOGGER.info("Waiting for clients to finish sending/receiving messages.");
    ClientUtils.waitForClientSuccess(producerName, INFRA_NAMESPACE, messageCount);
    ClientUtils.waitForClientSuccess(consumerName, INFRA_NAMESPACE, messageCount);
}
Also used : KafkaClientsBuilder(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder) KafkaClients(io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients) Pod(io.fabric8.kubernetes.api.model.Pod) SetupClusterOperator(io.strimzi.systemtest.resources.operator.SetupClusterOperator) PodBuilder(io.fabric8.kubernetes.api.model.PodBuilder) ArrayList(java.util.ArrayList) LabelSelector(io.fabric8.kubernetes.api.model.LabelSelector) Random(java.util.Random) KafkaTopic(io.strimzi.api.kafka.model.KafkaTopic) EnvVar(io.fabric8.kubernetes.api.model.EnvVar) IsolatedTest(io.strimzi.test.annotations.IsolatedTest) Tag(org.junit.jupiter.api.Tag)
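
Feature gates reach the Cluster Operator through a single environment variable holding a comma-separated list, where a "+" prefix enables a gate and "-" disables it. A minimal sketch of the toggle used above (assuming Environment.STRIMZI_FEATURE_GATES_ENV resolves to the operator's STRIMZI_FEATURE_GATES variable):

    List<EnvVar> testEnvVars = new ArrayList<>();
    // "+UseStrimziPodSets" switches pod management from StatefulSets to StrimziPodSet resources;
    // "-UseStrimziPodSets" would pin the StatefulSet path instead.
    testEnvVars.add(new EnvVar("STRIMZI_FEATURE_GATES", "+UseStrimziPodSets", null));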

Aggregations

KafkaClients (io.strimzi.systemtest.kafkaclients.internalClients.KafkaClients) 40
KafkaClientsBuilder (io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder) 40
Tag (org.junit.jupiter.api.Tag) 22
LabelSelector (io.fabric8.kubernetes.api.model.LabelSelector) 12
KafkaResources (io.strimzi.api.kafka.model.KafkaResources) 12
IsolatedTest (io.strimzi.systemtest.annotations.IsolatedTest) 12
ParallelNamespaceTest (io.strimzi.systemtest.annotations.ParallelNamespaceTest) 12
KafkaTemplates (io.strimzi.systemtest.templates.crd.KafkaTemplates) 12
KafkaTopicTemplates (io.strimzi.systemtest.templates.crd.KafkaTopicTemplates) 12
ClientUtils (io.strimzi.systemtest.utils.ClientUtils) 12
List (java.util.List) 12
ExtensionContext (org.junit.jupiter.api.extension.ExtensionContext) 12
PodBuilder (io.fabric8.kubernetes.api.model.PodBuilder) 10
AbstractST (io.strimzi.systemtest.AbstractST) 10
REGRESSION (io.strimzi.systemtest.Constants.REGRESSION) 10
SetupClusterOperator (io.strimzi.systemtest.resources.operator.SetupClusterOperator) 10
Random (java.util.Random) 10
LogManager (org.apache.logging.log4j.LogManager) 10
Logger (org.apache.logging.log4j.Logger) 10
GenericKafkaListenerBuilder (io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListenerBuilder) 8