Search in sources:

Example 1 with InternalKafkaClient

Use of io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient in project strimzi by strimzi.

From the class ClientUtils, the method waitUntilProducerAndConsumerSuccessfullySendAndReceiveMessages:

public static void waitUntilProducerAndConsumerSuccessfullySendAndReceiveMessages(ExtensionContext extensionContext, InternalKafkaClient internalKafkaClient) throws Exception {
    String topicName = KafkaTopicUtils.generateRandomNameOfTopic();
    ResourceManager.getInstance().createResource(extensionContext,
        KafkaTopicTemplates.topic(internalKafkaClient.getClusterName(), topicName)
            .editMetadata()
                .withNamespace(internalKafkaClient.getNamespaceName())
            .endMetadata()
            .build());
    InternalKafkaClient client = internalKafkaClient.toBuilder()
        .withConsumerGroupName(ClientUtils.generateRandomConsumerGroup())
        .withNamespaceName(internalKafkaClient.getNamespaceName())
        .withTopicName(topicName)
        .build();
    LOGGER.info("Sending messages to topic {} in cluster {} with a message count of {}",
        topicName, internalKafkaClient.getClusterName(), internalKafkaClient.getMessageCount());
    int sent = client.sendMessagesTls();
    int received = client.receiveMessagesTls();
    LOGGER.info("Sent {} and received {}", sent, received);
    if (sent != received) {
        throw new Exception("Client sent " + sent + " and received " + received + " messages, which does not match!");
    }
}
Also used: InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient), WaitException (io.strimzi.test.WaitException), UnexpectedException (java.rmi.UnexpectedException)
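
For context, a minimal sketch of how this helper might be invoked from a test. The builder calls are the ones used throughout the examples below; the pod, namespace, and cluster names here are placeholders, not values from the strimzi test suite.

// Hypothetical usage sketch; all literal names below are placeholders.
InternalKafkaClient baseClient = new InternalKafkaClient.Builder()
    .withUsingPodName("my-kafka-clients-pod")
    .withNamespaceName("my-namespace")
    .withClusterName("my-cluster")
    .withMessageCount(100)
    .withListenerName(Constants.TLS_LISTENER_DEFAULT_NAME)
    .build();
// No topic is set: the helper generates a random topic name itself.
// Since it sends and receives over TLS, a TLS user may also need to be
// configured (withKafkaUsername), depending on the listener's authentication.
ClientUtils.waitUntilProducerAndConsumerSuccessfullySendAndReceiveMessages(extensionContext, baseClient);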

Example 2 with InternalKafkaClient

Use of io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient in project strimzi by strimzi.

From the class KafkaConnectUtils, the method sendReceiveMessagesThroughConnect:

/**
 * Send and receive messages through a file sink connector (using Kafka Connect).
 * @param connectPodName KafkaConnect pod name
 * @param topicName topic to be used
 * @param kafkaClientsPodName kafka-clients pod name
 * @param namespace namespace name
 * @param clusterName cluster name
 */
public static void sendReceiveMessagesThroughConnect(String connectPodName, String topicName, String kafkaClientsPodName, String namespace, String clusterName) {
    LOGGER.info("Send and receive messages through KafkaConnect");
    KafkaConnectUtils.waitUntilKafkaConnectRestApiIsAvailable(connectPodName);
    KafkaConnectorUtils.createFileSinkConnector(kafkaClientsPodName, topicName,
        Constants.DEFAULT_SINK_FILE_PATH, KafkaConnectResources.url(clusterName, namespace, 8083));
    InternalKafkaClient internalKafkaClient = new InternalKafkaClient.Builder()
        .withUsingPodName(kafkaClientsPodName)
        .withTopicName(topicName)
        .withNamespaceName(namespace)
        .withClusterName(clusterName)
        .withMessageCount(100)
        .withListenerName(Constants.PLAIN_LISTENER_DEFAULT_NAME)
        .build();
    internalKafkaClient.checkProducedAndConsumedMessages(
        internalKafkaClient.sendMessagesPlain(),
        internalKafkaClient.receiveMessagesPlain());
    // Wait until the sink file contains "99", presumably the 0-based index of the last of the 100 messages.
    KafkaConnectUtils.waitForMessagesInKafkaConnectFileSink(connectPodName, Constants.DEFAULT_SINK_FILE_PATH, "99");
}
Also used: InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient)
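
A hedged usage sketch of the helper above; every argument here is a placeholder (in the real tests the pod names come from kubeClient().listPodsByPrefixInName(...), as in the other examples).

// Illustrative call only; all names are placeholders.
KafkaConnectUtils.sendReceiveMessagesThroughConnect(
    "my-connect-pod",        // connectPodName
    "my-topic",              // topicName
    "my-kafka-clients-pod",  // kafkaClientsPodName
    "my-namespace",          // namespace
    "my-cluster");           // clusterName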

Example 3 with InternalKafkaClient

Use of io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient in project strimzi by strimzi.

From the class CustomAuthorizerST, the test testAclWithSuperUser:

@ParallelTest
@Tag(INTERNAL_CLIENTS_USED)
void testAclWithSuperUser(ExtensionContext extensionContext) {
    final String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    final String kafkaClientsName = mapWithKafkaClientNames.get(extensionContext.getDisplayName());
    resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(CLUSTER_NAME, topicName, namespace).build());
    KafkaUser adminUser = KafkaUserTemplates.tlsUser(namespace, CLUSTER_NAME, ADMIN)
        .editSpec()
            .withNewKafkaUserAuthorizationSimple()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.WRITE)
                .endAcl()
                .addNewAcl()
                    .withNewAclRuleTopicResource()
                        .withName(topicName)
                    .endAclRuleTopicResource()
                    .withOperation(AclOperation.DESCRIBE)
                .endAcl()
            .endKafkaUserAuthorizationSimple()
        .endSpec()
        .build();
    resourceManager.createResource(extensionContext, adminUser);
    resourceManager.createResource(extensionContext, KafkaClientsTemplates.kafkaClients(namespace, true, kafkaClientsName, adminUser).build());
    String kafkaClientsPodName = kubeClient(namespace).listPodsByPrefixInName(namespace, kafkaClientsName).get(0).getMetadata().getName();
    LOGGER.info("Checking kafka super user:{} that is able to send messages to topic:{}", ADMIN, topicName);
    InternalKafkaClient internalKafkaClient = new InternalKafkaClient.Builder().withTopicName(topicName).withNamespaceName(namespace).withClusterName(CLUSTER_NAME).withKafkaUsername(ADMIN).withMessageCount(MESSAGE_COUNT).withListenerName(Constants.TLS_LISTENER_DEFAULT_NAME).withUsingPodName(kafkaClientsPodName).build();
    assertThat(internalKafkaClient.sendMessagesTls(), is(MESSAGE_COUNT));
    LOGGER.info("Checking kafka super user:{} that is able to read messages to topic:{} regardless that " + "we configured Acls with only write operation", ADMIN, TOPIC_NAME);
    assertThat(internalKafkaClient.receiveMessagesTls(), is(MESSAGE_COUNT));
}
Also used: GenericKafkaListenerBuilder (io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListenerBuilder), InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient), KafkaUser (io.strimzi.api.kafka.model.KafkaUser), ParallelTest (io.strimzi.systemtest.annotations.ParallelTest), Tag (org.junit.jupiter.api.Tag)
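
The two assertions at the end could also be compressed into one call with the checkProducedAndConsumedMessages helper that Example 2 uses, assuming (as its use there suggests) that it asserts the produced and consumed counts match:

// Equivalent to the two assertThat(...) checks above, assuming the helper
// asserts that both counts equal each other (as in Example 2).
internalKafkaClient.checkProducedAndConsumedMessages(
    internalKafkaClient.sendMessagesTls(),
    internalKafkaClient.receiveMessagesTls());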

Example 4 with InternalKafkaClient

Use of io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient in project strimzi by strimzi.

From the class AllNamespaceIsolatedST, the test testUserInDifferentNamespace:

@IsolatedTest
void testUserInDifferentNamespace(ExtensionContext extensionContext) {
    String startingNamespace = cluster.setNamespace(SECOND_NAMESPACE);
    KafkaUser user = KafkaUserTemplates.tlsUser(MAIN_NAMESPACE_CLUSTER_NAME, USER_NAME).build();
    resourceManager.createResource(extensionContext, user);
    Condition kafkaCondition = KafkaUserResource.kafkaUserClient().inNamespace(SECOND_NAMESPACE)
        .withName(USER_NAME).get().getStatus().getConditions().get(0);
    LOGGER.info("KafkaUser condition status: {}", kafkaCondition.getStatus());
    LOGGER.info("KafkaUser condition type: {}", kafkaCondition.getType());
    assertThat(kafkaCondition.getType(), is(Ready.toString()));
    List<Secret> secretsOfSecondNamespace = kubeClient(SECOND_NAMESPACE).listSecrets();
    cluster.setNamespace(THIRD_NAMESPACE);
    for (Secret s : secretsOfSecondNamespace) {
        if (s.getMetadata().getName().equals(USER_NAME)) {
            LOGGER.info("Copying secret {} from namespace {} to namespace {}", s, SECOND_NAMESPACE, THIRD_NAMESPACE);
            copySecret(s, THIRD_NAMESPACE, USER_NAME);
        }
    }
    resourceManager.createResource(extensionContext, KafkaClientsTemplates.kafkaClients(true, MAIN_NAMESPACE_CLUSTER_NAME + "-" + Constants.KAFKA_CLIENTS, user).build());
    final String defaultKafkaClientsPodName = ResourceManager.kubeClient().listPodsByPrefixInName(MAIN_NAMESPACE_CLUSTER_NAME + "-" + Constants.KAFKA_CLIENTS).get(0).getMetadata().getName();
    InternalKafkaClient internalKafkaClient = new InternalKafkaClient.Builder()
        .withUsingPodName(defaultKafkaClientsPodName)
        .withTopicName(TOPIC_NAME)
        .withNamespaceName(THIRD_NAMESPACE)
        .withClusterName(MAIN_NAMESPACE_CLUSTER_NAME)
        .withMessageCount(MESSAGE_COUNT)
        .withKafkaUsername(USER_NAME)
        .withListenerName(Constants.TLS_LISTENER_DEFAULT_NAME)
        .build();
    LOGGER.info("Checking produced and consumed messages via pod {}", defaultKafkaClientsPodName);
    int sent = internalKafkaClient.sendMessagesTls();
    assertThat(sent, is(MESSAGE_COUNT));
    int received = internalKafkaClient.receiveMessagesTls();
    assertThat(received, is(MESSAGE_COUNT));
    cluster.setNamespace(startingNamespace);
}
Also used: Condition (io.strimzi.api.kafka.model.status.Condition), Secret (io.fabric8.kubernetes.api.model.Secret), SecretBuilder (io.fabric8.kubernetes.api.model.SecretBuilder), InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient), KafkaUser (io.strimzi.api.kafka.model.KafkaUser), IsolatedTest (io.strimzi.systemtest.annotations.IsolatedTest)
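
The copySecret helper is called but not shown in this snippet. Given the SecretBuilder import listed above, a plausible sketch is the following; this is an assumption for illustration, not the actual strimzi implementation.

// Hypothetical sketch of copySecret, not the actual strimzi code.
// Rebuilds the secret with fresh metadata so the copy lands in the
// target namespace under the given name (labels and annotations are dropped).
public static void copySecret(Secret sourceSecret, String targetNamespace, String targetName) {
    Secret copy = new SecretBuilder(sourceSecret)
        .withNewMetadata()
            .withName(targetName)
            .withNamespace(targetNamespace)
        .endMetadata()
        .build();
    kubeClient(targetNamespace).getClient().secrets()
        .inNamespace(targetNamespace).createOrReplace(copy);
}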

Example 5 with InternalKafkaClient

Use of io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient in project strimzi by strimzi.

From the class RollingUpdateST, the test testZookeeperScaleUpScaleDown:

@ParallelNamespaceTest
@Tag(SCALABILITY)
void testZookeeperScaleUpScaleDown(ExtensionContext extensionContext) {
    final String namespaceName = StUtils.getNamespaceBasedOnRbac(namespace, extensionContext);
    final String clusterName = mapWithClusterNames.get(extensionContext.getDisplayName());
    final String topicName = mapWithTestTopics.get(extensionContext.getDisplayName());
    final String userName = mapWithTestUsers.get(extensionContext.getDisplayName());
    final String kafkaClientsName = mapWithKafkaClientNames.get(extensionContext.getDisplayName());
    final LabelSelector zkSelector = KafkaResource.getLabelSelector(clusterName, KafkaResources.zookeeperStatefulSetName(clusterName));
    resourceManager.createResource(extensionContext, KafkaTemplates.kafkaPersistent(clusterName, 3, 3).build());
    resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(clusterName, topicName).build());
    KafkaUser user = KafkaUserTemplates.tlsUser(namespaceName, clusterName, userName).build();
    resourceManager.createResource(extensionContext, user);
    // kafka cluster already deployed
    LOGGER.info("Running zookeeperScaleUpScaleDown with cluster {}", clusterName);
    final int initialZkReplicas = kubeClient(namespaceName).getClient().pods()
        .inNamespace(namespaceName).withLabelSelector(zkSelector).list().getItems().size();
    assertThat(initialZkReplicas, is(3));
    resourceManager.createResource(extensionContext, false, KafkaClientsTemplates.kafkaClients(true, kafkaClientsName, user).build());
    final String defaultKafkaClientsPodName = PodUtils.getPodsByPrefixInNameWithDynamicWait(namespaceName, kafkaClientsName).get(0).getMetadata().getName();
    InternalKafkaClient internalKafkaClient = new InternalKafkaClient.Builder()
        .withUsingPodName(defaultKafkaClientsPodName)
        .withTopicName(topicName)
        .withNamespaceName(namespaceName)
        .withClusterName(clusterName)
        .withMessageCount(MESSAGE_COUNT)
        .withKafkaUsername(userName)
        .withListenerName(Constants.TLS_LISTENER_DEFAULT_NAME)
        .build();
    internalKafkaClient.produceTlsMessagesUntilOperationIsSuccessful(MESSAGE_COUNT);
    final int scaleZkTo = initialZkReplicas + 4;
    final List<String> newZkPodNames = new ArrayList<String>() {

        {
            for (int i = initialZkReplicas; i < scaleZkTo; i++) {
                add(KafkaResources.zookeeperPodName(clusterName, i));
            }
        }
    };
    LOGGER.info("Scale up Zookeeper to {}", scaleZkTo);
    KafkaResource.replaceKafkaResourceInSpecificNamespace(clusterName, k -> k.getSpec().getZookeeper().setReplicas(scaleZkTo), namespaceName);
    int received = internalKafkaClient.receiveMessagesTls();
    assertThat(received, is(MESSAGE_COUNT));
    RollingUpdateUtils.waitForComponentAndPodsReady(namespaceName, zkSelector, scaleZkTo);
    // Check that each node, including the new ones, is in either leader or follower state
    KafkaUtils.waitForZkMntr(namespaceName, clusterName, ZK_SERVER_STATE, 0, 1, 2, 3, 4, 5, 6);
    internalKafkaClient = internalKafkaClient.toBuilder().withConsumerGroupName(ClientUtils.generateRandomConsumerGroup()).build();
    internalKafkaClient.consumesTlsMessagesUntilOperationIsSuccessful(MESSAGE_COUNT);
    // Create a new topic to verify that ZooKeeper is working properly
    String scaleUpTopicName = KafkaTopicUtils.generateRandomNameOfTopic();
    resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(clusterName, scaleUpTopicName, 1, 1).build());
    internalKafkaClient = internalKafkaClient.toBuilder().withTopicName(scaleUpTopicName).withConsumerGroupName(ClientUtils.generateRandomConsumerGroup()).build();
    internalKafkaClient.produceAndConsumesTlsMessagesUntilBothOperationsAreSuccessful();
    // scale down
    LOGGER.info("Scale down Zookeeper to {}", initialZkReplicas);
    // Save the UID of the last scaled-up ZooKeeper pod (ordinal 6, newZkPodNames.get(3)) before it is removed
    String uid = kubeClient(namespaceName).getPodUid(newZkPodNames.get(3));
    KafkaResource.replaceKafkaResourceInSpecificNamespace(clusterName, k -> k.getSpec().getZookeeper().setReplicas(initialZkReplicas), namespaceName);
    RollingUpdateUtils.waitForComponentAndPodsReady(namespaceName, zkSelector, initialZkReplicas);
    internalKafkaClient = internalKafkaClient.toBuilder().withConsumerGroupName(ClientUtils.generateRandomConsumerGroup()).build();
    // Wait for one ZooKeeper pod to become leader and the others to become followers
    KafkaUtils.waitForZkMntr(namespaceName, clusterName, ZK_SERVER_STATE, 0, 1, 2);
    internalKafkaClient.consumesTlsMessagesUntilOperationIsSuccessful(MESSAGE_COUNT);
    // Create a new topic to verify that ZooKeeper is working properly
    String scaleDownTopicName = KafkaTopicUtils.generateRandomNameOfTopic();
    resourceManager.createResource(extensionContext, KafkaTopicTemplates.topic(clusterName, scaleDownTopicName, 1, 1).build());
    internalKafkaClient = internalKafkaClient.toBuilder().withTopicName(scaleDownTopicName).withConsumerGroupName(ClientUtils.generateRandomConsumerGroup()).build();
    internalKafkaClient.produceAndConsumesTlsMessagesUntilBothOperationsAreSuccessful();
    // Verify that the removed pod has the 'Killing' event
    assertThat(kubeClient(namespaceName).listEventsByResourceUid(uid), hasAllOfReasons(Killing));
}
Also used: ProbeBuilder (io.strimzi.api.kafka.model.ProbeBuilder), ExternalLoggingBuilder (io.strimzi.api.kafka.model.ExternalLoggingBuilder), ResourceRequirementsBuilder (io.fabric8.kubernetes.api.model.ResourceRequirementsBuilder), JmxPrometheusExporterMetricsBuilder (io.strimzi.api.kafka.model.JmxPrometheusExporterMetricsBuilder), ConfigMapBuilder (io.fabric8.kubernetes.api.model.ConfigMapBuilder), ConfigMapKeySelectorBuilder (io.fabric8.kubernetes.api.model.ConfigMapKeySelectorBuilder), ArrayList (java.util.ArrayList), LabelSelector (io.fabric8.kubernetes.api.model.LabelSelector), InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient), KafkaUser (io.strimzi.api.kafka.model.KafkaUser), ParallelNamespaceTest (io.strimzi.systemtest.annotations.ParallelNamespaceTest), Tag (org.junit.jupiter.api.Tag)
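
One stylistic note on this example: the double-brace ArrayList initialization used for newZkPodNames creates an anonymous ArrayList subclass. An equivalent and more conventional construction (requiring java.util.stream.IntStream and java.util.stream.Collectors) would be:

// Equivalent to the anonymous-subclass initialization above.
final List<String> newZkPodNames = IntStream.range(initialZkReplicas, scaleZkTo)
    .mapToObj(i -> KafkaResources.zookeeperPodName(clusterName, i))
    .collect(Collectors.toList());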

Aggregations

InternalKafkaClient (io.strimzi.systemtest.kafkaclients.clients.InternalKafkaClient): 80
ParallelNamespaceTest (io.strimzi.systemtest.annotations.ParallelNamespaceTest): 50
KafkaUser (io.strimzi.api.kafka.model.KafkaUser): 47
Tag (org.junit.jupiter.api.Tag): 47
GenericKafkaListenerBuilder (io.strimzi.api.kafka.model.listener.arraylistener.GenericKafkaListenerBuilder): 43
Matchers.containsString (org.hamcrest.Matchers.containsString): 30
SecretBuilder (io.fabric8.kubernetes.api.model.SecretBuilder): 23
LabelSelector (io.fabric8.kubernetes.api.model.LabelSelector): 21
CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString): 21
ResourceRequirementsBuilder (io.fabric8.kubernetes.api.model.ResourceRequirementsBuilder): 17
HashMap (java.util.HashMap): 14
Secret (io.fabric8.kubernetes.api.model.Secret): 13
ExternalKafkaClient (io.strimzi.systemtest.kafkaclients.externalClients.ExternalKafkaClient): 12
ParallelTest (io.strimzi.systemtest.annotations.ParallelTest): 10
KafkaTopic (io.strimzi.api.kafka.model.KafkaTopic): 9
KafkaListenerAuthenticationTls (io.strimzi.api.kafka.model.listener.KafkaListenerAuthenticationTls): 9
KafkaClientsBuilder (io.strimzi.systemtest.kafkaclients.internalClients.KafkaClientsBuilder): 9
Quantity (io.fabric8.kubernetes.api.model.Quantity): 8
ConfigMapBuilder (io.fabric8.kubernetes.api.model.ConfigMapBuilder): 7
ConfigMapKeySelectorBuilder (io.fabric8.kubernetes.api.model.ConfigMapKeySelectorBuilder): 7