Example 51 with Kafka

Use of org.bf2.operator.operands.KafkaInstanceConfiguration.Kafka in project kas-fleetshard by bf2fc6cc711aee1a0c2a.

From class KafkaManager, method doKafkaUpgradeStabilityCheck.

/**
 * Scheduled job to execute the Kafka stability check
 *
 * @param managedKafka ManagedKafka instance
 */
void doKafkaUpgradeStabilityCheck(ManagedKafka managedKafka) {
    log.infof("[%s/%s] Kafka upgrade stability check", managedKafka.getMetadata().getNamespace(), managedKafka.getMetadata().getName());
    CanaryService canaryService = RestClientBuilder.newBuilder()
            .baseUri(URI.create("http://" + AbstractCanary.canaryName(managedKafka) + "." + managedKafka.getMetadata().getNamespace() + ":8080"))
            .connectTimeout(10, TimeUnit.SECONDS)
            .readTimeout(30, TimeUnit.SECONDS)
            .build(CanaryService.class);
    try {
        Status status = canaryService.getStatus();
        log.infof("[%s/%s] Canary status: timeWindow %d - percentage %d", managedKafka.getMetadata().getNamespace(), managedKafka.getMetadata().getName(), status.getConsuming().getTimeWindow(), status.getConsuming().getPercentage());
        if (status.getConsuming().getPercentage() > consumingPercentageThreshold) {
            log.debugf("[%s/%s] Remove Kafka upgrade start/end annotations", managedKafka.getMetadata().getNamespace(), managedKafka.getMetadata().getName());
            managedKafkaClient.inNamespace(managedKafka.getMetadata().getNamespace()).withName(managedKafka.getMetadata().getName())
                    .edit(mk -> new ManagedKafkaBuilder(mk).editMetadata()
                            .removeFromAnnotations(KAFKA_UPGRADE_START_TIMESTAMP_ANNOTATION)
                            .removeFromAnnotations(KAFKA_UPGRADE_END_TIMESTAMP_ANNOTATION)
                            .endMetadata().build());
        } else {
            log.warnf("[%s/%s] Reported consuming percentage %d less than %d threshold", managedKafka.getMetadata().getNamespace(), managedKafka.getMetadata().getName(), status.getConsuming().getPercentage(), consumingPercentageThreshold);
            managedKafkaClient.inNamespace(managedKafka.getMetadata().getNamespace()).withName(managedKafka.getMetadata().getName())
                    .edit(mk -> new ManagedKafkaBuilder(mk).editMetadata()
                            .removeFromAnnotations(KAFKA_UPGRADE_END_TIMESTAMP_ANNOTATION)
                            .endMetadata().build());
        }
        // trigger a reconcile on the ManagedKafka instance to check whether the next step
        // (Kafka IBP upgrade) is needed or whether another stability check should run
        informerManager.resyncManagedKafka(managedKafka);
    } catch (Exception e) {
        log.errorf("[%s/%s] Error while checking Kafka upgrade stability", managedKafka.getMetadata().getNamespace(), managedKafka.getMetadata().getName(), e);
    }
}
Also used : Status(org.bf2.operator.clients.canary.Status) CanaryService(org.bf2.operator.clients.canary.CanaryService) ManagedKafkaBuilder(org.bf2.operator.resources.v1alpha1.ManagedKafkaBuilder) SchedulerException(org.quartz.SchedulerException)
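
As a side note, the canary client above is built programmatically rather than injected, so the target host can be derived from the ManagedKafka name and namespace at call time. The following is a minimal, standalone sketch of that MicroProfile REST Client pattern; the CanaryStatusClient interface, the /status path, and the ConsumingStatus DTO are illustrative assumptions and may differ from the project's actual CanaryService contract, while the host and port construction mirrors the snippet above.

import java.net.URI;
import java.util.concurrent.TimeUnit;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.rest.client.RestClientBuilder;

public class CanaryClientSketch {

    // hypothetical DTO mirroring the fields read by doKafkaUpgradeStabilityCheck
    public static class ConsumingStatus {
        public long timeWindow;
        public int percentage;
    }

    // hypothetical client interface; the real CanaryService contract may differ
    @Path("/status")
    @Produces(MediaType.APPLICATION_JSON)
    public interface CanaryStatusClient {
        @GET
        ConsumingStatus getStatus();
    }

    public static CanaryStatusClient buildClient(String canaryServiceName, String namespace) {
        // same programmatic builder pattern as above: in-cluster service DNS name plus
        // namespace, with explicit connect and read timeouts
        return RestClientBuilder.newBuilder()
                .baseUri(URI.create("http://" + canaryServiceName + "." + namespace + ":8080"))
                .connectTimeout(10, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .build(CanaryStatusClient.class);
    }
}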

Example 52 with Kafka

Use of org.bf2.operator.operands.KafkaInstanceConfiguration.Kafka in project kas-fleetshard by bf2fc6cc711aee1a0c2a.

From class KafkaManager, method addUpgradeTimeStampAnnotation.

/**
 * Add a Kafka upgrade related timestamp (current UTC time) to the ManagedKafka instance
 *
 * @param managedKafka ManagedKafka instance
 * @param annotation annotation to add, start or end of Kafka upgrade
 */
private void addUpgradeTimeStampAnnotation(ManagedKafka managedKafka, String annotation) {
    log.debugf("[%s/%s] Adding Kafka upgrade %s timestamp annotation", managedKafka.getMetadata().getNamespace(), managedKafka.getMetadata().getName(), annotation);
    managedKafkaClient.inNamespace(managedKafka.getMetadata().getNamespace()).withName(managedKafka.getMetadata().getName())
            .edit(mk -> new ManagedKafkaBuilder(mk).editMetadata()
                    .addToAnnotations(annotation, ZonedDateTime.now(ZoneOffset.UTC).format(DateTimeFormatter.ISO_INSTANT))
                    .endMetadata().build());
}
Also used : ManagedKafkaBuilder(org.bf2.operator.resources.v1alpha1.ManagedKafkaBuilder)
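
The annotation value written above is simply the current UTC time serialized with DateTimeFormatter.ISO_INSTANT. A minimal JDK-only sketch of producing and consuming such a value (class and variable names are illustrative):

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class UpgradeTimestampSketch {
    public static void main(String[] args) {
        // same serialization as addUpgradeTimeStampAnnotation: current UTC time as an
        // ISO-8601 instant, e.g. 2024-06-01T12:34:56.789Z
        String annotationValue = ZonedDateTime.now(ZoneOffset.UTC).format(DateTimeFormatter.ISO_INSTANT);
        System.out.println("annotation value: " + annotationValue);

        // a later check (for example, the upgrade stability timer) can parse the value back losslessly
        Instant parsed = Instant.parse(annotationValue);
        System.out.println("parsed instant:   " + parsed);
    }
}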

Example 53 with Kafka

Use of org.bf2.operator.operands.KafkaInstanceConfiguration.Kafka in project kas-fleetshard by bf2fc6cc711aee1a0c2a.

From class ManagedKafkaAgentController, method statusUpdateLoop.

@Timed(value = "controller.status.update", extraTags = { "resource", "ManagedKafkaAgent" }, description = "Time spent processing status updates")
@Counted(value = "controller.status.update", extraTags = { "resource", "ManagedKafkaAgent" }, description = "The number of status updates")
@Scheduled(every = "{agent.status.interval}", concurrentExecution = ConcurrentExecution.SKIP)
void statusUpdateLoop() {
    ManagedKafkaAgent resource = this.agentClient.getByName(this.agentClient.getNamespace(), ManagedKafkaAgentResourceClient.RESOURCE_NAME);
    if (resource != null) {
        // check and reinstate if the observability config changed
        this.observabilityManager.createOrUpdateObservabilitySecret(resource.getSpec().getObservability(), resource);
        log.debugf("Tick to update Kafka agent Status in namespace %s", this.agentClient.getNamespace());
        resource.setStatus(buildStatus(resource));
        this.agentClient.replaceStatus(resource);
    }
}
Also used : ManagedKafkaAgent(org.bf2.operator.resources.v1alpha1.ManagedKafkaAgent) Scheduled(io.quarkus.scheduler.Scheduled) Counted(io.micrometer.core.annotation.Counted) Timed(io.micrometer.core.annotation.Timed)
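
The loop above combines a Quarkus scheduler trigger with Micrometer metrics annotations: the interval comes from the agent.status.interval configuration property, overlapping executions are skipped, and each run is timed and counted under the ManagedKafkaAgent resource tag. The sketch below shows the same pattern in an isolated, hypothetical bean; the metric name, tag values, and config key are placeholders, and the Quarkus scheduler and Micrometer extensions are assumed to be on the classpath.

import javax.enterprise.context.ApplicationScoped;

import io.micrometer.core.annotation.Counted;
import io.micrometer.core.annotation.Timed;
import io.quarkus.scheduler.Scheduled;
import io.quarkus.scheduler.Scheduled.ConcurrentExecution;

@ApplicationScoped
public class StatusLoopSketch {

    // hypothetical periodic job: configurable interval, no overlapping runs,
    // timing and count metrics recorded per resource type
    @Timed(value = "example.status.update", extraTags = { "resource", "Example" }, description = "Time spent processing status updates")
    @Counted(value = "example.status.update", extraTags = { "resource", "Example" }, description = "The number of status updates")
    @Scheduled(every = "{example.status.interval}", concurrentExecution = ConcurrentExecution.SKIP)
    void statusUpdateLoop() {
        // fetch the custom resource, rebuild its status, and write it back here
    }
}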

Example 54 with Kafka

Use of org.bf2.operator.operands.KafkaInstanceConfiguration.Kafka in project kas-fleetshard by bf2fc6cc711aee1a0c2a.

From class ManagedKafkaController, method validity.

/**
 * Run a validity check on the ManagedKafka custom resource
 *
 * @param managedKafka ManagedKafka custom resource to validate
 * @return readiness indicating an error in the ManagedKafka custom resource, empty Optional otherwise
 */
private Optional<OperandReadiness> validity(ManagedKafka managedKafka) {
    String message = null;
    StrimziVersionStatus strimziVersion = this.strimziManager.getStrimziVersion(managedKafka.getSpec().getVersions().getStrimzi());
    if (strimziVersion == null) {
        message = String.format("The requested Strimzi version %s is not supported", managedKafka.getSpec().getVersions().getStrimzi());
    } else {
        if (!strimziVersion.getKafkaVersions().contains(managedKafka.getSpec().getVersions().getKafka())) {
            message = String.format("The requested Kafka version %s is not supported by the Strimzi version %s", managedKafka.getSpec().getVersions().getKafka(), strimziVersion.getVersion());
        } else if (managedKafka.getSpec().getVersions().getKafkaIbp() != null && !strimziVersion.getKafkaIbpVersions().contains(managedKafka.getSpec().getVersions().getKafkaIbp())) {
            message = String.format("The requested Kafka inter broker protocol version %s is not supported by the Strimzi version %s", managedKafka.getSpec().getVersions().getKafkaIbp(), strimziVersion.getVersion());
        }
    }
    if (message != null) {
        log.error(message);
        return Optional.of(new OperandReadiness(Status.False, Reason.Error, message));
    }
    return Optional.empty();
}
Also used : StrimziVersionStatus(org.bf2.operator.resources.v1alpha1.StrimziVersionStatus) OperandReadiness(org.bf2.operator.operands.OperandReadiness)
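
The ordering of the checks matters: an unknown Strimzi version short-circuits the Kafka and IBP checks, since both supported-version lists come from the resolved StrimziVersionStatus. Below is a JDK-only sketch of that same ordering; SupportedVersions is a hypothetical stand-in for StrimziVersionStatus, and the version strings are purely illustrative.

import java.util.List;
import java.util.Optional;

public class VersionValiditySketch {

    // hypothetical stand-in for StrimziVersionStatus: a Strimzi bundle and the versions it supports
    record SupportedVersions(String strimzi, List<String> kafkaVersions, List<String> kafkaIbpVersions) { }

    // mirrors the decision order of validity(): unknown Strimzi version first, then unsupported
    // Kafka version, then unsupported inter-broker protocol (IBP) version
    static Optional<String> validate(SupportedVersions supported, String kafka, String kafkaIbp) {
        if (supported == null) {
            return Optional.of("The requested Strimzi version is not supported");
        }
        if (!supported.kafkaVersions().contains(kafka)) {
            return Optional.of(String.format("Kafka %s is not supported by Strimzi %s", kafka, supported.strimzi()));
        }
        if (kafkaIbp != null && !supported.kafkaIbpVersions().contains(kafkaIbp)) {
            return Optional.of(String.format("Kafka IBP %s is not supported by Strimzi %s", kafkaIbp, supported.strimzi()));
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        SupportedVersions supported = new SupportedVersions("0.26.0",
                List.of("2.8.0", "3.0.0"), List.of("2.8", "3.0"));
        System.out.println(validate(supported, "3.0.0", "3.0")); // Optional.empty -> valid
        System.out.println(validate(supported, "3.1.0", null));  // unsupported Kafka version
    }
}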

Example 55 with Kafka

Use of org.bf2.operator.operands.KafkaInstanceConfiguration.Kafka in project kas-fleetshard by bf2fc6cc711aee1a0c2a.

From class ManagedKafkaController, method updateManagedKafkaStatus.

/**
 * Extract from the current KafkaInstance overall status (Kafka, Canary and AdminServer)
 * a corresponding list of ManagedKafkaCondition(s) to set on the ManagedKafka status
 *
 * @param managedKafka ManagedKafka instance
 */
private void updateManagedKafkaStatus(ManagedKafka managedKafka) {
    // add status if not already available on the ManagedKafka resource
    ManagedKafkaStatus status = Objects.requireNonNullElse(managedKafka.getStatus(), new ManagedKafkaStatusBuilder().build());
    status.setUpdatedTimestamp(ConditionUtils.iso8601Now());
    managedKafka.setStatus(status);
    // add conditions if not already available
    List<ManagedKafkaCondition> managedKafkaConditions = managedKafka.getStatus().getConditions();
    if (managedKafkaConditions == null) {
        managedKafkaConditions = new ArrayList<>();
        status.setConditions(managedKafkaConditions);
    }
    Optional<ManagedKafkaCondition> optReady = ConditionUtils.findManagedKafkaCondition(managedKafkaConditions, ManagedKafkaCondition.Type.Ready);
    ManagedKafkaCondition ready = null;
    if (optReady.isPresent()) {
        ready = optReady.get();
    } else {
        ready = ConditionUtils.buildCondition(ManagedKafkaCondition.Type.Ready, Status.Unknown);
        managedKafkaConditions.add(ready);
    }
    // an invalid ManagedKafka skips further handling, so its status will report an error condition
    OperandReadiness readiness = this.validity(managedKafka).orElse(kafkaInstance.getReadiness(managedKafka));
    ConditionUtils.updateConditionStatus(ready, readiness.getStatus(), readiness.getReason(), readiness.getMessage());
    // routes should always be set on the CR status, even if it's just an empty list
    status.setRoutes(List.of());
    int replicas = kafkaCluster.getReplicas(managedKafka);
    if (ingressControllerManagerInstance.isResolvable()) {
        IngressControllerManager ingressControllerManager = ingressControllerManagerInstance.get();
        List<ManagedKafkaRoute> routes = ingressControllerManager.getManagedKafkaRoutesFor(managedKafka);
        // expect route for each broker + 1 for bootstrap URL + 1 for Admin API server
        int expectedNumRoutes = replicas + NUM_NON_BROKER_ROUTES;
        if (routes.size() >= expectedNumRoutes && routes.stream().noneMatch(r -> "".equals(r.getRouter()))) {
            status.setRoutes(routes);
        }
    }
    if (Status.True.equals(readiness.getStatus())) {
        status.setCapacity(new ManagedKafkaCapacityBuilder(managedKafka.getSpec().getCapacity())
                .withMaxDataRetentionSize(kafkaInstance.getKafkaCluster().calculateRetentionSize(managedKafka))
                .build());
        // the versions in the status are updated incrementally, copying from the spec only when each upgrade stage ends
        VersionsBuilder versionsBuilder = status.getVersions() != null ? new VersionsBuilder(status.getVersions()) : new VersionsBuilder(managedKafka.getSpec().getVersions());
        if (!Reason.StrimziUpdating.equals(readiness.getReason()) && !this.strimziManager.hasStrimziChanged(managedKafka)) {
            versionsBuilder.withStrimzi(managedKafka.getSpec().getVersions().getStrimzi());
        }
        if (!Reason.KafkaUpdating.equals(readiness.getReason()) && !this.kafkaManager.hasKafkaVersionChanged(managedKafka)) {
            versionsBuilder.withKafka(managedKafka.getSpec().getVersions().getKafka());
        }
        if (!Reason.KafkaIbpUpdating.equals(readiness.getReason()) && !this.kafkaManager.hasKafkaIbpVersionChanged(managedKafka)) {
            String kafkaIbp = managedKafka.getSpec().getVersions().getKafkaIbp() != null ? managedKafka.getSpec().getVersions().getKafkaIbp() : AbstractKafkaCluster.getKafkaIbpVersion(managedKafka.getSpec().getVersions().getKafka());
            versionsBuilder.withKafkaIbp(kafkaIbp);
        }
        status.setVersions(versionsBuilder.build());
        status.setAdminServerURI(kafkaInstance.getAdminServer().uri(managedKafka));
        status.setServiceAccounts(managedKafka.getSpec().getServiceAccounts());
    }
}
Also used : DeleteControl(io.javaoperatorsdk.operator.api.DeleteControl) ManagedKafkaResourceClient(org.bf2.common.ManagedKafkaResourceClient) Context(io.javaoperatorsdk.operator.api.Context) Status(org.bf2.operator.resources.v1alpha1.ManagedKafkaCondition.Status) Timed(io.micrometer.core.annotation.Timed) StrimziVersionStatus(org.bf2.operator.resources.v1alpha1.StrimziVersionStatus) Logger(org.jboss.logging.Logger) StrimziManager(org.bf2.operator.managers.StrimziManager) ManagedKafkaRoute(org.bf2.operator.resources.v1alpha1.ManagedKafkaRoute) ResourceEventSource(org.bf2.operator.events.ResourceEventSource) ArrayList(java.util.ArrayList) Controller(io.javaoperatorsdk.operator.api.Controller) VersionsBuilder(org.bf2.operator.resources.v1alpha1.VersionsBuilder) Inject(javax.inject.Inject) KafkaInstance(org.bf2.operator.operands.KafkaInstance) ManagedKafkaStatus(org.bf2.operator.resources.v1alpha1.ManagedKafkaStatus) UpdateControl(io.javaoperatorsdk.operator.api.UpdateControl) AbstractKafkaCluster(org.bf2.operator.operands.AbstractKafkaCluster) KafkaManager(org.bf2.operator.managers.KafkaManager) Instance(javax.enterprise.inject.Instance) NDC(org.jboss.logging.NDC) KafkaInstanceConfiguration(org.bf2.operator.operands.KafkaInstanceConfiguration) ManagedKafkaStatusBuilder(org.bf2.operator.resources.v1alpha1.ManagedKafkaStatusBuilder) IngressControllerManager(org.bf2.operator.managers.IngressControllerManager) ConditionUtils(org.bf2.common.ConditionUtils) Reason(org.bf2.operator.resources.v1alpha1.ManagedKafkaCondition.Reason) Objects(java.util.Objects) List(java.util.List) Counted(io.micrometer.core.annotation.Counted) ManagedKafkaCondition(org.bf2.operator.resources.v1alpha1.ManagedKafkaCondition) OperandReadiness(org.bf2.operator.operands.OperandReadiness) ManagedKafkaCapacityBuilder(org.bf2.operator.resources.v1alpha1.ManagedKafkaCapacityBuilder) Optional(java.util.Optional) ResourceController(io.javaoperatorsdk.operator.api.ResourceController) EventSourceManager(io.javaoperatorsdk.operator.processing.event.EventSourceManager) ManagedKafka(org.bf2.operator.resources.v1alpha1.ManagedKafka)
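
One detail worth calling out from the method above is the route readiness rule: the routes list replaces the empty default only when there is at least one route per broker plus the two non-broker routes (bootstrap and admin server), and every route has been assigned a router host. A small, self-contained sketch of that rule follows; the Route record is a hypothetical stand-in for ManagedKafkaRoute, and the value 2 mirrors the comment in the snippet rather than a verified project constant.

import java.util.List;

public class RouteReadinessSketch {

    // hypothetical stand-in for ManagedKafkaRoute: a named route and the router host serving it
    record Route(String name, String router) { }

    // one route per broker plus one for the bootstrap URL and one for the admin API server
    static final int NUM_NON_BROKER_ROUTES = 2;

    // routes are considered publishable only when every expected route exists
    // and each one has been assigned a (non-empty) router host
    static boolean routesReady(List<Route> routes, int brokerReplicas) {
        int expected = brokerReplicas + NUM_NON_BROKER_ROUTES;
        return routes.size() >= expected && routes.stream().noneMatch(r -> "".equals(r.router()));
    }

    public static void main(String[] args) {
        List<Route> routes = List.of(
                new Route("bootstrap", "ingress-1"),
                new Route("admin-server", "ingress-1"),
                new Route("broker-0", "ingress-2"),
                new Route("broker-1", "ingress-2"),
                new Route("broker-2", ""));          // not yet assigned a router
        System.out.println(routesReady(routes, 3));  // false: broker-2 has no router yet
    }
}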

Aggregations

ManagedKafka (org.bf2.operator.resources.v1alpha1.ManagedKafka): 45
Kafka (io.strimzi.api.kafka.model.Kafka): 31
Test (org.junit.jupiter.api.Test): 24
List (java.util.List): 19
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 19
QuarkusTest (io.quarkus.test.junit.QuarkusTest): 18
Map (java.util.Map): 18
ArrayList (java.util.ArrayList): 17
Inject (javax.inject.Inject): 13
Quantity (io.fabric8.kubernetes.api.model.Quantity): 12
Optional (java.util.Optional): 11
Collections (java.util.Collections): 10
Collectors (java.util.stream.Collectors): 10
HashMap (java.util.HashMap): 9
Objects (java.util.Objects): 9
StrimziManager (org.bf2.operator.managers.StrimziManager): 9
Logger (org.jboss.logging.Logger): 9
KubernetesClient (io.fabric8.kubernetes.client.KubernetesClient): 8
IOException (java.io.IOException): 8
ManagedKafkaUtils.exampleManagedKafka (org.bf2.operator.utils.ManagedKafkaUtils.exampleManagedKafka): 8