
Example 41 with Event

use of io.fabric8.kubernetes.api.model.Event in project strimzi by strimzi.

the class AbstractAssemblyOperator method reconcileAll.

/**
 * Reconcile assembly resources in the given namespace having the given selector.
 * Reconciliation works by getting the assembly ConfigMaps in the given namespace with the given selector and
 * comparing with the corresponding {@linkplain #getResources(String) resource}.
 * <ul>
 * <li>An assembly will be {@linkplain #createOrUpdate(Reconciliation, ConfigMap, Handler) created} for all ConfigMaps without same-named resources</li>
 * <li>An assembly will be {@linkplain #delete(Reconciliation, Handler) deleted} for all resources without same-named ConfigMaps</li>
 * </ul>
 *
 * @param trigger A description of the triggering event (timer or watch), used for logging
 * @param namespace The namespace
 * @param selector The selector
 * @return A latch whose count reaches zero once each assembly found in this pass has been reconciled
 */
public final CountDownLatch reconcileAll(String trigger, String namespace, Labels selector) {
    Labels selectorWithCluster = selector.withType(assemblyType);
    // get ConfigMaps with kind=cluster&type=kafka (or connect, or connect-s2i) for the corresponding cluster type
    List<ConfigMap> cms = configMapOperations.list(namespace, selectorWithCluster);
    Set<String> cmsNames = cms.stream().map(cm -> cm.getMetadata().getName()).collect(Collectors.toSet());
    log.debug("reconcileAll({}, {}): ConfigMaps with labels {}: {}", assemblyType, trigger, selectorWithCluster, cmsNames);
    // get resources with kind=cluster&type=kafka (or connect, or connect-s2i)
    List<? extends HasMetadata> resources = getResources(namespace);
    // now extract the cluster name from those
    Set<String> resourceNames = resources.stream()
            // exclude the Cluster ConfigMap itself, which won't have a cluster label
            .filter(r -> Labels.kind(r) == null)
            .map(Labels::cluster)
            .collect(Collectors.toSet());
    log.debug("reconcileAll({}, {}): Other resources with labels {}: {}", assemblyType, trigger, selectorWithCluster, resourceNames);
    cmsNames.addAll(resourceNames);
    // We use a latch so that callers (specifically, test callers) know when the reconciliation is complete
    // Using futures would be more complex for no benefit
    CountDownLatch latch = new CountDownLatch(cmsNames.size());
    for (String name : cmsNames) {
        Reconciliation reconciliation = new Reconciliation(trigger, assemblyType, namespace, name);
        reconcileAssembly(reconciliation, result -> {
            if (result.succeeded()) {
                log.info("{}: Assembly reconciled", reconciliation);
            } else {
                log.error("{}: Failed to reconcile", reconciliation);
            }
            latch.countDown();
        });
    }
    return latch;
}
Also used : AssemblyType(io.strimzi.controller.cluster.model.AssemblyType) Logger(org.slf4j.Logger) Vertx(io.vertx.core.Vertx) LoggerFactory(org.slf4j.LoggerFactory) Set(java.util.Set) ConfigMapOperator(io.strimzi.controller.cluster.operator.resource.ConfigMapOperator) HasMetadata(io.fabric8.kubernetes.api.model.HasMetadata) Labels(io.strimzi.controller.cluster.model.Labels) Future(io.vertx.core.Future) Collectors(java.util.stream.Collectors) ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) CountDownLatch(java.util.concurrent.CountDownLatch) List(java.util.List) Lock(io.vertx.core.shareddata.Lock) ObjectMeta(io.fabric8.kubernetes.api.model.ObjectMeta) Reconciliation(io.strimzi.controller.cluster.Reconciliation) AsyncResult(io.vertx.core.AsyncResult) Handler(io.vertx.core.Handler)
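
The comment in reconcileAll about test callers hints at how the returned latch is meant to be used. Below is a minimal sketch of such a caller; the assemblyOperator, namespace and selector variables, the "manual" trigger string and the 60 second timeout are assumptions for illustration, not code from the project.

// Hypothetical caller: trigger a full reconciliation pass and block until every
// assembly picked up in that pass has been handled (successfully or not).
CountDownLatch latch = assemblyOperator.reconcileAll("manual", namespace, selector);
if (!latch.await(60, TimeUnit.SECONDS)) {   // java.util.concurrent.TimeUnit; await() throws InterruptedException
    throw new AssertionError("Reconciliation did not complete within 60 seconds");
}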

Example 42 with Event

use of io.fabric8.kubernetes.api.model.Event in project strimzi by strimzi.

the class ControllerIT method testConfigMapAddedWithBadData.

@Test
public void testConfigMapAddedWithBadData(TestContext context) {
    String topicName = "test-configmap-created-with-bad-data";
    Topic topic = new Topic.Builder(topicName, 1, (short) 1, emptyMap()).build();
    ConfigMap cm = TopicSerialization.toConfigMap(topic, cmPredicate);
    cm.getData().put(TopicSerialization.CM_KEY_PARTITIONS, "foo");
    // Create a CM
    kubeClient.configMaps().inNamespace(NAMESPACE).create(cm);
    // Wait for the warning event
    waitForEvent(context, cm, "ConfigMap test-configmap-created-with-bad-data has an invalid 'data' section: " + "ConfigMap's 'data' section has invalid key 'partitions': " + "should be a strictly positive integer but was 'foo'", Controller.EventType.WARNING);
}
Also used : ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) NewTopic(org.apache.kafka.clients.admin.NewTopic) Test(org.junit.Test)
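
The waitForEvent helper called by these tests is part of ControllerIT but is not reproduced on this page. Since the examples are indexed under io.fabric8.kubernetes.api.model.Event, here is a rough sketch of how such a helper could poll for the expected Event with the fabric8 client; the polling loop, the 60 second timeout and the way Controller.EventType is compared against Event.getType() are assumptions, not the project's actual implementation.

private void waitForEvent(TestContext context, ConfigMap cm, String expectedMessage,
                          Controller.EventType expectedType) {
    // Poll the Kubernetes API until an Event with the expected involved object,
    // message and type shows up, or the (assumed) timeout expires.
    long deadline = System.currentTimeMillis() + 60_000;
    while (System.currentTimeMillis() < deadline) {
        for (Event event : kubeClient.events().inNamespace(NAMESPACE).list().getItems()) {
            if (cm.getMetadata().getName().equals(event.getInvolvedObject().getName())
                    && expectedMessage.equals(event.getMessage())
                    // mapping Controller.EventType onto Event.getType() is assumed here
                    && event.getType().equalsIgnoreCase(expectedType.toString())) {
                return;
            }
        }
        try {
            Thread.sleep(1_000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            context.fail(e);
        }
    }
    context.fail("Timed out waiting for event: " + expectedMessage);
}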

Example 43 with Event

use of io.fabric8.kubernetes.api.model.Event in project strimzi by strimzi.

the class ControllerIT method testConfigMapModifiedNameChanged.

@Test
public void testConfigMapModifiedNameChanged(TestContext context) throws Exception {
    // create the cm
    String topicName = "test-configmap-modified-name-changed";
    ConfigMap cm = createCm(context, topicName);
    // now change the cm
    String changedName = topicName.toUpperCase(Locale.ENGLISH);
    LOGGER.info("Changing CM data.name from {} to {}", topicName, changedName);
    kubeClient.configMaps().inNamespace(NAMESPACE).withName(cm.getMetadata().getName()).edit().addToData(TopicSerialization.CM_KEY_NAME, changedName).done();
    // We expect this to cause a warning event
    waitForEvent(context, cm, "Kafka topics cannot be renamed, but ConfigMap's data.name has changed.", Controller.EventType.WARNING);
}
Also used : ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) Test(org.junit.Test)
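
createCm is another ControllerIT helper that is not shown on this page. A sketch of what its String overload (used above) could look like, built only from calls already appearing in these examples, is given below; the real helper presumably also waits for the corresponding Kafka topic to exist before returning, which is omitted here. Example 44 uses an overload taking a ConfigMap, which would follow the same pattern.

// Hypothetical sketch of createCm(TestContext, String): build a ConfigMap describing
// a single-partition, replication-factor-1 topic, create it in the test namespace
// and hand it back to the test.
private ConfigMap createCm(TestContext context, String topicName) {
    Topic topic = new Topic.Builder(topicName, 1, (short) 1, emptyMap()).build();
    ConfigMap cm = TopicSerialization.toConfigMap(topic, cmPredicate);
    kubeClient.configMaps().inNamespace(NAMESPACE).create(cm);
    // The real ControllerIT helper is assumed to also wait for the topic to appear
    // in Kafka before returning; that synchronization is left out of this sketch.
    return cm;
}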

Example 44 with Event

use of io.fabric8.kubernetes.api.model.Event in project strimzi by strimzi.

the class ControllerIT method testCreateTwoConfigMapsManagingOneTopic.

@Test
public void testCreateTwoConfigMapsManagingOneTopic(TestContext context) {
    String topicName = "two-cms-one-topic";
    Topic topic = new Topic.Builder(topicName, 1, (short) 1, emptyMap()).build();
    ConfigMap cm = TopicSerialization.toConfigMap(topic, cmPredicate);
    ConfigMap cm2 = new ConfigMapBuilder(cm).editMetadata().withName(topicName + "-1").endMetadata().build();
    // create one
    createCm(context, cm2);
    // create another
    kubeClient.configMaps().inNamespace(NAMESPACE).create(cm);
    waitForEvent(context, cm, "Failure processing ConfigMap watch event ADDED on map two-cms-one-topic with labels {strimzi.io/kind=topic}: " + "Topic 'two-cms-one-topic' is already managed via ConfigMap 'two-cms-one-topic-1' it cannot also be managed via the ConfiMap 'two-cms-one-topic'", Controller.EventType.WARNING);
}
Also used : ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) ConfigMapBuilder(io.fabric8.kubernetes.api.model.ConfigMapBuilder) NewTopic(org.apache.kafka.clients.admin.NewTopic) Test(org.junit.Test)

Example 45 with Event

use of io.fabric8.kubernetes.api.model.Event in project strimzi by strimzi.

the class Controller method update3Way.

private void update3Way(HasMetadata involvedObject, Topic k8sTopic, Topic kafkaTopic, Topic privateTopic, Handler<AsyncResult<Void>> reconciliationResultHandler) {
    if (!privateTopic.getMapName().equals(k8sTopic.getMapName())) {
        reconciliationResultHandler.handle(Future.failedFuture(new ControllerException(involvedObject,
                "Topic '" + kafkaTopic.getTopicName() + "' is already managed via ConfigMap '" + privateTopic.getMapName()
                + "'; it cannot also be managed via the ConfigMap '" + k8sTopic.getMapName() + "'")));
        return;
    }
    TopicDiff oursKafka = TopicDiff.diff(privateTopic, kafkaTopic);
    LOGGER.debug("topicStore->kafkaTopic: {}", oursKafka);
    TopicDiff oursK8s = TopicDiff.diff(privateTopic, k8sTopic);
    LOGGER.debug("topicStore->k8sTopic: {}", oursK8s);
    String conflict = oursKafka.conflict(oursK8s);
    if (conflict != null) {
        final String message = "ConfigMap and Topic both changed in a conflicting way: " + conflict;
        LOGGER.error(message);
        enqueue(new Event(involvedObject, message, EventType.INFO, eventResult -> { }));
        reconciliationResultHandler.handle(Future.failedFuture(new Exception(message)));
    } else {
        TopicDiff merged = oursKafka.merge(oursK8s);
        LOGGER.debug("Diffs do not conflict, merged diff: {}", merged);
        if (merged.isEmpty()) {
            LOGGER.info("All three topics are identical");
            reconciliationResultHandler.handle(Future.succeededFuture());
        } else {
            Topic result = merged.apply(privateTopic);
            int partitionsDelta = merged.numPartitionsDelta();
            if (partitionsDelta < 0) {
                final String message = "Number of partitions cannot be decreased";
                LOGGER.error(message);
                enqueue(new Event(involvedObject, message, EventType.INFO, eventResult -> { }));
                reconciliationResultHandler.handle(Future.failedFuture(new Exception(message)));
            } else {
                if (merged.changesReplicationFactor()) {
                    LOGGER.error("Changes replication factor");
                    enqueue(new ChangeReplicationFactor(result, involvedObject, null));
                }
                // TODO What if we increase min.in.sync.replicas and the number of replicas,
                // such that the old number of replicas < the new min isr? But likewise
                // we could decrease, so order of tasks in the queue will need to change
                // depending on what the diffs are.
                LOGGER.debug("Updating cm, kafka topic and topicStore");
                // TODO replace this with compose
                enqueue(new UpdateConfigMap(result, ar -> {
                    Handler<Void> topicStoreHandler = ignored -> enqueue(new UpdateInTopicStore(result, involvedObject, reconciliationResultHandler));
                    Handler<Void> partitionsHandler;
                    if (partitionsDelta > 0) {
                        partitionsHandler = ar4 -> enqueue(new IncreaseKafkaPartitions(result, involvedObject, ar2 -> topicStoreHandler.handle(null)));
                    } else {
                        partitionsHandler = topicStoreHandler;
                    }
                    if (merged.changesConfig()) {
                        enqueue(new UpdateKafkaConfig(result, involvedObject, ar2 -> partitionsHandler.handle(null)));
                    } else {
                        enqueue(partitionsHandler);
                    }
                }));
            }
        }
    }
}
Also used : Logger(org.slf4j.Logger) Vertx(io.vertx.core.Vertx) LoggerFactory(org.slf4j.LoggerFactory) Collections.disjoint(java.util.Collections.disjoint) HashMap(java.util.HashMap) HasMetadata(io.fabric8.kubernetes.api.model.HasMetadata) Future(io.vertx.core.Future) ConfigMap(io.fabric8.kubernetes.api.model.ConfigMap) CompositeFuture(io.vertx.core.CompositeFuture) EventBuilder(io.fabric8.kubernetes.api.model.EventBuilder) TopicExistsException(org.apache.kafka.common.errors.TopicExistsException) Map(java.util.Map) AsyncResult(io.vertx.core.AsyncResult) Handler(io.vertx.core.Handler)
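
The Event enqueued in the conflict branches of update3Way is the controller's internal work item; the EventBuilder import listed above suggests it is eventually turned into a Kubernetes Event resource. The sketch below shows what building such an Event with the fabric8 builder could look like, reusing the involvedObject and message variables from the method above; the generated name, reason and type values are illustrative assumptions rather than the project's actual code.

// Hypothetical construction of the Kubernetes Event resource backing the controller's
// internal Event; metadata name, reason and type are illustrative choices only.
Event kubeEvent = new EventBuilder()
        .withNewMetadata()
            .withGenerateName("topic-controller-")
            .withNamespace(involvedObject.getMetadata().getNamespace())
        .endMetadata()
        .withNewInvolvedObject()
            .withKind(involvedObject.getKind())
            .withName(involvedObject.getMetadata().getName())
            .withNamespace(involvedObject.getMetadata().getNamespace())
        .endInvolvedObject()
        .withType("Warning")
        .withReason("ConflictingUpdate")
        .withMessage(message)
        .build();
// Recording it would be something like kubeClient.events().inNamespace(...).create(kubeEvent);
// that call site is not part of the excerpt above.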

Aggregations

Test (org.junit.Test): 14
IOException (java.io.IOException): 11
ConfigMap (io.fabric8.kubernetes.api.model.ConfigMap): 8
ArrayList (java.util.ArrayList): 6
ConnectionParameters (io.fabric8.gateway.handlers.loadbalancer.ConnectionParameters): 5
File (java.io.File): 5
Map (java.util.Map): 5
CuratorFramework (org.apache.curator.framework.CuratorFramework): 5
Logger (org.slf4j.Logger): 5
CountDownLatch (java.util.concurrent.CountDownLatch): 4
LoggerFactory (org.slf4j.LoggerFactory): 4
ServiceDTO (io.fabric8.gateway.ServiceDTO): 3
FutureHandler (io.fabric8.gateway.handlers.detecting.FutureHandler): 3
HttpGatewayHandler (io.fabric8.gateway.handlers.http.HttpGatewayHandler): 3
LogEvent (io.fabric8.insight.log.LogEvent): 3
KubernetesClientException (io.fabric8.kubernetes.client.KubernetesClientException): 3
URI (java.net.URI): 3
URISyntaxException (java.net.URISyntaxException): 3
HashMap (java.util.HashMap): 3
ChildData (org.apache.curator.framework.recipes.cache.ChildData): 3