Example 1 with KafkaCluster

Use of io.debezium.kafka.KafkaCluster in project strimzi by strimzi.

The class ControllerIT, method setup.

@Before
public void setup(TestContext context) throws Exception {
    LOGGER.info("Setting up test");
    Runtime.getRuntime().addShutdownHook(kafkaHook);
    kafkaCluster = new KafkaCluster();
    kafkaCluster.addBrokers(1);
    kafkaCluster.deleteDataPriorToStartup(true);
    kafkaCluster.deleteDataUponShutdown(true);
    kafkaCluster.usingDirectory(Files.createTempDirectory("controller-integration-test").toFile());
    kafkaCluster.startup();
    kubeClient = new DefaultKubernetesClient().inNamespace(NAMESPACE);
    LOGGER.info("Using namespace {}", NAMESPACE);
    Map<String, String> m = new HashMap<>();
    m.put(Config.KAFKA_BOOTSTRAP_SERVERS.key, kafkaCluster.brokerList());
    m.put(Config.ZOOKEEPER_CONNECT.key, "localhost:" + zkPort(kafkaCluster));
    m.put(Config.NAMESPACE.key, NAMESPACE);
    session = new Session(kubeClient, new Config(m));
    Async async = context.async();
    vertx.deployVerticle(session, ar -> {
        if (ar.succeeded()) {
            deploymentId = ar.result();
            adminClient = session.adminClient;
            topicsConfigWatcher = session.topicConfigsWatcher;
            topicWatcher = session.topicWatcher;
            topicsWatcher = session.topicsWatcher;
            async.complete();
        } else {
            context.fail("Failed to deploy session");
        }
    });
    async.await();
    waitFor(context, () -> this.topicsWatcher.started(), timeout, "Topics watcher not started");
    waitFor(context, () -> this.topicsConfigWatcher.started(), timeout, "Topic configs watcher not started");
    waitFor(context, () -> this.topicWatcher.started(), timeout, "Topic watcher not started");
    // We can't delete events, so record the events which exist at the start of the test
    // and then waitForEvents() can ignore those
    preExistingEvents = kubeClient.events().inNamespace(NAMESPACE)
            .withLabels(cmPredicate.labels()).list().getItems().stream()
            .map(evt -> evt.getMetadata().getUid())
            .collect(Collectors.toSet());
    LOGGER.info("Finished setting up test");
}
Also used: KafkaCluster (io.debezium.kafka.KafkaCluster) HashMap (java.util.HashMap) Async (io.vertx.ext.unit.Async) DefaultKubernetesClient (io.fabric8.kubernetes.client.DefaultKubernetesClient) Before (org.junit.Before)
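The setup above polls `waitFor(context, condition, timeout, message)` until each watcher reports that it has started. The project supplies its own helper; the sketch below is only an illustrative stand-in for that kind of polling helper (the class name `Wait`, the 100 ms poll interval, and the boolean-returning signature are assumptions, not Strimzi's actual implementation):

```java
import java.util.function.BooleanSupplier;

public class Wait {

    /**
     * Polls the condition every 100 ms until it returns true or the
     * timeout (in milliseconds) elapses. Returns true if the condition
     * was met in time, false otherwise.
     */
    public static boolean waitFor(BooleanSupplier condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(100);
        }
        // One final check at the deadline, in case the condition
        // became true during the last sleep
        return condition.getAsBoolean();
    }
}
```

The real test additionally fails the Vert.x `TestContext` with the supplied message when the timeout expires, rather than returning a boolean.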

Example 2 with KafkaCluster

Use of io.debezium.kafka.KafkaCluster in project debezium by debezium.

The class KafkaDatabaseHistoryTest, method beforeEach.

@Before
public void beforeEach() throws Exception {
    source = Collect.hashMapOf("server", "my-server");
    setLogPosition(0);
    topicName = "schema-changes-topic";
    File dataDir = Testing.Files.createTestingDirectory("history_cluster");
    Testing.Files.delete(dataDir);
    // Configure the extra broker properties so that topics are not auto-created
    kafka = new KafkaCluster()
            .usingDirectory(dataDir)
            .deleteDataPriorToStartup(true)
            .deleteDataUponShutdown(true)
            .addBrokers(1)
            .withKafkaConfiguration(Collect.propertiesOf("auto.create.topics.enable", "false"))
            .startup();
    history = new KafkaDatabaseHistory();
}
Also used: KafkaCluster (io.debezium.kafka.KafkaCluster) File (java.io.File) Before (org.junit.Before)

Example 3 with KafkaCluster

Use of io.debezium.kafka.KafkaCluster in project vertx-examples by vert-x3.

The class MainVerticle, method start.

@Override
public void start() throws Exception {
    // Kafka setup for the example
    File dataDir = Testing.Files.createTestingDirectory("cluster");
    dataDir.deleteOnExit();
    kafkaCluster = new KafkaCluster()
            .usingDirectory(dataDir)
            .withPorts(2181, 9092)
            .addBrokers(1)
            .deleteDataPriorToStartup(true)
            .startup();
    // Deploy the dashboard
    JsonObject consumerConfig = new JsonObject((Map) kafkaCluster.useTo().getConsumerProperties("the_group", "the_client", OffsetResetStrategy.LATEST));
    vertx.deployVerticle(DashboardVerticle.class.getName(), new DeploymentOptions().setConfig(consumerConfig));
    // Deploy the metrics collector : 3 times
    JsonObject producerConfig = new JsonObject((Map) kafkaCluster.useTo().getProducerProperties("the_producer"));
    vertx.deployVerticle(MetricsVerticle.class.getName(), new DeploymentOptions().setConfig(producerConfig).setInstances(3));
}
Also used: KafkaCluster (io.debezium.kafka.KafkaCluster) DeploymentOptions (io.vertx.core.DeploymentOptions) JsonObject (io.vertx.core.json.JsonObject) File (java.io.File)
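In this example, `kafkaCluster.useTo().getConsumerProperties(...)` hands back standard Kafka client properties (bootstrap servers, group id, offset reset policy, and so on) that are then wrapped in a `JsonObject` for the verticle's config. As a rough illustration of what lands in that map, here is a hand-built equivalent using only standard Kafka configuration keys. The exact set of keys and values Debezium populates may differ; the `localhost:9092` broker address below matches the `withPorts(2181, 9092)` call above but is otherwise an assumption:

```java
import java.util.Properties;

public class ConsumerProps {

    /**
     * Builds minimal consumer properties for a local single-broker cluster,
     * similar in shape to what getConsumerProperties(group, client, strategy)
     * returns. All keys are standard Kafka consumer config keys.
     */
    public static Properties consumerProperties(String groupId, String clientId,
                                                String offsetReset) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.setProperty("group.id", groupId);
        props.setProperty("client.id", clientId);
        props.setProperty("auto.offset.reset", offsetReset);      // e.g. "latest"
        return props;
    }
}
```

Passing such a map through `new JsonObject((Map) ...)` is what lets each deployed verticle read its Kafka settings from `config()`.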

Aggregations

KafkaCluster (io.debezium.kafka.KafkaCluster): 3
File (java.io.File): 2
Before (org.junit.Before): 2
DefaultKubernetesClient (io.fabric8.kubernetes.client.DefaultKubernetesClient): 1
DeploymentOptions (io.vertx.core.DeploymentOptions): 1
JsonObject (io.vertx.core.json.JsonObject): 1
Async (io.vertx.ext.unit.Async): 1
HashMap (java.util.HashMap): 1