
Example 16 with KafkaConnectorIncomingConfiguration

Use of io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration in project smallrye-reactive-messaging by smallrye.

From class DeprecatedCommitStrategiesTest, method testThrottledStrategy.

@Test
void testThrottledStrategy() {
    MapBasedConfig config = commonConfiguration()
            .with("commit-strategy", "throttled")
            .with("auto.commit.interval.ms", 100);
    String group = UUID.randomUUID().toString();
    source = new KafkaSource<>(vertx, group, new KafkaConnectorIncomingConfiguration(config),
            getConsumerRebalanceListeners(), CountKafkaCdiEvents.noCdiEvents,
            getDeserializationFailureHandlers(), -1);
    injectMockConsumer(source, consumer);
    List<Message<?>> list = new ArrayList<>();
    source.getStream().subscribe().with(list::add);
    TopicPartition tp = new TopicPartition(TOPIC, 0);
    consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));
    consumer.schedulePollTask(() -> {
        source.getCommitHandler().partitionsAssigned(Collections.singleton(tp));
        consumer.rebalance(Collections.singletonList(new TopicPartition(TOPIC, 0)));
        consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 0, "k", "v0"));
    });
    await().until(() -> list.size() == 1);
    assertThat(list).hasSize(1);
    list.get(0).ack().toCompletableFuture().join();
    await().untilAsserted(() -> {
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(Collections.singleton(tp));
        assertThat(committed.get(tp)).isNotNull();
        assertThat(committed.get(tp).offset()).isEqualTo(1);
    });
    consumer.schedulePollTask(() -> {
        consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 1, "k", "v1"));
        consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 2, "k", "v2"));
        consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 3, "k", "v3"));
    });
    await().until(() -> list.size() == 4);
    list.get(2).ack().toCompletableFuture().join();
    list.get(1).ack().toCompletableFuture().join();
    await().untilAsserted(() -> {
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(Collections.singleton(tp));
        assertThat(committed.get(tp)).isNotNull();
        assertThat(committed.get(tp).offset()).isEqualTo(3);
    });
    list.get(3).ack().toCompletableFuture().join();
    await().untilAsserted(() -> {
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(Collections.singleton(tp));
        assertThat(committed.get(tp)).isNotNull();
        assertThat(committed.get(tp).offset()).isEqualTo(4);
    });
}
Also used : Message(org.eclipse.microprofile.reactive.messaging.Message) TopicPartition(org.apache.kafka.common.TopicPartition) ArrayList(java.util.ArrayList) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) KafkaConnectorIncomingConfiguration(io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration) MapBasedConfig(io.smallrye.reactive.messaging.test.common.config.MapBasedConfig) Test(org.junit.jupiter.api.Test)
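The commonConfiguration() helper used above is not shown on this page. Purely as a hedged sketch, it could look roughly like the following, assuming it only pre-populates the attributes a KafkaConnectorIncomingConfiguration requires (channel name, topic, deserializer); the exact keys and values in the real test class may differ.

private MapBasedConfig commonConfiguration() {
    // Hypothetical baseline configuration; the real helper lives in DeprecatedCommitStrategiesTest.
    return new MapBasedConfig()
            .with("channel-name", "channel")
            .with("topic", TOPIC)
            .with("health-enabled", false)
            .with("value.deserializer", StringDeserializer.class.getName());
}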

Example 17 with KafkaConnectorIncomingConfiguration

Use of io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration in project smallrye-reactive-messaging by smallrye.

From class DeprecatedCommitStrategiesTest, method testWithRebalanceListenerMatchGivenName.

@Test
public void testWithRebalanceListenerMatchGivenName() {
    addBeans(NamedRebalanceListener.class);
    MapBasedConfig config = commonConfiguration();
    config.with("consumer-rebalance-listener.name", "mine").with("client.id", UUID.randomUUID().toString());
    String group = UUID.randomUUID().toString();
    source = new KafkaSource<>(vertx, group, new KafkaConnectorIncomingConfiguration(config),
            getConsumerRebalanceListeners(), CountKafkaCdiEvents.noCdiEvents,
            getDeserializationFailureHandlers(), -1);
    injectMockConsumer(source, consumer);
    List<Message<?>> list = new ArrayList<>();
    source.getStream().subscribe().with(list::add);
    Map<TopicPartition, Long> offsets = new HashMap<>();
    offsets.put(new TopicPartition(TOPIC, 0), 0L);
    offsets.put(new TopicPartition(TOPIC, 1), 0L);
    consumer.updateBeginningOffsets(offsets);
    consumer.schedulePollTask(() -> {
        consumer.rebalance(Collections.singletonList(new TopicPartition(TOPIC, 0)));
        consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 0, "k", "v"));
    });
    await().until(() -> list.size() == 1);
    assertThat(list).hasSize(1);
    consumer.schedulePollTask(() -> {
        consumer.rebalance(Collections.singletonList(new TopicPartition(TOPIC, 1)));
        ConsumerRecord<String, String> record = new ConsumerRecord<>(TOPIC, 1, 0, "k", "v");
        consumer.addRecord(record);
    });
    await().until(() -> list.size() == 2);
    assertThat(list).hasSize(2);
}
Also used : Message(org.eclipse.microprofile.reactive.messaging.Message) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) TopicPartition(org.apache.kafka.common.TopicPartition) KafkaConnectorIncomingConfiguration(io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration) MapBasedConfig(io.smallrye.reactive.messaging.test.common.config.MapBasedConfig) Test(org.junit.jupiter.api.Test)
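The test registers a bean named "mine" so that the consumer-rebalance-listener.name attribute can resolve it. A minimal sketch of such a listener follows, assuming the KafkaConsumerRebalanceListener variant whose callbacks receive the Apache Kafka Consumer and the affected partitions (consistent with the Consumer and Collection imports in this test class); the real NamedRebalanceListener shipped with the test suite may differ.

@ApplicationScoped
@Named("mine")
public static class NamedRebalanceListener implements KafkaConsumerRebalanceListener {

    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // React to newly assigned partitions, e.g. seek to an externally stored offset.
    }

    @Override
    public void onPartitionsRevoked(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // Flush or persist per-partition state before the partitions move away.
    }
}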

Example 18 with KafkaConnectorIncomingConfiguration

Use of io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration in project smallrye-reactive-messaging by smallrye.

From class KafkaCommitHandlerTest, method testSourceWithAutoCommitEnabled.

@Test
public void testSourceWithAutoCommitEnabled() throws ExecutionException, TimeoutException, InterruptedException {
    MapBasedConfig config = newCommonConfigForSource()
            .with("group.id", "test-source-with-auto-commit-enabled")
            .with(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true")
            .with(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 500)
            .with("value.deserializer", IntegerDeserializer.class.getName());
    KafkaConnectorIncomingConfiguration ic = new KafkaConnectorIncomingConfiguration(config);
    source = new KafkaSource<>(vertx, "test-source-with-auto-commit-enabled", ic, UnsatisfiedInstance.instance(),
            CountKafkaCdiEvents.noCdiEvents, UnsatisfiedInstance.instance(), -1);
    List<Message<?>> messages = Collections.synchronizedList(new ArrayList<>());
    source.getStream().subscribe().with(messages::add);
    companion.produceIntegers().usingGenerator(i -> new ProducerRecord<>(topic, i), 10);
    await().atMost(10, TimeUnit.SECONDS).until(() -> messages.size() >= 10);
    assertThat(messages.stream()
            .map(m -> ((KafkaRecord<String, Integer>) m).getPayload())
            .collect(Collectors.toList()))
                    .containsExactly(0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
    Optional<IncomingKafkaRecord<String, Integer>> firstMessage = messages.stream()
            .map(m -> (IncomingKafkaRecord<String, Integer>) m)
            .findFirst();
    assertTrue(firstMessage.isPresent());
    CompletableFuture<Void> ackFuture = new CompletableFuture<>();
    firstMessage.get().ack().whenComplete((a, t) -> ackFuture.complete(null));
    ackFuture.get(10, TimeUnit.SECONDS);
    await().atMost(2, TimeUnit.MINUTES).ignoreExceptions().untilAsserted(() -> {
        TopicPartition topicPartition = new TopicPartition(topic, 0);
        OffsetAndMetadata offset = companion.consumerGroups().offsets("test-source-with-auto-commit-enabled", topicPartition);
        assertNotNull(offset);
        assertEquals(10L, offset.offset());
    });
}
Also used : IntegerDeserializer(org.apache.kafka.common.serialization.IntegerDeserializer) Assertions.assertNotNull(org.junit.jupiter.api.Assertions.assertNotNull) Arrays(java.util.Arrays) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) KafkaCompanionTestBase(io.smallrye.reactive.messaging.kafka.base.KafkaCompanionTestBase) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) HealthReport(io.smallrye.reactive.messaging.health.HealthReport) TimeoutException(java.util.concurrent.TimeoutException) CompletableFuture(java.util.concurrent.CompletableFuture) MapBasedConfig(io.smallrye.reactive.messaging.test.common.config.MapBasedConfig) ArrayList(java.util.ArrayList) Assertions.assertFalse(org.junit.jupiter.api.Assertions.assertFalse) Map(java.util.Map) CountKafkaCdiEvents(io.smallrye.reactive.messaging.kafka.CountKafkaCdiEvents) Assertions.assertEquals(org.junit.jupiter.api.Assertions.assertEquals) TopicPartition(org.apache.kafka.common.TopicPartition) Awaitility.await(org.awaitility.Awaitility.await) Set(java.util.Set) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) UUID(java.util.UUID) UnsatisfiedInstance(io.smallrye.reactive.messaging.kafka.base.UnsatisfiedInstance) Collectors(java.util.stream.Collectors) ExecutionException(java.util.concurrent.ExecutionException) TimeUnit(java.util.concurrent.TimeUnit) Test(org.junit.jupiter.api.Test) List(java.util.List) Message(org.eclipse.microprofile.reactive.messaging.Message) AfterEach(org.junit.jupiter.api.AfterEach) KafkaRecord(io.smallrye.reactive.messaging.kafka.KafkaRecord) Assertions.assertTrue(org.junit.jupiter.api.Assertions.assertTrue) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) Optional(java.util.Optional) KafkaConnectorIncomingConfiguration(io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration) KafkaSource(io.smallrye.reactive.messaging.kafka.impl.KafkaSource) Collections(java.util.Collections) IncomingKafkaRecord(io.smallrye.reactive.messaging.kafka.IncomingKafkaRecord)
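The committed-offset assertion above goes through the test companion. Outside the test harness, a similar check could be done with the plain Kafka Admin API, which is roughly what companion.consumerGroups().offsets(...) wraps; this is a hedged sketch, and the bootstrap address passed by the caller is an assumption.

// Sketch: read the committed offset of one partition for a consumer group via the Admin API.
// Imports assumed: org.apache.kafka.clients.admin.AdminClient, org.apache.kafka.clients.admin.AdminClientConfig,
// org.apache.kafka.clients.consumer.OffsetAndMetadata, org.apache.kafka.common.TopicPartition,
// java.util.Map, java.util.Properties.
static OffsetAndMetadata committedOffset(String bootstrapServers, String groupId, TopicPartition tp) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    try (AdminClient admin = AdminClient.create(props)) {
        Map<TopicPartition, OffsetAndMetadata> committed = admin
                .listConsumerGroupOffsets(groupId)
                .partitionsToOffsetAndMetadata()
                .get();
        return committed.get(tp); // null if the group has not committed anything for this partition yet
    }
}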

Example 19 with KafkaConnectorIncomingConfiguration

Use of io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration in project smallrye-reactive-messaging by smallrye.

From class KafkaCommitHandlerTest, method testSourceWithThrottledAndRebalance.

@Test
public void testSourceWithThrottledAndRebalance() {
    companion.topics().createAndWait(topic, 2);
    MapBasedConfig config1 = newCommonConfigForSource()
            .with("client.id", UUID.randomUUID().toString())
            .with("group.id", "test-source-with-throttled-latest-processed-commit")
            .with("value.deserializer", IntegerDeserializer.class.getName())
            .with("commit-strategy", "throttled")
            .with("throttled.unprocessed-record-max-age.ms", 1);
    MapBasedConfig config2 = newCommonConfigForSource()
            .with("client.id", UUID.randomUUID().toString())
            .with("group.id", "test-source-with-throttled-latest-processed-commit")
            .with("value.deserializer", IntegerDeserializer.class.getName())
            .with("commit-strategy", "throttled")
            .with("throttled.unprocessed-record-max-age.ms", 1);
    KafkaConnectorIncomingConfiguration ic1 = new KafkaConnectorIncomingConfiguration(config1);
    KafkaConnectorIncomingConfiguration ic2 = new KafkaConnectorIncomingConfiguration(config2);
    source = new KafkaSource<>(vertx, "test-source-with-throttled-latest-processed-commit", ic1,
            UnsatisfiedInstance.instance(), CountKafkaCdiEvents.noCdiEvents, UnsatisfiedInstance.instance(), -1);
    KafkaSource<String, Integer> source2 = new KafkaSource<>(vertx, "test-source-with-throttled-latest-processed-commit", ic2,
            UnsatisfiedInstance.instance(), CountKafkaCdiEvents.noCdiEvents, UnsatisfiedInstance.instance(), -1);
    List<Message<?>> messages1 = Collections.synchronizedList(new ArrayList<>());
    source.getStream().subscribe().with(m -> {
        m.ack();
        messages1.add(m);
    });
    await().until(() -> source.getConsumer().getAssignments().await().indefinitely().size() == 2);
    companion.produceIntegers().usingGenerator(i -> new ProducerRecord<>(topic, Integer.toString(i % 2), i), 10000);
    await().atMost(2, TimeUnit.MINUTES).until(() -> messages1.size() >= 10);
    List<Message<?>> messages2 = Collections.synchronizedList(new ArrayList<>());
    source2.getStream().subscribe().with(m -> {
        m.ack();
        messages2.add(m);
    });
    await().until(() -> source2.getConsumer().getAssignments().await().indefinitely().size() == 1
            && source.getConsumer().getAssignments().await().indefinitely().size() == 1);
    companion.produceIntegers().usingGenerator(i -> new ProducerRecord<>(topic, Integer.toString(i % 2), i), 10000);
    await().atMost(2, TimeUnit.MINUTES).until(() -> messages1.size() + messages2.size() >= 10000);
    await().atMost(2, TimeUnit.MINUTES).untilAsserted(() -> {
        TopicPartition tp1 = new TopicPartition(topic, 0);
        TopicPartition tp2 = new TopicPartition(topic, 1);
        Map<TopicPartition, OffsetAndMetadata> result = companion.consumerGroups().offsets("test-source-with-throttled-latest-processed-commit", Arrays.asList(tp1, tp2));
        assertNotNull(result.get(tp1));
        assertNotNull(result.get(tp2));
        assertEquals(result.get(tp1).offset() + result.get(tp2).offset(), 20000);
    });
    await().atMost(2, TimeUnit.MINUTES).untilAsserted(() -> {
        HealthReport.HealthReportBuilder healthReportBuilder = HealthReport.builder();
        source.isAlive(healthReportBuilder);
        HealthReport build = healthReportBuilder.build();
        boolean ok = build.isOk();
        if (!ok) {
            build.getChannels().forEach(ci -> System.out.println(ci.getChannel() + " - " + ci.getMessage()));
        }
        assertTrue(ok);
    });
    source2.closeQuietly();
}
Also used : IntegerDeserializer(org.apache.kafka.common.serialization.IntegerDeserializer) Message(org.eclipse.microprofile.reactive.messaging.Message) KafkaSource(io.smallrye.reactive.messaging.kafka.impl.KafkaSource) HealthReport(io.smallrye.reactive.messaging.health.HealthReport) TopicPartition(org.apache.kafka.common.TopicPartition) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) KafkaConnectorIncomingConfiguration(io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration) MapBasedConfig(io.smallrye.reactive.messaging.test.common.config.MapBasedConfig) Test(org.junit.jupiter.api.Test)
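config1 and config2 above differ only in their random client.id. As a hedged refactoring sketch (the helper name and its non-static form are illustrative, not from the original test), the shared throttled settings could be built in one place:

// Hypothetical helper; reuses newCommonConfigForSource() exactly as the test does.
private MapBasedConfig throttledSourceConfig(String groupId) {
    return newCommonConfigForSource()
            .with("client.id", UUID.randomUUID().toString())
            .with("group.id", groupId)
            .with("value.deserializer", IntegerDeserializer.class.getName())
            .with("commit-strategy", "throttled")
            .with("throttled.unprocessed-record-max-age.ms", 1);
}

Each KafkaConnectorIncomingConfiguration could then be created from a fresh call to this helper, keeping a distinct client.id per source while sharing the group.id.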

Example 20 with KafkaConnectorIncomingConfiguration

Use of io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration in project smallrye-reactive-messaging by smallrye.

From class DeprecatedCommitStrategiesTest, method testThrottledStrategyWithManyRecords.

@Test
void testThrottledStrategyWithManyRecords() {
    MapBasedConfig config = commonConfiguration()
            .with("client.id", UUID.randomUUID().toString())
            .with("commit-strategy", "throttled")
            .with("auto.offset.reset", "earliest")
            .with("auto.commit.interval.ms", 100);
    String group = UUID.randomUUID().toString();
    source = new KafkaSource<>(vertx, group, new KafkaConnectorIncomingConfiguration(config),
            getConsumerRebalanceListeners(), CountKafkaCdiEvents.noCdiEvents,
            getDeserializationFailureHandlers(), -1);
    injectMockConsumer(source, consumer);
    List<Message<?>> list = new ArrayList<>();
    source.getStream().subscribe().with(list::add);
    TopicPartition p0 = new TopicPartition(TOPIC, 0);
    TopicPartition p1 = new TopicPartition(TOPIC, 1);
    Map<TopicPartition, Long> offsets = new HashMap<>();
    offsets.put(p0, 0L);
    offsets.put(p1, 5L);
    consumer.updateBeginningOffsets(offsets);
    consumer.schedulePollTask(() -> {
        consumer.rebalance(offsets.keySet());
        source.getCommitHandler().partitionsAssigned(offsets.keySet());
        for (int i = 0; i < 500; i++) {
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, i, "k", "v0-" + i));
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 1, i, "r", "v1-" + i));
        }
    });
    // Expected number of messages: 500 records from each partition, minus the 5 records of p1
    // (offsets 0..4) that fall below its beginning offset of 5.
    int expected = 500 * 2 - 5;
    await().until(() -> list.size() == expected);
    assertThat(list).hasSize(expected);
    list.forEach(m -> m.ack().toCompletableFuture().join());
    await().untilAsserted(() -> {
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(offsets.keySet());
        assertThat(committed.get(p0)).isNotNull();
        assertThat(committed.get(p0).offset()).isEqualTo(500);
        assertThat(committed.get(p1)).isNotNull();
        assertThat(committed.get(p1).offset()).isEqualTo(500);
    });
    consumer.schedulePollTask(() -> {
        for (int i = 0; i < 1000; i++) {
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 500 + i, "k", "v0-" + (500 + i)));
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 1, 500 + i, "k", "v1-" + (500 + i)));
        }
    });
    int expected2 = expected + 1000 * 2;
    await().until(() -> list.size() == expected2);
    list.forEach(m -> m.ack().toCompletableFuture().join());
    await().untilAsserted(() -> {
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(offsets.keySet());
        assertThat(committed.get(p0)).isNotNull();
        assertThat(committed.get(p0).offset()).isEqualTo(1500);
        assertThat(committed.get(p1)).isNotNull();
        assertThat(committed.get(p1).offset()).isEqualTo(1500);
    });
    List<String> payloads = list.stream().map(m -> (String) m.getPayload()).collect(Collectors.toList());
    for (int i = 0; i < 1500; i++) {
        assertThat(payloads).contains("v0-" + i);
    }
    for (int i = 5; i < 1500; i++) {
        assertThat(payloads).contains("v1-" + i);
    }
}
Also used : BeforeEach(org.junit.jupiter.api.BeforeEach) Arrays(java.util.Arrays) MockConsumer(org.apache.kafka.clients.consumer.MockConsumer) DeserializationFailureHandler(io.smallrye.reactive.messaging.kafka.DeserializationFailureHandler) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) UnsatisfiedResolutionException(javax.enterprise.inject.UnsatisfiedResolutionException) HealthReport(io.smallrye.reactive.messaging.health.HealthReport) HashMap(java.util.HashMap) OffsetResetStrategy(org.apache.kafka.clients.consumer.OffsetResetStrategy) AtomicReference(java.util.concurrent.atomic.AtomicReference) IncomingKafkaRecordMetadata(io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata) MapBasedConfig(io.smallrye.reactive.messaging.test.common.config.MapBasedConfig) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) Assertions.assertThatThrownBy(org.assertj.core.api.Assertions.assertThatThrownBy) StringDeserializer(org.apache.kafka.common.serialization.StringDeserializer) MockKafkaUtils.injectMockConsumer(io.smallrye.reactive.messaging.kafka.base.MockKafkaUtils.injectMockConsumer) TypeLiteral(javax.enterprise.util.TypeLiteral) Map(java.util.Map) CountKafkaCdiEvents(io.smallrye.reactive.messaging.kafka.CountKafkaCdiEvents) KafkaConnector(io.smallrye.reactive.messaging.kafka.KafkaConnector) WeldTestBase(io.smallrye.reactive.messaging.kafka.base.WeldTestBase) Named(javax.inject.Named) Instance(javax.enterprise.inject.Instance) Consumer(org.apache.kafka.clients.consumer.Consumer) TopicPartition(org.apache.kafka.common.TopicPartition) Awaitility.await(org.awaitility.Awaitility.await) Collection(java.util.Collection) DeploymentException(javax.enterprise.inject.spi.DeploymentException) UUID(java.util.UUID) Collectors(java.util.stream.Collectors) GlobalOpenTelemetry(io.opentelemetry.api.GlobalOpenTelemetry) Test(org.junit.jupiter.api.Test) List(java.util.List) Message(org.eclipse.microprofile.reactive.messaging.Message) AfterEach(org.junit.jupiter.api.AfterEach) KafkaConsumerRebalanceListener(io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) Vertx(io.vertx.mutiny.core.Vertx) OffsetAndMetadata(org.apache.kafka.clients.consumer.OffsetAndMetadata) LegacyMetadataTestUtils(io.smallrye.reactive.messaging.kafka.LegacyMetadataTestUtils) ApplicationScoped(javax.enterprise.context.ApplicationScoped) KafkaConnectorIncomingConfiguration(io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration) KafkaSource(io.smallrye.reactive.messaging.kafka.impl.KafkaSource) Collections(java.util.Collections)
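The DeprecatedCommitStrategiesTest examples run against a Kafka MockConsumer instead of a real broker. Judging from the MockConsumer and OffsetResetStrategy imports above, the consumer field is presumably prepared in a @BeforeEach roughly like the following sketch; the real setup, including the call to MockKafkaUtils.injectMockConsumer, may differ.

MockConsumer<String, String> consumer;

@BeforeEach
void setUpMockConsumer() {
    // A mock consumer that starts from the earliest offset, matching "auto.offset.reset" = "earliest" above.
    consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
}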

Aggregations

KafkaConnectorIncomingConfiguration (io.smallrye.reactive.messaging.kafka.KafkaConnectorIncomingConfiguration): 37 usages
Test (org.junit.jupiter.api.Test): 31 usages
Message (org.eclipse.microprofile.reactive.messaging.Message): 28 usages
ArrayList (java.util.ArrayList): 27 usages
MapBasedConfig (io.smallrye.reactive.messaging.test.common.config.MapBasedConfig): 20 usages
KafkaMapBasedConfig (io.smallrye.reactive.messaging.kafka.base.KafkaMapBasedConfig): 18 usages
CopyOnWriteArrayList (java.util.concurrent.CopyOnWriteArrayList): 17 usages
RecordHeader (org.apache.kafka.common.header.internals.RecordHeader): 17 usages
IncomingKafkaCloudEventMetadata (io.smallrye.reactive.messaging.kafka.IncomingKafkaCloudEventMetadata): 14 usages
TopicPartition (org.apache.kafka.common.TopicPartition): 13 usages
JsonObject (io.vertx.core.json.JsonObject): 12 usages
StringDeserializer (org.apache.kafka.common.serialization.StringDeserializer): 10 usages
OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata): 9 usages
KafkaSource (io.smallrye.reactive.messaging.kafka.impl.KafkaSource): 8 usages
HealthReport (io.smallrye.reactive.messaging.health.HealthReport): 7 usages
CountKafkaCdiEvents (io.smallrye.reactive.messaging.kafka.CountKafkaCdiEvents): 6 usages
IncomingKafkaRecord (io.smallrye.reactive.messaging.kafka.IncomingKafkaRecord): 6 usages
Collectors (java.util.stream.Collectors): 6 usages
IntegerDeserializer (org.apache.kafka.common.serialization.IntegerDeserializer): 6 usages
Assertions.assertThat (org.assertj.core.api.Assertions.assertThat): 6 usages