
Example 1 with IncomingKafkaRecordMetadata

use of io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata in project wildfly by wildfly.

the class ReactiveMessagingKafkaUserApiTestCase method testNoTopicConfiguredOverrideForAllMessages.

/*
     * The outgoing method doesn't have any topic configured. We test that we can successfully set the topic via metadata.
     */
@Test
public void testNoTopicConfiguredOverrideForAllMessages() throws InterruptedException {
    noTopicSetupOverrideForAllMessagesBean.getLatch().await(TIMEOUT, TimeUnit.MILLISECONDS);
    Map<Integer, IncomingKafkaRecordMetadata<String, Integer>> map4 = noTopicSetupOverrideForAllMessagesBean.getTesting4Metadatas();
    Map<Integer, IncomingKafkaRecordMetadata<String, Integer>> map5 = noTopicSetupOverrideForAllMessagesBean.getTesting5Metadatas();
    Assert.assertEquals(3, map4.size());
    Assert.assertEquals(3, map5.size());
    // Do some less in-depth checks here than in the testIncomingMetadata() method, focusing on what we have set
    for (int i = 1; i <= 6; i += 2) {
        IncomingKafkaRecordMetadata metadata = map4.get(i);
        Assert.assertNotNull(metadata);
        Assert.assertEquals("testing4", metadata.getTopic());
    }
    for (int i = 2; i <= 6; i += 2) {
        IncomingKafkaRecordMetadata metadata = map5.get(i);
        Assert.assertNotNull(metadata);
        Assert.assertEquals("testing5", metadata.getTopic());
    }
}
Also used : IncomingKafkaRecordMetadata(io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata) Test(org.junit.Test)
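
The producing bean for this test is not shown on this page. For orientation, a minimal sketch of what such an outgoing method could look like follows, assuming Mutiny and the SmallRye Kafka API: the channel name, the 1..6 range and the odd/even split between "testing4" and "testing5" are assumptions inferred from the assertions above, not the actual WildFly code.

import io.smallrye.mutiny.Multi;
import io.smallrye.reactive.messaging.kafka.api.OutgoingKafkaRecordMetadata;
import org.eclipse.microprofile.reactive.messaging.Message;
import org.eclipse.microprofile.reactive.messaging.Metadata;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

public class NoTopicConfiguredProducerSketch {

    // Hypothetical channel name; no mp.messaging.outgoing.<channel>.topic is configured for it,
    // so every message must name its topic via metadata.
    @Outgoing("no-topic-configured")
    public Multi<Message<Integer>> produce() {
        return Multi.createFrom().range(1, 7).map(i -> {
            // Assumed split: odd entries go to "testing4", even entries to "testing5"
            String topic = (i % 2 == 1) ? "testing4" : "testing5";
            OutgoingKafkaRecordMetadata<String> metadata = OutgoingKafkaRecordMetadata.<String>builder()
                    .withTopic(topic)
                    .build();
            return Message.of(i, Metadata.of(metadata));
        });
    }
}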

Example 2 with IncomingKafkaRecordMetadata

use of io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata in project smallrye-reactive-messaging by smallrye.

the class KafkaMetadataExample method metadata.

@SuppressWarnings("unchecked")
public void metadata() {
    Message<Double> incoming = Message.of(12.0);
    // <code>
    IncomingKafkaRecordMetadata<String, Double> metadata = incoming.getMetadata(IncomingKafkaRecordMetadata.class).orElse(null);
    if (metadata != null) {
        // The topic
        String topic = metadata.getTopic();
        // The key
        String key = metadata.getKey();
        // The timestamp
        Instant timestamp = metadata.getTimestamp();
        // The underlying record
        ConsumerRecord<String, Double> record = metadata.getRecord();
    // ...
    }
// </code>
}
Also used : IncomingKafkaRecordMetadata(io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata) Instant(java.time.Instant)
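
In an application the same metadata lookup usually happens inside an @Incoming method rather than on a hand-built message. A minimal sketch follows; the channel name "prices" and the log line are assumptions, only the metadata accessors come from the example above.

import java.util.concurrent.CompletionStage;
import io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Message;

public class KafkaMetadataConsumerSketch {

    @Incoming("prices") // hypothetical channel name
    public CompletionStage<Void> consume(Message<Double> message) {
        message.getMetadata(IncomingKafkaRecordMetadata.class).ifPresent(metadata ->
                // topic-partition@offset of the underlying Kafka record, plus the record key
                System.out.printf("received %s-%d@%d key=%s%n",
                        metadata.getTopic(), metadata.getPartition(),
                        metadata.getOffset(), metadata.getKey()));
        return message.ack();
    }
}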

Example 3 with IncomingKafkaRecordMetadata

use of io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata in project wildfly by wildfly.

the class ReactiveMessagingKafkaUserApiTestCase method checkSpecifiedPartitionMetadatas.

private void checkSpecifiedPartitionMetadatas(Map<Integer, IncomingKafkaRecordMetadata<String, Integer>> unspecifiedPartitions, Map<Integer, IncomingKafkaRecordMetadata<String, Integer>> specifiedPartitions, int expectedSpecifiedPartition) {
    Assert.assertEquals(10, unspecifiedPartitions.size());
    Assert.assertEquals(10, specifiedPartitions.size());
    Set<Integer> partitionsSeen6 = new HashSet<>();
    for (int i = 1; i <= 10; i++) {
        IncomingKafkaRecordMetadata metadata = unspecifiedPartitions.get(i);
        Assert.assertNotNull(metadata);
        partitionsSeen6.add(metadata.getPartition());
    }
    // The partitioner spreads these records over the two partitions that appear to be created.
    // I have not found a way to control how many partitions the embedded server sets up
    // (currently there are two), so if this check becomes problematic it can be removed.
    Assert.assertTrue(partitionsSeen6.toString(), partitionsSeen6.size() > 1);
    for (int i = 11; i <= 20; i++) {
        IncomingKafkaRecordMetadata metadata = specifiedPartitions.get(i);
        Assert.assertNotNull(metadata);
        Assert.assertEquals(expectedSpecifiedPartition, metadata.getPartition());
    }
}
Also used : IncomingKafkaRecordMetadata(io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata) HashSet(java.util.HashSet)
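
The "specified partition" half of this check relies on the producer pinning records to one partition via outgoing metadata. A minimal sketch under assumed names; OutgoingKafkaRecordMetadata.builder().withPartition(...) is the relevant API, while the channel name, ranges and target partition are assumptions.

import io.smallrye.mutiny.Multi;
import io.smallrye.reactive.messaging.kafka.api.OutgoingKafkaRecordMetadata;
import org.eclipse.microprofile.reactive.messaging.Message;
import org.eclipse.microprofile.reactive.messaging.Metadata;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

public class SpecifiedPartitionProducerSketch {

    // Assumed target partition; the test only asserts that all "specified" records land on the same partition
    private static final int TARGET_PARTITION = 0;

    @Outgoing("specified-partition") // hypothetical channel name
    public Multi<Message<Integer>> produceSpecified() {
        // Entries 11..20 are pinned to one partition; entries 1..10 (not shown) would carry no
        // partition metadata and let the default partitioner spread them.
        return Multi.createFrom().range(11, 21).map(i ->
                Message.of(i, Metadata.of(OutgoingKafkaRecordMetadata.<String>builder()
                        .withPartition(TARGET_PARTITION)
                        .build())));
    }
}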

Example 4 with IncomingKafkaRecordMetadata

use of io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata in project wildfly by wildfly.

the class ReactiveMessagingKafkaUserApiTestCase method testOverrideDefaultTopicWhenOneIsSetInTheConfig.

/*
     * The outgoing method is configured to send to a given topic. We test that we can successfully override that topic via metadata.
     */
@Test
public void testOverrideDefaultTopicWhenOneIsSetInTheConfig() throws InterruptedException {
    configuredToSendToTopicAndOverrideTopicForSomeMessagesBean.getLatch().await(TIMEOUT, TimeUnit.MILLISECONDS);
    Map<Integer, IncomingKafkaRecordMetadata<String, Integer>> map2 = configuredToSendToTopicAndOverrideTopicForSomeMessagesBean.getTesting2Metadatas();
    Map<Integer, IncomingKafkaRecordMetadata<String, Integer>> map3 = configuredToSendToTopicAndOverrideTopicForSomeMessagesBean.getTesting3Metadatas();
    Assert.assertEquals(2, map2.size());
    Assert.assertEquals(2, map3.size());
    // Do some less in-depth checks here than in the testIncomingMetadata() method, focusing on what we have set
    for (int i = 1; i <= 2; i++) {
        IncomingKafkaRecordMetadata metadata = map2.get(i);
        Assert.assertNotNull(metadata);
        Assert.assertEquals("testing2", metadata.getTopic());
    }
    for (int i = 3; i <= 4; i++) {
        IncomingKafkaRecordMetadata metadata = map3.get(i);
        Assert.assertNotNull(metadata);
        Assert.assertEquals("testing3", metadata.getTopic());
    }
}
Also used : IncomingKafkaRecordMetadata(io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata) Test(org.junit.Test)
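
Here the channel already has a default topic in configuration, and the metadata overrides it for some messages only. A minimal sketch with an assumed channel name and assumed configuration keys shown as comments: entries 1 and 2 keep the configured topic "testing2", entries 3 and 4 override it to "testing3", matching the assertions above.

import io.smallrye.mutiny.Multi;
import io.smallrye.reactive.messaging.kafka.api.OutgoingKafkaRecordMetadata;
import org.eclipse.microprofile.reactive.messaging.Message;
import org.eclipse.microprofile.reactive.messaging.Metadata;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

public class OverrideConfiguredTopicProducerSketch {

    // Assumed channel configuration, e.g. in microprofile-config.properties:
    //   mp.messaging.outgoing.configured-topic.connector=smallrye-kafka
    //   mp.messaging.outgoing.configured-topic.topic=testing2
    @Outgoing("configured-topic")
    public Multi<Message<Integer>> produce() {
        return Multi.createFrom().range(1, 5).map(i -> {
            if (i <= 2) {
                // No metadata: these records go to the configured default topic "testing2"
                return Message.of(i);
            }
            // Metadata overrides the configured topic for entries 3 and 4
            return Message.of(i, Metadata.of(OutgoingKafkaRecordMetadata.<String>builder()
                    .withTopic("testing3")
                    .build()));
        });
    }
}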

Example 5 with IncomingKafkaRecordMetadata

use of io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata in project wildfly by wildfly.

the class ReactiveMessagingKafkaUserApiTestCase method testOutgoingAndIncomingMetadataExtensively.

/*
     * This tests that:
     * - incoming metadata is set (also for entry 6, which did not set any metadata) and contains the topic
     * - the key is propagated from what was set in the outgoing metadata, and may be null when not set
     * - headers are propagated if set in the outgoing metadata
     * - offsets are unique per partition
     * - the timestamp and timestamp type are set, and the timestamp matches if we set it ourselves in the outgoing metadata
     */
@Test
public void testOutgoingAndIncomingMetadataExtensively() throws InterruptedException {
    inDepthMetadataBean.getLatch().await(TIMEOUT, TimeUnit.MILLISECONDS);
    Map<Integer, IncomingKafkaRecordMetadata<String, Integer>> map = inDepthMetadataBean.getMetadatas();
    Assert.assertEquals(6, map.size());
    Map<Integer, Set<Long>> offsetsByPartition = new HashMap<>();
    for (int i = 1; i <= 6; i++) {
        IncomingKafkaRecordMetadata metadata = map.get(i);
        Assert.assertNotNull(metadata);
        if (i != 6) {
            Assert.assertEquals("KEY-" + i, metadata.getKey());
        } else {
            Assert.assertNull(metadata.getKey());
        }
        Assert.assertEquals("testing1", metadata.getTopic());
        Set<Long> offsets = offsetsByPartition.get(metadata.getPartition());
        if (offsets == null) {
            offsets = new HashSet<>();
            offsetsByPartition.put(metadata.getPartition(), offsets);
        }
        offsets.add(metadata.getOffset());
        Assert.assertNotNull(metadata.getTimestamp());
        if (i == 5) {
            Assert.assertEquals(inDepthMetadataBean.getTimestampEntry5Topic1(), metadata.getTimestamp());
        }
        Assert.assertEquals(TimestampType.CREATE_TIME, metadata.getTimestampType());
        Assert.assertNotNull(metadata.getRecord());
        Headers headers = metadata.getHeaders();
        if (i != 5) {
            Assert.assertEquals(0, headers.toArray().length);
        } else {
            Assert.assertEquals(1, headers.toArray().length);
            Header header = headers.toArray()[0];
            Assert.assertEquals("simple", header.key());
            Assert.assertArrayEquals(new byte[] { 0, 1, 2 }, header.value());
        }
    }
    Assert.assertEquals(6, checkOffsetsByPartitionAndCalculateTotalEntries(offsetsByPartition));
}
Also used : IncomingKafkaRecordMetadata(io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata) HashSet(java.util.HashSet) Set(java.util.Set) Header(org.apache.kafka.common.header.Header) HashMap(java.util.HashMap) Headers(org.apache.kafka.common.header.Headers) Test(org.junit.Test)
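
The outgoing metadata that drives these assertions carries the key, a header and an explicit timestamp. Below is a minimal sketch of how the entry-5 record could be built; the class name, method name and the Instant parameter are assumptions, while the builder calls, header name and value mirror what the test checks.

import java.time.Instant;
import io.smallrye.reactive.messaging.kafka.api.OutgoingKafkaRecordMetadata;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.eclipse.microprofile.reactive.messaging.Message;
import org.eclipse.microprofile.reactive.messaging.Metadata;

public class InDepthMetadataEntrySketch {

    public Message<Integer> entry5(Instant timestampEntry5Topic1) {
        OutgoingKafkaRecordMetadata<String> metadata = OutgoingKafkaRecordMetadata.<String>builder()
                // Propagated to IncomingKafkaRecordMetadata.getKey() on the consuming side
                .withKey("KEY-5")
                // Arrives as the single header "simple" with value {0, 1, 2}
                .withHeaders(new RecordHeaders().add("simple", new byte[] { 0, 1, 2 }))
                // Explicit timestamp; the test asserts the incoming timestamp matches it
                .withTimestamp(timestampEntry5Topic1)
                .build();
        return Message.of(5, Metadata.of(metadata));
    }
}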

Aggregations

IncomingKafkaRecordMetadata (io.smallrye.reactive.messaging.kafka.api.IncomingKafkaRecordMetadata): 9
Message (org.eclipse.microprofile.reactive.messaging.Message): 3
Test (org.junit.Test): 3
Test (org.junit.jupiter.api.Test): 3
HealthReport (io.smallrye.reactive.messaging.health.HealthReport): 2
MapBasedConfig (io.smallrye.reactive.messaging.test.common.config.MapBasedConfig): 2
Instant (java.time.Instant): 2
HashMap (java.util.HashMap): 2
HashSet (java.util.HashSet): 2
AtomicReference (java.util.concurrent.atomic.AtomicReference): 2
UnsatisfiedResolutionException (javax.enterprise.inject.UnsatisfiedResolutionException): 2
TopicPartition (org.apache.kafka.common.TopicPartition): 2
Header (org.apache.kafka.common.header.Header): 2
Headers (org.apache.kafka.common.header.Headers): 2
RepeatedTest (org.junit.jupiter.api.RepeatedTest): 2
CloudEventMetadata (io.smallrye.reactive.messaging.ce.CloudEventMetadata): 1
DefaultCloudEventMetadataBuilder (io.smallrye.reactive.messaging.ce.DefaultCloudEventMetadataBuilder): 1
OutgoingCloudEventMetadata (io.smallrye.reactive.messaging.ce.OutgoingCloudEventMetadata): 1
BaseCloudEventMetadata (io.smallrye.reactive.messaging.ce.impl.BaseCloudEventMetadata): 1
DefaultIncomingCloudEventMetadata (io.smallrye.reactive.messaging.ce.impl.DefaultIncomingCloudEventMetadata): 1