
Example 1 with CompressionCodecFactory

Use of com.rabbitmq.stream.compression.CompressionCodecFactory in the project rabbitmq-stream-java-client by rabbitmq.

From the class SubEntryBatchingTest, method publishConsumeCompressedMessages:

@ParameterizedTest
@MethodSource("compressionCodecFactories")
void publishConsumeCompressedMessages(CompressionCodecFactory compressionCodecFactory, TestInfo info) {
    Map<Compression, Integer> compressionToReadBytes = new HashMap<>();
    for (Compression compression : Compression.values()) {
        int batchCount = 100;
        int messagesInBatch = 30;
        int messageCount = batchCount * messagesInBatch;
        CountDownLatch publishLatch = new CountDownLatch(batchCount);
        Client publisher =
            cf.get(
                new ClientParameters()
                    .compressionCodecFactory(compressionCodecFactory)
                    .publishConfirmListener(
                        (publisherId, publishingId) -> publishLatch.countDown()));
        String s = TestUtils.streamName(info) + "_" + compression.name();
        try {
            Response response = publisher.create(s);
            assertThat(response.isOk()).isTrue();
            response = publisher.declarePublisher(b(0), null, s);
            assertThat(response.isOk()).isTrue();
            Set<String> publishedBodies = ConcurrentHashMap.newKeySet(messageCount);
            IntStream.range(0, batchCount).forEach(batchIndex -> {
                MessageBatch messageBatch = new MessageBatch(compression);
                IntStream.range(0, messagesInBatch).forEach(messageIndex -> {
                    String body = "batch " + batchIndex + " message " + messageIndex;
                    messageBatch.add(publisher.messageBuilder().addData(body.getBytes(UTF8)).build());
                    publishedBodies.add(body);
                });
                publisher.publishBatches(b(0), Collections.singletonList(messageBatch));
            });
            assertThat(latchAssert(publishLatch)).completes();
            Set<String> consumedBodies = ConcurrentHashMap.newKeySet(batchCount * messagesInBatch);
            CountDownLatch consumeLatch = new CountDownLatch(batchCount * messagesInBatch);
            CountMetricsCollector metricsCollector = new CountMetricsCollector();
            Client consumer =
                cf.get(
                    new ClientParameters()
                        .compressionCodecFactory(compressionCodecFactory)
                        .chunkListener(
                            (client, subscriptionId, offset, messageCount1, dataSize) ->
                                client.credit(subscriptionId, 1))
                        .messageListener(
                            (subscriptionId, offset, chunkTimestamp, message) -> {
                                consumedBodies.add(new String(message.getBodyAsBinary(), UTF8));
                                consumeLatch.countDown();
                            })
                        .metricsCollector(metricsCollector));
            response = consumer.subscribe(b(1), s, OffsetSpecification.first(), 2);
            assertThat(response.isOk()).isTrue();
            assertThat(latchAssert(consumeLatch)).completes();
            assertThat(consumedBodies).hasSize(messageCount).hasSameSizeAs(publishedBodies);
            publishedBodies.forEach(publishedBody -> assertThat(consumedBodies.contains(publishedBody)).isTrue());
            compressionToReadBytes.put(compression, metricsCollector.readBytes.get());
        } finally {
            Response response = publisher.delete(s);
            assertThat(response.isOk()).isTrue();
        }
    }
    int plainReadBytes = compressionToReadBytes.get(Compression.NONE);
    Arrays.stream(Compression.values()).filter(comp -> comp != Compression.NONE).forEach(compression -> {
        assertThat(compressionToReadBytes.get(compression)).isLessThan(plainReadBytes);
    });
}
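
The @MethodSource("compressionCodecFactories") annotation points to a provider method in the same test class, whose body is not shown here. A minimal sketch of what it is assumed to look like, given that both DefaultCompressionCodecFactory and CommonsCompressCompressionCodecFactory appear in the imports, is:

// Hypothetical sketch of the @MethodSource provider assumed above; the real method
// may differ, but it presumably yields the two built-in factory implementations so
// the test runs once per implementation.
static Stream<CompressionCodecFactory> compressionCodecFactories() {
    return Stream.of(
        new DefaultCompressionCodecFactory(),           // codecs bundled with the client
        new CommonsCompressCompressionCodecFactory());  // codecs backed by Apache Commons Compress
}

For each factory, the test exercises every Compression value and records the metrics collector's read-byte count, then asserts that every compressed variant moves fewer bytes over the wire than Compression.NONE.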

Example 2 with CompressionCodecFactory

Use of com.rabbitmq.stream.compression.CompressionCodecFactory in the project rabbitmq-stream-java-client by rabbitmq.

From the class SubEntryBatchingTest, method subEntriesCompressedWithDifferentCompressionsShouldBeReadCorrectly:

@Test
void subEntriesCompressedWithDifferentCompressionsShouldBeReadCorrectly() {
    List<CompressionCodecFactory> compressionCodecFactories = compressionCodecFactories().collect(Collectors.toList());
    int batchCount = compressionCodecFactories.size() * Compression.values().length;
    int messagesInBatch = 30;
    int messageCount = batchCount * messagesInBatch;
    AtomicInteger messageIndex = new AtomicInteger(0);
    CountDownLatch publishLatch = new CountDownLatch(batchCount);
    Set<String> publishedBodies = ConcurrentHashMap.newKeySet(messageCount);
    compressionCodecFactories.forEach(compressionCodecFactory -> {
        Client publisher =
            cf.get(
                new ClientParameters()
                    .compressionCodecFactory(compressionCodecFactory)
                    .publishConfirmListener(
                        (publisherId, publishingId) -> publishLatch.countDown()));
        Response response = publisher.declarePublisher(b(0), null, stream);
        assertThat(response.isOk()).isTrue();
        for (Compression compression : Compression.values()) {
            MessageBatch messageBatch = new MessageBatch(compression);
            IntStream.range(0, messagesInBatch).forEach(i -> {
                String body = "compression " + compression.name() + " message " + messageIndex.getAndIncrement();
                messageBatch.add(publisher.messageBuilder().addData(body.getBytes(UTF8)).build());
                publishedBodies.add(body);
            });
            publisher.publishBatches(b(0), Collections.singletonList(messageBatch));
        }
    });
    assertThat(latchAssert(publishLatch)).completes();
    compressionCodecFactories.forEach(compressionCodecFactory -> {
        CountDownLatch consumeLatch = new CountDownLatch(messageCount);
        Set<String> consumedBodies = ConcurrentHashMap.newKeySet(messageCount);
        Client consumer =
            cf.get(
                new ClientParameters()
                    .compressionCodecFactory(compressionCodecFactory)
                    .chunkListener(
                        (client, subscriptionId, offset, messageCount1, dataSize) ->
                            client.credit(subscriptionId, 1))
                    .messageListener(
                        (subscriptionId, offset, chunkTimestamp, message) -> {
                            consumedBodies.add(new String(message.getBodyAsBinary(), UTF8));
                            consumeLatch.countDown();
                        }));
        Response response = consumer.subscribe(b(1), stream, OffsetSpecification.first(), 2);
        assertThat(response.isOk()).isTrue();
        assertThat(latchAssert(consumeLatch)).completes();
        assertThat(consumedBodies).hasSize(messageCount).hasSameSizeAs(publishedBodies);
        publishedBodies.forEach(publishBody -> assertThat(consumedBodies.contains(publishBody)).isTrue());
    });
}
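
Both examples drive the low-level Client API directly, which is what the test suite is written against. Below is a minimal sketch of how an application would enable the same sub-entry batching and compression through the higher-level builders, assuming EnvironmentBuilder#compressionCodecFactory, ProducerBuilder#subEntrySize, and ProducerBuilder#compression; the stream name, sub-entry size, and codec are illustrative.

import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.Producer;
import com.rabbitmq.stream.compression.Compression;
import com.rabbitmq.stream.compression.DefaultCompressionCodecFactory;
import java.nio.charset.StandardCharsets;

public class SubEntryBatchingSketch {

    public static void main(String[] args) {
        // Sketch only: environment-level codec factory, mirroring
        // ClientParameters#compressionCodecFactory in the tests above.
        try (Environment environment = Environment.builder()
                .compressionCodecFactory(new DefaultCompressionCodecFactory())
                .build()) {
            Producer producer = environment.producerBuilder()
                .stream("my-stream")            // illustrative stream name
                .subEntrySize(30)               // messages grouped per sub-entry, as in the tests
                .compression(Compression.GZIP)  // codec applied to each sub-entry
                .build();
            producer.send(
                producer.messageBuilder()
                    .addData("hello".getBytes(StandardCharsets.UTF_8))
                    .build(),
                confirmationStatus -> { /* handle publish confirmation */ });
        }
    }
}

With sub-entry batching enabled on the producer, outbound messages are grouped and compressed client-side, which is the behavior the tests above verify at the wire level.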

Aggregations

Each of the following types appears in both examples above:

OffsetSpecification (com.rabbitmq.stream.OffsetSpecification)
CommonsCompressCompressionCodecFactory (com.rabbitmq.stream.compression.CommonsCompressCompressionCodecFactory)
Compression (com.rabbitmq.stream.compression.Compression)
CompressionCodecFactory (com.rabbitmq.stream.compression.CompressionCodecFactory)
DefaultCompressionCodecFactory (com.rabbitmq.stream.compression.DefaultCompressionCodecFactory)
ClientParameters (com.rabbitmq.stream.impl.Client.ClientParameters)
Response (com.rabbitmq.stream.impl.Client.Response)
TestUtils.b (com.rabbitmq.stream.impl.TestUtils.b)
TestUtils.latchAssert (com.rabbitmq.stream.impl.TestUtils.latchAssert)
MetricsCollector (com.rabbitmq.stream.metrics.MetricsCollector)
Charset (java.nio.charset.Charset)
StandardCharsets (java.nio.charset.StandardCharsets)
Arrays (java.util.Arrays)
Collections (java.util.Collections)
HashMap (java.util.HashMap)
List (java.util.List)
Map (java.util.Map)
Set (java.util.Set)
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap)
CountDownLatch (java.util.concurrent.CountDownLatch)