
Example 6 with RecordHeader

Use of org.apache.kafka.common.header.internals.RecordHeader in the project apache-kafka-on-k8s by banzaicloud.

From the class ProducerBatchTest, method testSplitPreservesHeaders:

@Test
public void testSplitPreservesHeaders() {
    for (CompressionType compressionType : CompressionType.values()) {
        MemoryRecordsBuilder builder = MemoryRecords.builder(ByteBuffer.allocate(1024), MAGIC_VALUE_V2, compressionType, TimestampType.CREATE_TIME, 0L);
        ProducerBatch batch = new ProducerBatch(new TopicPartition("topic", 1), builder, now);
        Header header = new RecordHeader("header-key", "header-value".getBytes());
        // Fill the batch until tryAppend reports it is full (returns null).
        while (true) {
            FutureRecordMetadata future = batch.tryAppend(now, "hi".getBytes(), "there".getBytes(), new Header[] { header }, null, now);
            if (future == null) {
                break;
            }
        }
        // Split the full batch into smaller batches and verify that every
        // record still carries the original header unchanged.
        Deque<ProducerBatch> batches = batch.split(200);
        assertTrue("This batch should be split to multiple small batches.", batches.size() >= 2);
        for (ProducerBatch splitProducerBatch : batches) {
            for (RecordBatch splitBatch : splitProducerBatch.records().batches()) {
                for (Record record : splitBatch) {
                    assertTrue("Header size should be 1.", record.headers().length == 1);
                    assertTrue("Header key should be 'header-key'.", record.headers()[0].key().equals("header-key"));
                    assertTrue("Header value should be 'header-value'.", new String(record.headers()[0].value()).equals("header-value"));
                }
            }
        }
    }
}
Also used: RecordHeader (org.apache.kafka.common.header.internals.RecordHeader), Header (org.apache.kafka.common.header.Header), TopicPartition (org.apache.kafka.common.TopicPartition), RecordBatch (org.apache.kafka.common.record.RecordBatch), MemoryRecordsBuilder (org.apache.kafka.common.record.MemoryRecordsBuilder), Record (org.apache.kafka.common.record.Record), LegacyRecord (org.apache.kafka.common.record.LegacyRecord), CompressionType (org.apache.kafka.common.record.CompressionType), Test (org.junit.Test)
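The split test above asserts that every record in every split batch keeps exactly one header with the original key and byte-for-byte equal value. A minimal standalone sketch of that verification logic, with a hypothetical SimpleHeader record standing in for Kafka's Header (no kafka-clients dependency assumed):

```java
import java.util.Arrays;
import java.util.List;

public class HeaderPreservationSketch {
    // Hypothetical stand-in for org.apache.kafka.common.header.Header.
    record SimpleHeader(String key, byte[] value) {}

    // Mirrors the test's assertions: every record in every split batch
    // carries exactly one header with the expected key and value bytes.
    static boolean headersPreserved(List<List<SimpleHeader>> records, String key, byte[] value) {
        for (List<SimpleHeader> recordHeaders : records) {
            if (recordHeaders.size() != 1) return false;
            SimpleHeader h = recordHeaders.get(0);
            if (!h.key().equals(key) || !Arrays.equals(h.value(), value)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] value = "header-value".getBytes();
        // Two "split" records, each with the one original header.
        List<List<SimpleHeader>> split = List.of(
                List.of(new SimpleHeader("header-key", value)),
                List.of(new SimpleHeader("header-key", value)));
        System.out.println(headersPreserved(split, "header-key", value)); // prints "true"
    }
}
```

Note that header values are raw byte arrays, so equality must be checked with Arrays.equals, exactly as the test does via the new String(...) round-trip.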

Example 7 with RecordHeader

Use of org.apache.kafka.common.header.internals.RecordHeader in the project apache-kafka-on-k8s by banzaicloud.

From the class KafkaProducerTest, method testHeaders:

@PrepareOnlyThisForTest(Metadata.class)
@Test
public void testHeaders() throws Exception {
    Properties props = new Properties();
    props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9999");
    ExtendedSerializer keySerializer = PowerMock.createNiceMock(ExtendedSerializer.class);
    ExtendedSerializer valueSerializer = PowerMock.createNiceMock(ExtendedSerializer.class);
    KafkaProducer<String, String> producer = new KafkaProducer<>(props, keySerializer, valueSerializer);
    // Inject mocked metadata so send() can resolve a partition without a broker.
    Metadata metadata = PowerMock.createNiceMock(Metadata.class);
    MemberModifier.field(KafkaProducer.class, "metadata").set(producer, metadata);
    String topic = "topic";
    final Cluster cluster = new Cluster("dummy", Collections.singletonList(new Node(0, "host1", 1000)), Arrays.asList(new PartitionInfo(topic, 0, null, null, null)), Collections.<String>emptySet(), Collections.<String>emptySet());
    EasyMock.expect(metadata.fetch()).andReturn(cluster).anyTimes();
    PowerMock.replay(metadata);
    String value = "value";
    ProducerRecord<String, String> record = new ProducerRecord<>(topic, value);
    // The serializers are expected to receive the record's headers.
    EasyMock.expect(keySerializer.serialize(topic, record.headers(), null)).andReturn(null).once();
    EasyMock.expect(valueSerializer.serialize(topic, record.headers(), value)).andReturn(value.getBytes()).once();
    PowerMock.replay(keySerializer);
    PowerMock.replay(valueSerializer);
    // Ensure headers can be mutated before send.
    record.headers().add(new RecordHeader("test", "header2".getBytes()));
    producer.send(record, null);
    // Ensure headers are closed and cannot be mutated after send.
    try {
        record.headers().add(new RecordHeader("test", "test".getBytes()));
        fail("Expected IllegalStateException to be raised");
    } catch (IllegalStateException ise) {
        // expected
    }
    // Ensure existing headers are unchanged: the last header for the key is still the original value.
    assertTrue(Arrays.equals(record.headers().lastHeader("test").value(), "header2".getBytes()));
    PowerMock.verify(valueSerializer);
    PowerMock.verify(keySerializer);
}
Also used: Node (org.apache.kafka.common.Node), Metadata (org.apache.kafka.clients.Metadata), Cluster (org.apache.kafka.common.Cluster), Properties (java.util.Properties), ExtendedSerializer (org.apache.kafka.common.serialization.ExtendedSerializer), PartitionInfo (org.apache.kafka.common.PartitionInfo), RecordHeader (org.apache.kafka.common.header.internals.RecordHeader), PrepareOnlyThisForTest (org.powermock.core.classloader.annotations.PrepareOnlyThisForTest), Test (org.junit.Test)
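The final assertion in the test above relies on lastHeader semantics: when several headers share a key, the most recently added one wins. A minimal standalone sketch of that lookup, using a hypothetical SimpleHeader record in place of Kafka's Headers implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class LastHeaderSketch {
    // Hypothetical stand-in for org.apache.kafka.common.header.Header.
    record SimpleHeader(String key, byte[] value) {}

    // Sketch of lastHeader(key): scan from the end so the most
    // recently added header for the key is returned.
    static SimpleHeader lastHeader(List<SimpleHeader> headers, String key) {
        for (int i = headers.size() - 1; i >= 0; i--) {
            if (headers.get(i).key().equals(key)) {
                return headers.get(i);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<SimpleHeader> headers = new ArrayList<>();
        headers.add(new SimpleHeader("test", "header2".getBytes()));
        headers.add(new SimpleHeader("other", "x".getBytes()));
        System.out.println(new String(lastHeader(headers, "test").value())); // prints "header2"
    }
}
```

This mirrors what the test asserts: after the post-send mutation is rejected, lastHeader("test") still resolves to the original "header2" value.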

Aggregations

RecordHeader (org.apache.kafka.common.header.internals.RecordHeader): 7 usages
Header (org.apache.kafka.common.header.Header): 6 usages
Test (org.junit.Test): 6 usages
ByteBuffer (java.nio.ByteBuffer): 4 usages
DataOutputStream (java.io.DataOutputStream): 3 usages
ByteBufferOutputStream (org.apache.kafka.common.utils.ByteBufferOutputStream): 3 usages
MemoryRecordsBuilder (org.apache.kafka.common.record.MemoryRecordsBuilder): 2 usages
Properties (java.util.Properties): 1 usage
Metadata (org.apache.kafka.clients.Metadata): 1 usage
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 1 usage
Cluster (org.apache.kafka.common.Cluster): 1 usage
Node (org.apache.kafka.common.Node): 1 usage
PartitionInfo (org.apache.kafka.common.PartitionInfo): 1 usage
TopicPartition (org.apache.kafka.common.TopicPartition): 1 usage
Metrics (org.apache.kafka.common.metrics.Metrics): 1 usage
CompressionType (org.apache.kafka.common.record.CompressionType): 1 usage
LegacyRecord (org.apache.kafka.common.record.LegacyRecord): 1 usage
MemoryRecords (org.apache.kafka.common.record.MemoryRecords): 1 usage
Record (org.apache.kafka.common.record.Record): 1 usage
RecordBatch (org.apache.kafka.common.record.RecordBatch): 1 usage