
Example 1 with Header

Use of org.apache.kafka.connect.header.Header in the apache-kafka-on-k8s project by banzaicloud.

From the class WorkerSourceTask, method convertHeaderFor:

private RecordHeaders convertHeaderFor(SourceRecord record) {
    Headers headers = record.headers();
    RecordHeaders result = new RecordHeaders();
    if (headers != null) {
        String topic = record.topic();
        for (Header header : headers) {
            String key = header.key();
            // Serialize each Connect header value to raw bytes for the producer record
            byte[] rawHeader = headerConverter.fromConnectHeader(topic, key, header.schema(), header.value());
            result.add(key, rawHeader);
        }
    }
    return result;
}
Also used : Header(org.apache.kafka.connect.header.Header) Headers(org.apache.kafka.connect.header.Headers) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders)
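The pattern above is a convert-and-copy loop: iterate the record's Connect headers, serialize each value with the HeaderConverter, and collect the raw bytes under the same key. A minimal stdlib-only sketch of that shape, with a hypothetical ValueSerializer interface standing in for Kafka's HeaderConverter and a plain Map standing in for the Headers collection:

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderCopySketch {

    // Stand-in for Kafka's HeaderConverter: turns one header value into raw bytes.
    interface ValueSerializer {
        byte[] toBytes(String topic, String key, Object value);
    }

    // Mirrors the convertHeaderFor shape: walk the source headers, serialize
    // each value, and collect the results keyed by header name.
    static Map<String, byte[]> convertHeaders(String topic,
                                              Map<String, Object> headers,
                                              ValueSerializer serializer) {
        Map<String, byte[]> result = new LinkedHashMap<>();
        if (headers != null) {
            for (Map.Entry<String, Object> header : headers.entrySet()) {
                result.put(header.getKey(),
                           serializer.toBytes(topic, header.getKey(), header.getValue()));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> headers = new LinkedHashMap<>();
        headers.put("trace-id", "abc123");
        Map<String, byte[]> raw = convertHeaders("orders", headers,
                (topic, key, value) -> String.valueOf(value).getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(raw.get("trace-id"), StandardCharsets.UTF_8));
    }
}
```

As in the original, a null headers collection yields an empty result rather than a null or an exception.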

Example 2 with Header

Use of org.apache.kafka.connect.header.Header in the apache-kafka-on-k8s project by banzaicloud.

From the class SourceRecordTest, method shouldModifyRecordHeader:

@Test
public void shouldModifyRecordHeader() {
    assertTrue(record.headers().isEmpty());
    record.headers().addInt("intHeader", 100);
    assertEquals(1, record.headers().size());
    Header header = record.headers().lastWithName("intHeader");
    assertEquals(100, (int) Values.convertToInteger(header.schema(), header.value()));
}
Also used : Header(org.apache.kafka.connect.header.Header) Test(org.junit.Test)
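The test relies on lastWithName, which matters because Connect's Headers keeps insertion order and permits multiple headers with the same name; lastWithName returns the most recently added one. A stdlib-only sketch of that lookup semantics (LastWithNameSketch and its entry list are illustrative stand-ins, not the real Headers implementation):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LastWithNameSketch {

    // Scan from the tail so the most recently added header with this name wins,
    // matching the lastWithName contract on Connect's Headers.
    static Object lastWithName(List<Map.Entry<String, Object>> headers, String name) {
        for (int i = headers.size() - 1; i >= 0; i--) {
            if (headers.get(i).getKey().equals(name)) {
                return headers.get(i).getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Object>> headers = new ArrayList<>();
        headers.add(new SimpleEntry<>("intHeader", 100));
        headers.add(new SimpleEntry<>("intHeader", 200)); // duplicate name: later value wins
        System.out.println(lastWithName(headers, "intHeader"));
    }
}
```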

Example 3 with Header

Use of org.apache.kafka.connect.header.Header in the kafka project by apache.

From the class WorkerErrantRecordReporter, method report:

@Override
public Future<Void> report(SinkRecord record, Throwable error) {
    ConsumerRecord<byte[], byte[]> consumerRecord;
    // report modified or new records, so handle both cases
    if (record instanceof InternalSinkRecord) {
        consumerRecord = ((InternalSinkRecord) record).originalRecord();
    } else {
        // Generate a new consumer record from the modified sink record. We prefer
        // to send the original consumer record (pre-transformed) to the DLQ,
        // but in this case we don't have one and send the potentially transformed
        // record instead
        String topic = record.topic();
        byte[] key = keyConverter.fromConnectData(topic, record.keySchema(), record.key());
        byte[] value = valueConverter.fromConnectData(topic, record.valueSchema(), record.value());
        RecordHeaders headers = new RecordHeaders();
        if (record.headers() != null) {
            for (Header header : record.headers()) {
                String headerKey = header.key();
                byte[] rawHeader = headerConverter.fromConnectHeader(topic, headerKey, header.schema(), header.value());
                headers.add(headerKey, rawHeader);
            }
        }
        int keyLength = key != null ? key.length : -1;
        int valLength = value != null ? value.length : -1;
        consumerRecord = new ConsumerRecord<>(record.topic(), record.kafkaPartition(), record.kafkaOffset(), record.timestamp(), record.timestampType(), keyLength, valLength, key, value, headers, Optional.empty());
    }
    Future<Void> future = retryWithToleranceOperator.executeFailed(Stage.TASK_PUT, SinkTask.class, consumerRecord, error);
    if (!future.isDone()) {
        TopicPartition partition = new TopicPartition(consumerRecord.topic(), consumerRecord.partition());
        futures.computeIfAbsent(partition, p -> new ArrayList<>()).add(future);
    }
    return future;
}
Also used : LoggerFactory(org.slf4j.LoggerFactory) TimeoutException(java.util.concurrent.TimeoutException) ArrayList(java.util.ArrayList) ConcurrentMap(java.util.concurrent.ConcurrentMap) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) Future(java.util.concurrent.Future) InternalSinkRecord(org.apache.kafka.connect.runtime.InternalSinkRecord) HeaderConverter(org.apache.kafka.connect.storage.HeaderConverter) Converter(org.apache.kafka.connect.storage.Converter) SinkTask(org.apache.kafka.connect.sink.SinkTask) TopicPartition(org.apache.kafka.common.TopicPartition) ErrantRecordReporter(org.apache.kafka.connect.sink.ErrantRecordReporter) Logger(org.slf4j.Logger) Header(org.apache.kafka.connect.header.Header) Collection(java.util.Collection) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) Collectors(java.util.stream.Collectors) Objects(java.util.Objects) ExecutionException(java.util.concurrent.ExecutionException) TimeUnit(java.util.concurrent.TimeUnit) List(java.util.List) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) ConnectException(org.apache.kafka.connect.errors.ConnectException) SinkRecord(org.apache.kafka.connect.sink.SinkRecord) Optional(java.util.Optional)
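Beyond the header copying, report() does one more piece of bookkeeping: any DLQ write that has not completed yet is filed under its topic-partition via computeIfAbsent, so the task can later await all in-flight reports for a partition before committing offsets. A stdlib-only sketch of that futures-per-partition pattern (PendingReportsSketch and its String partition key are illustrative stand-ins for the real TopicPartition-keyed map):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Future;

public class PendingReportsSketch {

    // In-flight error reports grouped per partition, mirroring the
    // futures.computeIfAbsent(...) call in report().
    private final ConcurrentMap<String, List<Future<Void>>> futures = new ConcurrentHashMap<>();

    void track(String topicPartition, Future<Void> future) {
        // Already-completed futures need no tracking; only pending ones must
        // be awaited before offsets for this partition are committed.
        if (!future.isDone()) {
            futures.computeIfAbsent(topicPartition, p -> new ArrayList<>()).add(future);
        }
    }

    int pendingFor(String topicPartition) {
        return futures.getOrDefault(topicPartition, List.of()).size();
    }

    public static void main(String[] args) {
        PendingReportsSketch sketch = new PendingReportsSketch();
        sketch.track("orders-0", new CompletableFuture<Void>());           // in flight: tracked
        sketch.track("orders-0", CompletableFuture.<Void>completedFuture(null)); // done: skipped
        System.out.println(sketch.pendingFor("orders-0"));
    }
}
```

Note also the key/value length computation in the original: a null key or value is reported as length -1, the ConsumerRecord convention for an absent payload.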

Example 4 with Header

Use of org.apache.kafka.connect.header.Header in the apache-kafka-on-k8s project by banzaicloud.

From the class SinkRecordTest, method shouldModifyRecordHeader:

@Test
public void shouldModifyRecordHeader() {
    assertTrue(record.headers().isEmpty());
    record.headers().addInt("intHeader", 100);
    assertEquals(1, record.headers().size());
    Header header = record.headers().lastWithName("intHeader");
    assertEquals(100, (int) Values.convertToInteger(header.schema(), header.value()));
}
Also used : Header(org.apache.kafka.connect.header.Header) Test(org.junit.Test)

Example 5 with Header

Use of org.apache.kafka.connect.header.Header in the kafka project by apache.

From the class WorkerSourceTask, method convertHeaderFor:

private RecordHeaders convertHeaderFor(SourceRecord record) {
    Headers headers = record.headers();
    RecordHeaders result = new RecordHeaders();
    if (headers != null) {
        String topic = record.topic();
        for (Header header : headers) {
            String key = header.key();
            // Serialize each Connect header value to raw bytes for the producer record
            byte[] rawHeader = headerConverter.fromConnectHeader(topic, key, header.schema(), header.value());
            result.add(key, rawHeader);
        }
    }
    return result;
}
Also used : Header(org.apache.kafka.connect.header.Header) Headers(org.apache.kafka.connect.header.Headers) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders)

Aggregations

Header (org.apache.kafka.connect.header.Header) 7
RecordHeaders (org.apache.kafka.common.header.internals.RecordHeaders) 3
Headers (org.apache.kafka.connect.header.Headers) 2
Test (org.junit.Test) 2
Test (org.junit.jupiter.api.Test) 2
ArrayList (java.util.ArrayList) 1
Collection (java.util.Collection) 1
List (java.util.List) 1
Objects (java.util.Objects) 1
Optional (java.util.Optional) 1
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap) 1
ConcurrentMap (java.util.concurrent.ConcurrentMap) 1
ExecutionException (java.util.concurrent.ExecutionException) 1
Future (java.util.concurrent.Future) 1
TimeUnit (java.util.concurrent.TimeUnit) 1
TimeoutException (java.util.concurrent.TimeoutException) 1
Collectors (java.util.stream.Collectors) 1
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord) 1
RecordMetadata (org.apache.kafka.clients.producer.RecordMetadata) 1
TopicPartition (org.apache.kafka.common.TopicPartition) 1