Example 11 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project ignite by apache.

From the class IgniteSinkTask, method put.

/**
 * Buffers records.
 *
 * @param records Records to inject into grid.
 */
@SuppressWarnings("unchecked")
@Override
public void put(Collection<SinkRecord> records) {
    try {
        for (SinkRecord record : records) {
            // Data is flushed asynchronously when CACHE_PER_NODE_DATA_SIZE is reached.
            if (extractor != null) {
                Map.Entry<Object, Object> entry = extractor.extract(record);
                StreamerContext.getStreamer().addData(entry.getKey(), entry.getValue());
            } else {
                if (record.key() != null) {
                    StreamerContext.getStreamer().addData(record.key(), record.value());
                } else {
                    log.error("Failed to stream a record with null key!");
                }
            }
        }
    } catch (ConnectException e) {
        log.error("Failed adding record", e);
        throw new ConnectException(e);
    }
}
Also used: SinkRecord (org.apache.kafka.connect.sink.SinkRecord), Map (java.util.Map), ConnectException (org.apache.kafka.connect.errors.ConnectException)
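
The extractor branch above delegates key and value derivation to a pluggable extractor, which is what allows records with null keys to be streamed anyway. Below is a minimal sketch of such an extractor, assuming Ignite's StreamSingleTupleExtractor interface (a single extract method mapping one message to one cache entry); the class name and key scheme are hypothetical, not taken from the Ignite sources.

import java.util.AbstractMap;
import java.util.Map;

import org.apache.ignite.stream.StreamSingleTupleExtractor;
import org.apache.kafka.connect.sink.SinkRecord;

// Derives a non-null cache key from the record's Kafka coordinates,
// so records with null keys can still be streamed into the grid.
public class OffsetKeyExtractor implements StreamSingleTupleExtractor<SinkRecord, Object, Object> {
    @Override
    public Map.Entry<Object, Object> extract(SinkRecord record) {
        String key = record.topic() + "-" + record.kafkaPartition() + "-" + record.kafkaOffset();
        return new AbstractMap.SimpleEntry<>(key, record.value());
    }
}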

Example 12 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class WorkerSinkTaskTest, method testPreCommit.

@Test
public void testPreCommit() throws Exception {
    expectInitializeTask();
    // iter 1
    expectPollInitialAssignment();
    // iter 2
    expectConsumerPoll(2);
    expectConversionAndTransformation(2);
    sinkTask.put(EasyMock.<Collection<SinkRecord>>anyObject());
    EasyMock.expectLastCall();
    final Map<TopicPartition, OffsetAndMetadata> workerStartingOffsets = new HashMap<>();
    workerStartingOffsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET));
    workerStartingOffsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET));
    final Map<TopicPartition, OffsetAndMetadata> workerCurrentOffsets = new HashMap<>();
    workerCurrentOffsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET + 2));
    workerCurrentOffsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET));
    final Map<TopicPartition, OffsetAndMetadata> taskOffsets = new HashMap<>();
    // act like FIRST_OFFSET+2 has not yet been flushed by the task
    taskOffsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET + 1));
    // should be ignored because > current offset
    taskOffsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET + 1));
    // should be ignored because this partition is not assigned
    taskOffsets.put(new TopicPartition(TOPIC, 3), new OffsetAndMetadata(FIRST_OFFSET));
    final Map<TopicPartition, OffsetAndMetadata> committableOffsets = new HashMap<>();
    committableOffsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET + 1));
    committableOffsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET));
    sinkTask.preCommit(workerCurrentOffsets);
    EasyMock.expectLastCall().andReturn(taskOffsets);
    final Capture<OffsetCommitCallback> callback = EasyMock.newCapture();
    consumer.commitAsync(EasyMock.eq(committableOffsets), EasyMock.capture(callback));
    EasyMock.expectLastCall().andAnswer(new IAnswer<Void>() {

        @Override
        public Void answer() throws Throwable {
            callback.getValue().onComplete(committableOffsets, null);
            return null;
        }
    });
    expectConsumerPoll(0);
    sinkTask.put(EasyMock.<Collection<SinkRecord>>anyObject());
    EasyMock.expectLastCall();
    PowerMock.replayAll();
    workerTask.initialize(TASK_CONFIG);
    workerTask.initializeAndStart();
    // iter 1 -- initial assignment
    workerTask.iteration();
    assertEquals(workerStartingOffsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "currentOffsets"));
    // iter 2 -- deliver 2 records
    workerTask.iteration();
    assertEquals(workerCurrentOffsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "currentOffsets"));
    assertEquals(workerStartingOffsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "lastCommittedOffsets"));
    sinkTaskContext.getValue().requestCommit();
    // iter 3 -- commit
    workerTask.iteration();
    assertEquals(committableOffsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "lastCommittedOffsets"));
    PowerMock.verifyAll();
}
Also used: HashMap (java.util.HashMap), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), SinkRecord (org.apache.kafka.connect.sink.SinkRecord), OffsetCommitCallback (org.apache.kafka.clients.consumer.OffsetCommitCallback), PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest), Test (org.junit.Test)
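
For context on the offset arithmetic this test mocks: SinkTask.preCommit lets a task report which offsets are actually safe to commit, rather than everything the worker has delivered so far. A minimal sketch of a task using that hook, with a flushedOffsets map standing in for real write acknowledgements (the class and its bookkeeping are hypothetical):

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class FlushTrackingSinkTask extends SinkTask {

    // Highest commit offset per partition that has been durably written downstream.
    private final Map<TopicPartition, Long> flushedOffsets = new HashMap<>();

    @Override
    public String version() {
        return "0.0.1";
    }

    @Override
    public void start(Map<String, String> props) {
        // No configuration needed for this sketch.
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            // Pretend every write succeeds immediately; a real connector would
            // update flushedOffsets from asynchronous write acknowledgements.
            flushedOffsets.put(new TopicPartition(record.topic(), record.kafkaPartition()),
                // Commit offset convention: the next offset to consume.
                record.kafkaOffset() + 1);
        }
    }

    @Override
    public Map<TopicPartition, OffsetAndMetadata> preCommit(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        Map<TopicPartition, OffsetAndMetadata> safe = new HashMap<>();
        for (TopicPartition tp : currentOffsets.keySet()) {
            Long flushed = flushedOffsets.get(tp);
            if (flushed != null)
                safe.put(tp, new OffsetAndMetadata(flushed));
        }
        return safe;
    }

    @Override
    public void stop() {
    }
}

As the assertions in testPreCommit show, the worker then reconciles whatever the task returns with its own state: offsets beyond the worker's current position and partitions outside the current assignment are dropped before commitAsync is invoked.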

Example 13 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class WorkerSinkTaskTest, method testTimestampPropagation.

@Test
public void testTimestampPropagation() throws Exception {
    final Long timestamp = System.currentTimeMillis();
    final TimestampType timestampType = TimestampType.CREATE_TIME;
    expectInitializeTask();
    expectConsumerPoll(1, timestamp, timestampType);
    expectConversionAndTransformation(1);
    Capture<Collection<SinkRecord>> records = EasyMock.newCapture(CaptureType.ALL);
    sinkTask.put(EasyMock.capture(records));
    PowerMock.replayAll();
    workerTask.initialize(TASK_CONFIG);
    workerTask.initializeAndStart();
    workerTask.iteration();
    SinkRecord record = records.getValue().iterator().next();
    assertEquals(timestamp, record.timestamp());
    assertEquals(timestampType, record.timestampType());
    PowerMock.verifyAll();
}
Also used: TimestampType (org.apache.kafka.common.record.TimestampType), Collection (java.util.Collection), SinkRecord (org.apache.kafka.connect.sink.SinkRecord), PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest), Test (org.junit.Test)
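
Because timestamps survive conversion and transformation, a sink task can branch on them inside put. A minimal sketch, under the (hypothetical) assumption that the connector only trusts producer-assigned event time:

import java.util.Collection;
import java.util.Map;

import org.apache.kafka.common.record.TimestampType;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class TimestampAwareSinkTask extends SinkTask {

    @Override
    public String version() {
        return "0.0.1";
    }

    @Override
    public void start(Map<String, String> props) {
        // No configuration needed for this sketch.
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            Long ts = record.timestamp();
            // CREATE_TIME is stamped by the producer; LOG_APPEND_TIME by the broker.
            if (ts != null && record.timestampType() == TimestampType.CREATE_TIME)
                System.out.printf("event-time %d: %s%n", ts, record.value());
            else
                System.out.printf("no event time for: %s%n", record.value());
        }
    }

    @Override
    public void stop() {
    }
}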

Example 14 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class WorkerSinkTaskThreadedTest, method expectPolls.

// Note that this can only be called once per test currently
private Capture<Collection<SinkRecord>> expectPolls(final long pollDelayMs) throws Exception {
    // Stub out all the consumer stream/iterator responses, which we just want to verify occur,
    // but don't care about the exact details here.
    EasyMock.expect(consumer.poll(EasyMock.anyLong())).andStubAnswer(new IAnswer<ConsumerRecords<byte[], byte[]>>() {

        @Override
        public ConsumerRecords<byte[], byte[]> answer() throws Throwable {
            // "Sleep" so time will progress
            time.sleep(pollDelayMs);
            ConsumerRecords<byte[], byte[]> records = new ConsumerRecords<>(
                Collections.singletonMap(
                    new TopicPartition(TOPIC, PARTITION),
                    Arrays.asList(new ConsumerRecord<>(TOPIC, PARTITION, FIRST_OFFSET + recordsReturned,
                        TIMESTAMP, TIMESTAMP_TYPE, 0L, 0, 0, RAW_KEY, RAW_VALUE))));
            recordsReturned++;
            return records;
        }
    });
    EasyMock.expect(keyConverter.toConnectData(TOPIC, RAW_KEY)).andReturn(new SchemaAndValue(KEY_SCHEMA, KEY)).anyTimes();
    EasyMock.expect(valueConverter.toConnectData(TOPIC, RAW_VALUE)).andReturn(new SchemaAndValue(VALUE_SCHEMA, VALUE)).anyTimes();
    final Capture<SinkRecord> recordCapture = EasyMock.newCapture();
    EasyMock.expect(transformationChain.apply(EasyMock.capture(recordCapture))).andAnswer(new IAnswer<SinkRecord>() {

        @Override
        public SinkRecord answer() {
            return recordCapture.getValue();
        }
    }).anyTimes();
    Capture<Collection<SinkRecord>> capturedRecords = EasyMock.newCapture(CaptureType.ALL);
    sinkTask.put(EasyMock.capture(capturedRecords));
    EasyMock.expectLastCall().anyTimes();
    return capturedRecords;
}
Also used: SinkRecord (org.apache.kafka.connect.sink.SinkRecord), ConsumerRecords (org.apache.kafka.clients.consumer.ConsumerRecords), ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord), SchemaAndValue (org.apache.kafka.connect.data.SchemaAndValue), IAnswer (org.easymock.IAnswer), TopicPartition (org.apache.kafka.common.TopicPartition), Collection (java.util.Collection)
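
For reference, the conversion pipeline stubbed above ends with the worker wrapping each converted key/value pair in a SinkRecord. A sketch of that mapping with illustrative values (the wrapper class here is hypothetical; the timestamp-aware SinkRecord constructor is the one these tests exercise):

import org.apache.kafka.common.record.TimestampType;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.sink.SinkRecord;

public class SinkRecordMappingSketch {
    public static void main(String[] args) {
        SinkRecord record = new SinkRecord(
            // topic and partition, copied from the raw ConsumerRecord
            "test-topic", 0,
            // schema/value pairs produced by keyConverter and valueConverter
            Schema.STRING_SCHEMA, "converted-key",
            Schema.STRING_SCHEMA, "converted-value",
            // the consumer offset of the raw record
            42L,
            // timestamp and type, propagated from the raw record
            1501834166000L, TimestampType.CREATE_TIME);
        System.out.println(record);
    }
}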

Example 15 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class WorkerSinkTaskThreadedTest, method setup.

@SuppressWarnings("unchecked")
@Override
public void setup() {
    super.setup();
    time = new MockTime();
    Map<String, String> workerProps = new HashMap<>();
    workerProps.put("key.converter", "org.apache.kafka.connect.json.JsonConverter");
    workerProps.put("value.converter", "org.apache.kafka.connect.json.JsonConverter");
    workerProps.put("internal.key.converter", "org.apache.kafka.connect.json.JsonConverter");
    workerProps.put("internal.value.converter", "org.apache.kafka.connect.json.JsonConverter");
    workerProps.put("internal.key.converter.schemas.enable", "false");
    workerProps.put("internal.value.converter.schemas.enable", "false");
    workerProps.put("offset.storage.file.filename", "/tmp/connect.offsets");
    workerConfig = new StandaloneConfig(workerProps);
    workerTask = PowerMock.createPartialMock(WorkerSinkTask.class, new String[] { "createConsumer" }, taskId, sinkTask, statusListener, initialState, workerConfig, keyConverter, valueConverter, TransformationChain.<SinkRecord>noOp(), time);
    recordsReturned = 0;
}
Also used: HashMap (java.util.HashMap), StandaloneConfig (org.apache.kafka.connect.runtime.standalone.StandaloneConfig), SinkRecord (org.apache.kafka.connect.sink.SinkRecord), MockTime (org.apache.kafka.connect.util.MockTime)

Aggregations

SinkRecord (org.apache.kafka.connect.sink.SinkRecord): 27 uses
Test (org.junit.Test): 19 uses
HashMap (java.util.HashMap): 11 uses
TopicPartition (org.apache.kafka.common.TopicPartition): 7 uses
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest): 7 uses
OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata): 6 uses
Collection (java.util.Collection): 4 uses
Schema (org.apache.kafka.connect.data.Schema): 3 uses
Struct (org.apache.kafka.connect.data.Struct): 3 uses
ConnectException (org.apache.kafka.connect.errors.ConnectException): 3 uses
Map (java.util.Map): 2 uses
ConsumerRecords (org.apache.kafka.clients.consumer.ConsumerRecords): 2 uses
OffsetCommitCallback (org.apache.kafka.clients.consumer.OffsetCommitCallback): 2 uses
SchemaAndValue (org.apache.kafka.connect.data.SchemaAndValue): 2 uses
JsonProcessingException (com.fasterxml.jackson.core.JsonProcessingException): 1 use
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 1 use
KafkaProducer (org.apache.kafka.clients.producer.KafkaProducer): 1 use
WakeupException (org.apache.kafka.common.errors.WakeupException): 1 use
TimestampType (org.apache.kafka.common.record.TimestampType): 1 use
RetriableException (org.apache.kafka.connect.errors.RetriableException): 1 use