Example 6 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class FileStreamSinkTask, method put:

@Override
public void put(Collection<SinkRecord> sinkRecords) {
    for (SinkRecord record : sinkRecords) {
        log.trace("Writing line to {}: {}", logFilename(), record.value());
        outputStream.println(record.value());
    }
}
Also used: SinkRecord (org.apache.kafka.connect.sink.SinkRecord)
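The put() above writes each record's value straight to the output stream. Sink tasks that talk to slower external systems usually buffer values and flush in batches instead; the sketch below illustrates that batching idea with plain strings standing in for SinkRecord values (BufferingSink and its methods are hypothetical illustrations, not part of the Connect API):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

/** Hypothetical buffering sink: collects values and flushes once a batch fills. */
class BufferingSink {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    private final List<List<String>> flushed = new ArrayList<>();

    BufferingSink(int batchSize) {
        this.batchSize = batchSize;
    }

    /** Analogous to SinkTask.put(Collection<SinkRecord>): accept a batch of values. */
    void put(Collection<String> values) {
        for (String v : values) {
            buffer.add(v);
            if (buffer.size() >= batchSize) {
                flush();
            }
        }
    }

    /** Write out whatever is buffered (here: just record it) and clear the buffer. */
    void flush() {
        if (!buffer.isEmpty()) {
            flushed.add(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    List<List<String>> flushedBatches() {
        return flushed;
    }
}
```

A real task would perform the external write in flush() and would also be called from SinkTask.flush() before offsets are committed, so that committed offsets never run ahead of delivered data.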

Example 7 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class WorkerSinkTaskTest, method testRequestCommit:

@Test
public void testRequestCommit() throws Exception {
    expectInitializeTask();
    expectPollInitialAssignment();
    expectConsumerPoll(1);
    expectConversionAndTransformation(1);
    sinkTask.put(EasyMock.<Collection<SinkRecord>>anyObject());
    EasyMock.expectLastCall();
    // Expect preCommit() to be asked for offsets and to return them unchanged
    final Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    offsets.put(TOPIC_PARTITION, new OffsetAndMetadata(FIRST_OFFSET + 1));
    offsets.put(TOPIC_PARTITION2, new OffsetAndMetadata(FIRST_OFFSET));
    sinkTask.preCommit(offsets);
    EasyMock.expectLastCall().andReturn(offsets);
    // Capture the async commit's callback and complete it successfully
    final Capture<OffsetCommitCallback> callback = EasyMock.newCapture();
    consumer.commitAsync(EasyMock.eq(offsets), EasyMock.capture(callback));
    EasyMock.expectLastCall().andAnswer(new IAnswer<Void>() {

        @Override
        public Void answer() throws Throwable {
            callback.getValue().onComplete(offsets, null);
            return null;
        }
    });
    expectConsumerPoll(0);
    sinkTask.put(Collections.<SinkRecord>emptyList());
    EasyMock.expectLastCall();
    PowerMock.replayAll();
    workerTask.initialize(TASK_CONFIG);
    workerTask.initializeAndStart();
    // initial assignment
    workerTask.iteration();
    // first record delivered
    workerTask.iteration();
    sinkTaskContext.getValue().requestCommit();
    assertTrue(sinkTaskContext.getValue().isCommitRequested());
    assertNotEquals(offsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "lastCommittedOffsets"));
    // triggers the commit
    workerTask.iteration();
    // should have been cleared
    assertFalse(sinkTaskContext.getValue().isCommitRequested());
    assertEquals(offsets, Whitebox.<Map<TopicPartition, OffsetAndMetadata>>getInternalState(workerTask, "lastCommittedOffsets"));
    assertEquals(0, workerTask.commitFailures());
    PowerMock.verifyAll();
}
Also used: HashMap (java.util.HashMap), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), SinkRecord (org.apache.kafka.connect.sink.SinkRecord), OffsetCommitCallback (org.apache.kafka.clients.consumer.OffsetCommitCallback), PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest), Test (org.junit.Test)
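This test drives the commit path: preCommit(offsets) lets the task report which offsets are safe to commit, and the captured OffsetCommitCallback completes the async commit. The bookkeeping behind preCommit is essentially a partition-to-next-offset map; a simplified sketch, with String keys standing in for TopicPartition (OffsetTracker is hypothetical, not Connect code):

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical offset bookkeeping mirroring what a sink task does for preCommit(). */
class OffsetTracker {
    private final Map<String, Long> currentOffsets = new HashMap<>();

    /** Note that the record at `offset` on `partition` was fully processed. */
    void record(String partition, long offset) {
        // The committed offset is the NEXT offset to consume, i.e. one past
        // the processed record -- the FIRST_OFFSET + 1 convention in the test above.
        currentOffsets.put(partition, offset + 1);
    }

    /** Analogous to SinkTask.preCommit(): snapshot of offsets safe to commit now. */
    Map<String, Long> preCommit() {
        return new HashMap<>(currentOffsets);
    }
}
```

Returning a copy matters: the commit may complete asynchronously while the task keeps consuming, so the snapshot must not change under the committer's feet.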

Example 8 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class WorkerSinkTaskThreadedTest, method testPollsInBackground:

@Test
public void testPollsInBackground() throws Exception {
    expectInitializeTask();
    expectPollInitialAssignment();
    Capture<Collection<SinkRecord>> capturedRecords = expectPolls(1L);
    expectStopTask();
    PowerMock.replayAll();
    workerTask.initialize(TASK_CONFIG);
    workerTask.initializeAndStart();
    // First iteration initializes partition assignment
    workerTask.iteration();
    // Then we iterate to fetch data
    for (int i = 0; i < 10; i++) {
        workerTask.iteration();
    }
    workerTask.stop();
    workerTask.close();
    // Verify contents match expected values, i.e. that they were translated properly. With max
    // batch size 1 and poll returning one message at a time, we should have a matching number of batches.
    assertEquals(10, capturedRecords.getValues().size());
    int offset = 0;
    for (Collection<SinkRecord> recs : capturedRecords.getValues()) {
        assertEquals(1, recs.size());
        for (SinkRecord rec : recs) {
            SinkRecord referenceSinkRecord = new SinkRecord(TOPIC, PARTITION, KEY_SCHEMA, KEY, VALUE_SCHEMA, VALUE, FIRST_OFFSET + offset, TIMESTAMP, TIMESTAMP_TYPE);
            assertEquals(referenceSinkRecord, rec);
            offset++;
        }
    }
    PowerMock.verifyAll();
}
Also used: Collection (java.util.Collection), SinkRecord (org.apache.kafka.connect.sink.SinkRecord), ThreadedTest (org.apache.kafka.connect.util.ThreadedTest), PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest), Test (org.junit.Test)

Example 9 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class ExtractFieldTest, method schemaless:

@Test
public void schemaless() {
    final ExtractField<SinkRecord> xform = new ExtractField.Key<>();
    xform.configure(Collections.singletonMap("field", "magic"));
    final SinkRecord record = new SinkRecord("test", 0, null, Collections.singletonMap("magic", 42), null, null, 0);
    final SinkRecord transformedRecord = xform.apply(record);
    assertNull(transformedRecord.keySchema());
    assertEquals(42, transformedRecord.key());
}
Also used: SinkRecord (org.apache.kafka.connect.sink.SinkRecord), Test (org.junit.Test)
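With no schema, the record key arrives as a plain Map, and ExtractField reduces to a map lookup. A minimal sketch of that lookup logic (SchemalessExtract is an illustrative helper, not the actual transformation class):

```java
import java.util.Map;

/** Hypothetical helper showing the core of a schemaless single-field extraction. */
class SchemalessExtract {
    /** Pull one field out of schemaless (Map-shaped) key or value data. */
    static Object extract(Object schemalessData, String field) {
        if (schemalessData == null) {
            return null;
        }
        // Connect represents schemaless structured data as Map<String, Object>.
        return ((Map<?, ?>) schemalessData).get(field);
    }
}
```

This also explains the test's assertNull on keySchema(): with no input schema there is no field schema to propagate, so the extracted key stays schemaless.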

Example 10 with SinkRecord

Use of org.apache.kafka.connect.sink.SinkRecord in project kafka by apache.

From the class HoistFieldTest, method withSchema:

@Test
public void withSchema() {
    final HoistField<SinkRecord> xform = new HoistField.Key<>();
    xform.configure(Collections.singletonMap("field", "magic"));
    final SinkRecord record = new SinkRecord("test", 0, Schema.INT32_SCHEMA, 42, null, null, 0);
    final SinkRecord transformedRecord = xform.apply(record);
    assertEquals(Schema.Type.STRUCT, transformedRecord.keySchema().type());
    assertEquals(record.keySchema(), transformedRecord.keySchema().field("magic").schema());
    assertEquals(42, ((Struct) transformedRecord.key()).get("magic"));
}
Also used: SinkRecord (org.apache.kafka.connect.sink.SinkRecord), Test (org.junit.Test)
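HoistField is the inverse of ExtractField: it wraps the existing key or value in a new single-field container, here a Struct because the record carries a schema. For schemaless data the same idea is a one-entry map; a minimal sketch (SchemalessHoist is an illustrative helper, not Connect code):

```java
import java.util.Collections;
import java.util.Map;

/** Hypothetical helper showing the schemaless counterpart of HoistField. */
class SchemalessHoist {
    /**
     * Wrap existing data in a single-entry map under `field` --
     * the schemaless analogue of building a one-field Struct around it.
     */
    static Map<String, Object> hoist(Object data, String field) {
        return Collections.singletonMap(field, data);
    }
}
```

Extracting the same field from the hoisted result returns the original data, which is why ExtractField and HoistField are natural inverses in transformation chains.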

Aggregations

SinkRecord (org.apache.kafka.connect.sink.SinkRecord): 27
Test (org.junit.Test): 19
HashMap (java.util.HashMap): 11
TopicPartition (org.apache.kafka.common.TopicPartition): 7
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest): 7
OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata): 6
Collection (java.util.Collection): 4
Schema (org.apache.kafka.connect.data.Schema): 3
Struct (org.apache.kafka.connect.data.Struct): 3
ConnectException (org.apache.kafka.connect.errors.ConnectException): 3
Map (java.util.Map): 2
ConsumerRecords (org.apache.kafka.clients.consumer.ConsumerRecords): 2
OffsetCommitCallback (org.apache.kafka.clients.consumer.OffsetCommitCallback): 2
SchemaAndValue (org.apache.kafka.connect.data.SchemaAndValue): 2
JsonProcessingException (com.fasterxml.jackson.core.JsonProcessingException): 1
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 1
KafkaProducer (org.apache.kafka.clients.producer.KafkaProducer): 1
WakeupException (org.apache.kafka.common.errors.WakeupException): 1
TimestampType (org.apache.kafka.common.record.TimestampType): 1
RetriableException (org.apache.kafka.connect.errors.RetriableException): 1