Example 76 with ProducerRecord

use of org.apache.kafka.clients.producer.ProducerRecord in project kafka by apache.

the class SmokeTestDriver method generate.

public static Map<String, Set<Integer>> generate(final String kafka, final int numKeys, final int maxRecordsPerKey, final Duration timeToSpend) {
    final Properties producerProps = generatorProperties(kafka);
    int numRecordsProduced = 0;
    final Map<String, Set<Integer>> allData = new HashMap<>();
    final ValueList[] data = new ValueList[numKeys];
    for (int i = 0; i < numKeys; i++) {
        data[i] = new ValueList(i, i + maxRecordsPerKey - 1);
        allData.put(data[i].key, new HashSet<>());
    }
    final Random rand = new Random();
    int remaining = data.length;
    final long recordPauseTime = timeToSpend.toMillis() / numKeys / maxRecordsPerKey;
    List<ProducerRecord<byte[], byte[]>> needRetry = new ArrayList<>();
    try (final KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps)) {
        while (remaining > 0) {
            final int index = rand.nextInt(remaining);
            final String key = data[index].key;
            final int value = data[index].next();
            if (value < 0) {
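                // this key is exhausted (next() returned a negative value):
                // remove it by swapping in the last still-active entry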
                remaining--;
                data[index] = data[remaining];
            } else {
                final ProducerRecord<byte[], byte[]> record = new ProducerRecord<>("data", stringSerde.serializer().serialize("", key), intSerde.serializer().serialize("", value));
                producer.send(record, new TestCallback(record, needRetry));
                numRecordsProduced++;
                allData.get(key).add(value);
                if (numRecordsProduced % 100 == 0) {
                    System.out.println(Instant.now() + " " + numRecordsProduced + " records produced");
                }
                Utils.sleep(Math.max(recordPauseTime, 2));
            }
        }
        producer.flush();
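        // drain the retry list: resend failures in passes, with a bounded number of attempts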
        int remainingRetries = 5;
        while (!needRetry.isEmpty()) {
            final List<ProducerRecord<byte[], byte[]>> needRetry2 = new ArrayList<>();
            for (final ProducerRecord<byte[], byte[]> record : needRetry) {
                System.out.println("retry producing " + stringSerde.deserializer().deserialize("", record.key()));
                producer.send(record, new TestCallback(record, needRetry2));
            }
            producer.flush();
            needRetry = needRetry2;
            if (--remainingRetries == 0 && !needRetry.isEmpty()) {
                System.err.println("Failed to produce all records after multiple retries");
                Exit.exit(1);
            }
        }
        // now that we've sent everything, we'll send some final records with a timestamp high enough to flush out
        // all suppressed records.
        final List<PartitionInfo> partitions = producer.partitionsFor("data");
        for (final PartitionInfo partition : partitions) {
            producer.send(new ProducerRecord<>(partition.topic(), partition.partition(), System.currentTimeMillis() + Duration.ofDays(2).toMillis(), stringSerde.serializer().serialize("", "flush"), intSerde.serializer().serialize("", 0)));
        }
    }
    return Collections.unmodifiableMap(allData);
}
Also used : KafkaProducer(org.apache.kafka.clients.producer.KafkaProducer) HashSet(java.util.HashSet) Set(java.util.Set) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) Properties(java.util.Properties) Random(java.util.Random) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) PartitionInfo(org.apache.kafka.common.PartitionInfo)
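
The TestCallback passed to send() above is not shown in this snippet. A minimal sketch of a callback with that shape, assuming the policy is simply to re-queue a record whose send failed (the class name RetryingCallback is hypothetical):

import java.util.List;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical sketch of the retry policy: failed records go back onto a list
// that the caller drains after producer.flush() has completed all callbacks.
class RetryingCallback implements Callback {
    private final ProducerRecord<byte[], byte[]> record;
    private final List<ProducerRecord<byte[], byte[]>> needRetry;

    RetryingCallback(final ProducerRecord<byte[], byte[]> record,
                     final List<ProducerRecord<byte[], byte[]>> needRetry) {
        this.record = record;
        this.needRetry = needRetry;
    }

    @Override
    public void onCompletion(final RecordMetadata metadata, final Exception exception) {
        if (exception != null) {
            needRetry.add(record); // send failed; let the caller retry it
        }
    }
}

Reading needRetry only after producer.flush() is what makes this safe: flush() blocks until every in-flight send has completed and its callback has run.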

Example 77 with ProducerRecord

use of org.apache.kafka.clients.producer.ProducerRecord in project kafka by apache.

the class ClientCompatibilityTest method testProduce.

public void testProduce() throws Exception {
    Properties producerProps = new Properties();
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, testConfig.bootstrapServer);
    ByteArraySerializer serializer = new ByteArraySerializer();
    KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps, serializer, serializer);
    ProducerRecord<byte[], byte[]> record1 = new ProducerRecord<>(testConfig.topic, message1);
    Future<RecordMetadata> future1 = producer.send(record1);
    ProducerRecord<byte[], byte[]> record2 = new ProducerRecord<>(testConfig.topic, message2);
    Future<RecordMetadata> future2 = producer.send(record2);
    producer.flush();
    future1.get();
    future2.get();
    producer.close();
}
Also used : KafkaProducer(org.apache.kafka.clients.producer.KafkaProducer) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) Properties(java.util.Properties) ByteArraySerializer(org.apache.kafka.common.serialization.ByteArraySerializer)
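
Blocking on the returned Future is what turns the asynchronous send into a confirmed write. A minimal standalone version of the same pattern, with a placeholder bootstrap address and topic name:

import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ProduceAndConfirm {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        ByteArraySerializer serializer = new ByteArraySerializer();
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props, serializer, serializer)) {
            Future<RecordMetadata> future =
                producer.send(new ProducerRecord<>("test-topic", "hello".getBytes()));
            // get() blocks until the broker acks, or throws if the send failed
            RecordMetadata metadata = future.get();
            System.out.printf("wrote to %s-%d at offset %d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
        }
    }
}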

Example 78 with ProducerRecord

use of org.apache.kafka.clients.producer.ProducerRecord in project kafka by apache.

the class KafkaBasedLogTest method testSendAndReadToEnd.

@Test
public void testSendAndReadToEnd() throws Exception {
    expectStart();
    TestFuture<RecordMetadata> tp0Future = new TestFuture<>();
    ProducerRecord<String, String> tp0Record = new ProducerRecord<>(TOPIC, TP0_KEY, TP0_VALUE);
    Capture<org.apache.kafka.clients.producer.Callback> callback0 = EasyMock.newCapture();
    EasyMock.expect(producer.send(EasyMock.eq(tp0Record), EasyMock.capture(callback0))).andReturn(tp0Future);
    TestFuture<RecordMetadata> tp1Future = new TestFuture<>();
    ProducerRecord<String, String> tp1Record = new ProducerRecord<>(TOPIC, TP1_KEY, TP1_VALUE);
    Capture<org.apache.kafka.clients.producer.Callback> callback1 = EasyMock.newCapture();
    EasyMock.expect(producer.send(EasyMock.eq(tp1Record), EasyMock.capture(callback1))).andReturn(tp1Future);
    // Producer flushes when read to log end is called
    producer.flush();
    PowerMock.expectLastCall();
    expectStop();
    PowerMock.replayAll();
    Map<TopicPartition, Long> endOffsets = new HashMap<>();
    endOffsets.put(TP0, 0L);
    endOffsets.put(TP1, 0L);
    consumer.updateEndOffsets(endOffsets);
    store.start();
    assertEquals(CONSUMER_ASSIGNMENT, consumer.assignment());
    assertEquals(0L, consumer.position(TP0));
    assertEquals(0L, consumer.position(TP1));
    // Set some keys
    final AtomicInteger invoked = new AtomicInteger(0);
    org.apache.kafka.clients.producer.Callback producerCallback = (metadata, exception) -> invoked.incrementAndGet();
    store.send(TP0_KEY, TP0_VALUE, producerCallback);
    store.send(TP1_KEY, TP1_VALUE, producerCallback);
    assertEquals(0, invoked.get());
    // Output not used, so safe to not return a real value for testing
    tp1Future.resolve((RecordMetadata) null);
    callback1.getValue().onCompletion(null, null);
    assertEquals(1, invoked.get());
    tp0Future.resolve((RecordMetadata) null);
    callback0.getValue().onCompletion(null, null);
    assertEquals(2, invoked.get());
    // Now we should have to wait for the records to be read back when we call readToEnd()
    final AtomicBoolean getInvoked = new AtomicBoolean(false);
    final FutureCallback<Void> readEndFutureCallback = new FutureCallback<>((error, result) -> getInvoked.set(true));
    consumer.schedulePollTask(() -> {
        // Once we're synchronized in a poll, start the read to end and schedule the exact set of poll events
        // that should follow. This readToEnd call will immediately wakeup this consumer.poll() call without
        // returning any data.
        Map<TopicPartition, Long> newEndOffsets = new HashMap<>();
        newEndOffsets.put(TP0, 2L);
        newEndOffsets.put(TP1, 2L);
        consumer.updateEndOffsets(newEndOffsets);
        store.readToEnd(readEndFutureCallback);
        // Should keep polling until it reaches current log end offset for all partitions
        consumer.scheduleNopPollTask();
        consumer.scheduleNopPollTask();
        consumer.scheduleNopPollTask();
        consumer.schedulePollTask(() -> {
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 0, 0L, TimestampType.CREATE_TIME, 0, 0, TP0_KEY, TP0_VALUE, new RecordHeaders(), Optional.empty()));
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 1, 0L, TimestampType.CREATE_TIME, 0, 0, TP0_KEY, TP0_VALUE_NEW, new RecordHeaders(), Optional.empty()));
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 1, 0, 0L, TimestampType.CREATE_TIME, 0, 0, TP1_KEY, TP1_VALUE, new RecordHeaders(), Optional.empty()));
        });
        consumer.schedulePollTask(() -> consumer.addRecord(new ConsumerRecord<>(TOPIC, 1, 1, 0L, TimestampType.CREATE_TIME, 0, 0, TP1_KEY, TP1_VALUE_NEW, new RecordHeaders(), Optional.empty())));
    // Already have FutureCallback that should be invoked/awaited, so no need for follow up finishedLatch
    });
    readEndFutureCallback.get(10000, TimeUnit.MILLISECONDS);
    assertTrue(getInvoked.get());
    assertEquals(2, consumedRecords.size());
    assertEquals(2, consumedRecords.get(TP0).size());
    assertEquals(TP0_VALUE, consumedRecords.get(TP0).get(0).value());
    assertEquals(TP0_VALUE_NEW, consumedRecords.get(TP0).get(1).value());
    assertEquals(2, consumedRecords.get(TP1).size());
    assertEquals(TP1_VALUE, consumedRecords.get(TP1).get(0).value());
    assertEquals(TP1_VALUE_NEW, consumedRecords.get(TP1).get(1).value());
    // Cleanup
    store.stop();
    assertFalse(Whitebox.<Thread>getInternalState(store, "thread").isAlive());
    assertTrue(consumer.closed());
    PowerMock.verifyAll();
}
Also used : MockTime(org.apache.kafka.common.utils.MockTime) Arrays(java.util.Arrays) MockConsumer(org.apache.kafka.clients.consumer.MockConsumer) KafkaException(org.apache.kafka.common.KafkaException) OffsetResetStrategy(org.apache.kafka.clients.consumer.OffsetResetStrategy) ByteBuffer(java.nio.ByteBuffer) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Map(java.util.Map) TimestampType(org.apache.kafka.common.record.TimestampType) CommonClientConfigs(org.apache.kafka.clients.CommonClientConfigs) TopicPartition(org.apache.kafka.common.TopicPartition) Time(org.apache.kafka.common.utils.Time) WakeupException(org.apache.kafka.common.errors.WakeupException) Set(java.util.Set) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) PartitionInfo(org.apache.kafka.common.PartitionInfo) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) PowerMock(org.powermock.api.easymock.PowerMock) CountDownLatch(java.util.concurrent.CountDownLatch) List(java.util.List) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) Assert.assertFalse(org.junit.Assert.assertFalse) Errors(org.apache.kafka.common.protocol.Errors) Optional(java.util.Optional) Node(org.apache.kafka.common.Node) Whitebox(org.powermock.reflect.Whitebox) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) Assert.assertThrows(org.junit.Assert.assertThrows) RunWith(org.junit.runner.RunWith) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) HashMap(java.util.HashMap) LeaderNotAvailableException(org.apache.kafka.common.errors.LeaderNotAvailableException) AtomicReference(java.util.concurrent.atomic.AtomicReference) Supplier(java.util.function.Supplier) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) KafkaProducer(org.apache.kafka.clients.producer.KafkaProducer) PrepareForTest(org.powermock.core.classloader.annotations.PrepareForTest) PowerMockRunner(org.powermock.modules.junit4.PowerMockRunner) ProducerConfig(org.apache.kafka.clients.producer.ProducerConfig) PowerMockIgnore(org.powermock.core.classloader.annotations.PowerMockIgnore) Before(org.junit.Before) Capture(org.easymock.Capture) TimeoutException(org.apache.kafka.common.errors.TimeoutException) Assert.assertNotNull(org.junit.Assert.assertNotNull) Mock(org.powermock.api.easymock.annotation.Mock) Assert.assertTrue(org.junit.Assert.assertTrue) Test(org.junit.Test) EasyMock(org.easymock.EasyMock) TimeUnit(java.util.concurrent.TimeUnit) Assert.assertNull(org.junit.Assert.assertNull) UnsupportedVersionException(org.apache.kafka.common.errors.UnsupportedVersionException) Assert.assertEquals(org.junit.Assert.assertEquals) HashMap(java.util.HashMap) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) TopicPartition(org.apache.kafka.common.TopicPartition) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) PrepareForTest(org.powermock.core.classloader.annotations.PrepareForTest) Test(org.junit.Test)
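
The updateEndOffsets, schedulePollTask, and addRecord calls above come from Kafka's MockConsumer test utility. A minimal sketch of that scheduling pattern in isolation, assuming a consumer assigned to a single partition:

import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.TopicPartition;

public class MockConsumerSketch {
    public static void main(String[] args) {
        TopicPartition tp = new TopicPartition("topic", 0);
        MockConsumer<String, String> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
        consumer.assign(Collections.singletonList(tp));
        consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));
        // a scheduled task runs at the start of the next poll(), so the
        // record it adds is returned by that same poll
        consumer.schedulePollTask(() ->
            consumer.addRecord(new ConsumerRecord<>("topic", 0, 0L, "key", "value")));
        consumer.poll(Duration.ofMillis(100))
            .forEach(r -> System.out.println(r.key() + " -> " + r.value()));
    }
}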

Example 79 with ProducerRecord

use of org.apache.kafka.clients.producer.ProducerRecord in project kafka by apache.

the class RecordCollectorTest method shouldPassThroughRecordHeaderToSerializer.

@Test
public void shouldPassThroughRecordHeaderToSerializer() {
    final CustomStringSerializer keySerializer = new CustomStringSerializer();
    final CustomStringSerializer valueSerializer = new CustomStringSerializer();
    keySerializer.configure(Collections.emptyMap(), true);
    collector.send(topic, "3", "0", new RecordHeaders(), null, keySerializer, valueSerializer, streamPartitioner);
    final List<ProducerRecord<byte[], byte[]>> recordHistory = mockProducer.history();
    for (final ProducerRecord<byte[], byte[]> sentRecord : recordHistory) {
        final Headers headers = sentRecord.headers();
        assertEquals(2, headers.toArray().length);
        assertEquals(new RecordHeader("key", "key".getBytes()), headers.lastHeader("key"));
        assertEquals(new RecordHeader("value", "value".getBytes()), headers.lastHeader("value"));
    }
}
Also used : RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) Headers(org.apache.kafka.common.header.Headers) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) RecordHeader(org.apache.kafka.common.header.internals.RecordHeader) Test(org.junit.Test)
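
CustomStringSerializer is local to the test; the mechanism it exercises is the three-argument Serializer.serialize(topic, headers, data) overload, which the producer invokes when a record carries headers. A minimal sketch of such a serializer (the class and header names here are illustrative, not the test's actual implementation):

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Serializer;

class HeaderStampingSerializer implements Serializer<String> {
    @Override
    public byte[] serialize(final String topic, final String data) {
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] serialize(final String topic, final Headers headers, final String data) {
        // the producer calls this overload, so the serializer can inspect or add headers
        headers.add("seen-by-serializer", new byte[] {1}); // illustrative header
        return serialize(topic, data);
    }
}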

Example 80 with ProducerRecord

use of org.apache.kafka.clients.producer.ProducerRecord in project kafka by apache.

the class WorkerSourceTaskTest method testHeaders.

@Test
public void testHeaders() throws Exception {
    Headers headers = new RecordHeaders();
    headers.add("header_key", "header_value".getBytes());
    org.apache.kafka.connect.header.Headers connectHeaders = new ConnectHeaders();
    connectHeaders.add("header_key", new SchemaAndValue(Schema.STRING_SCHEMA, "header_value"));
    createWorkerTask();
    List<SourceRecord> records = new ArrayList<>();
    records.add(new SourceRecord(PARTITION, OFFSET, TOPIC, null, KEY_SCHEMA, KEY, RECORD_SCHEMA, RECORD, null, connectHeaders));
    expectTopicCreation(TOPIC);
    Capture<ProducerRecord<byte[], byte[]>> sent = expectSendRecord(TOPIC, true, true, true, true, headers);
    PowerMock.replayAll();
    Whitebox.setInternalState(workerTask, "toSend", records);
    Whitebox.invokeMethod(workerTask, "sendRecords");
    assertEquals(SERIALIZED_KEY, sent.getValue().key());
    assertEquals(SERIALIZED_RECORD, sent.getValue().value());
    assertEquals(headers, sent.getValue().headers());
    PowerMock.verifyAll();
}
Also used : Headers(org.apache.kafka.common.header.Headers) ConnectHeaders(org.apache.kafka.connect.header.ConnectHeaders) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) ArrayList(java.util.ArrayList) SourceRecord(org.apache.kafka.connect.source.SourceRecord) SchemaAndValue(org.apache.kafka.connect.data.SchemaAndValue) ConnectHeaders(org.apache.kafka.connect.header.ConnectHeaders) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) ThreadedTest(org.apache.kafka.connect.util.ThreadedTest) RetryWithToleranceOperatorTest(org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperatorTest) ParameterizedTest(org.apache.kafka.connect.util.ParameterizedTest) Test(org.junit.Test)
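
The worker converts the Connect headers into Kafka record headers and attaches them to the outgoing ProducerRecord. A minimal sketch of attaching headers at construction time, using placeholder topic, key, and value:

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.header.internals.RecordHeaders;

public class RecordWithHeaders {
    public static void main(String[] args) {
        RecordHeaders headers = new RecordHeaders();
        headers.add(new RecordHeader("header_key", "header_value".getBytes()));
        // partition and timestamp are passed as null so the producer chooses them
        ProducerRecord<byte[], byte[]> record = new ProducerRecord<>(
            "topic", null, null, "key".getBytes(), "value".getBytes(), headers);
        record.headers().forEach(h -> System.out.println(h.key()));
    }
}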

Aggregations

ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord): 193 usages
Test (org.junit.Test): 90 usages
KafkaProducer (org.apache.kafka.clients.producer.KafkaProducer): 57 usages
Properties (java.util.Properties): 50 usages
RecordMetadata (org.apache.kafka.clients.producer.RecordMetadata): 40 usages
ArrayList (java.util.ArrayList): 39 usages
Callback (org.apache.kafka.clients.producer.Callback): 30 usages
Future (java.util.concurrent.Future): 26 usages
TopicPartition (org.apache.kafka.common.TopicPartition): 24 usages
StringSerializer (org.apache.kafka.common.serialization.StringSerializer): 21 usages
HashMap (java.util.HashMap): 20 usages
Random (java.util.Random): 19 usages
IOException (java.io.IOException): 16 usages
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 16 usages
KafkaConsumer (org.apache.kafka.clients.consumer.KafkaConsumer): 16 usages
KafkaException (org.apache.kafka.common.KafkaException): 16 usages
List (java.util.List): 13 usages
MockProducer (org.apache.kafka.clients.producer.MockProducer): 13 usages
DefaultPartitioner (org.apache.kafka.clients.producer.internals.DefaultPartitioner): 12 usages
StreamsException (org.apache.kafka.streams.errors.StreamsException): 12 usages