Example 76 with RecordMetadata

Use of org.apache.kafka.clients.producer.RecordMetadata in project kafka by apache.

From the class TransactionManagerTest, method testTransitionToFatalErrorWhenRetriedBatchIsExpired.

@Test
public void testTransitionToFatalErrorWhenRetriedBatchIsExpired() throws InterruptedException {
    apiVersions.update("0", new NodeApiVersions(Arrays.asList(
            new ApiVersion().setApiKey(ApiKeys.INIT_PRODUCER_ID.id).setMinVersion((short) 0).setMaxVersion((short) 1),
            new ApiVersion().setApiKey(ApiKeys.PRODUCE.id).setMinVersion((short) 0).setMaxVersion((short) 7))));
    doInitTransactions();
    transactionManager.beginTransaction();
    transactionManager.maybeAddPartition(tp0);
    Future<RecordMetadata> responseFuture = appendToAccumulator(tp0);
    assertFalse(responseFuture.isDone());
    prepareAddPartitionsToTxnResponse(Errors.NONE, tp0, epoch, producerId);
    assertFalse(transactionManager.transactionContainsPartition(tp0));
    assertFalse(transactionManager.isSendToPartitionAllowed(tp0));
    // Check that only addPartitions was sent.
    runUntil(() -> transactionManager.transactionContainsPartition(tp0));
    assertTrue(transactionManager.isSendToPartitionAllowed(tp0));
    prepareProduceResponse(Errors.NOT_LEADER_OR_FOLLOWER, producerId, epoch);
    runUntil(() -> !client.hasPendingResponses());
    assertFalse(responseFuture.isDone());
    TransactionalRequestResult commitResult = transactionManager.beginCommit();
    // Sleep 10 seconds to make sure that the batches in the queue would be expired if they can't be drained.
    time.sleep(10000);
    // Disconnect the target node for the pending produce request. This will ensure that sender will try to
    // expire the batch.
    Node clusterNode = metadata.fetch().nodes().get(0);
    client.disconnect(clusterNode.idString());
    client.backoff(clusterNode, 100);
    // We should try to flush the produce, but expire it instead without sending anything.
    runUntil(responseFuture::isDone);
    try {
        // make sure the produce was expired.
        responseFuture.get();
        fail("Expected to get a TimeoutException since the queued ProducerBatch should have been expired");
    } catch (ExecutionException e) {
        assertTrue(e.getCause() instanceof TimeoutException);
    }
    runUntil(commitResult::isCompleted);
    // the commit should have been dropped.
    assertFalse(commitResult.isSuccessful());
    assertTrue(transactionManager.hasFatalError());
    assertFalse(transactionManager.hasOngoingTransaction());
}
Also used : RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) ApiVersion(org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersion) NodeApiVersions(org.apache.kafka.clients.NodeApiVersions) Node(org.apache.kafka.common.Node) ExecutionException(java.util.concurrent.ExecutionException) TimeoutException(org.apache.kafka.common.errors.TimeoutException) Test(org.junit.jupiter.api.Test)
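
At the application level, the failure this test drives surfaces through the send future. A minimal sketch of how a producer client observes an expired batch, assuming an already-configured KafkaProducer<String, String> named producer (hypothetical; TimeoutException is org.apache.kafka.common.errors.TimeoutException and ExecutionException is java.util.concurrent.ExecutionException, matching the imports above):

Future<RecordMetadata> future = producer.send(new ProducerRecord<>("my-topic", "key", "value"));
try {
    // Blocks until the broker acknowledges the batch or it expires (delivery.timeout.ms).
    RecordMetadata metadata = future.get();
} catch (ExecutionException e) {
    if (e.getCause() instanceof TimeoutException) {
        // The batch expired before it could be delivered; inside a transaction this is
        // fatal, which is what the test asserts via transactionManager.hasFatalError().
    }
}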

Example 77 with RecordMetadata

Use of org.apache.kafka.clients.producer.RecordMetadata in project kafka by apache.

From the class ProcessingContextTest, method testReport.

private void testReport(int numberOfReports) {
    ProcessingContext context = new ProcessingContext();
    List<CompletableFuture<RecordMetadata>> fs = IntStream.range(0, numberOfReports)
            .mapToObj(i -> new CompletableFuture<RecordMetadata>())
            .collect(Collectors.toList());
    context.reporters(IntStream.range(0, numberOfReports)
            .mapToObj(i -> (ErrorReporter) c -> fs.get(i))
            .collect(Collectors.toList()));
    Future<Void> result = context.report();
    fs.forEach(f -> {
        assertFalse(result.isDone());
        f.complete(new RecordMetadata(new TopicPartition("t", 0), 0, 0, 0, 0, 0));
    });
    assertTrue(result.isDone());
}
Also used : IntStream(java.util.stream.IntStream) TopicPartition(org.apache.kafka.common.TopicPartition) List(java.util.List) Future(java.util.concurrent.Future) Assert.assertFalse(org.junit.Assert.assertFalse) Assert.assertTrue(org.junit.Assert.assertTrue) Test(org.junit.Test) CompletableFuture(java.util.concurrent.CompletableFuture) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) Collectors(java.util.stream.Collectors)
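
The property asserted here (the aggregate Future completes only after every reporter's future has completed) is the same composition that java.util.concurrent.CompletableFuture.allOf provides. A minimal sketch of that composition, not the actual Connect implementation:

static CompletableFuture<Void> allReported(List<CompletableFuture<RecordMetadata>> reporterFutures) {
    // allOf completes only once every underlying future has completed.
    return CompletableFuture.allOf(reporterFutures.toArray(new CompletableFuture[0]));
}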

Example 78 with RecordMetadata

Use of org.apache.kafka.clients.producer.RecordMetadata in project kafka by apache.

From the class ClientCompatibilityTest, method testProduce.

public void testProduce() throws Exception {
    Properties producerProps = new Properties();
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, testConfig.bootstrapServer);
    ByteArraySerializer serializer = new ByteArraySerializer();
    KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps, serializer, serializer);
    ProducerRecord<byte[], byte[]> record1 = new ProducerRecord<>(testConfig.topic, message1);
    Future<RecordMetadata> future1 = producer.send(record1);
    ProducerRecord<byte[], byte[]> record2 = new ProducerRecord<>(testConfig.topic, message2);
    Future<RecordMetadata> future2 = producer.send(record2);
    producer.flush();
    // get() blocks until each record is acknowledged and throws if either send failed.
    future1.get();
    future2.get();
    producer.close();
}
Also used : KafkaProducer(org.apache.kafka.clients.producer.KafkaProducer) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) Properties(java.util.Properties) ByteArraySerializer(org.apache.kafka.common.serialization.ByteArraySerializer)
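
Blocking on the returned futures, as above, is the simplest way to confirm delivery. The non-blocking alternative is to pass a Callback to send(): on success the callback receives the RecordMetadata, and on failure a non-null exception. A minimal sketch using the same producer and record:

producer.send(record1, (metadata, exception) -> {
    if (exception != null) {
        // Delivery failed.
    } else {
        // metadata.topic(), metadata.partition(), and metadata.offset() identify where the record landed.
    }
});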

Example 79 with RecordMetadata

Use of org.apache.kafka.clients.producer.RecordMetadata in project kafka by apache.

From the class KafkaBasedLogTest, method testSendAndReadToEnd.

@Test
public void testSendAndReadToEnd() throws Exception {
    expectStart();
    TestFuture<RecordMetadata> tp0Future = new TestFuture<>();
    ProducerRecord<String, String> tp0Record = new ProducerRecord<>(TOPIC, TP0_KEY, TP0_VALUE);
    Capture<org.apache.kafka.clients.producer.Callback> callback0 = EasyMock.newCapture();
    EasyMock.expect(producer.send(EasyMock.eq(tp0Record), EasyMock.capture(callback0))).andReturn(tp0Future);
    TestFuture<RecordMetadata> tp1Future = new TestFuture<>();
    ProducerRecord<String, String> tp1Record = new ProducerRecord<>(TOPIC, TP1_KEY, TP1_VALUE);
    Capture<org.apache.kafka.clients.producer.Callback> callback1 = EasyMock.newCapture();
    EasyMock.expect(producer.send(EasyMock.eq(tp1Record), EasyMock.capture(callback1))).andReturn(tp1Future);
    // Producer flushes when read to log end is called
    producer.flush();
    PowerMock.expectLastCall();
    expectStop();
    PowerMock.replayAll();
    Map<TopicPartition, Long> endOffsets = new HashMap<>();
    endOffsets.put(TP0, 0L);
    endOffsets.put(TP1, 0L);
    consumer.updateEndOffsets(endOffsets);
    store.start();
    assertEquals(CONSUMER_ASSIGNMENT, consumer.assignment());
    assertEquals(0L, consumer.position(TP0));
    assertEquals(0L, consumer.position(TP1));
    // Set some keys
    final AtomicInteger invoked = new AtomicInteger(0);
    org.apache.kafka.clients.producer.Callback producerCallback = (metadata, exception) -> invoked.incrementAndGet();
    store.send(TP0_KEY, TP0_VALUE, producerCallback);
    store.send(TP1_KEY, TP1_VALUE, producerCallback);
    assertEquals(0, invoked.get());
    // Output not used, so safe to not return a real value for testing
    tp1Future.resolve((RecordMetadata) null);
    callback1.getValue().onCompletion(null, null);
    assertEquals(1, invoked.get());
    tp0Future.resolve((RecordMetadata) null);
    callback0.getValue().onCompletion(null, null);
    assertEquals(2, invoked.get());
    // Now we should have to wait for the records to be read back when we call readToEnd()
    final AtomicBoolean getInvoked = new AtomicBoolean(false);
    final FutureCallback<Void> readEndFutureCallback = new FutureCallback<>((error, result) -> getInvoked.set(true));
    consumer.schedulePollTask(() -> {
        // Once we're synchronized in a poll, start the read to end and schedule the exact set of poll events
        // that should follow. This readToEnd call will immediately wakeup this consumer.poll() call without
        // returning any data.
        Map<TopicPartition, Long> newEndOffsets = new HashMap<>();
        newEndOffsets.put(TP0, 2L);
        newEndOffsets.put(TP1, 2L);
        consumer.updateEndOffsets(newEndOffsets);
        store.readToEnd(readEndFutureCallback);
        // Should keep polling until it reaches current log end offset for all partitions
        consumer.scheduleNopPollTask();
        consumer.scheduleNopPollTask();
        consumer.scheduleNopPollTask();
        consumer.schedulePollTask(() -> {
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 0, 0L, TimestampType.CREATE_TIME, 0, 0, TP0_KEY, TP0_VALUE, new RecordHeaders(), Optional.empty()));
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 0, 1, 0L, TimestampType.CREATE_TIME, 0, 0, TP0_KEY, TP0_VALUE_NEW, new RecordHeaders(), Optional.empty()));
            consumer.addRecord(new ConsumerRecord<>(TOPIC, 1, 0, 0L, TimestampType.CREATE_TIME, 0, 0, TP1_KEY, TP1_VALUE, new RecordHeaders(), Optional.empty()));
        });
        consumer.schedulePollTask(() -> consumer.addRecord(new ConsumerRecord<>(TOPIC, 1, 1, 0L, TimestampType.CREATE_TIME, 0, 0, TP1_KEY, TP1_VALUE_NEW, new RecordHeaders(), Optional.empty())));
    // Already have FutureCallback that should be invoked/awaited, so no need for follow up finishedLatch
    });
    readEndFutureCallback.get(10000, TimeUnit.MILLISECONDS);
    assertTrue(getInvoked.get());
    assertEquals(2, consumedRecords.size());
    assertEquals(2, consumedRecords.get(TP0).size());
    assertEquals(TP0_VALUE, consumedRecords.get(TP0).get(0).value());
    assertEquals(TP0_VALUE_NEW, consumedRecords.get(TP0).get(1).value());
    assertEquals(2, consumedRecords.get(TP1).size());
    assertEquals(TP1_VALUE, consumedRecords.get(TP1).get(0).value());
    assertEquals(TP1_VALUE_NEW, consumedRecords.get(TP1).get(1).value());
    // Cleanup
    store.stop();
    assertFalse(Whitebox.<Thread>getInternalState(store, "thread").isAlive());
    assertTrue(consumer.closed());
    PowerMock.verifyAll();
}
Also used : MockTime(org.apache.kafka.common.utils.MockTime) Arrays(java.util.Arrays) MockConsumer(org.apache.kafka.clients.consumer.MockConsumer) KafkaException(org.apache.kafka.common.KafkaException) OffsetResetStrategy(org.apache.kafka.clients.consumer.OffsetResetStrategy) ByteBuffer(java.nio.ByteBuffer) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Map(java.util.Map) TimestampType(org.apache.kafka.common.record.TimestampType) CommonClientConfigs(org.apache.kafka.clients.CommonClientConfigs) TopicPartition(org.apache.kafka.common.TopicPartition) Time(org.apache.kafka.common.utils.Time) WakeupException(org.apache.kafka.common.errors.WakeupException) Set(java.util.Set) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) PartitionInfo(org.apache.kafka.common.PartitionInfo) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) PowerMock(org.powermock.api.easymock.PowerMock) CountDownLatch(java.util.concurrent.CountDownLatch) List(java.util.List) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) Assert.assertFalse(org.junit.Assert.assertFalse) Errors(org.apache.kafka.common.protocol.Errors) Optional(java.util.Optional) Node(org.apache.kafka.common.Node) Whitebox(org.powermock.reflect.Whitebox) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) Assert.assertThrows(org.junit.Assert.assertThrows) RunWith(org.junit.runner.RunWith) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) HashMap(java.util.HashMap) LeaderNotAvailableException(org.apache.kafka.common.errors.LeaderNotAvailableException) AtomicReference(java.util.concurrent.atomic.AtomicReference) Supplier(java.util.function.Supplier) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) KafkaProducer(org.apache.kafka.clients.producer.KafkaProducer) PrepareForTest(org.powermock.core.classloader.annotations.PrepareForTest) PowerMockRunner(org.powermock.modules.junit4.PowerMockRunner) ProducerConfig(org.apache.kafka.clients.producer.ProducerConfig) PowerMockIgnore(org.powermock.core.classloader.annotations.PowerMockIgnore) Before(org.junit.Before) Capture(org.easymock.Capture) TimeoutException(org.apache.kafka.common.errors.TimeoutException) Assert.assertNotNull(org.junit.Assert.assertNotNull) Mock(org.powermock.api.easymock.annotation.Mock) Assert.assertTrue(org.junit.Assert.assertTrue) Test(org.junit.Test) EasyMock(org.easymock.EasyMock) TimeUnit(java.util.concurrent.TimeUnit) Assert.assertNull(org.junit.Assert.assertNull) UnsupportedVersionException(org.apache.kafka.common.errors.UnsupportedVersionException) Assert.assertEquals(org.junit.Assert.assertEquals)
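
The consumer side of this test is scripted with MockConsumer: schedulePollTask queues work to run inside a poll() call, and addRecord queues records for poll() to return. A minimal standalone sketch of that pattern (topic, key, and value are placeholders; assumes the usual org.apache.kafka.clients.consumer imports plus java.time.Duration and java.util.Collections):

MockConsumer<String, String> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
TopicPartition tp = new TopicPartition("topic", 0);
consumer.assign(Collections.singletonList(tp));
consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));
// Run inside the next poll(): hand the consumer a record to return.
consumer.schedulePollTask(() ->
    consumer.addRecord(new ConsumerRecord<>("topic", 0, 0L, "key", "value")));
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));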

Example 80 with RecordMetadata

Use of org.apache.kafka.clients.producer.RecordMetadata in project kafka by apache.

From the class KafkaLog4jAppenderTest, method replaceProducerWithMocked.

private void replaceProducerWithMocked(MockKafkaLog4jAppender mockKafkaLog4jAppender, boolean success) {
    @SuppressWarnings("unchecked") MockProducer<byte[], byte[]> producer = mock(MockProducer.class);
    CompletableFuture<RecordMetadata> future = new CompletableFuture<>();
    if (success) {
        future.complete(new RecordMetadata(new TopicPartition("tp", 0), 0, 0, 0, 0, 0));
    } else {
        // Simulated failure; note this is java.util.concurrent.TimeoutException (see imports below).
        future.completeExceptionally(new TimeoutException("simulated timeout"));
    }
    when(producer.send(any())).thenReturn(future);
    // reconfiguring mock appender
    mockKafkaLog4jAppender.setKafkaProducer(producer);
    mockKafkaLog4jAppender.activateOptions();
}
Also used : RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) CompletableFuture(java.util.concurrent.CompletableFuture) TopicPartition(org.apache.kafka.common.TopicPartition) TimeoutException(java.util.concurrent.TimeoutException)
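
Note that the TimeoutException completed exceptionally here is java.util.concurrent.TimeoutException (see the import list above), unlike Kafka's org.apache.kafka.common.errors.TimeoutException in Example 76; completeExceptionally accepts any Throwable. A minimal sketch of how calling code observes the simulated failure, assuming a ProducerRecord named record (hypothetical):

Future<RecordMetadata> future = producer.send(record);
try {
    future.get();
} catch (ExecutionException e) {
    // e.getCause() is the TimeoutException completed above.
}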

Aggregations

RecordMetadata (org.apache.kafka.clients.producer.RecordMetadata): 191
Test (org.junit.Test): 64
Node (org.apache.kafka.common.Node): 50
Test (org.junit.jupiter.api.Test): 50
TopicPartition (org.apache.kafka.common.TopicPartition): 48
ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord): 46
ExecutionException (java.util.concurrent.ExecutionException): 35
Callback (org.apache.kafka.clients.producer.Callback): 33
KafkaProducer (org.apache.kafka.clients.producer.KafkaProducer): 31
Properties (java.util.Properties): 30
HashMap (java.util.HashMap): 24
TimeoutException (org.apache.kafka.common.errors.TimeoutException): 23
ArrayList (java.util.ArrayList): 21
KafkaException (org.apache.kafka.common.KafkaException): 19
List (java.util.List): 15
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 15
Metrics (org.apache.kafka.common.metrics.Metrics): 15
LinkedHashMap (java.util.LinkedHashMap): 13
Future (java.util.concurrent.Future): 13
Map (java.util.Map): 12