Example 36 with Time

Use of org.apache.kafka.common.utils.Time in project kafka by apache.

From class KafkaProducerTest, method verifyInvalidGroupMetadata.

private void verifyInvalidGroupMetadata(ConsumerGroupMetadata groupMetadata) {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "some.id");
    configs.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10000);
    configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9000");
    // mock clock that advances 1 ms on every read
    Time time = new MockTime(1);
    MetadataResponse initialUpdateResponse = RequestTestUtils.metadataUpdateWith(1, singletonMap("topic", 1));
    ProducerMetadata metadata = newMetadata(0, Long.MAX_VALUE);
    MockClient client = new MockClient(time, metadata);
    client.updateMetadata(initialUpdateResponse);
    // throttle the single broker node for 5 seconds
    Node node = metadata.fetch().nodes().get(0);
    client.throttle(node, 5000);
    // canned responses for the transactional bootstrap: coordinator lookup, then InitProducerId
    client.prepareResponse(FindCoordinatorResponse.prepareResponse(Errors.NONE, "some.id", NODE));
    client.prepareResponse(initProducerIdResponse(1L, (short) 5, Errors.NONE));
    try (Producer<String, String> producer = kafkaProducer(configs, new StringSerializer(), new StringSerializer(), metadata, client, null, time)) {
        producer.initTransactions();
        producer.beginTransaction();
        // invalid group metadata should be rejected up front with IllegalArgumentException
        assertThrows(IllegalArgumentException.class, () -> producer.sendOffsetsToTransaction(Collections.emptyMap(), groupMetadata));
    }
}
Also used: ProducerMetadata (org.apache.kafka.clients.producer.internals.ProducerMetadata), HashMap (java.util.HashMap), Node (org.apache.kafka.common.Node), MetadataResponse (org.apache.kafka.common.requests.MetadataResponse), MockTime (org.apache.kafka.common.utils.MockTime), Time (org.apache.kafka.common.utils.Time), StringSerializer (org.apache.kafka.common.serialization.StringSerializer), MockClient (org.apache.kafka.clients.MockClient)
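
A minimal, self-contained sketch of the MockTime auto-tick behavior the test above leans on (assuming the MockTime test utility from the clients module is on the classpath; the class name here is illustrative): constructed as new MockTime(1), the clock advances by 1 ms on every read, so timeout and elapsed-time paths make progress without real sleeps.

import org.apache.kafka.common.utils.MockTime;
import org.apache.kafka.common.utils.Time;

public class MockTimeAutoTickDemo {
    public static void main(String[] args) {
        Time time = new MockTime(1); // auto-tick: each query advances the clock by 1 ms
        long first = time.milliseconds();
        long second = time.milliseconds();
        System.out.println(second - first); // prints 1, with no wall-clock delay
    }
}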

Example 37 with Time

Use of org.apache.kafka.common.utils.Time in project kafka by apache.

From class KafkaProducerTest, method shouldCloseProperlyAndThrowIfInterrupted.

@Test
public void shouldCloseProperlyAndThrowIfInterrupted() throws Exception {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9999");
    configs.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, MockPartitioner.class.getName());
    configs.put(ProducerConfig.BATCH_SIZE_CONFIG, "1");
    Time time = new MockTime();
    MetadataResponse initialUpdateResponse = RequestTestUtils.metadataUpdateWith(1, singletonMap("topic", 1));
    ProducerMetadata metadata = newMetadata(0, Long.MAX_VALUE);
    MockClient client = new MockClient(time, metadata);
    client.updateMetadata(initialUpdateResponse);
    final Producer<String, String> producer = kafkaProducer(configs, new StringSerializer(), new StringSerializer(), metadata, client, null, time);
    ExecutorService executor = Executors.newSingleThreadExecutor();
    final AtomicReference<Exception> closeException = new AtomicReference<>();
    try {
        Future<?> future = executor.submit(() -> {
            producer.send(new ProducerRecord<>("topic", "key", "value"));
            try {
                producer.close();
                fail("Close should block and throw.");
            } catch (Exception e) {
                closeException.set(e);
            }
        });
        // Close producer should not complete until send succeeds
        try {
            future.get(100, TimeUnit.MILLISECONDS);
            fail("Close completed without waiting for send");
        } catch (java.util.concurrent.TimeoutException expected) {
            /* ignore */
        }
        // Ensure send has started
        client.waitForRequests(1, 1000);
        assertTrue(future.cancel(true), "Close terminated prematurely");
        TestUtils.waitForCondition(() -> closeException.get() != null, "InterruptException did not occur within timeout.");
        assertTrue(closeException.get() instanceof InterruptException, "Expected exception not thrown " + closeException);
    } finally {
        executor.shutdownNow();
    }
}
Also used: ProducerMetadata (org.apache.kafka.clients.producer.internals.ProducerMetadata), HashMap (java.util.HashMap), InterruptException (org.apache.kafka.common.errors.InterruptException), MockTime (org.apache.kafka.common.utils.MockTime), Time (org.apache.kafka.common.utils.Time), AtomicReference (java.util.concurrent.atomic.AtomicReference), KafkaException (org.apache.kafka.common.KafkaException), InvalidTopicException (org.apache.kafka.common.errors.InvalidTopicException), ExecutionException (java.util.concurrent.ExecutionException), RecordTooLargeException (org.apache.kafka.common.errors.RecordTooLargeException), TimeoutException (org.apache.kafka.common.errors.TimeoutException), ConfigException (org.apache.kafka.common.config.ConfigException), MockPartitioner (org.apache.kafka.test.MockPartitioner), MetadataResponse (org.apache.kafka.common.requests.MetadataResponse), ExecutorService (java.util.concurrent.ExecutorService), StringSerializer (org.apache.kafka.common.serialization.StringSerializer), MockClient (org.apache.kafka.clients.MockClient), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest), Test (org.junit.jupiter.api.Test)
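
The blocking/interrupt mechanics here are plain java.util.concurrent, not Kafka-specific: future.cancel(true) interrupts the worker thread, and whatever is blocking inside surfaces the interrupt (the producer rewraps it in Kafka's InterruptException). A JDK-only sketch of the same pattern, with illustrative names and timings:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class CancelInterruptDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        AtomicReference<Exception> caught = new AtomicReference<>();
        Future<?> future = executor.submit(() -> {
            try {
                Thread.sleep(60_000); // stands in for the blocking producer.close()
            } catch (InterruptedException e) {
                caught.set(e); // KafkaProducer rethrows this as InterruptException
            }
        });
        Thread.sleep(100);   // let the task reach its blocking call
        future.cancel(true); // interrupt it, as the test does
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("caught: " + caught.get());
    }
}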

Example 38 with Time

Use of org.apache.kafka.common.utils.Time in project kafka by apache.

From class KafkaConsumerTest, method testMeasureCommittedDuration.

@Test
public void testMeasureCommittedDuration() {
    long offset1 = 10000;
    // the mock clock advances 1 second on every read
    Time time = new MockTime(Duration.ofSeconds(1).toMillis());
    SubscriptionState subscription = new SubscriptionState(new LogContext(), OffsetResetStrategy.EARLIEST);
    ConsumerMetadata metadata = createMetadata(subscription);
    MockClient client = new MockClient(time, metadata);
    initMetadata(client, Collections.singletonMap(topic, 2));
    Node node = metadata.fetch().nodes().get(0);
    KafkaConsumer<String, String> consumer = newConsumer(time, client, subscription, metadata, assignor, true, groupInstanceId);
    consumer.assign(singletonList(tp0));
    // lookup coordinator
    client.prepareResponseFrom(FindCoordinatorResponse.prepareResponse(Errors.NONE, groupId, node), node);
    Node coordinator = new Node(Integer.MAX_VALUE - node.id(), node.host(), node.port());
    // fetch offset for one topic
    client.prepareResponseFrom(offsetResponse(Collections.singletonMap(tp0, offset1), Errors.NONE), coordinator);
    consumer.committed(Collections.singleton(tp0)).get(tp0).offset();
    // the committed() call above consumed at least one ~1s tick of the mock clock
    final Metric metric = consumer.metrics().get(consumer.metrics.metricName("committed-time-ns-total", "consumer-metrics"));
    assertTrue((Double) metric.metricValue() >= Duration.ofMillis(999).toNanos());
}
Also used: ConsumerMetadata (org.apache.kafka.clients.consumer.internals.ConsumerMetadata), SubscriptionState (org.apache.kafka.clients.consumer.internals.SubscriptionState), Node (org.apache.kafka.common.Node), LogContext (org.apache.kafka.common.utils.LogContext), MockTime (org.apache.kafka.common.utils.MockTime), Time (org.apache.kafka.common.utils.Time), Metric (org.apache.kafka.common.Metric), MockClient (org.apache.kafka.clients.MockClient), Test (org.junit.jupiter.api.Test)
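
The test reaches into the consumer's package-private metrics registry to build the MetricName; outside the test package, the same metric is reachable through the public metrics() map. A hedged sketch, assuming only a constructed consumer (constructing a KafkaConsumer does not contact the broker, so localhost:9092 is just a placeholder):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommittedDurationMetricLookup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder; nothing connects here
        props.put("group.id", "demo-group");
        try (KafkaConsumer<String, String> consumer =
                 new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer())) {
            // match on metric name and group rather than constructing a MetricName by hand
            for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
                if ("consumer-metrics".equals(e.getKey().group())
                        && "committed-time-ns-total".equals(e.getKey().name())) {
                    System.out.println(e.getKey().name() + " = " + e.getValue().metricValue());
                }
            }
        }
    }
}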

Example 39 with Time

Use of org.apache.kafka.common.utils.Time in project kafka by apache.

From class KafkaConsumerTest, method testClosingConsumerUnregistersConsumerMetrics.

@Test
public void testClosingConsumerUnregistersConsumerMetrics() {
    Time time = new MockTime(1L);
    ConsumerMetadata metadata = createMetadata(subscription);
    MockClient client = new MockClient(time, metadata);
    initMetadata(client, Collections.singletonMap(topic, 1));
    KafkaConsumer<String, String> consumer = newConsumer(time, client, subscription, metadata, new RoundRobinAssignor(), true, groupInstanceId);
    consumer.subscribe(singletonList(topic));
    // consumer-level metrics are registered while the consumer is open
    assertTrue(consumerMetricPresent(consumer, "last-poll-seconds-ago"));
    assertTrue(consumerMetricPresent(consumer, "time-between-poll-avg"));
    assertTrue(consumerMetricPresent(consumer, "time-between-poll-max"));
    consumer.close();
    // close() must unregister all consumer-level metrics
    assertFalse(consumerMetricPresent(consumer, "last-poll-seconds-ago"));
    assertFalse(consumerMetricPresent(consumer, "time-between-poll-avg"));
    assertFalse(consumerMetricPresent(consumer, "time-between-poll-max"));
}
Also used: ConsumerMetadata (org.apache.kafka.clients.consumer.internals.ConsumerMetadata), MockTime (org.apache.kafka.common.utils.MockTime), Time (org.apache.kafka.common.utils.Time), MockClient (org.apache.kafka.clients.MockClient), Test (org.junit.jupiter.api.Test)
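
consumerMetricPresent is a private helper of KafkaConsumerTest, not public API. A plausible stand-in (hypothetical, shown only so the assertions above are self-explanatory) scans the public metrics() registry for the name in the "consumer-metrics" group:

import org.apache.kafka.clients.consumer.KafkaConsumer;

final class ConsumerMetricCheck {
    // Hypothetical equivalent of the test's private helper: true if a metric
    // with this name exists in the "consumer-metrics" group.
    static boolean consumerMetricPresent(KafkaConsumer<?, ?> consumer, String metricName) {
        return consumer.metrics().keySet().stream()
                .anyMatch(m -> "consumer-metrics".equals(m.group()) && metricName.equals(m.name()));
    }
}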

Example 40 with Time

Use of org.apache.kafka.common.utils.Time in project kafka by apache.

From class KafkaConsumerTest, method testReturnRecordsDuringRebalance.

@Test
public void testReturnRecordsDuringRebalance() throws InterruptedException {
    Time time = new MockTime(1L);
    ConsumerMetadata metadata = createMetadata(subscription);
    MockClient client = new MockClient(time, metadata);
    ConsumerPartitionAssignor assignor = new CooperativeStickyAssignor();
    KafkaConsumer<String, String> consumer = newConsumer(time, client, subscription, metadata, assignor, true, groupInstanceId);
    initMetadata(client, Utils.mkMap(Utils.mkEntry(topic, 1), Utils.mkEntry(topic2, 1), Utils.mkEntry(topic3, 1)));
    consumer.subscribe(Arrays.asList(topic, topic2), getConsumerRebalanceListener(consumer));
    Node node = metadata.fetch().nodes().get(0);
    Node coordinator = prepareRebalance(client, node, assignor, Arrays.asList(tp0, t2p0), null);
    // a poll with non-zero milliseconds would complete three round-trips (discover, join, sync)
    TestUtils.waitForCondition(() -> {
        consumer.poll(Duration.ofMillis(100L));
        return consumer.assignment().equals(Utils.mkSet(tp0, t2p0));
    }, "Does not complete rebalance in time");
    assertEquals(Utils.mkSet(topic, topic2), consumer.subscription());
    assertEquals(Utils.mkSet(tp0, t2p0), consumer.assignment());
    // prepare a response of the outstanding fetch so that we have data available on the next poll
    Map<TopicPartition, FetchInfo> fetches1 = new HashMap<>();
    fetches1.put(tp0, new FetchInfo(0, 1));
    fetches1.put(t2p0, new FetchInfo(0, 10));
    client.respondFrom(fetchResponse(fetches1), node);
    ConsumerRecords<String, String> records = consumer.poll(Duration.ZERO);
    // verify that the fetch occurred as expected
    assertEquals(11, records.count());
    assertEquals(1L, consumer.position(tp0));
    assertEquals(10L, consumer.position(t2p0));
    // prepare the next response of the prefetch
    fetches1.clear();
    fetches1.put(tp0, new FetchInfo(1, 1));
    fetches1.put(t2p0, new FetchInfo(10, 20));
    client.respondFrom(fetchResponse(fetches1), node);
    // subscription change
    consumer.subscribe(Arrays.asList(topic, topic3), getConsumerRebalanceListener(consumer));
    // verify that subscription has changed but assignment is still unchanged
    assertEquals(Utils.mkSet(topic, topic3), consumer.subscription());
    assertEquals(Utils.mkSet(tp0, t2p0), consumer.assignment());
    // mock the offset commit response for the partitions about to be revoked
    Map<TopicPartition, Long> partitionOffsets1 = new HashMap<>();
    partitionOffsets1.put(t2p0, 10L);
    AtomicBoolean commitReceived = prepareOffsetCommitResponse(client, coordinator, partitionOffsets1);
    // poll once which would not complete the rebalance
    records = consumer.poll(Duration.ZERO);
    // clear out the prefetch so it doesn't interfere with the rest of the test
    fetches1.clear();
    fetches1.put(tp0, new FetchInfo(2, 1));
    client.respondFrom(fetchResponse(fetches1), node);
    // verify that the fetch still occurred as expected
    assertEquals(Utils.mkSet(topic, topic3), consumer.subscription());
    assertEquals(Collections.singleton(tp0), consumer.assignment());
    assertEquals(1, records.count());
    assertEquals(2L, consumer.position(tp0));
    // verify that the offset commits occurred as expected
    assertTrue(commitReceived.get());
    // mock rebalance responses
    client.respondFrom(joinGroupFollowerResponse(assignor, 2, "memberId", "leaderId", Errors.NONE), coordinator);
    // we need to poll twice: once to receive the join response and send the
    // sync request, and once more to receive the sync response
    records = consumer.poll(Duration.ZERO);
    // the rebalance should not have completed yet
    assertEquals(Utils.mkSet(topic, topic3), consumer.subscription());
    assertEquals(Collections.singleton(tp0), consumer.assignment());
    assertEquals(1, records.count());
    assertEquals(3L, consumer.position(tp0));
    fetches1.clear();
    fetches1.put(tp0, new FetchInfo(3, 1));
    client.respondFrom(fetchResponse(fetches1), node);
    // now complete the rebalance
    client.respondFrom(syncGroupResponse(Arrays.asList(tp0, t3p0), Errors.NONE), coordinator);
    AtomicInteger count = new AtomicInteger(0);
    TestUtils.waitForCondition(() -> {
        ConsumerRecords<String, String> recs = consumer.poll(Duration.ofMillis(100L));
        return consumer.assignment().equals(Utils.mkSet(tp0, t3p0)) && count.addAndGet(recs.count()) == 1;
    }, "Does not complete rebalance in time");
    // t3p0 should now be assigned, but its records have not been returned yet
    assertEquals(Utils.mkSet(topic, topic3), consumer.subscription());
    assertEquals(Utils.mkSet(tp0, t3p0), consumer.assignment());
    assertEquals(4L, consumer.position(tp0));
    assertEquals(0L, consumer.position(t3p0));
    fetches1.clear();
    fetches1.put(tp0, new FetchInfo(4, 1));
    fetches1.put(t3p0, new FetchInfo(0, 100));
    client.respondFrom(fetchResponse(fetches1), node);
    count.set(0);
    TestUtils.waitForCondition(() -> {
        ConsumerRecords<String, String> recs = consumer.poll(Duration.ofMillis(100L));
        return count.addAndGet(recs.count()) == 101;
    }, "Does not complete rebalance in time");
    assertEquals(5L, consumer.position(tp0));
    assertEquals(100L, consumer.position(t3p0));
    client.requests().clear();
    consumer.unsubscribe();
    consumer.close();
}
Also used: ConsumerMetadata (org.apache.kafka.clients.consumer.internals.ConsumerMetadata), LinkedHashMap (java.util.LinkedHashMap), HashMap (java.util.HashMap), Node (org.apache.kafka.common.Node), MockTime (org.apache.kafka.common.utils.MockTime), Time (org.apache.kafka.common.utils.Time), AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean), AtomicInteger (java.util.concurrent.atomic.AtomicInteger), TopicPartition (org.apache.kafka.common.TopicPartition), OptionalLong (java.util.OptionalLong), MockClient (org.apache.kafka.clients.MockClient), Test (org.junit.jupiter.api.Test)
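
Example 40 exercises the cooperative rebalance protocol, under which a consumer keeps returning records from partitions it retains while the rebalance is still in flight. A hedged sketch of the corresponding production-side wiring (broker address, group id, and topic names are placeholders): with the CooperativeStickyAssignor, onPartitionsRevoked receives only the partitions actually migrating away, not the whole assignment.

import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CooperativeRebalanceDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "demo-group");
        props.put("partition.assignment.strategy", CooperativeStickyAssignor.class.getName());
        try (KafkaConsumer<String, String> consumer =
                 new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer())) {
            consumer.subscribe(Arrays.asList("topic", "topic2"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // commit or flush state for just the partitions moving away
                    System.out.println("revoked: " + partitions);
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    System.out.println("assigned: " + partitions);
                }
            });
            // poll(...) would drive the rebalance; omitted since no broker is running here
        }
    }
}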

Aggregations

Time (org.apache.kafka.common.utils.Time): 125
MockTime (org.apache.kafka.common.utils.MockTime): 107
Test (org.junit.jupiter.api.Test): 63
MockClient (org.apache.kafka.clients.MockClient): 55
HashMap (java.util.HashMap): 53
Cluster (org.apache.kafka.common.Cluster): 41
Test (org.junit.Test): 40
Node (org.apache.kafka.common.Node): 39
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 32
MetadataResponse (org.apache.kafka.common.requests.MetadataResponse): 31
StringSerializer (org.apache.kafka.common.serialization.StringSerializer): 30
Metadata (org.apache.kafka.clients.Metadata): 28
ProducerMetadata (org.apache.kafka.clients.producer.internals.ProducerMetadata): 25
TopicPartition (org.apache.kafka.common.TopicPartition): 22
PartitionAssignor (org.apache.kafka.clients.consumer.internals.PartitionAssignor): 21
LogContext (org.apache.kafka.common.utils.LogContext): 17
Map (java.util.Map): 14
Properties (java.util.Properties): 14
MetricName (org.apache.kafka.common.MetricName): 14
ExecutionException (java.util.concurrent.ExecutionException): 13
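
The counts above reflect a single pattern: Kafka components take a Time in their constructors so production code can pass Time.SYSTEM while tests pass a MockTime they advance by hand. A small sketch with a hypothetical Throttler component (not a Kafka class) showing why that injection makes time-dependent logic testable:

import org.apache.kafka.common.utils.MockTime;
import org.apache.kafka.common.utils.Time;

public class TimeInjectionDemo {
    // Hypothetical component: depends on Time instead of System.currentTimeMillis().
    static final class Throttler {
        private final Time time;
        private long lastActionMs;

        Throttler(Time time) {
            this.time = time;
            this.lastActionMs = time.milliseconds();
        }

        boolean allowedAfter(long intervalMs) {
            long now = time.milliseconds();
            if (now - lastActionMs >= intervalMs) {
                lastActionMs = now;
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        MockTime time = new MockTime(); // no auto-tick; we advance it explicitly
        Throttler throttler = new Throttler(time);
        System.out.println(throttler.allowedAfter(1000)); // false: no mock time has passed
        time.sleep(1000);                                 // advance the mock clock by 1s
        System.out.println(throttler.allowedAfter(1000)); // true
        // in production the same Throttler would be built with Time.SYSTEM
    }
}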