Example 11 with ProduceResponse

Use of org.apache.kafka.common.requests.ProduceResponse in project apache-kafka-on-k8s by banzaicloud.

From class SenderTest, method testDownConversionForMismatchedMagicValues.

@Test
public void testDownConversionForMismatchedMagicValues() throws Exception {
    // it can happen that we construct a record set with mismatching magic values (perhaps
    // because the partition leader changed after the record set was initially constructed)
    // in this case, we down-convert record sets with newer magic values to match the oldest
    // created record set
    long offset = 0;
    // start off supporting produce request v3
    apiVersions.update("0", NodeApiVersions.create());
    Future<RecordMetadata> future1 = accumulator.append(tp0, 0L, "key".getBytes(), "value".getBytes(), null, null, MAX_BLOCK_TIMEOUT).future;
    // now the partition leader supports only v2
    apiVersions.update("0", NodeApiVersions.create(Collections.singleton(new ApiVersionsResponse.ApiVersion(ApiKeys.PRODUCE.id, (short) 0, (short) 2))));
    Future<RecordMetadata> future2 = accumulator.append(tp1, 0L, "key".getBytes(), "value".getBytes(), null, null, MAX_BLOCK_TIMEOUT).future;
    // restore support for produce request v3
    apiVersions.update("0", NodeApiVersions.create());
    ProduceResponse.PartitionResponse resp = new ProduceResponse.PartitionResponse(Errors.NONE, offset, RecordBatch.NO_TIMESTAMP, 100);
    Map<TopicPartition, ProduceResponse.PartitionResponse> partResp = new HashMap<>();
    partResp.put(tp0, resp);
    partResp.put(tp1, resp);
    ProduceResponse produceResponse = new ProduceResponse(partResp, 0);
    client.prepareResponse(new MockClient.RequestMatcher() {

        @Override
        public boolean matches(AbstractRequest body) {
            ProduceRequest request = (ProduceRequest) body;
            if (request.version() != 2)
                return false;
            Map<TopicPartition, MemoryRecords> recordsMap = request.partitionRecordsOrFail();
            if (recordsMap.size() != 2)
                return false;
            for (MemoryRecords records : recordsMap.values()) {
                if (records == null || records.sizeInBytes() == 0 || !records.hasMatchingMagic(RecordBatch.MAGIC_VALUE_V1))
                    return false;
            }
            return true;
        }
    }, produceResponse);
    // connect
    sender.run(time.milliseconds());
    // send produce request
    sender.run(time.milliseconds());
    assertTrue("Request should be completed", future1.isDone());
    assertTrue("Request should be completed", future2.isDone());
}
Also used : ApiVersionsResponse(org.apache.kafka.common.requests.ApiVersionsResponse) HashMap(java.util.HashMap) LinkedHashMap(java.util.LinkedHashMap) ProduceRequest(org.apache.kafka.common.requests.ProduceRequest) ProduceResponse(org.apache.kafka.common.requests.ProduceResponse) AbstractRequest(org.apache.kafka.common.requests.AbstractRequest) RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) TopicPartition(org.apache.kafka.common.TopicPartition) Map(java.util.Map) MockClient(org.apache.kafka.clients.MockClient) MemoryRecords(org.apache.kafka.common.record.MemoryRecords) Test(org.junit.Test)
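The anonymous RequestMatcher above boils down to a predicate over the request's partition-to-records map: exactly two partitions, each with non-empty records down-converted to magic v1. A minimal, self-contained sketch of that check, using a hypothetical RecordBatchStub in place of Kafka's MemoryRecords (the stub's fields are illustrative, not the real API):

```java
import java.util.HashMap;
import java.util.Map;

public class MagicValueMatcherSketch {
    static final byte MAGIC_VALUE_V1 = 1;

    // Hypothetical stand-in for MemoryRecords: just the two properties the matcher inspects.
    static class RecordBatchStub {
        final int sizeInBytes;
        final byte magic;
        RecordBatchStub(int sizeInBytes, byte magic) {
            this.sizeInBytes = sizeInBytes;
            this.magic = magic;
        }
    }

    // Mirrors the matches() body above: reject unless exactly two partitions,
    // each carrying non-empty records with the expected (older) magic value.
    static boolean matches(Map<String, RecordBatchStub> recordsMap) {
        if (recordsMap.size() != 2)
            return false;
        for (RecordBatchStub records : recordsMap.values()) {
            if (records == null || records.sizeInBytes == 0 || records.magic != MAGIC_VALUE_V1)
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, RecordBatchStub> recordsMap = new HashMap<>();
        recordsMap.put("tp0", new RecordBatchStub(100, MAGIC_VALUE_V1));
        recordsMap.put("tp1", new RecordBatchStub(100, MAGIC_VALUE_V1));
        System.out.println(matches(recordsMap)); // true
    }
}
```

The real matcher additionally casts the AbstractRequest to ProduceRequest and rejects any version other than 2; the stub keeps only the per-partition loop.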

Example 12 with ProduceResponse

Use of org.apache.kafka.common.requests.ProduceResponse in project kafka by apache.

From class NetworkClientTest, method sendThrottledProduceResponse.

private void sendThrottledProduceResponse(int correlationId, int throttleMs, short version) {
    ProduceResponse response = new ProduceResponse(new ProduceResponseData().setThrottleTimeMs(throttleMs));
    sendResponse(response, version, correlationId);
}
Also used : ProduceResponse(org.apache.kafka.common.requests.ProduceResponse) ProduceResponseData(org.apache.kafka.common.message.ProduceResponseData)
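The newer ProduceResponse constructor takes a ProduceResponseData message whose setters are fluent (each returns `this`), which is what makes the one-line construction above possible. A hypothetical stand-in illustrating that setter pattern (not the real Kafka generated class):

```java
// Illustrative sketch of the fluent-setter style used by generated message
// classes like ProduceResponseData; the class and field here are made up.
public class ResponseDataSketch {
    private int throttleTimeMs = 0;

    // Setter returns this, so calls can be chained inline at the construction site.
    ResponseDataSketch setThrottleTimeMs(int throttleTimeMs) {
        this.throttleTimeMs = throttleTimeMs;
        return this;
    }

    int throttleTimeMs() {
        return throttleTimeMs;
    }

    public static void main(String[] args) {
        ResponseDataSketch data = new ResponseDataSketch().setThrottleTimeMs(100);
        System.out.println(data.throttleTimeMs()); // 100
    }
}
```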

Example 13 with ProduceResponse

Use of org.apache.kafka.common.requests.ProduceResponse in project kafka by apache.

From class SenderTest, method testDownConversionForMismatchedMagicValues.

@SuppressWarnings("deprecation")
@Test
public void testDownConversionForMismatchedMagicValues() throws Exception {
    // it can happen that we construct a record set with mismatching magic values (perhaps
    // because the partition leader changed after the record set was initially constructed)
    // in this case, we down-convert record sets with newer magic values to match the oldest
    // created record set
    long offset = 0;
    // start off supporting produce request v3
    apiVersions.update("0", NodeApiVersions.create());
    Future<RecordMetadata> future1 = appendToAccumulator(tp0, 0L, "key", "value");
    // now the partition leader supports only v2
    apiVersions.update("0", NodeApiVersions.create(ApiKeys.PRODUCE.id, (short) 0, (short) 2));
    Future<RecordMetadata> future2 = appendToAccumulator(tp1, 0L, "key", "value");
    // restore support for produce request v3
    apiVersions.update("0", NodeApiVersions.create());
    ProduceResponse.PartitionResponse resp = new ProduceResponse.PartitionResponse(Errors.NONE, offset, RecordBatch.NO_TIMESTAMP, 100);
    Map<TopicPartition, ProduceResponse.PartitionResponse> partResp = new HashMap<>();
    partResp.put(tp0, resp);
    partResp.put(tp1, resp);
    ProduceResponse produceResponse = new ProduceResponse(partResp, 0);
    client.prepareResponse(body -> {
        ProduceRequest request = (ProduceRequest) body;
        if (request.version() != 2)
            return false;
        Map<TopicPartition, MemoryRecords> recordsMap = partitionRecords(request);
        if (recordsMap.size() != 2)
            return false;
        for (MemoryRecords records : recordsMap.values()) {
            if (records == null || records.sizeInBytes() == 0 || !records.hasMatchingMagic(RecordBatch.MAGIC_VALUE_V1))
                return false;
        }
        return true;
    }, produceResponse);
    // connect
    sender.runOnce();
    // send produce request
    sender.runOnce();
    assertTrue(future1.isDone(), "Request should be completed");
    assertTrue(future2.isDone(), "Request should be completed");
}
Also used : RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) LinkedHashMap(java.util.LinkedHashMap) IdentityHashMap(java.util.IdentityHashMap) HashMap(java.util.HashMap) ProduceRequest(org.apache.kafka.common.requests.ProduceRequest) ProduceResponse(org.apache.kafka.common.requests.ProduceResponse) TopicPartition(org.apache.kafka.common.TopicPartition) MemoryRecords(org.apache.kafka.common.record.MemoryRecords) Test(org.junit.jupiter.api.Test)
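The main structural change from Example 11 is that the anonymous MockClient.RequestMatcher has become a lambda passed to prepareResponse. The two forms are behaviorally identical; a generic java.util.function.Predicate sketch shows the equivalence (the Integer version check is illustrative, not the Kafka API):

```java
import java.util.function.Predicate;

public class MatcherStyleSketch {
    // Anonymous-class form (the Example 11 style).
    static final Predicate<Integer> IS_V2_ANON = new Predicate<Integer>() {
        @Override
        public boolean test(Integer version) {
            return version == 2;
        }
    };

    // Lambda form (the Example 13 style); same single-abstract-method interface,
    // same behavior, less boilerplate.
    static final Predicate<Integer> IS_V2_LAMBDA = version -> version == 2;

    public static void main(String[] args) {
        System.out.println(IS_V2_ANON.test(2) && IS_V2_LAMBDA.test(2)); // true
        System.out.println(IS_V2_ANON.test(3) || IS_V2_LAMBDA.test(3)); // false
    }
}
```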

Example 14 with ProduceResponse

Use of org.apache.kafka.common.requests.ProduceResponse in project kafka by apache.

From class SenderTest, method testQuotaMetrics.

/*
 * Send multiple requests. Verify that the client-side quota metrics have the right values.
 */
@SuppressWarnings("deprecation")
@Test
public void testQuotaMetrics() {
    MockSelector selector = new MockSelector(time);
    Sensor throttleTimeSensor = Sender.throttleTimeSensor(this.senderMetricsRegistry);
    Cluster cluster = TestUtils.singletonCluster("test", 1);
    Node node = cluster.nodes().get(0);
    NetworkClient client = new NetworkClient(selector, metadata, "mock", Integer.MAX_VALUE, 1000, 1000, 64 * 1024, 64 * 1024, 1000, 10 * 1000, 127 * 1000, time, true, new ApiVersions(), throttleTimeSensor, logContext);
    ApiVersionsResponse apiVersionsResponse = ApiVersionsResponse.defaultApiVersionsResponse(400, ApiMessageType.ListenerType.ZK_BROKER);
    ByteBuffer buffer = RequestTestUtils.serializeResponseWithHeader(apiVersionsResponse, ApiKeys.API_VERSIONS.latestVersion(), 0);
    selector.delayedReceive(new DelayedReceive(node.idString(), new NetworkReceive(node.idString(), buffer)));
    while (!client.ready(node, time.milliseconds())) {
        client.poll(1, time.milliseconds());
        // If a throttled response is received, advance the time to ensure progress.
        time.sleep(client.throttleDelayMs(node, time.milliseconds()));
    }
    selector.clear();
    for (int i = 1; i <= 3; i++) {
        int throttleTimeMs = 100 * i;
        ProduceRequest.Builder builder = ProduceRequest.forCurrentMagic(new ProduceRequestData().setTopicData(new ProduceRequestData.TopicProduceDataCollection()).setAcks((short) 1).setTimeoutMs(1000));
        ClientRequest request = client.newClientRequest(node.idString(), builder, time.milliseconds(), true);
        client.send(request, time.milliseconds());
        client.poll(1, time.milliseconds());
        ProduceResponse response = produceResponse(tp0, i, Errors.NONE, throttleTimeMs);
        buffer = RequestTestUtils.serializeResponseWithHeader(response, ApiKeys.PRODUCE.latestVersion(), request.correlationId());
        selector.completeReceive(new NetworkReceive(node.idString(), buffer));
        client.poll(1, time.milliseconds());
        // If a throttled response is received, advance the time to ensure progress.
        time.sleep(client.throttleDelayMs(node, time.milliseconds()));
        selector.clear();
    }
    Map<MetricName, KafkaMetric> allMetrics = metrics.metrics();
    KafkaMetric avgMetric = allMetrics.get(this.senderMetricsRegistry.produceThrottleTimeAvg);
    KafkaMetric maxMetric = allMetrics.get(this.senderMetricsRegistry.produceThrottleTimeMax);
    // Throttle times are ApiVersions=400, Produce=(100, 200, 300)
    assertEquals(250, (Double) avgMetric.metricValue(), EPS);
    assertEquals(400, (Double) maxMetric.metricValue(), EPS);
    client.close();
}
Also used : ApiVersionsResponse(org.apache.kafka.common.requests.ApiVersionsResponse) ProduceRequest(org.apache.kafka.common.requests.ProduceRequest) ProduceResponse(org.apache.kafka.common.requests.ProduceResponse) Node(org.apache.kafka.common.Node) ProduceRequestData(org.apache.kafka.common.message.ProduceRequestData) NetworkReceive(org.apache.kafka.common.network.NetworkReceive) Cluster(org.apache.kafka.common.Cluster) KafkaMetric(org.apache.kafka.common.metrics.KafkaMetric) ByteBuffer(java.nio.ByteBuffer) MockSelector(org.apache.kafka.test.MockSelector) MetricName(org.apache.kafka.common.MetricName) NetworkClient(org.apache.kafka.clients.NetworkClient) NodeApiVersions(org.apache.kafka.clients.NodeApiVersions) ApiVersions(org.apache.kafka.clients.ApiVersions) DelayedReceive(org.apache.kafka.test.DelayedReceive) ClientRequest(org.apache.kafka.clients.ClientRequest) Sensor(org.apache.kafka.common.metrics.Sensor) Test(org.junit.jupiter.api.Test)
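The expected values in the two assertions follow from the throttle times fed in: one ApiVersions response at 400 ms plus Produce responses at 100, 200, and 300 ms, giving an average of 250 and a maximum of 400. A minimal sketch of that arithmetic:

```java
public class ThrottleTimeSketch {
    static double avg(double[] xs) {
        double sum = 0;
        for (double x : xs)
            sum += x;
        return sum / xs.length;
    }

    static double max(double[] xs) {
        double m = Double.NEGATIVE_INFINITY;
        for (double x : xs)
            m = Math.max(m, x);
        return m;
    }

    public static void main(String[] args) {
        // Throttle times observed by the sensor: ApiVersions=400, Produce=(100, 200, 300)
        double[] throttleTimesMs = {400, 100, 200, 300};
        System.out.println("avg=" + avg(throttleTimesMs)); // avg=250.0
        System.out.println("max=" + max(throttleTimesMs)); // max=400.0
    }
}
```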

Aggregations

ProduceResponse (org.apache.kafka.common.requests.ProduceResponse): 14 uses
TopicPartition (org.apache.kafka.common.TopicPartition): 9 uses
HashMap (java.util.HashMap): 8 uses
LinkedHashMap (java.util.LinkedHashMap): 7 uses
ProduceRequest (org.apache.kafka.common.requests.ProduceRequest): 7 uses
RecordMetadata (org.apache.kafka.clients.producer.RecordMetadata): 6 uses
IdentityHashMap (java.util.IdentityHashMap): 5 uses
ApiVersions (org.apache.kafka.clients.ApiVersions): 5 uses
Node (org.apache.kafka.common.Node): 5 uses
Test (org.junit.jupiter.api.Test): 5 uses
ByteBuffer (java.nio.ByteBuffer): 4 uses
NodeApiVersions (org.apache.kafka.clients.NodeApiVersions): 4 uses
Cluster (org.apache.kafka.common.Cluster): 4 uses
ProduceRequestData (org.apache.kafka.common.message.ProduceRequestData): 4 uses
ProduceResponseData (org.apache.kafka.common.message.ProduceResponseData): 4 uses
NetworkReceive (org.apache.kafka.common.network.NetworkReceive): 4 uses
Map (java.util.Map): 3 uses
ClientRequest (org.apache.kafka.clients.ClientRequest): 3 uses
MetricName (org.apache.kafka.common.MetricName): 3 uses
TimeoutException (org.apache.kafka.common.errors.TimeoutException): 3 uses