
Example 1 with NetworkClient

Use of org.apache.kafka.clients.NetworkClient in project apache-kafka-on-k8s by banzaicloud.

From class ConsumerNetworkClientTest, method blockOnlyForRetryBackoffIfNoInflightRequests:

@Test
public void blockOnlyForRetryBackoffIfNoInflightRequests() {
    long retryBackoffMs = 100L;
    NetworkClient mockNetworkClient = EasyMock.mock(NetworkClient.class);
    ConsumerNetworkClient consumerClient = new ConsumerNetworkClient(new LogContext(), mockNetworkClient, metadata, time, retryBackoffMs, 1000L, Integer.MAX_VALUE);
    EasyMock.expect(mockNetworkClient.inFlightRequestCount()).andReturn(0);
    EasyMock.expect(mockNetworkClient.poll(EasyMock.eq(retryBackoffMs), EasyMock.anyLong())).andReturn(Collections.<ClientResponse>emptyList());
    EasyMock.replay(mockNetworkClient);
    consumerClient.poll(Long.MAX_VALUE, time.milliseconds(), new ConsumerNetworkClient.PollCondition() {

        @Override
        public boolean shouldBlock() {
            return true;
        }
    });
    EasyMock.verify(mockNetworkClient);
}
Also used: NetworkClient(org.apache.kafka.clients.NetworkClient) LogContext(org.apache.kafka.common.utils.LogContext) Test(org.junit.Test)
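
What this test pins down: when the poll condition asks to block and there are no in-flight requests, ConsumerNetworkClient caps its blocking time at the retry backoff, which is why the mock expects poll(retryBackoffMs, ...) rather than a poll with the caller's Long.MAX_VALUE timeout. The following is a minimal, self-contained sketch of that capping rule in plain Java (illustrative names, not the Kafka source):

public class RetryBackoffCapSketch {

    // Cap the blocking time at retryBackoffMs when the caller wants to block
    // but nothing is currently in flight.
    static long effectivePollTimeoutMs(long requestedTimeoutMs, long retryBackoffMs,
                                       int inFlightRequestCount, boolean shouldBlock) {
        long pollTimeoutMs = requestedTimeoutMs;
        if (shouldBlock && inFlightRequestCount == 0)
            pollTimeoutMs = Math.min(pollTimeoutMs, retryBackoffMs);
        return pollTimeoutMs;
    }

    public static void main(String[] args) {
        // Mirrors the test: requested timeout Long.MAX_VALUE, backoff 100 ms, zero in-flight requests.
        System.out.println(effectivePollTimeoutMs(Long.MAX_VALUE, 100L, 0, true)); // prints 100
    }
}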

Example 2 with NetworkClient

Use of org.apache.kafka.clients.NetworkClient in project kafka by apache.

From class FetcherTest, method testQuotaMetrics:

/*
     * Send multiple requests. Verify that the client side quota metrics have the right values
     */
@Test
public void testQuotaMetrics() {
    buildFetcher();
    MockSelector selector = new MockSelector(time);
    Sensor throttleTimeSensor = Fetcher.throttleTimeSensor(metrics, metricsRegistry);
    Cluster cluster = TestUtils.singletonCluster("test", 1);
    Node node = cluster.nodes().get(0);
    NetworkClient client = new NetworkClient(selector, metadata, "mock", Integer.MAX_VALUE, 1000, 1000, 64 * 1024, 64 * 1024, 1000, 10 * 1000, 127 * 1000, time, true, new ApiVersions(), throttleTimeSensor, new LogContext());
    ApiVersionsResponse apiVersionsResponse = ApiVersionsResponse.defaultApiVersionsResponse(400, ApiMessageType.ListenerType.ZK_BROKER);
    ByteBuffer buffer = RequestTestUtils.serializeResponseWithHeader(apiVersionsResponse, ApiKeys.API_VERSIONS.latestVersion(), 0);
    selector.delayedReceive(new DelayedReceive(node.idString(), new NetworkReceive(node.idString(), buffer)));
    while (!client.ready(node, time.milliseconds())) {
        client.poll(1, time.milliseconds());
        // If a throttled response is received, advance the time to ensure progress.
        time.sleep(client.throttleDelayMs(node, time.milliseconds()));
    }
    selector.clear();
    for (int i = 1; i <= 3; i++) {
        int throttleTimeMs = 100 * i;
        FetchRequest.Builder builder = FetchRequest.Builder.forConsumer(ApiKeys.FETCH.latestVersion(), 100, 100, new LinkedHashMap<>());
        builder.rackId("");
        ClientRequest request = client.newClientRequest(node.idString(), builder, time.milliseconds(), true);
        client.send(request, time.milliseconds());
        client.poll(1, time.milliseconds());
        FetchResponse response = fullFetchResponse(tidp0, nextRecords, Errors.NONE, i, throttleTimeMs);
        buffer = RequestTestUtils.serializeResponseWithHeader(response, ApiKeys.FETCH.latestVersion(), request.correlationId());
        selector.completeReceive(new NetworkReceive(node.idString(), buffer));
        client.poll(1, time.milliseconds());
        // If a throttled response is received, advance the time to ensure progress.
        time.sleep(client.throttleDelayMs(node, time.milliseconds()));
        selector.clear();
    }
    Map<MetricName, KafkaMetric> allMetrics = metrics.metrics();
    KafkaMetric avgMetric = allMetrics.get(metrics.metricInstance(metricsRegistry.fetchThrottleTimeAvg));
    KafkaMetric maxMetric = allMetrics.get(metrics.metricInstance(metricsRegistry.fetchThrottleTimeMax));
    // Throttle times are ApiVersions=400, Fetch=(100, 200, 300)
    assertEquals(250, (Double) avgMetric.metricValue(), EPSILON);
    assertEquals(400, (Double) maxMetric.metricValue(), EPSILON);
    client.close();
}
Also used: ApiVersionsResponse(org.apache.kafka.common.requests.ApiVersionsResponse) Node(org.apache.kafka.common.Node) NetworkReceive(org.apache.kafka.common.network.NetworkReceive) Cluster(org.apache.kafka.common.Cluster) LogContext(org.apache.kafka.common.utils.LogContext) FetchResponse(org.apache.kafka.common.requests.FetchResponse) KafkaMetric(org.apache.kafka.common.metrics.KafkaMetric) ByteBuffer(java.nio.ByteBuffer) MockSelector(org.apache.kafka.test.MockSelector) MetricName(org.apache.kafka.common.MetricName) NetworkClient(org.apache.kafka.clients.NetworkClient) NodeApiVersions(org.apache.kafka.clients.NodeApiVersions) ApiVersions(org.apache.kafka.clients.ApiVersions) FetchRequest(org.apache.kafka.common.requests.FetchRequest) DelayedReceive(org.apache.kafka.test.DelayedReceive) ClientRequest(org.apache.kafka.clients.ClientRequest) Sensor(org.apache.kafka.common.metrics.Sensor) Test(org.junit.jupiter.api.Test)
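
For reference, the expected values follow directly from the four throttle times the client records: 400 ms from the ApiVersions response and 100, 200 and 300 ms from the three Fetch responses, giving an average of 250 and a maximum of 400. The sketch below reproduces that arithmetic with the public org.apache.kafka.common.metrics API; the sensor and metric names are made up for illustration and are not the ones Fetcher registers.

import java.util.Map;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Avg;
import org.apache.kafka.common.metrics.stats.Max;

public class ThrottleTimeSensorSketch {

    public static void main(String[] args) {
        try (Metrics metrics = new Metrics()) {
            Sensor throttleTime = metrics.sensor("sketch-throttle-time");
            MetricName avgName = metrics.metricName("throttle-time-avg", "sketch");
            MetricName maxName = metrics.metricName("throttle-time-max", "sketch");
            throttleTime.add(avgName, new Avg());
            throttleTime.add(maxName, new Max());

            // One ApiVersions throttle plus three Fetch throttles, as in the test above.
            for (int throttleTimeMs : new int[] {400, 100, 200, 300})
                throttleTime.record(throttleTimeMs);

            Map<MetricName, KafkaMetric> allMetrics = metrics.metrics();
            System.out.println("avg = " + allMetrics.get(avgName).metricValue()); // 250.0
            System.out.println("max = " + allMetrics.get(maxName).metricValue()); // 400.0
        }
    }
}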

Example 3 with NetworkClient

Use of org.apache.kafka.clients.NetworkClient in project kafka by apache.

From class ConsumerNetworkClientTest, method blockOnlyForRetryBackoffIfNoInflightRequests:

@Test
public void blockOnlyForRetryBackoffIfNoInflightRequests() {
    long retryBackoffMs = 100L;
    NetworkClient mockNetworkClient = mock(NetworkClient.class);
    ConsumerNetworkClient consumerClient = new ConsumerNetworkClient(new LogContext(), mockNetworkClient, metadata, time, retryBackoffMs, 1000, Integer.MAX_VALUE);
    when(mockNetworkClient.inFlightRequestCount()).thenReturn(0);
    consumerClient.poll(time.timer(Long.MAX_VALUE), () -> true);
    verify(mockNetworkClient).poll(eq(retryBackoffMs), anyLong());
}
Also used: NetworkClient(org.apache.kafka.clients.NetworkClient) LogContext(org.apache.kafka.common.utils.LogContext) Test(org.junit.jupiter.api.Test)

Example 4 with NetworkClient

Use of org.apache.kafka.clients.NetworkClient in project apache-kafka-on-k8s by banzaicloud.

From class KafkaAdminClient, method createInternal:

static KafkaAdminClient createInternal(AdminClientConfig config, TimeoutProcessorFactory timeoutProcessorFactory) {
    Metrics metrics = null;
    NetworkClient networkClient = null;
    Time time = Time.SYSTEM;
    String clientId = generateClientId(config);
    ChannelBuilder channelBuilder = null;
    Selector selector = null;
    ApiVersions apiVersions = new ApiVersions();
    LogContext logContext = createLogContext(clientId);
    try {
        // Since we only request node information, it's safe to pass true for allowAutoTopicCreation (and it
        // simplifies communication with older brokers)
        Metadata metadata = new Metadata(config.getLong(AdminClientConfig.RETRY_BACKOFF_MS_CONFIG), config.getLong(AdminClientConfig.METADATA_MAX_AGE_CONFIG), true);
        List<MetricsReporter> reporters = config.getConfiguredInstances(AdminClientConfig.METRIC_REPORTER_CLASSES_CONFIG, MetricsReporter.class);
        Map<String, String> metricTags = Collections.singletonMap("client-id", clientId);
        MetricConfig metricConfig = new MetricConfig().samples(config.getInt(AdminClientConfig.METRICS_NUM_SAMPLES_CONFIG)).timeWindow(config.getLong(AdminClientConfig.METRICS_SAMPLE_WINDOW_MS_CONFIG), TimeUnit.MILLISECONDS).recordLevel(Sensor.RecordingLevel.forName(config.getString(AdminClientConfig.METRICS_RECORDING_LEVEL_CONFIG))).tags(metricTags);
        reporters.add(new JmxReporter(JMX_PREFIX));
        metrics = new Metrics(metricConfig, reporters, time);
        String metricGrpPrefix = "admin-client";
        channelBuilder = ClientUtils.createChannelBuilder(config);
        selector = new Selector(config.getLong(AdminClientConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG), metrics, time, metricGrpPrefix, channelBuilder, logContext);
        networkClient = new NetworkClient(selector, metadata, clientId, 1, config.getLong(AdminClientConfig.RECONNECT_BACKOFF_MS_CONFIG), config.getLong(AdminClientConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG), config.getInt(AdminClientConfig.SEND_BUFFER_CONFIG), config.getInt(AdminClientConfig.RECEIVE_BUFFER_CONFIG), (int) TimeUnit.HOURS.toMillis(1), time, true, apiVersions, logContext);
        return new KafkaAdminClient(config, clientId, time, metadata, metrics, networkClient, timeoutProcessorFactory, logContext);
    } catch (Throwable exc) {
        closeQuietly(metrics, "Metrics");
        closeQuietly(networkClient, "NetworkClient");
        closeQuietly(selector, "Selector");
        closeQuietly(channelBuilder, "ChannelBuilder");
        throw new KafkaException("Failed create new KafkaAdminClient", exc);
    }
}
Also used: MetricConfig(org.apache.kafka.common.metrics.MetricConfig) Metadata(org.apache.kafka.clients.Metadata) LogContext(org.apache.kafka.common.utils.LogContext) Time(org.apache.kafka.common.utils.Time) JmxReporter(org.apache.kafka.common.metrics.JmxReporter) Metrics(org.apache.kafka.common.metrics.Metrics) NetworkClient(org.apache.kafka.clients.NetworkClient) MetricsReporter(org.apache.kafka.common.metrics.MetricsReporter) ApiVersions(org.apache.kafka.clients.ApiVersions) KafkaException(org.apache.kafka.common.KafkaException) ChannelBuilder(org.apache.kafka.common.network.ChannelBuilder) Selector(org.apache.kafka.common.network.Selector)
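
createInternal only assembles the NetworkClient; the admin client's internal thread then drives it through the same ready()/send()/poll() cycle the test examples above invoke directly. The helper below is a hedged sketch of that generic cycle against the KafkaClient interface, not KafkaAdminClient's actual request runner, and it ignores disconnects and timeouts for brevity.

import java.util.List;

import org.apache.kafka.clients.ClientRequest;
import org.apache.kafka.clients.ClientResponse;
import org.apache.kafka.clients.KafkaClient;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.requests.AbstractRequest;
import org.apache.kafka.common.utils.Time;

public class RequestLoopSketch {

    // Send one request to the given node and poll until its response arrives.
    static ClientResponse sendAndReceive(KafkaClient client, Node node,
                                         AbstractRequest.Builder<?> requestBuilder, Time time) {
        // Wait for the connection to be ready, polling so the client can make progress.
        while (!client.ready(node, time.milliseconds()))
            client.poll(100, time.milliseconds());
        ClientRequest request = client.newClientRequest(
                node.idString(), requestBuilder, time.milliseconds(), true);
        client.send(request, time.milliseconds());
        while (true) {
            List<ClientResponse> responses = client.poll(100, time.milliseconds());
            for (ClientResponse response : responses) {
                if (response.requestHeader().correlationId() == request.correlationId())
                    return response;
            }
        }
    }
}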

Example 5 with NetworkClient

Use of org.apache.kafka.clients.NetworkClient in project apache-kafka-on-k8s by banzaicloud.

From class SenderTest, method testQuotaMetrics:

/*
     * Send multiple requests. Verify that the client side quota metrics have the right values
     */
@Test
@SuppressWarnings("deprecation")
public void testQuotaMetrics() throws Exception {
    MockSelector selector = new MockSelector(time);
    Sensor throttleTimeSensor = Sender.throttleTimeSensor(this.senderMetricsRegistry);
    Cluster cluster = TestUtils.singletonCluster("test", 1);
    Node node = cluster.nodes().get(0);
    NetworkClient client = new NetworkClient(selector, metadata, "mock", Integer.MAX_VALUE, 1000, 1000, 64 * 1024, 64 * 1024, 1000, time, true, new ApiVersions(), throttleTimeSensor, logContext);
    short apiVersionsResponseVersion = ApiKeys.API_VERSIONS.latestVersion();
    ByteBuffer buffer = ApiVersionsResponse.createApiVersionsResponse(400, RecordBatch.CURRENT_MAGIC_VALUE).serialize(apiVersionsResponseVersion, new ResponseHeader(0));
    selector.delayedReceive(new DelayedReceive(node.idString(), new NetworkReceive(node.idString(), buffer)));
    while (!client.ready(node, time.milliseconds())) client.poll(1, time.milliseconds());
    selector.clear();
    for (int i = 1; i <= 3; i++) {
        int throttleTimeMs = 100 * i;
        ProduceRequest.Builder builder = ProduceRequest.Builder.forCurrentMagic((short) 1, 1000, Collections.<TopicPartition, MemoryRecords>emptyMap());
        ClientRequest request = client.newClientRequest(node.idString(), builder, time.milliseconds(), true, null);
        client.send(request, time.milliseconds());
        client.poll(1, time.milliseconds());
        ProduceResponse response = produceResponse(tp0, i, Errors.NONE, throttleTimeMs);
        buffer = response.serialize(ApiKeys.PRODUCE.latestVersion(), new ResponseHeader(request.correlationId()));
        selector.completeReceive(new NetworkReceive(node.idString(), buffer));
        client.poll(1, time.milliseconds());
        selector.clear();
    }
    Map<MetricName, KafkaMetric> allMetrics = metrics.metrics();
    KafkaMetric avgMetric = allMetrics.get(this.senderMetricsRegistry.produceThrottleTimeAvg);
    KafkaMetric maxMetric = allMetrics.get(this.senderMetricsRegistry.produceThrottleTimeMax);
    // Throttle times are ApiVersions=400, Produce=(100, 200, 300)
    assertEquals(250, avgMetric.value(), EPS);
    assertEquals(400, maxMetric.value(), EPS);
    client.close();
}
Also used: ResponseHeader(org.apache.kafka.common.requests.ResponseHeader) ProduceRequest(org.apache.kafka.common.requests.ProduceRequest) ProduceResponse(org.apache.kafka.common.requests.ProduceResponse) Node(org.apache.kafka.common.Node) NetworkReceive(org.apache.kafka.common.network.NetworkReceive) Cluster(org.apache.kafka.common.Cluster) KafkaMetric(org.apache.kafka.common.metrics.KafkaMetric) ByteBuffer(java.nio.ByteBuffer) MockSelector(org.apache.kafka.test.MockSelector) MetricName(org.apache.kafka.common.MetricName) NetworkClient(org.apache.kafka.clients.NetworkClient) NodeApiVersions(org.apache.kafka.clients.NodeApiVersions) ApiVersions(org.apache.kafka.clients.ApiVersions) DelayedReceive(org.apache.kafka.test.DelayedReceive) ClientRequest(org.apache.kafka.clients.ClientRequest) Sensor(org.apache.kafka.common.metrics.Sensor) Test(org.junit.Test)
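
One detail that differs from the newer FetcherTest above: this older test reads the metrics through KafkaMetric.value(), the older double-typed accessor, whereas the newer code casts the Object returned by metricValue(). A small illustrative helper (not part of either test) showing the two accessors side by side for a measurable, numeric metric:

import org.apache.kafka.common.metrics.KafkaMetric;

public class MetricReadSketch {

    // Reads the same measurable metric through both accessors; for numeric metrics
    // they report the same value. value() is the accessor used in this SenderTest,
    // metricValue() the one used in the FetcherTest example above.
    @SuppressWarnings("deprecation")
    static double read(KafkaMetric metric) {
        double viaValue = metric.value();
        double viaMetricValue = (Double) metric.metricValue();
        assert viaValue == viaMetricValue;
        return viaMetricValue;
    }
}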

Aggregations

NetworkClient (org.apache.kafka.clients.NetworkClient): 13
LogContext (org.apache.kafka.common.utils.LogContext): 10
ApiVersions (org.apache.kafka.clients.ApiVersions): 6
Sensor (org.apache.kafka.common.metrics.Sensor): 5
Test (org.junit.Test): 5
Test (org.junit.jupiter.api.Test): 5
ByteBuffer (java.nio.ByteBuffer): 4
ClientRequest (org.apache.kafka.clients.ClientRequest): 4
NodeApiVersions (org.apache.kafka.clients.NodeApiVersions): 4
Cluster (org.apache.kafka.common.Cluster): 4
MetricName (org.apache.kafka.common.MetricName): 4
Node (org.apache.kafka.common.Node): 4
KafkaMetric (org.apache.kafka.common.metrics.KafkaMetric): 4
NetworkReceive (org.apache.kafka.common.network.NetworkReceive): 4
DelayedReceive (org.apache.kafka.test.DelayedReceive): 4
MockSelector (org.apache.kafka.test.MockSelector): 4
ChannelBuilder (org.apache.kafka.common.network.ChannelBuilder): 3
Selector (org.apache.kafka.common.network.Selector): 3
KafkaException (org.apache.kafka.common.KafkaException): 2
JmxReporter (org.apache.kafka.common.metrics.JmxReporter): 2