
Example 21 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project apache-kafka-on-k8s by banzaicloud.

From the class SaslAuthenticatorTest, the method sendKafkaRequestReceiveResponse:

private AbstractResponse sendKafkaRequestReceiveResponse(String node, ApiKeys apiKey, AbstractRequest request) throws IOException {
    RequestHeader header = new RequestHeader(apiKey, request.version(), "someclient", nextCorrelationId++);
    Send send = request.toSend(node, header);
    selector.send(send);
    ByteBuffer responseBuffer = waitForResponse();
    return NetworkClient.parseResponse(responseBuffer, header);
}
Also used : RequestHeader(org.apache.kafka.common.requests.RequestHeader) ByteBuffer(java.nio.ByteBuffer) Send(org.apache.kafka.common.network.Send) NetworkSend(org.apache.kafka.common.network.NetworkSend)
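
For context, a minimal sketch of how a caller might use this helper once a connection has been authenticated. The node id "0" and the ApiVersions round trip are illustrative assumptions, not taken from the test above; only the Kafka request classes themselves are real.

// Hypothetical caller: send an ApiVersionsRequest over an already-authenticated
// connection (assumed node id "0") and cast the generic response.
ApiVersionsRequest versionsRequest = new ApiVersionsRequest.Builder().build();
ApiVersionsResponse versionsResponse =
        (ApiVersionsResponse) sendKafkaRequestReceiveResponse("0", ApiKeys.API_VERSIONS, versionsRequest);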

Example 22 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project apache-kafka-on-k8s by banzaicloud.

From the class SaslAuthenticatorTest, the method testDisallowedKafkaRequestsBeforeAuthentication:

/**
 * Tests that Kafka requests that are forbidden until successful authentication result
 * in authentication failure and do not cause any failures in the server.
 */
@Test
public void testDisallowedKafkaRequestsBeforeAuthentication() throws Exception {
    SecurityProtocol securityProtocol = SecurityProtocol.SASL_PLAINTEXT;
    configureMechanisms("PLAIN", Arrays.asList("PLAIN"));
    server = createEchoServer(securityProtocol);
    // Send metadata request before Kafka SASL handshake request
    String node1 = "invalid1";
    createClientConnection(SecurityProtocol.PLAINTEXT, node1);
    MetadataRequest metadataRequest1 = new MetadataRequest.Builder(Collections.singletonList("sometopic"), true).build();
    RequestHeader metadataRequestHeader1 = new RequestHeader(ApiKeys.METADATA, metadataRequest1.version(), "someclient", 1);
    selector.send(metadataRequest1.toSend(node1, metadataRequestHeader1));
    NetworkTestUtils.waitForChannelClose(selector, node1, ChannelState.READY.state());
    selector.close();
    // Test good connection still works
    createAndCheckClientConnection(securityProtocol, "good1");
    // Send metadata request after Kafka SASL handshake request
    String node2 = "invalid2";
    createClientConnection(SecurityProtocol.PLAINTEXT, node2);
    sendHandshakeRequestReceiveResponse(node2, (short) 1);
    MetadataRequest metadataRequest2 = new MetadataRequest.Builder(Collections.singletonList("sometopic"), true).build();
    RequestHeader metadataRequestHeader2 = new RequestHeader(ApiKeys.METADATA, metadataRequest2.version(), "someclient", 2);
    selector.send(metadataRequest2.toSend(node2, metadataRequestHeader2));
    NetworkTestUtils.waitForChannelClose(selector, node2, ChannelState.READY.state());
    selector.close();
    // Test good connection still works
    createAndCheckClientConnection(securityProtocol, "good2");
}
Also used : MetadataRequest(org.apache.kafka.common.requests.MetadataRequest) SecurityProtocol(org.apache.kafka.common.security.auth.SecurityProtocol) RequestHeader(org.apache.kafka.common.requests.RequestHeader) Test(org.junit.Test)
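
Both halves of this test repeat the same send-and-expect-close pattern. A hypothetical helper along the following lines (a sketch only, reusing the fixtures already visible above: selector, createClientConnection, NetworkTestUtils) would capture it:

private void assertChannelClosedOnPreAuthRequest(String node, ApiKeys apiKey, AbstractRequest request, int correlationId) throws Exception {
    // Connect without SASL, send a single disallowed request and expect the server to drop the channel.
    createClientConnection(SecurityProtocol.PLAINTEXT, node);
    RequestHeader header = new RequestHeader(apiKey, request.version(), "someclient", correlationId);
    selector.send(request.toSend(node, header));
    NetworkTestUtils.waitForChannelClose(selector, node, ChannelState.READY.state());
    selector.close();
}

The first half of the test would then reduce to a single call such as assertChannelClosedOnPreAuthRequest("invalid1", ApiKeys.METADATA, metadataRequest1, 1).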

Example 23 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project apache-kafka-on-k8s by banzaicloud.

From the class SaslAuthenticatorTest, the method testSaslHandshakeRequestWithUnsupportedVersion:

/**
 * Tests that an unsupported version of the SASL handshake request returns an error
 * response and fails authentication. This test is similar to
 * {@link #testUnauthenticatedApiVersionsRequest(SecurityProtocol, short)}
 * where a non-SASL client is used to send requests that are processed by
 * {@link SaslServerAuthenticator} of the server prior to client authentication.
 */
@Test
public void testSaslHandshakeRequestWithUnsupportedVersion() throws Exception {
    SecurityProtocol securityProtocol = SecurityProtocol.SASL_PLAINTEXT;
    configureMechanisms("PLAIN", Arrays.asList("PLAIN"));
    server = createEchoServer(securityProtocol);
    // Send a SaslHandshakeRequest and validate that the connection is closed by the server.
    String node1 = "invalid1";
    createClientConnection(SecurityProtocol.PLAINTEXT, node1);
    SaslHandshakeRequest request = new SaslHandshakeRequest("PLAIN");
    RequestHeader header = new RequestHeader(ApiKeys.SASL_HANDSHAKE, Short.MAX_VALUE, "someclient", 2);
    selector.send(request.toSend(node1, header));
    // This test uses a non-SASL PLAINTEXT client to perform the handshake manually,
    // so the channel is in READY state.
    NetworkTestUtils.waitForChannelClose(selector, node1, ChannelState.READY.state());
    selector.close();
    // Test good connection still works
    createAndCheckClientConnection(securityProtocol, "good1");
}
Also used : SecurityProtocol(org.apache.kafka.common.security.auth.SecurityProtocol) RequestHeader(org.apache.kafka.common.requests.RequestHeader) SaslHandshakeRequest(org.apache.kafka.common.requests.SaslHandshakeRequest) Test(org.junit.Test)
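
For contrast, a well-formed handshake would pick a version the broker actually supports. A minimal sketch, assuming the same fixtures as the test above (selector and the open connection node1); the correlation id is arbitrary:

// Using a supported version (the latest the client knows) instead of Short.MAX_VALUE;
// the server should answer with a SaslHandshakeResponse rather than closing the channel.
SaslHandshakeRequest supportedRequest = new SaslHandshakeRequest("PLAIN");
RequestHeader supportedHeader = new RequestHeader(ApiKeys.SASL_HANDSHAKE, ApiKeys.SASL_HANDSHAKE.latestVersion(), "someclient", 3);
selector.send(supportedRequest.toSend(node1, supportedHeader));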

Example 24 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project apache-kafka-on-k8s by banzaicloud.

From the class SaslServerAuthenticatorTest, the method testUnexpectedRequestType:

@Test
public void testUnexpectedRequestType() throws IOException {
    TransportLayer transportLayer = EasyMock.mock(TransportLayer.class);
    Map<String, ?> configs = Collections.singletonMap(BrokerSecurityConfigs.SASL_ENABLED_MECHANISMS_CONFIG, Collections.singletonList(SCRAM_SHA_256.mechanismName()));
    SaslServerAuthenticator authenticator = setupAuthenticator(configs, transportLayer, SCRAM_SHA_256.mechanismName());
    final RequestHeader header = new RequestHeader(ApiKeys.METADATA, (short) 0, "clientId", 13243);
    final Struct headerStruct = header.toStruct();
    final Capture<ByteBuffer> size = EasyMock.newCapture();
    EasyMock.expect(transportLayer.read(EasyMock.capture(size))).andAnswer(new IAnswer<Integer>() {

        @Override
        public Integer answer() throws Throwable {
            size.getValue().putInt(headerStruct.sizeOf());
            return 4;
        }
    });
    final Capture<ByteBuffer> payload = EasyMock.newCapture();
    EasyMock.expect(transportLayer.read(EasyMock.capture(payload))).andAnswer(new IAnswer<Integer>() {

        @Override
        public Integer answer() throws Throwable {
            // Serialize only the request header; the authenticator should not parse beyond this.
            headerStruct.writeTo(payload.getValue());
            return headerStruct.sizeOf();
        }
    });
    EasyMock.replay(transportLayer);
    try {
        authenticator.authenticate();
        fail("Expected authenticate() to raise an exception");
    } catch (IllegalSaslStateException e) {
        // expected exception
    }
}
Also used : IllegalSaslStateException(org.apache.kafka.common.errors.IllegalSaslStateException) ByteBuffer(java.nio.ByteBuffer) Struct(org.apache.kafka.common.protocol.types.Struct) TransportLayer(org.apache.kafka.common.network.TransportLayer) RequestHeader(org.apache.kafka.common.requests.RequestHeader) Test(org.junit.Test)
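
The two mocked reads above simulate Kafka's request framing: a 4-byte size prefix followed by the serialized header. A minimal round-trip sketch of that framing, assuming RequestHeader.parse(ByteBuffer) is available in this version of the request API (toStruct, sizeOf and writeTo are used exactly as in the test):

RequestHeader header = new RequestHeader(ApiKeys.METADATA, (short) 0, "clientId", 13243);
Struct headerStruct = header.toStruct();
ByteBuffer frame = ByteBuffer.allocate(4 + headerStruct.sizeOf());
// First mocked read in the test: the size prefix.
frame.putInt(headerStruct.sizeOf());
// Second mocked read in the test: the header bytes; the authenticator parses no further.
headerStruct.writeTo(frame);
frame.flip();
frame.getInt(); // consume the size prefix
RequestHeader parsed = RequestHeader.parse(frame); // assumption: parse(ByteBuffer) as in Kafka's request API
// parsed.apiVersion() is 0 and the correlation id round-trips unchanged.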

Example 25 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project apache-kafka-on-k8s by banzaicloud.

From the class NetworkClient, the method doSend:

private void doSend(ClientRequest clientRequest, boolean isInternalRequest, long now, AbstractRequest request) {
    String nodeId = clientRequest.destination();
    RequestHeader header = clientRequest.makeHeader(request.version());
    if (log.isDebugEnabled()) {
        int latestClientVersion = clientRequest.apiKey().latestVersion();
        if (header.apiVersion() == latestClientVersion) {
            log.trace("Sending {} {} with correlation id {} to node {}", clientRequest.apiKey(), request, clientRequest.correlationId(), nodeId);
        } else {
            log.debug("Using older server API v{} to send {} {} with correlation id {} to node {}", header.apiVersion(), clientRequest.apiKey(), request, clientRequest.correlationId(), nodeId);
        }
    }
    Send send = request.toSend(nodeId, header);
    InFlightRequest inFlightRequest = new InFlightRequest(
            header,
            clientRequest.createdTimeMs(),
            clientRequest.destination(),
            clientRequest.callback(),
            clientRequest.expectResponse(),
            isInternalRequest,
            request,
            send,
            now);
    this.inFlightRequests.add(inFlightRequest);
    selector.send(inFlightRequest.send);
}
Also used : RequestHeader(org.apache.kafka.common.requests.RequestHeader) Send(org.apache.kafka.common.network.Send)
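
doSend() is private; it is normally reached through the public send() API. A minimal sketch under stated assumptions: an already-constructed NetworkClient named client whose connection to node "0" is ready (construction and connection setup omitted), reusing the MetadataRequest builder from Example 22 and the clients-package types ClientRequest and ClientResponse:

long now = System.currentTimeMillis();
// newClientRequest(...) records the api key and correlation id; makeHeader() in doSend() turns them into the RequestHeader.
ClientRequest clientRequest = client.newClientRequest("0",
        new MetadataRequest.Builder(Collections.singletonList("sometopic"), true), now, true);
client.send(clientRequest, now);   // ends up in doSend(), which builds the header and queues the Send
List<ClientResponse> responses = client.poll(100, now);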

Aggregations

RequestHeader (org.apache.kafka.common.requests.RequestHeader): 35
ByteBuffer (java.nio.ByteBuffer): 19
SecurityProtocol (org.apache.kafka.common.security.auth.SecurityProtocol): 12
Test (org.junit.jupiter.api.Test): 12
ApiVersionsRequest (org.apache.kafka.common.requests.ApiVersionsRequest): 11
NetworkSend (org.apache.kafka.common.network.NetworkSend): 10
ApiVersionsResponse (org.apache.kafka.common.requests.ApiVersionsResponse): 10
ApiKeys (org.apache.kafka.common.protocol.ApiKeys): 7
IllegalSaslStateException (org.apache.kafka.common.errors.IllegalSaslStateException): 6
RequestContext (org.apache.kafka.common.requests.RequestContext): 6
Test (org.junit.Test): 5
Collections (java.util.Collections): 4
MetadataRequest (org.apache.kafka.common.requests.MetadataRequest): 4
IOException (java.io.IOException): 3
InetAddress (java.net.InetAddress): 3
HashMap (java.util.HashMap): 3
Map (java.util.Map): 3
ApiVersionsResponseData (org.apache.kafka.common.message.ApiVersionsResponseData): 3
ApiVersion (org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersion): 3
TransportLayer (org.apache.kafka.common.network.TransportLayer): 3