
Example 11 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project kafka by apache.

From the class SaslAuthenticatorTest, method testApiVersionsRequestWithServerUnsupportedVersion.

/**
 * Tests that an unsupported version of ApiVersionsRequest sent before the SASL handshake request
 * returns an error response and does not result in authentication failure. This test
 * is similar to {@link #testUnauthenticatedApiVersionsRequest(SecurityProtocol, short)}
 * where a non-SASL client is used to send requests that are processed by
 * {@link SaslServerAuthenticator} of the server prior to client authentication.
 */
@Test
public void testApiVersionsRequestWithServerUnsupportedVersion() throws Exception {
    short handshakeVersion = ApiKeys.SASL_HANDSHAKE.latestVersion();
    SecurityProtocol securityProtocol = SecurityProtocol.SASL_PLAINTEXT;
    configureMechanisms("PLAIN", Arrays.asList("PLAIN"));
    server = createEchoServer(securityProtocol);
    // Send ApiVersionsRequest with unsupported version and validate error response.
    String node = "1";
    createClientConnection(SecurityProtocol.PLAINTEXT, node);
    RequestHeader header = new RequestHeader(new RequestHeaderData()
        .setRequestApiKey(ApiKeys.API_VERSIONS.id)
        .setRequestApiVersion(Short.MAX_VALUE)
        .setClientId("someclient")
        .setCorrelationId(1), (short) 2);
    ApiVersionsRequest request = new ApiVersionsRequest.Builder().build();
    selector.send(new NetworkSend(node, request.toSend(header)));
    ByteBuffer responseBuffer = waitForResponse();
    ResponseHeader.parse(responseBuffer, ApiKeys.API_VERSIONS.responseHeaderVersion((short) 0));
    ApiVersionsResponse response = ApiVersionsResponse.parse(responseBuffer, (short) 0);
    assertEquals(Errors.UNSUPPORTED_VERSION.code(), response.data().errorCode());
    ApiVersion apiVersion = response.data().apiKeys().find(ApiKeys.API_VERSIONS.id);
    assertNotNull(apiVersion);
    assertEquals(ApiKeys.API_VERSIONS.id, apiVersion.apiKey());
    assertEquals(ApiKeys.API_VERSIONS.oldestVersion(), apiVersion.minVersion());
    assertEquals(ApiKeys.API_VERSIONS.latestVersion(), apiVersion.maxVersion());
    // Send ApiVersionsRequest with a supported version. This should succeed.
    sendVersionRequestReceiveResponse(node);
    // Test that client can authenticate successfully
    sendHandshakeRequestReceiveResponse(node, handshakeVersion);
    authenticateUsingSaslPlainAndCheckConnection(node, handshakeVersion > 0);
}
Also used: ApiVersionsResponse (org.apache.kafka.common.requests.ApiVersionsResponse), ApiVersion (org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersion), RequestHeaderData (org.apache.kafka.common.message.RequestHeaderData), SecurityProtocol (org.apache.kafka.common.security.auth.SecurityProtocol), RequestHeader (org.apache.kafka.common.requests.RequestHeader), NetworkSend (org.apache.kafka.common.network.NetworkSend), ApiVersionsRequest (org.apache.kafka.common.requests.ApiVersionsRequest), ByteBuffer (java.nio.ByteBuffer), Test (org.junit.jupiter.api.Test)
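The assertions in this test check that an out-of-range ApiVersionsRequest version yields UNSUPPORTED_VERSION while the response still advertises the range the broker does support, so the client can retry. A minimal standalone sketch of that check follows; the class and method names are hypothetical (not Kafka APIs), and 35 is Kafka's wire error code for UNSUPPORTED_VERSION.

```java
// Simplified model of the broker-side version check exercised above: an
// out-of-range ApiVersionsRequest version produces UNSUPPORTED_VERSION (35),
// a version inside the broker's [oldest, latest] range produces NONE (0).
// Class and method names are hypothetical, for illustration only.
public class VersionCheckSketch {
    static final short NONE = 0;
    static final short UNSUPPORTED_VERSION = 35; // Kafka's error code for UNSUPPORTED_VERSION

    // Returns the error code for a requested version given the broker's supported range.
    static short checkVersion(short requested, short oldest, short latest) {
        return (requested >= oldest && requested <= latest) ? NONE : UNSUPPORTED_VERSION;
    }

    public static void main(String[] args) {
        short oldest = 0, latest = 3; // hypothetical supported range
        System.out.println(checkVersion(Short.MAX_VALUE, oldest, latest)); // out of range -> 35
        System.out.println(checkVersion((short) 0, oldest, latest));       // supported -> 0
    }
}
```

This mirrors why the test can immediately follow the failed request with a supported-version request on the same connection: the error response is informational, not fatal.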

Example 12 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project kafka by apache.

From the class SaslAuthenticatorTest, method testInvalidApiVersionsRequestSequence.

/**
 * Tests that an ApiVersionsRequest sent after the Kafka SASL handshake request,
 * but prior to the actual SASL authentication, results in authentication failure.
 * This is similar to {@link #testUnauthenticatedApiVersionsRequest(SecurityProtocol, short)}
 * where a non-SASL client is used to send requests that are processed by
 * {@link SaslServerAuthenticator} of the server prior to client authentication.
 */
@Test
public void testInvalidApiVersionsRequestSequence() throws Exception {
    SecurityProtocol securityProtocol = SecurityProtocol.SASL_PLAINTEXT;
    configureMechanisms("PLAIN", Arrays.asList("PLAIN"));
    server = createEchoServer(securityProtocol);
    // Send handshake request followed by ApiVersionsRequest
    String node1 = "invalid1";
    createClientConnection(SecurityProtocol.PLAINTEXT, node1);
    sendHandshakeRequestReceiveResponse(node1, (short) 1);
    ApiVersionsRequest request = createApiVersionsRequestV0();
    RequestHeader versionsHeader = new RequestHeader(ApiKeys.API_VERSIONS, request.version(), "someclient", 2);
    selector.send(new NetworkSend(node1, request.toSend(versionsHeader)));
    NetworkTestUtils.waitForChannelClose(selector, node1, ChannelState.READY.state());
    selector.close();
    // Test good connection still works
    createAndCheckClientConnection(securityProtocol, "good1");
}
Also used: SecurityProtocol (org.apache.kafka.common.security.auth.SecurityProtocol), RequestHeader (org.apache.kafka.common.requests.RequestHeader), NetworkSend (org.apache.kafka.common.network.NetworkSend), ApiVersionsRequest (org.apache.kafka.common.requests.ApiVersionsRequest), Test (org.junit.jupiter.api.Test)
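The sequencing rule this test exercises — ApiVersionsRequest is only legal before the handshake, and sending it afterwards closes the connection — can be sketched as a tiny state machine. The state names mirror the real SaslState values used in Example 14; the class itself is hypothetical and deliberately simplified (it ignores re-authentication).

```java
// Minimal sketch of the SASL pre-auth sequencing rule: ApiVersionsRequest is
// accepted only before the handshake; after SASL_HANDSHAKE only authentication
// tokens are valid, so a late ApiVersionsRequest is rejected (in the real
// authenticator, the channel is closed). Hypothetical, simplified model.
public class SaslSequenceSketch {
    enum State { INITIAL_REQUEST, HANDSHAKE_OR_VERSIONS_REQUEST, AUTHENTICATE }
    enum Request { API_VERSIONS, SASL_HANDSHAKE, SASL_AUTH_TOKEN }

    private State state = State.INITIAL_REQUEST;

    // Returns false when the request is invalid in the current state.
    boolean accept(Request r) {
        switch (state) {
            case INITIAL_REQUEST:
            case HANDSHAKE_OR_VERSIONS_REQUEST:
                if (r == Request.API_VERSIONS) {
                    state = State.HANDSHAKE_OR_VERSIONS_REQUEST;
                    return true;
                }
                if (r == Request.SASL_HANDSHAKE) {
                    state = State.AUTHENTICATE;
                    return true;
                }
                return false;
            case AUTHENTICATE:
                return r == Request.SASL_AUTH_TOKEN;
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        SaslSequenceSketch s = new SaslSequenceSketch();
        System.out.println(s.accept(Request.SASL_HANDSHAKE)); // true: handshake first is valid
        System.out.println(s.accept(Request.API_VERSIONS));   // false: too late, connection closes
    }
}
```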

Example 13 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project kafka by apache.

From the class NetworkClientTest, method testUnsupportedApiVersionsRequestWithoutVersionProvidedByTheBroker.

@Test
public void testUnsupportedApiVersionsRequestWithoutVersionProvidedByTheBroker() {
    // initiate the connection
    client.ready(node, time.milliseconds());
    // handle the connection, initiate first ApiVersionsRequest
    client.poll(0, time.milliseconds());
    // ApiVersionsRequest is in flight but not sent yet
    assertTrue(client.hasInFlightRequests(node.idString()));
    // completes initiated sends
    client.poll(0, time.milliseconds());
    assertEquals(1, selector.completedSends().size());
    ByteBuffer buffer = selector.completedSendBuffers().get(0).buffer();
    RequestHeader header = parseHeader(buffer);
    assertEquals(ApiKeys.API_VERSIONS, header.apiKey());
    assertEquals(3, header.apiVersion());
    // prepare response
    delayedApiVersionsResponse(0, (short) 0, new ApiVersionsResponse(
        new ApiVersionsResponseData().setErrorCode(Errors.UNSUPPORTED_VERSION.code())));
    // handle the ApiVersionsResponse, initiate the second ApiVersionsRequest
    client.poll(0, time.milliseconds());
    // ApiVersionsRequest is in flight but not sent yet
    assertTrue(client.hasInFlightRequests(node.idString()));
    // ApiVersionsResponse has been received
    assertEquals(1, selector.completedReceives().size());
    // clean up the buffers
    selector.completedSends().clear();
    selector.completedSendBuffers().clear();
    selector.completedReceives().clear();
    // completes initiated sends
    client.poll(0, time.milliseconds());
    // ApiVersionsRequest has been sent
    assertEquals(1, selector.completedSends().size());
    buffer = selector.completedSendBuffers().get(0).buffer();
    header = parseHeader(buffer);
    assertEquals(ApiKeys.API_VERSIONS, header.apiKey());
    assertEquals(0, header.apiVersion());
    // prepare response
    delayedApiVersionsResponse(1, (short) 0, defaultApiVersionsResponse());
    // handle completed receives
    client.poll(0, time.milliseconds());
    // the ApiVersionsRequest is gone
    assertFalse(client.hasInFlightRequests(node.idString()));
    assertEquals(1, selector.completedReceives().size());
    // the client is ready
    assertTrue(client.isReady(node, time.milliseconds()));
}
Also used: ApiVersionsResponse (org.apache.kafka.common.requests.ApiVersionsResponse), RequestHeader (org.apache.kafka.common.requests.RequestHeader), ByteBuffer (java.nio.ByteBuffer), ApiVersionsResponseData (org.apache.kafka.common.message.ApiVersionsResponseData), Test (org.junit.jupiter.api.Test)
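The fallback this test verifies: the client first sends ApiVersionsRequest at its latest version (v3 in the asserted header); when the broker answers UNSUPPORTED_VERSION without listing which versions it supports, the client retries at v0, which every broker understands. A small standalone sketch of that decision, with hypothetical names:

```java
// Sketch of the client-side ApiVersions fallback: with no version hint from
// the broker, retry at v0; otherwise use the highest mutually supported
// version. Class and method names are hypothetical, not Kafka APIs.
public class ApiVersionsFallbackSketch {
    // brokerMaxVersion is null when the UNSUPPORTED_VERSION response carried
    // no supported-version information at all.
    static short nextRequestVersion(short clientLatest, Short brokerMaxVersion) {
        if (brokerMaxVersion == null)
            return 0; // broker gave no hint: fall back to v0
        return (short) Math.min(clientLatest, brokerMaxVersion);
    }

    public static void main(String[] args) {
        System.out.println(nextRequestVersion((short) 3, null));      // no hint -> 0
        System.out.println(nextRequestVersion((short) 3, (short) 2)); // hint v2 -> 2
    }
}
```

This is why the test asserts header versions 3 and then 0 on the two successive sends.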

Example 14 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project kafka by apache.

From the class SaslServerAuthenticator, method handleKafkaRequest.

private boolean handleKafkaRequest(byte[] requestBytes) throws IOException, AuthenticationException {
    boolean isKafkaRequest = false;
    String clientMechanism = null;
    try {
        ByteBuffer requestBuffer = ByteBuffer.wrap(requestBytes);
        RequestHeader header = RequestHeader.parse(requestBuffer);
        ApiKeys apiKey = header.apiKey();
        // A valid Kafka request header was received. SASL authentication tokens are now expected only
        // following a SaslHandshakeRequest since this is not a GSSAPI client token from a Kafka 0.9.0.x client.
        if (saslState == SaslState.INITIAL_REQUEST)
            setSaslState(SaslState.HANDSHAKE_OR_VERSIONS_REQUEST);
        isKafkaRequest = true;
        // Raise an error prior to parsing if the api cannot be handled at this layer. This avoids
        // unnecessary exposure to some of the more complex schema types.
        if (apiKey != ApiKeys.API_VERSIONS && apiKey != ApiKeys.SASL_HANDSHAKE)
            throw new IllegalSaslStateException("Unexpected Kafka request of type " + apiKey + " during SASL handshake.");
        LOG.debug("Handling Kafka request {} during {}", apiKey, reauthInfo.authenticationOrReauthenticationText());
        RequestContext requestContext = new RequestContext(header, connectionId, clientAddress(),
            KafkaPrincipal.ANONYMOUS, listenerName, securityProtocol, ClientInformation.EMPTY, false);
        RequestAndSize requestAndSize = requestContext.parseRequest(requestBuffer);
        if (apiKey == ApiKeys.API_VERSIONS)
            handleApiVersionsRequest(requestContext, (ApiVersionsRequest) requestAndSize.request);
        else
            clientMechanism = handleHandshakeRequest(requestContext, (SaslHandshakeRequest) requestAndSize.request);
    } catch (InvalidRequestException e) {
        if (saslState == SaslState.INITIAL_REQUEST) {
            // InvalidRequestException is thrown if the request is not in Kafka format or if the
            // API key is invalid. For compatibility with 0.9.0.x clients whose first packet is
            // a GSSAPI token starting with 0x60, revert to GSSAPI for both these exceptions.
            if (LOG.isDebugEnabled()) {
                StringBuilder tokenBuilder = new StringBuilder();
                for (byte b : requestBytes) {
                    tokenBuilder.append(String.format("%02x", b));
                    if (tokenBuilder.length() >= 20)
                        break;
                }
                LOG.debug("Received client packet of length {} starting with bytes 0x{}, process as GSSAPI packet", requestBytes.length, tokenBuilder);
            }
            if (enabledMechanisms.contains(SaslConfigs.GSSAPI_MECHANISM)) {
                LOG.debug("First client packet is not a SASL mechanism request, using default mechanism GSSAPI");
                clientMechanism = SaslConfigs.GSSAPI_MECHANISM;
            } else
                throw new UnsupportedSaslMechanismException("Exception handling first SASL packet from client, GSSAPI is not supported by server", e);
        } else
            throw e;
    }
    if (clientMechanism != null && (!reauthInfo.reauthenticating() || reauthInfo.saslMechanismUnchanged(clientMechanism))) {
        createSaslServer(clientMechanism);
        setSaslState(SaslState.AUTHENTICATE);
    }
    return isKafkaRequest;
}
Also used: ApiKeys (org.apache.kafka.common.protocol.ApiKeys), RequestAndSize (org.apache.kafka.common.requests.RequestAndSize), UnsupportedSaslMechanismException (org.apache.kafka.common.errors.UnsupportedSaslMechanismException), RequestHeader (org.apache.kafka.common.requests.RequestHeader), InvalidRequestException (org.apache.kafka.common.errors.InvalidRequestException), IllegalSaslStateException (org.apache.kafka.common.errors.IllegalSaslStateException), RequestContext (org.apache.kafka.common.requests.RequestContext), ByteBuffer (java.nio.ByteBuffer), ApiVersionsRequest (org.apache.kafka.common.requests.ApiVersionsRequest)
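The debug-logging loop inside the InvalidRequestException handler renders at most the first ten bytes (20 hex characters) of the client packet, enough to recognize a GSSAPI token that starts with 0x60. The same loop, extracted into a standalone helper (the class name is ours, the loop body is taken from the method above):

```java
// Standalone version of the hex-preview loop from handleKafkaRequest:
// format up to the first ten bytes of a packet as lowercase hex.
public class HexPreview {
    static String firstBytesHex(byte[] packet) {
        StringBuilder tokenBuilder = new StringBuilder();
        for (byte b : packet) {
            tokenBuilder.append(String.format("%02x", b));
            if (tokenBuilder.length() >= 20) // stop after 10 bytes (20 hex chars)
                break;
        }
        return tokenBuilder.toString();
    }

    public static void main(String[] args) {
        byte[] gssapiToken = new byte[] { 0x60, 0x23, (byte) 0xab };
        System.out.println(firstBytesHex(gssapiToken)); // prints "6023ab"
    }
}
```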

Example 15 with RequestHeader

Use of org.apache.kafka.common.requests.RequestHeader in project kafka by apache.

From the class AclAuthorizerBenchmark, method setup.

@Setup(Level.Trial)
public void setup() throws Exception {
    prepareAclCache();
    prepareAclToUpdate();
    // By adding `-95` to the resource name prefix, we cause the `TreeMap.from/to` call to return
    // most map entries. In such cases, we rely on the filtering based on `String.startsWith`
    // to return the matching ACLs. Using a more efficient data structure (e.g. a prefix
    // tree) should improve performance significantly.
    actions = Collections.singletonList(new Action(AclOperation.WRITE,
        new ResourcePattern(ResourceType.TOPIC, resourceNamePrefix + 95, PatternType.LITERAL), 1, true, true));
    authorizeContext = new RequestContext(
        new RequestHeader(ApiKeys.PRODUCE, Integer.valueOf(1).shortValue(), "someclient", 1),
        "1", InetAddress.getByName("127.0.0.1"), principal, ListenerName.normalised("listener"),
        SecurityProtocol.PLAINTEXT, ClientInformation.EMPTY, false);
    authorizeByResourceTypeContext = new RequestContext(
        new RequestHeader(ApiKeys.PRODUCE, Integer.valueOf(1).shortValue(), "someclient", 1),
        "1", InetAddress.getByName(authorizeByResourceTypeHostName), principal,
        ListenerName.normalised("listener"), SecurityProtocol.PLAINTEXT,
        ClientInformation.EMPTY, false);
}
Also used: Action (org.apache.kafka.server.authorizer.Action), ResourcePattern (org.apache.kafka.common.resource.ResourcePattern), RequestHeader (org.apache.kafka.common.requests.RequestHeader), RequestContext (org.apache.kafka.common.requests.RequestContext), Setup (org.openjdk.jmh.annotations.Setup)

Aggregations

RequestHeader (org.apache.kafka.common.requests.RequestHeader): 35
ByteBuffer (java.nio.ByteBuffer): 19
SecurityProtocol (org.apache.kafka.common.security.auth.SecurityProtocol): 12
Test (org.junit.jupiter.api.Test): 12
ApiVersionsRequest (org.apache.kafka.common.requests.ApiVersionsRequest): 11
NetworkSend (org.apache.kafka.common.network.NetworkSend): 10
ApiVersionsResponse (org.apache.kafka.common.requests.ApiVersionsResponse): 10
ApiKeys (org.apache.kafka.common.protocol.ApiKeys): 7
IllegalSaslStateException (org.apache.kafka.common.errors.IllegalSaslStateException): 6
RequestContext (org.apache.kafka.common.requests.RequestContext): 6
Test (org.junit.Test): 5
Collections (java.util.Collections): 4
MetadataRequest (org.apache.kafka.common.requests.MetadataRequest): 4
IOException (java.io.IOException): 3
InetAddress (java.net.InetAddress): 3
HashMap (java.util.HashMap): 3
Map (java.util.Map): 3
ApiVersionsResponseData (org.apache.kafka.common.message.ApiVersionsResponseData): 3
ApiVersion (org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersion): 3
TransportLayer (org.apache.kafka.common.network.TransportLayer): 3