Example 1 with SecurityProtocol

Use of org.apache.kafka.common.protocol.SecurityProtocol in project kafka by apache.

From class ClientUtils, method createChannelBuilder:

/**
 * @param config client configs
 * @return configured ChannelBuilder based on the configs.
 */
public static ChannelBuilder createChannelBuilder(AbstractConfig config) {
    SecurityProtocol securityProtocol = SecurityProtocol.forName(config.getString(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG));
    if (!SecurityProtocol.nonTestingValues().contains(securityProtocol))
        throw new ConfigException("Invalid SecurityProtocol " + securityProtocol);
    String clientSaslMechanism = config.getString(SaslConfigs.SASL_MECHANISM);
    return ChannelBuilders.clientChannelBuilder(securityProtocol, JaasContext.Type.CLIENT, config, null, clientSaslMechanism, true);
}
Also used: SecurityProtocol (org.apache.kafka.common.protocol.SecurityProtocol), ConfigException (org.apache.kafka.common.config.ConfigException)
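
For orientation, here is a minimal usage sketch of the factory above. It is an assumption-laden illustration, not project code: real clients pass a ProducerConfig or ConsumerConfig (which already define these keys), while this sketch hand-builds a ConfigDef so an AbstractConfig can be constructed directly. Actually running the SASL variant also requires a client JAAS configuration to be in place.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.ClientUtils;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.network.ChannelBuilder;

public class ChannelBuilderSketch {
    public static void main(String[] args) {
        // Define only the two keys createChannelBuilder reads; the defaults mirror Kafka's own.
        ConfigDef def = new ConfigDef()
                .define(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, ConfigDef.Type.STRING,
                        "PLAINTEXT", ConfigDef.Importance.MEDIUM, "security protocol")
                .define(SaslConfigs.SASL_MECHANISM, ConfigDef.Type.STRING,
                        "GSSAPI", ConfigDef.Importance.MEDIUM, "SASL mechanism");
        Map<String, Object> props = new HashMap<>();
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // An unrecognized or testing-only protocol string would surface as the ConfigException above.
        ChannelBuilder builder = ClientUtils.createChannelBuilder(new AbstractConfig(def, props));
        System.out.println(builder.getClass().getSimpleName());
    }
}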

Example 2 with SecurityProtocol

Use of org.apache.kafka.common.protocol.SecurityProtocol in project kafka by apache.

From class SaslAuthenticatorTest, method testUnauthenticatedApiVersionsRequest:

/**
 * Tests that Kafka ApiVersionsRequests are handled by the SASL server authenticator
 * prior to the SASL handshake flow and that subsequent authentication succeeds
 * when the transport layer is PLAINTEXT/SSL. This test uses a non-SASL client that simulates
 * SASL authentication after the ApiVersionsRequest.
 * <p>
 * Test sequence (using <tt>securityProtocol=PLAINTEXT</tt> as an example):
 * <ol>
 *   <li>Start a SASL_PLAINTEXT test server that simply echoes back client requests after authentication.</li>
 *   <li>A (non-SASL) PLAINTEXT test client connects to the SASL server port. The client is now unauthenticated.</li>
 *   <li>The unauthenticated non-SASL client sends an ApiVersionsRequest and validates the response.
 *       A valid response indicates that the {@link SaslServerAuthenticator} of the test server responded to
 *       the ApiVersionsRequest even though the client is not yet authenticated.</li>
 *   <li>The unauthenticated non-SASL client sends a SaslHandshakeRequest and validates the response. A valid response
 *       indicates that the {@link SaslServerAuthenticator} of the test server responded to the SaslHandshakeRequest
 *       after processing the ApiVersionsRequest.</li>
 *   <li>The unauthenticated non-SASL client sends the SASL/PLAIN packet containing the username/password to authenticate
 *       itself. The client is now authenticated by the server. At this point this test client is in the
 *       same state as a regular SASL_PLAINTEXT client that is <tt>ready</tt>.</li>
 *   <li>The authenticated client sends random data to the server and checks that the data is echoed
 *       back by the test server (i.e., not as a Kafka request-response) to ensure that the client now
 *       behaves exactly as a regular SASL_PLAINTEXT client that has completed authentication.</li>
 * </ol>
 */
private void testUnauthenticatedApiVersionsRequest(SecurityProtocol securityProtocol) throws Exception {
    configureMechanisms("PLAIN", Arrays.asList("PLAIN"));
    server = createEchoServer(securityProtocol);
    // Create non-SASL connection to manually authenticate after ApiVersionsRequest
    String node = "1";
    SecurityProtocol clientProtocol;
    switch(securityProtocol) {
        case SASL_PLAINTEXT:
            clientProtocol = SecurityProtocol.PLAINTEXT;
            break;
        case SASL_SSL:
            clientProtocol = SecurityProtocol.SSL;
            break;
        default:
            throw new IllegalArgumentException("Server protocol " + securityProtocol + " is not SASL");
    }
    createClientConnection(clientProtocol, node);
    NetworkTestUtils.waitForChannelReady(selector, node);
    // Send ApiVersionsRequest and check response
    ApiVersionsResponse versionsResponse = sendVersionRequestReceiveResponse(node);
    assertEquals(ApiKeys.SASL_HANDSHAKE.oldestVersion(), versionsResponse.apiVersion(ApiKeys.SASL_HANDSHAKE.id).minVersion);
    assertEquals(ApiKeys.SASL_HANDSHAKE.latestVersion(), versionsResponse.apiVersion(ApiKeys.SASL_HANDSHAKE.id).maxVersion);
    // Send SaslHandshakeRequest and check response
    SaslHandshakeResponse handshakeResponse = sendHandshakeRequestReceiveResponse(node);
    assertEquals(Collections.singletonList("PLAIN"), handshakeResponse.enabledMechanisms());
    // Complete manual authentication and check send/receive succeed
    authenticateUsingSaslPlainAndCheckConnection(node);
}
Also used: ApiVersionsResponse (org.apache.kafka.common.requests.ApiVersionsResponse), SaslHandshakeResponse (org.apache.kafka.common.requests.SaslHandshakeResponse), SecurityProtocol (org.apache.kafka.common.protocol.SecurityProtocol)
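
The helper above is parameterized by the server-side protocol; presumably the enclosing test class drives it once per SASL transport, along the lines of the following sketch (the wrapper method names are illustrative):

@Test
public void testUnauthenticatedApiVersionsRequestOverPlaintext() throws Exception {
    // SASL_PLAINTEXT server, plain PLAINTEXT client that performs SASL by hand
    testUnauthenticatedApiVersionsRequest(SecurityProtocol.SASL_PLAINTEXT);
}

@Test
public void testUnauthenticatedApiVersionsRequestOverSsl() throws Exception {
    // SASL_SSL server, plain SSL client
    testUnauthenticatedApiVersionsRequest(SecurityProtocol.SASL_SSL);
}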

Example 3 with SecurityProtocol

Use of org.apache.kafka.common.protocol.SecurityProtocol in project kafka by apache.

From class RequestResponseTest, method createUpdateMetadataRequest:

private UpdateMetadataRequest createUpdateMetadataRequest(int version, String rack) {
    Map<TopicPartition, PartitionState> partitionStates = new HashMap<>();
    List<Integer> isr = Arrays.asList(1, 2);
    List<Integer> replicas = Arrays.asList(1, 2, 3, 4);
    partitionStates.put(new TopicPartition("topic5", 105), new PartitionState(0, 2, 1, new ArrayList<>(isr), 2, new HashSet<>(replicas)));
    partitionStates.put(new TopicPartition("topic5", 1), new PartitionState(1, 1, 1, new ArrayList<>(isr), 2, new HashSet<>(replicas)));
    partitionStates.put(new TopicPartition("topic20", 1), new PartitionState(1, 0, 1, new ArrayList<>(isr), 2, new HashSet<>(replicas)));
    SecurityProtocol plaintext = SecurityProtocol.PLAINTEXT;
    List<UpdateMetadataRequest.EndPoint> endPoints1 = new ArrayList<>();
    endPoints1.add(new UpdateMetadataRequest.EndPoint("host1", 1223, plaintext, ListenerName.forSecurityProtocol(plaintext)));
    List<UpdateMetadataRequest.EndPoint> endPoints2 = new ArrayList<>();
    endPoints2.add(new UpdateMetadataRequest.EndPoint("host1", 1244, plaintext, ListenerName.forSecurityProtocol(plaintext)));
    if (version > 0) {
        SecurityProtocol ssl = SecurityProtocol.SSL;
        endPoints2.add(new UpdateMetadataRequest.EndPoint("host2", 1234, ssl, ListenerName.forSecurityProtocol(ssl)));
        endPoints2.add(new UpdateMetadataRequest.EndPoint("host2", 1334, ssl, new ListenerName("CLIENT")));
    }
    Set<UpdateMetadataRequest.Broker> liveBrokers = new HashSet<>(Arrays.asList(new UpdateMetadataRequest.Broker(0, endPoints1, rack), new UpdateMetadataRequest.Broker(1, endPoints2, rack)));
    return new UpdateMetadataRequest.Builder((short) version, 1, 10, partitionStates, liveBrokers).build();
}
Also used: HashMap (java.util.HashMap), LinkedHashMap (java.util.LinkedHashMap), ArrayList (java.util.ArrayList), SecurityProtocol (org.apache.kafka.common.protocol.SecurityProtocol), ListenerName (org.apache.kafka.common.network.ListenerName), TopicPartition (org.apache.kafka.common.TopicPartition), HashSet (java.util.HashSet)
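
The endpoint wiring above relies on ListenerName.forSecurityProtocol(...) reusing the protocol's name as the listener name, whereas the "CLIENT" endpoint carries an explicitly chosen label. A short sketch of that distinction:

// A listener name derived from a protocol is just the protocol's name...
ListenerName byProtocol = ListenerName.forSecurityProtocol(SecurityProtocol.SSL);
assertEquals("SSL", byProtocol.value());
// ...while an explicit listener name is decoupled from the protocol it fronts.
ListenerName explicit = new ListenerName("CLIENT");
assertEquals("CLIENT", explicit.value());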

Example 4 with SecurityProtocol

Use of org.apache.kafka.common.protocol.SecurityProtocol in project metron by apache.

From class ParserTopologyCLI, method createParserTopology:

public ParserTopologyBuilder.ParserTopology createParserTopology(final CommandLine cmd) throws Exception {
    String zookeeperUrl = ParserOptions.ZK_QUORUM.get(cmd);
    Optional<String> brokerUrl = ParserOptions.BROKER_URL.has(cmd) ? Optional.of(ParserOptions.BROKER_URL.get(cmd)) : Optional.empty();
    String sensorTypeRaw = ParserOptions.SENSOR_TYPES.get(cmd);
    List<String> sensorTypes = Arrays.stream(sensorTypeRaw.split(TOPOLOGY_OPTION_SEPARATOR)).map(String::trim).collect(Collectors.toList());
    /*
     * It bears mentioning why we're creating this ValueSupplier indirection here.
     * As a separation of responsibilities, the CLI class defines the order of precedence
     * for the various topological and structural properties used to create a parser. This is
     * desirable because there are other mechanisms now (e.g. integration tests), and there may
     * be more in the future (e.g. a REST service that starts parsers without using the CLI),
     * for constructing parser topologies, so it's sensible to split those concerns.
     *
     * Unfortunately, determining the structural parameters for a parser requires interacting with
     * external services (e.g. zookeeper) that are set up well within the ParserTopology class.
     * Rather than pulling the infrastructure for interacting with those services out, moving it into the
     * CLI class, and breaking that separation of concerns, we've created a supplier
     * indirection in which the CLI class provides the precedence logic without owning
     * the responsibility of constructing the infrastructure through which the values are
     * ultimately supplied. (A sketch of the supplier interface follows this example.)
     */
    // kafka spout parallelism
    ValueSupplier<List> spoutParallelism = (parserConfigs, clazz) -> {
        if (ParserOptions.SPOUT_PARALLELISM.has(cmd)) {
            // Handle the case where there's only one and we can default reasonably
            if (parserConfigs.size() == 1) {
                return Collections.singletonList(Integer.parseInt(ParserOptions.SPOUT_PARALLELISM.get(cmd, "1")));
            }
            // Handle the case where multiple spout parallelism values are passed explicitly.
            String parallelismRaw = ParserOptions.SPOUT_PARALLELISM.get(cmd, "1");
            List<String> parallelisms = Arrays.stream(parallelismRaw.split(TOPOLOGY_OPTION_SEPARATOR)).map(String::trim).collect(Collectors.toList());
            if (parallelisms.size() != parserConfigs.size()) {
                throw new IllegalArgumentException("Spout parallelism should match number of sensors 1:1");
            }
            List<Integer> spoutParallelisms = new ArrayList<>();
            for (String s : parallelisms) {
                spoutParallelisms.add(Integer.parseInt(s));
            }
            return spoutParallelisms;
        }
        List<Integer> spoutParallelisms = new ArrayList<>();
        for (SensorParserConfig parserConfig : parserConfigs) {
            spoutParallelisms.add(parserConfig.getSpoutParallelism());
        }
        return spoutParallelisms;
    };
    // kafka spout number of tasks
    ValueSupplier<List> spoutNumTasks = (parserConfigs, clazz) -> {
        if (ParserOptions.SPOUT_NUM_TASKS.has(cmd)) {
            // Handle the case where there's only one and we can default reasonably
            if (parserConfigs.size() == 1) {
                return Collections.singletonList(Integer.parseInt(ParserOptions.SPOUT_NUM_TASKS.get(cmd, "1")));
            }
            // Handle the case where multiple spout num-task values are passed explicitly.
            String numTasksRaw = ParserOptions.SPOUT_NUM_TASKS.get(cmd, "1");
            List<String> numTasks = Arrays.stream(numTasksRaw.split(TOPOLOGY_OPTION_SEPARATOR)).map(String::trim).collect(Collectors.toList());
            if (numTasks.size() != parserConfigs.size()) {
                throw new IllegalArgumentException("Spout num tasks should match number of sensors 1:1");
            }
            List<Integer> spoutTasksList = new ArrayList<>();
            for (String s : numTasks) {
                spoutTasksList.add(Integer.parseInt(s));
            }
            return spoutTasksList;
        }
        List<Integer> numTasks = new ArrayList<>();
        for (SensorParserConfig parserConfig : parserConfigs) {
            numTasks.add(parserConfig.getSpoutNumTasks());
        }
        return numTasks;
    };
    // parser bolt parallelism
    ValueSupplier<Integer> parserParallelism = (parserConfigs, clazz) -> {
        if (ParserOptions.PARSER_PARALLELISM.has(cmd)) {
            return Integer.parseInt(ParserOptions.PARSER_PARALLELISM.get(cmd, "1"));
        }
        int retValue = 1;
        for (SensorParserConfig config : parserConfigs) {
            Integer configValue = config.getParserParallelism();
            retValue = configValue == null ? retValue : configValue;
        }
        return retValue;
    };
    // parser bolt number of tasks
    ValueSupplier<Integer> parserNumTasks = (parserConfigs, clazz) -> {
        if (ParserOptions.PARSER_NUM_TASKS.has(cmd)) {
            return Integer.parseInt(ParserOptions.PARSER_NUM_TASKS.get(cmd, "1"));
        }
        int retValue = 1;
        for (SensorParserConfig config : parserConfigs) {
            Integer configValue = config.getParserNumTasks();
            retValue = configValue == null ? retValue : configValue;
        }
        return retValue;
    };
    // error bolt parallelism
    ValueSupplier<Integer> errorParallelism = (parserConfigs, clazz) -> {
        if (ParserOptions.ERROR_WRITER_PARALLELISM.has(cmd)) {
            return Integer.parseInt(ParserOptions.ERROR_WRITER_PARALLELISM.get(cmd, "1"));
        }
        int retValue = 1;
        for (SensorParserConfig config : parserConfigs) {
            Integer configValue = config.getErrorWriterParallelism();
            retValue = configValue == null ? retValue : configValue;
        }
        return retValue;
    };
    // error bolt number of tasks
    ValueSupplier<Integer> errorNumTasks = (parserConfigs, clazz) -> {
        if (ParserOptions.ERROR_WRITER_NUM_TASKS.has(cmd)) {
            return Integer.parseInt(ParserOptions.ERROR_WRITER_NUM_TASKS.get(cmd, "1"));
        }
        int retValue = 1;
        for (SensorParserConfig config : parserConfigs) {
            Integer configValue = config.getErrorWriterNumTasks();
            retValue = configValue == null ? retValue : configValue;
        }
        return retValue;
    };
    // kafka spout config
    ValueSupplier<List> spoutConfig = (parserConfigs, clazz) -> {
        if (ParserOptions.SPOUT_CONFIG.has(cmd)) {
            return Collections.singletonList(readJSONMapFromFile(new File(ParserOptions.SPOUT_CONFIG.get(cmd))));
        }
        List<Map<String, Object>> retValue = new ArrayList<>();
        for (SensorParserConfig config : parserConfigs) {
            retValue.add(config.getSpoutConfig());
        }
        return retValue;
    };
    // security protocol
    ValueSupplier<String> securityProtocol = (parserConfigs, clazz) -> {
        Optional<String> sp = Optional.empty();
        if (ParserOptions.SECURITY_PROTOCOL.has(cmd)) {
            sp = Optional.of(ParserOptions.SECURITY_PROTOCOL.get(cmd));
        }
        // Need to handle the list of spout configs: any non-PLAINTEXT protocol wins
        if (!sp.isPresent()) {
            sp = getSecurityProtocol(sp, spoutConfig.get(parserConfigs, List.class));
        }
        // Also look through the parser configs for any non-PLAINTEXT protocol
        String parserConfigSp = SecurityProtocol.PLAINTEXT.name;
        for (SensorParserConfig config : parserConfigs) {
            String configSp = config.getSecurityProtocol();
            if (!SecurityProtocol.PLAINTEXT.name.equals(configSp)) {
                // We have a winner
                parserConfigSp = configSp;
            }
        }
        return sp.orElse(parserConfigSp);
    };
    // storm configuration
    ValueSupplier<Config> stormConf = (parserConfigs, clazz) -> {
        // Last one wins
        Config finalConfig = new Config();
        for (SensorParserConfig parserConfig : parserConfigs) {
            Map<String, Object> c = parserConfig.getStormConfig();
            if (c != null && !c.isEmpty()) {
                finalConfig.putAll(c);
            }
            if (parserConfig.getNumAckers() != null) {
                Config.setNumAckers(finalConfig, parserConfig.getNumAckers());
            }
            if (parserConfig.getNumWorkers() != null) {
                Config.setNumWorkers(finalConfig, parserConfig.getNumWorkers());
            }
        }
        return ParserOptions.getConfig(cmd, finalConfig).orElse(finalConfig);
    };
    // output topic
    ValueSupplier<String> outputTopic = (parserConfigs, clazz) -> {
        String topic = null;
        if (ParserOptions.OUTPUT_TOPIC.has(cmd)) {
            topic = ParserOptions.OUTPUT_TOPIC.get(cmd);
        }
        return topic;
    };
    // The error-topic supplier throws an exception if the topics aren't all the same.
    ValueSupplier<String> errorTopic = (parserConfigs, clazz) -> {
        // The topic will be set to the 'parser.error.topic' setting in the globals when the error bolt is created
        String topic = null;
        for (SensorParserConfig parserConfig : parserConfigs) {
            String currentTopic = parserConfig.getErrorTopic();
            if (topic != null && !topic.equals(currentTopic)) {
                throw new IllegalArgumentException("Parser Aggregation specified with differing error topics");
            }
            topic = currentTopic;
        }
        return topic;
    };
    return getParserTopology(zookeeperUrl, brokerUrl, sensorTypes, spoutParallelism, spoutNumTasks, parserParallelism, parserNumTasks, errorParallelism, errorNumTasks, spoutConfig, securityProtocol, stormConf, outputTopic, errorTopic);
}
Also used: Arrays (java.util.Arrays), ListIterator (java.util.ListIterator), Arg (org.apache.metron.parsers.topology.config.Arg), ValueSupplier (org.apache.metron.parsers.topology.config.ValueSupplier), Options (org.apache.commons.cli.Options), HelpFormatter (org.apache.commons.cli.HelpFormatter), Function (java.util.function.Function), ArrayList (java.util.ArrayList), ConfigHandlers (org.apache.metron.parsers.topology.config.ConfigHandlers), Map (java.util.Map), SensorParserConfig (org.apache.metron.common.configuration.SensorParserConfig), CommandLine (org.apache.commons.cli.CommandLine), JSONUtils (org.apache.metron.common.utils.JSONUtils), PosixParser (org.apache.commons.cli.PosixParser), Option (org.apache.commons.cli.Option), StormSubmitter (org.apache.storm.StormSubmitter), SpoutConfiguration (org.apache.metron.storm.kafka.flux.SpoutConfiguration), CommandLineParser (org.apache.commons.cli.CommandLineParser), SecurityProtocol (org.apache.kafka.common.protocol.SecurityProtocol), IOException (java.io.IOException), FileUtils (org.apache.commons.io.FileUtils), Constants (org.apache.metron.common.Constants), Utils (org.apache.storm.utils.Utils), Collectors (java.util.stream.Collectors), File (java.io.File), LocalCluster (org.apache.storm.LocalCluster), List (java.util.List), ParseException (org.apache.commons.cli.ParseException), KafkaUtils (org.apache.metron.common.utils.KafkaUtils), Optional (java.util.Optional), Config (org.apache.storm.Config), Collections (java.util.Collections), Joiner (com.google.common.base.Joiner)
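
The ValueSupplier indirection discussed in the long comment above reduces to a small functional interface. The real definition lives in org.apache.metron.parsers.topology.config; the sketch below is inferred from the call sites (e.g. spoutConfig.get(parserConfigs, List.class)) and may differ from the actual source in detail.

import java.util.List;
import org.apache.metron.common.configuration.SensorParserConfig;

// Inferred sketch: each supplier resolves one topology property, encoding the
// CLI-defined precedence (explicit CLI option first, then the per-sensor configs).
@FunctionalInterface
public interface ValueSupplier<T> {
    T get(List<SensorParserConfig> parserConfigs, Class<T> clazz);
}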

Example 5 with SecurityProtocol

Use of org.apache.kafka.common.protocol.SecurityProtocol in project kafka by apache.

From class SaslAuthenticatorTest, method testApiVersionsRequestWithUnsupportedVersion:

/**
 * Tests that an unsupported version of ApiVersionsRequest before the SASL handshake request
 * returns an error response and does not result in authentication failure. This test
 * is similar to {@link #testUnauthenticatedApiVersionsRequest(SecurityProtocol)},
 * where a non-SASL client is used to send requests that are processed by the
 * {@link SaslServerAuthenticator} of the server prior to client authentication.
 */
@Test
public void testApiVersionsRequestWithUnsupportedVersion() throws Exception {
    SecurityProtocol securityProtocol = SecurityProtocol.SASL_PLAINTEXT;
    configureMechanisms("PLAIN", Arrays.asList("PLAIN"));
    server = createEchoServer(securityProtocol);
    // Send ApiVersionsRequest with unsupported version and validate error response.
    String node = "1";
    createClientConnection(SecurityProtocol.PLAINTEXT, node);
    RequestHeader header = new RequestHeader(ApiKeys.API_VERSIONS.id, Short.MAX_VALUE, "someclient", 1);
    ApiVersionsRequest request = new ApiVersionsRequest.Builder().build();
    selector.send(request.toSend(node, header));
    ByteBuffer responseBuffer = waitForResponse();
    ResponseHeader.parse(responseBuffer);
    ApiVersionsResponse response = ApiVersionsResponse.parse(responseBuffer, (short) 0);
    assertEquals(Errors.UNSUPPORTED_VERSION, response.error());
    // Send ApiVersionsRequest with a supported version. This should succeed.
    sendVersionRequestReceiveResponse(node);
    // Test that client can authenticate successfully
    sendHandshakeRequestReceiveResponse(node);
    authenticateUsingSaslPlainAndCheckConnection(node);
}
Also used: ApiVersionsResponse (org.apache.kafka.common.requests.ApiVersionsResponse), SecurityProtocol (org.apache.kafka.common.protocol.SecurityProtocol), RequestHeader (org.apache.kafka.common.requests.RequestHeader), ApiVersionsRequest (org.apache.kafka.common.requests.ApiVersionsRequest), ByteBuffer (java.nio.ByteBuffer), Test (org.junit.Test)
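
A subtlety worth noting in the test above: the error response is deliberately parsed as version 0, since a broker that rejects an unknown ApiVersionsRequest version cannot know which response versions the client understands and falls back to the oldest format. A hedged sketch of that probe extracted into a helper (the helper name is illustrative; it reuses the test's selector and waitForResponse):

// Illustrative helper: send an ApiVersionsRequest with a deliberately unsupported
// version and return the error from the version-0 response the broker falls back to.
private Errors probeUnsupportedApiVersionsRequest(String node) throws Exception {
    RequestHeader header = new RequestHeader(ApiKeys.API_VERSIONS.id, Short.MAX_VALUE, "someclient", 2);
    selector.send(new ApiVersionsRequest.Builder().build().toSend(node, header));
    ByteBuffer buffer = waitForResponse();
    // Consume the response header before parsing the body.
    ResponseHeader.parse(buffer);
    return ApiVersionsResponse.parse(buffer, (short) 0).error();
}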

Aggregations

SecurityProtocol (org.apache.kafka.common.protocol.SecurityProtocol): 7 usages
Test (org.junit.Test): 3 usages
ArrayList (java.util.ArrayList): 2 usages
HashMap (java.util.HashMap): 2 usages
ApiVersionsResponse (org.apache.kafka.common.requests.ApiVersionsResponse): 2 usages
Joiner (com.google.common.base.Joiner): 1 usage
File (java.io.File): 1 usage
IOException (java.io.IOException): 1 usage
InetSocketAddress (java.net.InetSocketAddress): 1 usage
ByteBuffer (java.nio.ByteBuffer): 1 usage
Arrays (java.util.Arrays): 1 usage
Collections (java.util.Collections): 1 usage
HashSet (java.util.HashSet): 1 usage
LinkedHashMap (java.util.LinkedHashMap): 1 usage
List (java.util.List): 1 usage
ListIterator (java.util.ListIterator): 1 usage
Map (java.util.Map): 1 usage
Optional (java.util.Optional): 1 usage
Function (java.util.function.Function): 1 usage
Collectors (java.util.stream.Collectors): 1 usage