
Example 11 with ConnectException

Use of org.apache.kafka.connect.errors.ConnectException in project kafka by apache.

Class KafkaConfigBackingStore, method removeConnectorConfig.

/**
 * Remove configuration for a given connector.
 * @param connector name of the connector to remove
 */
@Override
public void removeConnectorConfig(String connector) {
    log.debug("Removing connector configuration for connector {}", connector);
    try {
        configLog.send(CONNECTOR_KEY(connector), null);
        configLog.send(TARGET_STATE_KEY(connector), null);
        configLog.readToEnd().get(READ_TO_END_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        log.error("Failed to remove connector configuration from Kafka: ", e);
        throw new ConnectException("Error removing connector configuration from Kafka", e);
    }
}
Also used : ExecutionException (java.util.concurrent.ExecutionException), TimeoutException (java.util.concurrent.TimeoutException), ConnectException (org.apache.kafka.connect.errors.ConnectException)
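The null values sent above are tombstones: the config topic is log-compacted, so writing (key, null) eventually erases the key, which is how removeConnectorConfig deletes both the connector config and its target state. A minimal, self-contained sketch of that semantics (the class and helper names here are illustrative, not Kafka APIs):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionSketch {
    /** Replays records in order; a null value acts as a tombstone and deletes the key. */
    static Map<String, String> compactedView(List<String[]> records) {
        Map<String, String> view = new LinkedHashMap<>();
        for (String[] kv : records) {
            if (kv[1] == null)
                view.remove(kv[0]);   // tombstone: the key disappears after compaction
            else
                view.put(kv[0], kv[1]);
        }
        return view;
    }

    public static void main(String[] args) {
        List<String[]> log = Arrays.asList(
                new String[]{"connector-my-source", "{\"tasks.max\":\"2\"}"},
                new String[]{"target-state-my-source", "STARTED"},
                // removeConnectorConfig sends null for both keys:
                new String[]{"connector-my-source", null},
                new String[]{"target-state-my-source", null});
        System.out.println(compactedView(log).isEmpty()); // prints "true"
    }
}
```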

Example 12 with ConnectException

Use of org.apache.kafka.connect.errors.ConnectException in project kafka by apache.

Class KafkaConfigBackingStore, method putTaskConfigs.

/**
 * Write these task configurations and associated commit messages, unless an inconsistency is found that indicates
 * that we would be leaving one of the referenced connectors with an inconsistent state.
 *
 * @param connector the connector to write task configuration
 * @param configs list of task configurations for the connector
 * @throws ConnectException if the task configurations do not resolve inconsistencies found in the existing root
 *                          and task configurations.
 */
@Override
public void putTaskConfigs(String connector, List<Map<String, String>> configs) {
    // Make sure we're at the end of the log. We should be the only writer, but we want to make sure we don't have
    // any outstanding lagging data to consume.
    try {
        configLog.readToEnd().get(READ_TO_END_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        log.error("Failed to write root configuration to Kafka: ", e);
        throw new ConnectException("Error writing root configuration to Kafka", e);
    }
    int taskCount = configs.size();
    // Start sending all the individual updates
    int index = 0;
    for (Map<String, String> taskConfig : configs) {
        Struct connectConfig = new Struct(TASK_CONFIGURATION_V0);
        connectConfig.put("properties", taskConfig);
        byte[] serializedConfig = converter.fromConnectData(topic, TASK_CONFIGURATION_V0, connectConfig);
        log.debug("Writing configuration for task " + index + " configuration: " + taskConfig);
        ConnectorTaskId connectorTaskId = new ConnectorTaskId(connector, index);
        configLog.send(TASK_KEY(connectorTaskId), serializedConfig);
        index++;
    }
    // Finally, send the commit to update the number of tasks and apply the new configs, then wait until we read to
    // the end of the log
    try {
        // Read to end to ensure all the task configs have been written
        if (taskCount > 0) {
            configLog.readToEnd().get(READ_TO_END_TIMEOUT_MS, TimeUnit.MILLISECONDS);
        }
        // Write the commit message
        Struct connectConfig = new Struct(CONNECTOR_TASKS_COMMIT_V0);
        connectConfig.put("tasks", taskCount);
        byte[] serializedConfig = converter.fromConnectData(topic, CONNECTOR_TASKS_COMMIT_V0, connectConfig);
        log.debug("Writing commit for connector " + connector + " with " + taskCount + " tasks.");
        configLog.send(COMMIT_TASKS_KEY(connector), serializedConfig);
        // Read to end to ensure all the commit messages have been written
        configLog.readToEnd().get(READ_TO_END_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        log.error("Failed to write root configuration to Kafka: ", e);
        throw new ConnectException("Error writing root configuration to Kafka", e);
    }
}
Also used : ConnectorTaskId (org.apache.kafka.connect.util.ConnectorTaskId), ExecutionException (java.util.concurrent.ExecutionException), TimeoutException (java.util.concurrent.TimeoutException), ConnectException (org.apache.kafka.connect.errors.ConnectException), Struct (org.apache.kafka.connect.data.Struct)
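putTaskConfigs writes all the individual task configs first and only then writes a commit record carrying the task count, so a reader can tell a complete update from a partial one. A self-contained sketch of that staging-then-commit idea, assuming a simplified in-memory store (the class and method names are illustrative, not Kafka APIs):

```java
import java.util.HashMap;
import java.util.Map;

public class TaskCommitSketch {
    private final Map<Integer, String> staged = new HashMap<>();
    private Map<Integer, String> live = new HashMap<>();

    /** Records a task config as it is read from the log; not yet visible to callers. */
    void stage(int taskIndex, String config) {
        staged.put(taskIndex, config);
    }

    /** Promotes staged configs only if the commit's task count matches what was staged. */
    boolean commit(int taskCount) {
        if (staged.size() != taskCount)
            return false;                 // inconsistent: a task config write was lost or interleaved
        live = new HashMap<>(staged);     // atomic swap to the new complete set
        staged.clear();
        return true;
    }

    public static void main(String[] args) {
        TaskCommitSketch store = new TaskCommitSketch();
        store.stage(0, "task-0-config");
        store.stage(1, "task-1-config");
        System.out.println(store.commit(2)); // prints "true": counts match, configs applied
    }
}
```

If the commit arrives with a count that does not match the staged configs, the update is rejected rather than applied half-finished, which is the inconsistency the method's javadoc refers to.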

Example 13 with ConnectException

Use of org.apache.kafka.connect.errors.ConnectException in project kafka by apache.

Class KafkaConfigBackingStore, method updateConnectorConfig.

private void updateConnectorConfig(String connector, byte[] serializedConfig) {
    try {
        configLog.send(CONNECTOR_KEY(connector), serializedConfig);
        configLog.readToEnd().get(READ_TO_END_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        log.error("Failed to write connector configuration to Kafka: ", e);
        throw new ConnectException("Error writing connector configuration to Kafka", e);
    }
}
Also used : ExecutionException (java.util.concurrent.ExecutionException), TimeoutException (java.util.concurrent.TimeoutException), ConnectException (org.apache.kafka.connect.errors.ConnectException)
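The pattern above — send, then block on readToEnd() with a timeout, wrapping the three checked exceptions in an unchecked ConnectException — is how the store confirms a write is durable before returning. A minimal sketch of just that fencing step, using a plain Future and a RuntimeException as a stand-in for ConnectException (names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ReadToEndSketch {
    static final long READ_TO_END_TIMEOUT_MS = 1000;

    /** Blocks until the read-to-end future completes, or wraps the failure in an unchecked exception. */
    static void awaitDurable(Future<Void> readToEnd) {
        try {
            readToEnd.get(READ_TO_END_TIMEOUT_MS, TimeUnit.MILLISECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            // Mirrors ConnectException's role: convert checked failures into one runtime exception
            throw new RuntimeException("Error writing connector configuration to Kafka", e);
        }
    }

    public static void main(String[] args) {
        // An already-completed future stands in for configLog.readToEnd() succeeding
        awaitDurable(CompletableFuture.completedFuture(null));
        System.out.println("write confirmed durable"); // prints "write confirmed durable"
    }
}
```

Wrapping in an unchecked exception lets callers that cannot meaningfully recover simply let the failure propagate, which is why Connect uses ConnectException throughout these stores.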

Example 14 with ConnectException

Use of org.apache.kafka.connect.errors.ConnectException in project kafka by apache.

Class DistributedHerder, method halt.

// public for testing
public void halt() {
    synchronized (this) {
        // Clean up any connectors and tasks that are still running.
        log.info("Stopping connectors and tasks that are still assigned to this worker.");
        List<Callable<Void>> callables = new ArrayList<>();
        for (String connectorName : new ArrayList<>(worker.connectorNames())) {
            callables.add(getConnectorStoppingCallable(connectorName));
        }
        for (ConnectorTaskId taskId : new ArrayList<>(worker.taskIds())) {
            callables.add(getTaskStoppingCallable(taskId));
        }
        startAndStop(callables);
        member.stop();
        // Explicitly fail any outstanding requests so they actually get a response and get an
        // understandable reason for their failure.
        HerderRequest request = requests.pollFirst();
        while (request != null) {
            request.callback().onCompletion(new ConnectException("Worker is shutting down"), null);
            request = requests.pollFirst();
        }
        stopServices();
    }
}
Also used : ConnectorTaskId (org.apache.kafka.connect.util.ConnectorTaskId), ArrayList (java.util.ArrayList), Callable (java.util.concurrent.Callable), ConnectException (org.apache.kafka.connect.errors.ConnectException)
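halt() builds one Callable per connector and task and hands them all to startAndStop so shutdown work runs in parallel rather than one resource at a time. A self-contained sketch of that fan-out-and-wait shape using a plain ExecutorService (the class name and callables are illustrative, not the herder's actual internals):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelStopSketch {
    /** Runs every stop callable in parallel and blocks until all have finished. */
    static int stopAll(List<Callable<Void>> callables) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(Math.max(1, callables.size()));
        int completed = 0;
        // invokeAll blocks until every callable has completed
        for (Future<Void> f : executor.invokeAll(callables)) {
            f.get();          // propagate any failure from a stop
            completed++;
        }
        executor.shutdown();
        return completed;
    }

    public static void main(String[] args) throws Exception {
        List<Callable<Void>> callables = new ArrayList<>();
        for (String name : new String[]{"connector-a", "connector-b"}) {
            callables.add(() -> {
                System.out.println("stopping " + name);
                return null;
            });
        }
        System.out.println(stopAll(callables)); // prints "2" after both stop messages
    }
}
```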

Example 15 with ConnectException

Use of org.apache.kafka.connect.errors.ConnectException in project kafka by apache.

Class FileStreamSourceTask, method start.

@Override
public void start(Map<String, String> props) {
    filename = props.get(FileStreamSourceConnector.FILE_CONFIG);
    if (filename == null || filename.isEmpty()) {
        stream = System.in;
        // Tracking offset for stdin doesn't make sense
        streamOffset = null;
        reader = new BufferedReader(new InputStreamReader(stream));
    }
    topic = props.get(FileStreamSourceConnector.TOPIC_CONFIG);
    if (topic == null)
        throw new ConnectException("FileStreamSourceTask config missing topic setting");
}
Also used : InputStreamReader (java.io.InputStreamReader), BufferedReader (java.io.BufferedReader), ConnectException (org.apache.kafka.connect.errors.ConnectException)
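This start() shows the fail-fast validation idiom: check required settings up front and throw immediately, so the framework fails the task with a clear message instead of failing later mid-poll. A minimal sketch of that pattern, with a local exception class standing in for Kafka's ConnectException (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class StartValidationSketch {
    /** Stand-in for org.apache.kafka.connect.errors.ConnectException. */
    static class ConfigException extends RuntimeException {
        ConfigException(String message) { super(message); }
    }

    /** Returns the required topic setting, or fails fast with a descriptive message. */
    static String requireTopic(Map<String, String> props) {
        String topic = props.get("topic");
        if (topic == null || topic.isEmpty())
            throw new ConfigException("FileStreamSourceTask config missing topic setting");
        return topic;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("topic", "lines");
        System.out.println(requireTopic(props)); // prints "lines"
    }
}
```

Because ConnectException is unchecked, a connector's start() can validate and throw without declaring anything, and the worker catches it and marks the task failed.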

Aggregations

ConnectException (org.apache.kafka.connect.errors.ConnectException) 42
HashMap (java.util.HashMap) 7
Map (java.util.Map) 7
ArrayList (java.util.ArrayList) 6
TimeoutException (java.util.concurrent.TimeoutException) 6
IOException (java.io.IOException) 5
Connector (org.apache.kafka.connect.connector.Connector) 5
ExecutionException (java.util.concurrent.ExecutionException) 4
NotFoundException (org.apache.kafka.connect.errors.NotFoundException) 4
ConnectorTaskId (org.apache.kafka.connect.util.ConnectorTaskId) 4
Test (org.junit.Test) 4
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest) 4
ByteBuffer (java.nio.ByteBuffer) 3
AlreadyExistsException (org.apache.kafka.connect.errors.AlreadyExistsException) 3
BadRequestException (org.apache.kafka.connect.runtime.rest.errors.BadRequestException) 3
SinkRecord (org.apache.kafka.connect.sink.SinkRecord) 3
SourceRecord (org.apache.kafka.connect.source.SourceRecord) 3
ThreadedTest (org.apache.kafka.connect.util.ThreadedTest) 3
BufferedReader (java.io.BufferedReader) 2
FileInputStream (java.io.FileInputStream) 2