
Example 1 with ReceiveTimeoutTransportException

Use of org.elasticsearch.transport.ReceiveTimeoutTransportException in project elasticsearch by elastic.

The class InternalClusterInfoService, method refresh:

/**
     * Refreshes the ClusterInfo in a blocking fashion
     */
public final ClusterInfo refresh() {
    if (logger.isTraceEnabled()) {
        logger.trace("Performing ClusterInfoUpdateJob");
    }
    final CountDownLatch nodeLatch = updateNodeStats(new ActionListener<NodesStatsResponse>() {

        @Override
        public void onResponse(NodesStatsResponse nodesStatsResponse) {
            ImmutableOpenMap.Builder<String, DiskUsage> newLeastAvailableUsages = ImmutableOpenMap.builder();
            ImmutableOpenMap.Builder<String, DiskUsage> newMostAvailableUsages = ImmutableOpenMap.builder();
            fillDiskUsagePerNode(logger, nodesStatsResponse.getNodes(), newLeastAvailableUsages, newMostAvailableUsages);
            leastAvailableSpaceUsages = newLeastAvailableUsages.build();
            mostAvailableSpaceUsages = newMostAvailableUsages.build();
        }

        @Override
        public void onFailure(Exception e) {
            if (e instanceof ReceiveTimeoutTransportException) {
                logger.error("NodeStatsAction timed out for ClusterInfoUpdateJob", e);
            } else {
                if (e instanceof ClusterBlockException) {
                    if (logger.isTraceEnabled()) {
                        logger.trace("Failed to execute NodeStatsAction for ClusterInfoUpdateJob", e);
                    }
                } else {
                    logger.warn("Failed to execute NodeStatsAction for ClusterInfoUpdateJob", e);
                }
                // we empty the usages list, to be safe - we don't know what's going on.
                leastAvailableSpaceUsages = ImmutableOpenMap.of();
                mostAvailableSpaceUsages = ImmutableOpenMap.of();
            }
        }
    });
    final CountDownLatch indicesLatch = updateIndicesStats(new ActionListener<IndicesStatsResponse>() {

        @Override
        public void onResponse(IndicesStatsResponse indicesStatsResponse) {
            ShardStats[] stats = indicesStatsResponse.getShards();
            ImmutableOpenMap.Builder<String, Long> newShardSizes = ImmutableOpenMap.builder();
            ImmutableOpenMap.Builder<ShardRouting, String> newShardRoutingToDataPath = ImmutableOpenMap.builder();
            buildShardLevelInfo(logger, stats, newShardSizes, newShardRoutingToDataPath, clusterService.state());
            shardSizes = newShardSizes.build();
            shardRoutingToDataPath = newShardRoutingToDataPath.build();
        }

        @Override
        public void onFailure(Exception e) {
            if (e instanceof ReceiveTimeoutTransportException) {
                logger.error("IndicesStatsAction timed out for ClusterInfoUpdateJob", e);
            } else {
                if (e instanceof ClusterBlockException) {
                    if (logger.isTraceEnabled()) {
                        logger.trace("Failed to execute IndicesStatsAction for ClusterInfoUpdateJob", e);
                    }
                } else {
                    logger.warn("Failed to execute IndicesStatsAction for ClusterInfoUpdateJob", e);
                }
                // we empty the usages list, to be safe - we don't know what's going on.
                shardSizes = ImmutableOpenMap.of();
                shardRoutingToDataPath = ImmutableOpenMap.of();
            }
        }
    });
    try {
        nodeLatch.await(fetchTimeout.getMillis(), TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        // restore interrupt status
        Thread.currentThread().interrupt();
        logger.warn("Failed to update node information for ClusterInfoUpdateJob within {} timeout", fetchTimeout);
    }
    try {
        indicesLatch.await(fetchTimeout.getMillis(), TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        // restore interrupt status
        Thread.currentThread().interrupt();
        logger.warn("Failed to update shard information for ClusterInfoUpdateJob within {} timeout", fetchTimeout);
    }
    ClusterInfo clusterInfo = getClusterInfo();
    for (Listener l : listeners) {
        try {
            l.onNewInfo(clusterInfo);
        } catch (Exception e) {
            logger.info("Failed executing ClusterInfoService listener", e);
        }
    }
    return clusterInfo;
}
Also used: IndicesStatsResponse(org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse) LatchedActionListener(org.elasticsearch.action.LatchedActionListener) ActionListener(org.elasticsearch.action.ActionListener) CountDownLatch(java.util.concurrent.CountDownLatch) ClusterBlockException(org.elasticsearch.cluster.block.ClusterBlockException) ReceiveTimeoutTransportException(org.elasticsearch.transport.ReceiveTimeoutTransportException) EsRejectedExecutionException(org.elasticsearch.common.util.concurrent.EsRejectedExecutionException) NodesStatsResponse(org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse)
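
The onFailure handlers above encode a three-way severity ladder: a ReceiveTimeoutTransportException is logged at error level but leaves the cached usages untouched, a ClusterBlockException (routine while the cluster is still forming) is only traced, and any other failure is warned about; in both non-timeout cases the cached state is cleared because the service no longer trusts it. A minimal sketch of that pattern pulled into a helper could look like the following; the handleStatsFailure name and the Runnable reset hook are illustrative, not part of the Elasticsearch source.

import org.apache.logging.log4j.Logger;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.transport.ReceiveTimeoutTransportException;

final class StatsFailureHandler {

    /**
     * Mirrors the classification in refresh(): timeouts are loud but keep the
     * stale values, cluster blocks are expected noise, everything else warns;
     * both non-timeout cases reset the cached state.
     */
    static void handleStatsFailure(Logger logger, String action, Exception e, Runnable resetCachedState) {
        if (e instanceof ReceiveTimeoutTransportException) {
            logger.error(action + " timed out for ClusterInfoUpdateJob", e);
        } else {
            if (e instanceof ClusterBlockException) {
                if (logger.isTraceEnabled()) {
                    logger.trace("Failed to execute " + action + " for ClusterInfoUpdateJob", e);
                }
            } else {
                logger.warn("Failed to execute " + action + " for ClusterInfoUpdateJob", e);
            }
            // we don't know what's going on, so drop the cached usages to be safe
            resetCachedState.run();
        }
    }
}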

Example 2 with ReceiveTimeoutTransportException

Use of org.elasticsearch.transport.ReceiveTimeoutTransportException in project crate by crate.

The class NodeStatsIterator, method getNodeStatsContextFromRemoteState:

private CompletableFuture<List<NodeStatsContext>> getNodeStatsContextFromRemoteState(Set<ColumnIdent> toCollect) {
    final CompletableFuture<List<NodeStatsContext>> nodeStatsContextsFuture = new CompletableFuture<>();
    final List<NodeStatsContext> rows = Collections.synchronizedList(new ArrayList<NodeStatsContext>(nodes.size()));
    final AtomicInteger remainingNodesToCollect = new AtomicInteger(nodes.size());
    for (final DiscoveryNode node : nodes) {
        final String nodeId = node.getId();
        final NodeStatsRequest request = new NodeStatsRequest(toCollect);
        transportStatTablesAction.execute(nodeId, request, new ActionListener<NodeStatsResponse>() {

            @Override
            public void onResponse(NodeStatsResponse response) {
                rows.add(response.nodeStatsContext());
                if (remainingNodesToCollect.decrementAndGet() == 0) {
                    nodeStatsContextsFuture.complete(rows);
                }
            }

            @Override
            public void onFailure(Throwable t) {
                if (t instanceof ReceiveTimeoutTransportException) {
                    rows.add(new NodeStatsContext(nodeId, node.name()));
                    if (remainingNodesToCollect.decrementAndGet() == 0) {
                        nodeStatsContextsFuture.complete(rows);
                    }
                } else {
                    nodeStatsContextsFuture.completeExceptionally(t);
                }
            }
        }, TimeValue.timeValueMillis(3000L));
    }
    return nodeStatsContextsFuture;
}
Also used: DiscoveryNode(org.elasticsearch.cluster.node.DiscoveryNode) NodeStatsResponse(io.crate.executor.transport.NodeStatsResponse) ReceiveTimeoutTransportException(org.elasticsearch.transport.ReceiveTimeoutTransportException) CompletableFuture(java.util.concurrent.CompletableFuture) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) NodeStatsRequest(io.crate.executor.transport.NodeStatsRequest) NodeStatsContext(io.crate.operation.reference.sys.node.NodeStatsContext)
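
Note how the Crate listener treats a per-node timeout as a partial result rather than a fatal error: the timed-out node still contributes a stub NodeStatsContext carrying only its id and name, and only other failures abort the whole future. The countdown idiom itself, stripped to plain java.util.concurrent types, looks roughly like this; NodeFetcher and the String rows are stand-ins for the transport action and NodeStatsContext, and TimeoutException stands in for ReceiveTimeoutTransportException.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BiConsumer;

final class FanOutCollector {

    /** Callback-style stand-in for transportStatTablesAction.execute(...). */
    interface NodeFetcher {
        void fetch(String nodeId, BiConsumer<String, Throwable> callback);
    }

    /**
     * Fans a request out to every node and completes the future once all
     * callbacks have fired. A timeout contributes a fallback row; any other
     * error fails the whole future, matching the NodeStatsIterator behaviour.
     */
    static CompletableFuture<List<String>> collect(List<String> nodeIds, NodeFetcher fetcher) {
        CompletableFuture<List<String>> result = new CompletableFuture<>();
        List<String> rows = Collections.synchronizedList(new ArrayList<>(nodeIds.size()));
        AtomicInteger remaining = new AtomicInteger(nodeIds.size());
        for (String nodeId : nodeIds) {
            fetcher.fetch(nodeId, (row, error) -> {
                if (error == null) {
                    rows.add(row);
                } else if (error instanceof TimeoutException) {
                    // record a stub row for the unresponsive node instead of failing
                    rows.add(nodeId + " (timed out)");
                } else {
                    result.completeExceptionally(error);
                    return;
                }
                if (remaining.decrementAndGet() == 0) {
                    result.complete(rows);
                }
            });
        }
        return result;
    }
}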

Example 3 with ReceiveTimeoutTransportException

Use of org.elasticsearch.transport.ReceiveTimeoutTransportException in project nifi by apache.

The class FetchElasticsearch5, method onTrigger:

@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    synchronized (esClient) {
        if (esClient.get() == null) {
            super.setup(context);
        }
    }
    FlowFile flowFile = session.get();
    if (flowFile == null) {
        return;
    }
    final String index = context.getProperty(INDEX).evaluateAttributeExpressions(flowFile).getValue();
    final String docId = context.getProperty(DOC_ID).evaluateAttributeExpressions(flowFile).getValue();
    final String docType = context.getProperty(TYPE).evaluateAttributeExpressions(flowFile).getValue();
    final Charset charset = Charset.forName(context.getProperty(CHARSET).evaluateAttributeExpressions(flowFile).getValue());
    final ComponentLog logger = getLogger();
    try {
        logger.debug("Fetching {}/{}/{} from Elasticsearch", new Object[] { index, docType, docId });
        GetRequestBuilder getRequestBuilder = esClient.get().prepareGet(index, docType, docId);
        final GetResponse getResponse = getRequestBuilder.execute().actionGet();
        if (getResponse == null || !getResponse.isExists()) {
            logger.debug("Failed to read {}/{}/{} from Elasticsearch: Document not found", new Object[] { index, docType, docId });
            // We couldn't find the document, so penalize it and send it to "not found"
            flowFile = session.penalize(flowFile);
            session.transfer(flowFile, REL_NOT_FOUND);
        } else {
            flowFile = session.putAllAttributes(flowFile, new HashMap<String, String>() {

                {
                    put("filename", docId);
                    put("es.index", index);
                    put("es.type", docType);
                }
            });
            flowFile = session.write(flowFile, new OutputStreamCallback() {

                @Override
                public void process(OutputStream out) throws IOException {
                    out.write(getResponse.getSourceAsString().getBytes(charset));
                }
            });
            logger.debug("Elasticsearch document " + docId + " fetched, routing to success");
            // The document is JSON, so update the MIME type of the flow file
            flowFile = session.putAttribute(flowFile, CoreAttributes.MIME_TYPE.key(), "application/json");
            session.getProvenanceReporter().fetch(flowFile, getResponse.remoteAddress().getAddress());
            session.transfer(flowFile, REL_SUCCESS);
        }
    } catch (NoNodeAvailableException | ElasticsearchTimeoutException | ReceiveTimeoutTransportException | NodeClosedException exceptionToRetry) {
        logger.error("Failed to read into Elasticsearch due to {}, this may indicate an error in configuration " + "(hosts, username/password, etc.), or this issue may be transient. Routing to retry", new Object[] { exceptionToRetry.getLocalizedMessage() }, exceptionToRetry);
        session.transfer(flowFile, REL_RETRY);
        context.yield();
    } catch (Exception e) {
        logger.error("Failed to read {} from Elasticsearch due to {}", new Object[] { flowFile, e.getLocalizedMessage() }, e);
        session.transfer(flowFile, REL_FAILURE);
        context.yield();
    }
}
Also used: FlowFile(org.apache.nifi.flowfile.FlowFile) HashMap(java.util.HashMap) OutputStream(java.io.OutputStream) Charset(java.nio.charset.Charset) NoNodeAvailableException(org.elasticsearch.client.transport.NoNodeAvailableException) ComponentLog(org.apache.nifi.logging.ComponentLog) GetResponse(org.elasticsearch.action.get.GetResponse) NodeClosedException(org.elasticsearch.node.NodeClosedException) ProcessException(org.apache.nifi.processor.exception.ProcessException) ElasticsearchTimeoutException(org.elasticsearch.ElasticsearchTimeoutException) ReceiveTimeoutTransportException(org.elasticsearch.transport.ReceiveTimeoutTransportException) IOException(java.io.IOException) OutputStreamCallback(org.apache.nifi.processor.io.OutputStreamCallback) GetRequestBuilder(org.elasticsearch.action.get.GetRequestBuilder)
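
The multi-catch above is the interesting part: NoNodeAvailableException, ElasticsearchTimeoutException, ReceiveTimeoutTransportException, and NodeClosedException all signal transient transport trouble rather than bad input, so the flow file is routed to retry instead of failure. A small predicate capturing that split might look like this; isRetryable is an illustrative helper, not part of the NiFi code.

import org.elasticsearch.ElasticsearchTimeoutException;
import org.elasticsearch.client.transport.NoNodeAvailableException;
import org.elasticsearch.node.NodeClosedException;
import org.elasticsearch.transport.ReceiveTimeoutTransportException;

final class ElasticsearchFailures {

    /**
     * True for failures worth retrying: the cluster may come back later,
     * whereas parse errors or mapping conflicts will fail the same way again.
     */
    static boolean isRetryable(Exception e) {
        return e instanceof NoNodeAvailableException
                || e instanceof ElasticsearchTimeoutException
                || e instanceof ReceiveTimeoutTransportException
                || e instanceof NodeClosedException;
    }
}

A processor could then collapse both catch blocks into one and pick the relationship with isRetryable(e) ? REL_RETRY : REL_FAILURE, at the cost of losing the distinct log messages.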

Example 4 with ReceiveTimeoutTransportException

Use of org.elasticsearch.transport.ReceiveTimeoutTransportException in project nifi by apache.

The class PutElasticsearch5, method onTrigger:

@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    synchronized (esClient) {
        if (esClient.get() == null) {
            super.setup(context);
        }
    }
    final String id_attribute = context.getProperty(ID_ATTRIBUTE).getValue();
    final int batchSize = context.getProperty(BATCH_SIZE).evaluateAttributeExpressions().asInteger();
    final List<FlowFile> flowFiles = session.get(batchSize);
    if (flowFiles.isEmpty()) {
        return;
    }
    final ComponentLog logger = getLogger();
    // Keep track of the list of flow files that need to be transferred. As they are transferred, remove them from the list.
    List<FlowFile> flowFilesToTransfer = new LinkedList<>(flowFiles);
    try {
        final BulkRequestBuilder bulk = esClient.get().prepareBulk();
        for (FlowFile file : flowFiles) {
            final String index = context.getProperty(INDEX).evaluateAttributeExpressions(file).getValue();
            final String docType = context.getProperty(TYPE).evaluateAttributeExpressions(file).getValue();
            final String indexOp = context.getProperty(INDEX_OP).evaluateAttributeExpressions(file).getValue();
            final Charset charset = Charset.forName(context.getProperty(CHARSET).evaluateAttributeExpressions(file).getValue());
            final String id = file.getAttribute(id_attribute);
            if (id == null) {
                logger.warn("No value in identifier attribute {} for {}, transferring to failure", new Object[] { id_attribute, file });
                flowFilesToTransfer.remove(file);
                session.transfer(file, REL_FAILURE);
            } else {
                session.read(file, new InputStreamCallback() {

                    @Override
                    public void process(final InputStream in) throws IOException {
                        // For the bulk insert, each document has to be on its own line, so remove all CRLF
                        String json = IOUtils.toString(in, charset).replace("\r\n", " ").replace('\n', ' ').replace('\r', ' ');
                        if (indexOp.equalsIgnoreCase("index")) {
                            bulk.add(esClient.get().prepareIndex(index, docType, id).setSource(json.getBytes(charset)));
                        } else if (indexOp.equalsIgnoreCase("upsert")) {
                            bulk.add(esClient.get().prepareUpdate(index, docType, id).setDoc(json.getBytes(charset)).setDocAsUpsert(true));
                        } else if (indexOp.equalsIgnoreCase("update")) {
                            bulk.add(esClient.get().prepareUpdate(index, docType, id).setDoc(json.getBytes(charset)));
                        } else {
                            throw new IOException("Index operation: " + indexOp + " not supported.");
                        }
                    }
                });
            }
        }
        if (bulk.numberOfActions() > 0) {
            final BulkResponse response = bulk.execute().actionGet();
            if (response.hasFailures()) {
                // Responses are guaranteed to be in order, remove them in reverse order
                BulkItemResponse[] responses = response.getItems();
                if (responses != null && responses.length > 0) {
                    for (int i = responses.length - 1; i >= 0; i--) {
                        final BulkItemResponse item = responses[i];
                        final FlowFile flowFile = flowFilesToTransfer.get(item.getItemId());
                        if (item.isFailed()) {
                            logger.warn("Failed to insert {} into Elasticsearch due to {}, transferring to failure", new Object[] { flowFile, item.getFailure().getMessage() });
                            session.transfer(flowFile, REL_FAILURE);
                        } else {
                            session.getProvenanceReporter().send(flowFile, response.remoteAddress().getAddress());
                            session.transfer(flowFile, REL_SUCCESS);
                        }
                        flowFilesToTransfer.remove(flowFile);
                    }
                }
            }
            // Transfer any remaining flowfiles to success
            for (FlowFile ff : flowFilesToTransfer) {
                session.getProvenanceReporter().send(ff, response.remoteAddress().getAddress());
                session.transfer(ff, REL_SUCCESS);
            }
        }
    } catch (NoNodeAvailableException | ElasticsearchTimeoutException | ReceiveTimeoutTransportException | NodeClosedException exceptionToRetry) {
        // Authorization errors and other problems are often returned as NoNodeAvailableExceptions without a
        // traceable cause. However the cause seems to be logged, just not available to this caught exception.
        // Since the error message will show up as a bulletin, we make specific mention to check the logs for
        // more details.
        logger.error("Failed to insert into Elasticsearch due to {}. More detailed information may be available in " + "the NiFi logs.", new Object[] { exceptionToRetry.getLocalizedMessage() }, exceptionToRetry);
        session.transfer(flowFilesToTransfer, REL_RETRY);
        context.yield();
    } catch (Exception exceptionToFail) {
        logger.error("Failed to insert into Elasticsearch due to {}, transferring to failure", new Object[] { exceptionToFail.getLocalizedMessage() }, exceptionToFail);
        session.transfer(flowFilesToTransfer, REL_FAILURE);
        context.yield();
    }
}
Also used: FlowFile(org.apache.nifi.flowfile.FlowFile) InputStream(java.io.InputStream) Charset(java.nio.charset.Charset) BulkItemResponse(org.elasticsearch.action.bulk.BulkItemResponse) BulkResponse(org.elasticsearch.action.bulk.BulkResponse) IOException(java.io.IOException) NoNodeAvailableException(org.elasticsearch.client.transport.NoNodeAvailableException) ComponentLog(org.apache.nifi.logging.ComponentLog) LinkedList(java.util.LinkedList) NodeClosedException(org.elasticsearch.node.NodeClosedException) ProcessException(org.apache.nifi.processor.exception.ProcessException) ElasticsearchTimeoutException(org.elasticsearch.ElasticsearchTimeoutException) ReceiveTimeoutTransportException(org.elasticsearch.transport.ReceiveTimeoutTransportException) InputStreamCallback(org.apache.nifi.processor.io.InputStreamCallback) BulkRequestBuilder(org.elasticsearch.action.bulk.BulkRequestBuilder)
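
The inline comment in the read callback explains the newline stripping: the bulk API is a newline-delimited format, so a literal line break inside one document's serialized JSON would be read as the boundary of the next action. The normalisation itself is just this (the class and method names are ours); escaped sequences such as "\n" inside string values are two characters, backslash and n, and are left untouched, so only literal formatting line breaks are collapsed.

final class BulkJson {

    /**
     * Collapses literal line breaks so the document occupies exactly one line,
     * as the newline-delimited bulk format requires.
     */
    static String toSingleLine(String json) {
        return json.replace("\r\n", " ").replace('\n', ' ').replace('\r', ' ');
    }
}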

Example 5 with ReceiveTimeoutTransportException

Use of org.elasticsearch.transport.ReceiveTimeoutTransportException in project nifi by apache.

The class TestFetchElasticsearch5, method testFetchElasticsearch5OnTriggerWithExceptions:

@Test
public void testFetchElasticsearch5OnTriggerWithExceptions() throws IOException {
    FetchElasticsearch5TestProcessor processor = new FetchElasticsearch5TestProcessor(true);
    runner = TestRunners.newTestRunner(processor);
    runner.setProperty(AbstractElasticsearch5TransportClientProcessor.CLUSTER_NAME, "elasticsearch");
    runner.setProperty(AbstractElasticsearch5TransportClientProcessor.HOSTS, "127.0.0.1:9300");
    runner.setProperty(AbstractElasticsearch5TransportClientProcessor.PING_TIMEOUT, "5s");
    runner.setProperty(AbstractElasticsearch5TransportClientProcessor.SAMPLER_INTERVAL, "5s");
    runner.setProperty(FetchElasticsearch5.INDEX, "doc");
    runner.setProperty(FetchElasticsearch5.TYPE, "status");
    runner.setValidateExpressionUsage(true);
    runner.setProperty(FetchElasticsearch5.DOC_ID, "${doc_id}");
    // No Node Available exception
    processor.setExceptionToThrow(new NoNodeAvailableException("test"));
    runner.enqueue(docExample, new HashMap<String, String>() {

        {
            put("doc_id", "28039652140");
        }
    });
    runner.run(1, true, true);
    runner.assertAllFlowFilesTransferred(FetchElasticsearch5.REL_RETRY, 1);
    runner.clearTransferState();
    // Elasticsearch5 Timeout exception
    processor.setExceptionToThrow(new ElasticsearchTimeoutException("test"));
    runner.enqueue(docExample, new HashMap<String, String>() {

        {
            put("doc_id", "28039652141");
        }
    });
    runner.run(1, true, true);
    runner.assertAllFlowFilesTransferred(FetchElasticsearch5.REL_RETRY, 1);
    runner.clearTransferState();
    // Receive Timeout Transport exception
    processor.setExceptionToThrow(new ReceiveTimeoutTransportException(mock(StreamInput.class)));
    runner.enqueue(docExample, new HashMap<String, String>() {

        {
            put("doc_id", "28039652141");
        }
    });
    runner.run(1, true, true);
    runner.assertAllFlowFilesTransferred(FetchElasticsearch5.REL_RETRY, 1);
    runner.clearTransferState();
    // Node Closed exception
    processor.setExceptionToThrow(new NodeClosedException(mock(StreamInput.class)));
    runner.enqueue(docExample, new HashMap<String, String>() {

        {
            put("doc_id", "28039652141");
        }
    });
    runner.run(1, true, true);
    runner.assertAllFlowFilesTransferred(FetchElasticsearch5.REL_RETRY, 1);
    runner.clearTransferState();
    // Elasticsearch5 Parse exception
    processor.setExceptionToThrow(new ElasticsearchParseException("test"));
    runner.enqueue(docExample, new HashMap<String, String>() {

        {
            put("doc_id", "28039652141");
        }
    });
    runner.run(1, true, true);
    // This test generates an exception on execute(), routing to failure
    runner.assertTransferCount(FetchElasticsearch5.REL_FAILURE, 1);
}
Also used: ReceiveTimeoutTransportException(org.elasticsearch.transport.ReceiveTimeoutTransportException) ElasticsearchTimeoutException(org.elasticsearch.ElasticsearchTimeoutException) ElasticsearchParseException(org.elasticsearch.ElasticsearchParseException) NodeClosedException(org.elasticsearch.node.NodeClosedException) Matchers.anyString(org.mockito.Matchers.anyString) NoNodeAvailableException(org.elasticsearch.client.transport.NoNodeAvailableException) Test(org.junit.Test)
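
The four retryable cases in this test differ only in the exception instance, so the repetition invites a data-driven loop. A condensed sketch under the same harness (runner, processor, docExample, the setExceptionToThrow hook, and the mock/Mockito static imports all come from the original test class; java.util imports for Arrays, List, and Collections are assumed):

// Sketch: drive the same enqueue/run/assert cycle for every retryable exception.
List<Exception> retryable = Arrays.asList(
    new NoNodeAvailableException("test"),
    new ElasticsearchTimeoutException("test"),
    new ReceiveTimeoutTransportException(mock(StreamInput.class)),
    new NodeClosedException(mock(StreamInput.class)));
for (Exception exception : retryable) {
    processor.setExceptionToThrow(exception);
    runner.enqueue(docExample, Collections.singletonMap("doc_id", "28039652141"));
    runner.run(1, true, true);
    runner.assertAllFlowFilesTransferred(FetchElasticsearch5.REL_RETRY, 1);
    runner.clearTransferState();
}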

Aggregations

ReceiveTimeoutTransportException (org.elasticsearch.transport.ReceiveTimeoutTransportException): 14
ElasticsearchTimeoutException (org.elasticsearch.ElasticsearchTimeoutException): 11
NoNodeAvailableException (org.elasticsearch.client.transport.NoNodeAvailableException): 8
NodeClosedException (org.elasticsearch.node.NodeClosedException): 8
FlowFile (org.apache.nifi.flowfile.FlowFile): 5
ComponentLog (org.apache.nifi.logging.ComponentLog): 5
ProcessException (org.apache.nifi.processor.exception.ProcessException): 5
IOException (java.io.IOException): 4
Charset (java.nio.charset.Charset): 4
ElasticsearchParseException (org.elasticsearch.ElasticsearchParseException): 4
EsRejectedExecutionException (org.elasticsearch.common.util.concurrent.EsRejectedExecutionException): 4
Test (org.junit.Test): 4
Matchers.anyString (org.mockito.Matchers.anyString): 4
ParameterizedMessage (org.apache.logging.log4j.message.ParameterizedMessage): 3
InputStream (java.io.InputStream): 2
OutputStream (java.io.OutputStream): 2
HashMap (java.util.HashMap): 2
LinkedList (java.util.LinkedList): 2
CountDownLatch (java.util.concurrent.CountDownLatch): 2
InputStreamCallback (org.apache.nifi.processor.io.InputStreamCallback): 2