Example 1 with AtomicArray

Use of org.elasticsearch.common.util.concurrent.AtomicArray in the elastic/elasticsearch project.

From class TransportMultiSearchAction, method doExecute:

@Override
protected void doExecute(MultiSearchRequest request, ActionListener<MultiSearchResponse> listener) {
    ClusterState clusterState = clusterService.state();
    clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ);
    int maxConcurrentSearches = request.maxConcurrentSearchRequests();
    if (maxConcurrentSearches == 0) {
        maxConcurrentSearches = defaultMaxConcurrentSearches(availableProcessors, clusterState);
    }
    Queue<SearchRequestSlot> searchRequestSlots = new ConcurrentLinkedQueue<>();
    for (int i = 0; i < request.requests().size(); i++) {
        SearchRequest searchRequest = request.requests().get(i);
        searchRequestSlots.add(new SearchRequestSlot(searchRequest, i));
    }
    int numRequests = request.requests().size();
    final AtomicArray<MultiSearchResponse.Item> responses = new AtomicArray<>(numRequests);
    final AtomicInteger responseCounter = new AtomicInteger(numRequests);
    int numConcurrentSearches = Math.min(numRequests, maxConcurrentSearches);
    for (int i = 0; i < numConcurrentSearches; i++) {
        executeSearch(searchRequestSlots, responses, responseCounter, listener);
    }
}
Also used: ClusterState (org.elasticsearch.cluster.ClusterState), AtomicArray (org.elasticsearch.common.util.concurrent.AtomicArray), AtomicInteger (java.util.concurrent.atomic.AtomicInteger), ConcurrentLinkedQueue (java.util.concurrent.ConcurrentLinkedQueue)
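The interesting part of this snippet is the collection pattern: a fixed-size AtomicArray holds one response slot per sub-request, an AtomicInteger counts completions down to zero, and a ConcurrentLinkedQueue feeds at most maxConcurrentSearches workers. Below is a minimal, self-contained sketch of that fan-out/collect pattern using only JDK types (AtomicReferenceArray stands in for Elasticsearch's AtomicArray); the Slot record, runOne helper, and the synchronous "search" are illustrative placeholders, not Elasticsearch code.

import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.function.Consumer;

class FanOutSketch {

    record Slot(String request, int index) {}

    static void execute(List<String> requests, int maxConcurrent, Consumer<String[]> listener) {
        Queue<Slot> slots = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < requests.size(); i++) {
            slots.add(new Slot(requests.get(i), i));
        }
        // one response slot per request, filled in any order, read back in request order
        AtomicReferenceArray<String> responses = new AtomicReferenceArray<>(requests.size());
        AtomicInteger remaining = new AtomicInteger(requests.size());
        int concurrent = Math.min(requests.size(), maxConcurrent);
        for (int i = 0; i < concurrent; i++) {
            runOne(slots, responses, remaining, listener);
        }
    }

    static void runOne(Queue<Slot> slots, AtomicReferenceArray<String> responses,
                       AtomicInteger remaining, Consumer<String[]> listener) {
        Slot slot = slots.poll();
        if (slot == null) {
            return;
        }
        // stand-in for an asynchronous search; here it completes immediately
        responses.set(slot.index(), "result for " + slot.request());
        if (remaining.decrementAndGet() == 0) {
            String[] ordered = new String[responses.length()];
            for (int i = 0; i < ordered.length; i++) {
                ordered[i] = responses.get(i);
            }
            listener.accept(ordered);
        } else {
            // a finished slot immediately picks up the next queued request,
            // keeping at most maxConcurrent requests in flight
            runOne(slots, responses, remaining, listener);
        }
    }
}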

Example 2 with AtomicArray

Use of org.elasticsearch.common.util.concurrent.AtomicArray in the elastic/elasticsearch project.

From class TransportSearchHelper, method buildScrollId:

static String buildScrollId(AtomicArray<? extends SearchPhaseResult> searchPhaseResults) throws IOException {
    try (RAMOutputStream out = new RAMOutputStream()) {
        out.writeString(searchPhaseResults.length() == 1 ? ParsedScrollId.QUERY_AND_FETCH_TYPE : ParsedScrollId.QUERY_THEN_FETCH_TYPE);
        out.writeVInt(searchPhaseResults.asList().size());
        for (AtomicArray.Entry<? extends SearchPhaseResult> entry : searchPhaseResults.asList()) {
            SearchPhaseResult searchPhaseResult = entry.value;
            out.writeLong(searchPhaseResult.id());
            out.writeString(searchPhaseResult.shardTarget().getNodeId());
        }
        byte[] bytes = new byte[(int) out.getFilePointer()];
        out.writeTo(bytes, 0);
        return Base64.getUrlEncoder().encodeToString(bytes);
    }
}
Also used: AtomicArray (org.elasticsearch.common.util.concurrent.AtomicArray), RAMOutputStream (org.apache.lucene.store.RAMOutputStream), SearchPhaseResult (org.elasticsearch.search.SearchPhaseResult)
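buildScrollId serializes one (context id, node id) pair per shard result and URL-safe Base64-encodes the buffer, so the scroll id is an opaque token the client sends back unchanged. A rough sketch of the same encode step, assuming plain JDK streams instead of Lucene's RAMOutputStream (the SimpleResult record and the method shape are illustrative, not the Elasticsearch API):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Base64;
import java.util.List;

class ScrollIdSketch {

    record SimpleResult(long id, String nodeId) {}

    static String buildScrollId(String type, List<SimpleResult> results) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF(type);                    // scroll type, e.g. queryThenFetch
            out.writeInt(results.size());          // number of per-shard entries
            for (SimpleResult result : results) {
                out.writeLong(result.id());        // search context id on the shard
                out.writeUTF(result.nodeId());     // node that holds the context
            }
        }
        // URL-safe Base64 so the id can travel in query strings without escaping
        return Base64.getUrlEncoder().encodeToString(bytes.toByteArray());
    }
}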

Example 3 with AtomicArray

Use of org.elasticsearch.common.util.concurrent.AtomicArray in the elastic/elasticsearch project.

From class SearchPhaseController, method aggregateDfs:

public AggregatedDfs aggregateDfs(AtomicArray<DfsSearchResult> results) {
    ObjectObjectHashMap<Term, TermStatistics> termStatistics = HppcMaps.newNoNullKeysMap();
    ObjectObjectHashMap<String, CollectionStatistics> fieldStatistics = HppcMaps.newNoNullKeysMap();
    long aggMaxDoc = 0;
    for (AtomicArray.Entry<DfsSearchResult> lEntry : results.asList()) {
        final Term[] terms = lEntry.value.terms();
        final TermStatistics[] stats = lEntry.value.termStatistics();
        assert terms.length == stats.length;
        for (int i = 0; i < terms.length; i++) {
            assert terms[i] != null;
            TermStatistics existing = termStatistics.get(terms[i]);
            if (existing != null) {
                assert terms[i].bytes().equals(existing.term());
                // totalTermFreq is an optional statistic: if either shard reports -1
                // (meaning "not present"), the merged value must also stay -1; optionalSum handles that
                termStatistics.put(terms[i], new TermStatistics(existing.term(), existing.docFreq() + stats[i].docFreq(), optionalSum(existing.totalTermFreq(), stats[i].totalTermFreq())));
            } else {
                termStatistics.put(terms[i], stats[i]);
            }
        }
        assert !lEntry.value.fieldStatistics().containsKey(null);
        final Object[] keys = lEntry.value.fieldStatistics().keys;
        final Object[] values = lEntry.value.fieldStatistics().values;
        for (int i = 0; i < keys.length; i++) {
            if (keys[i] != null) {
                String key = (String) keys[i];
                CollectionStatistics value = (CollectionStatistics) values[i];
                assert key != null;
                CollectionStatistics existing = fieldStatistics.get(key);
                if (existing != null) {
                    CollectionStatistics merged = new CollectionStatistics(key, existing.maxDoc() + value.maxDoc(), optionalSum(existing.docCount(), value.docCount()), optionalSum(existing.sumTotalTermFreq(), value.sumTotalTermFreq()), optionalSum(existing.sumDocFreq(), value.sumDocFreq()));
                    fieldStatistics.put(key, merged);
                } else {
                    fieldStatistics.put(key, value);
                }
            }
        }
        aggMaxDoc += lEntry.value.maxDoc();
    }
    return new AggregatedDfs(termStatistics, fieldStatistics, aggMaxDoc);
}
Also used: AtomicArray (org.elasticsearch.common.util.concurrent.AtomicArray), DfsSearchResult (org.elasticsearch.search.dfs.DfsSearchResult), Term (org.apache.lucene.index.Term), TermStatistics (org.apache.lucene.search.TermStatistics), CollectionStatistics (org.apache.lucene.search.CollectionStatistics), AggregatedDfs (org.elasticsearch.search.dfs.AggregatedDfs)
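Note that optionalSum is referenced in the method above but not shown. Based on the comment in the merge loop, a plausible sketch is a sum that propagates -1 ("statistic not present") whenever either operand is -1; this is an assumption about the helper, not the verified Elasticsearch source:

static long optionalSum(long left, long right) {
    // -1 marks an absent statistic; if either side is absent, the merged value stays -1
    return Math.min(left, right) == -1 ? -1 : left + right;
}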

Example 4 with AtomicArray

Use of org.elasticsearch.common.util.concurrent.AtomicArray in the elastic/elasticsearch project.

From class SearchPhaseController, method getHits:

private SearchHits getHits(ReducedQueryPhase reducedQueryPhase, boolean ignoreFrom, ScoreDoc[] sortedDocs, AtomicArray<? extends QuerySearchResultProvider> fetchResultsArr) {
    List<? extends AtomicArray.Entry<? extends QuerySearchResultProvider>> fetchResults = fetchResultsArr.asList();
    boolean sorted = false;
    int sortScoreIndex = -1;
    if (reducedQueryPhase.oneResult.topDocs() instanceof TopFieldDocs) {
        TopFieldDocs fieldDocs = (TopFieldDocs) reducedQueryPhase.oneResult.queryResult().topDocs();
        if (fieldDocs instanceof CollapseTopFieldDocs && fieldDocs.fields.length == 1 && fieldDocs.fields[0].getType() == SortField.Type.SCORE) {
            sorted = false;
        } else {
            sorted = true;
            for (int i = 0; i < fieldDocs.fields.length; i++) {
                if (fieldDocs.fields[i].getType() == SortField.Type.SCORE) {
                    sortScoreIndex = i;
                }
            }
        }
    }
    // clean the fetch counter
    for (AtomicArray.Entry<? extends QuerySearchResultProvider> entry : fetchResults) {
        entry.value.fetchResult().initCounter();
    }
    int from = ignoreFrom ? 0 : reducedQueryPhase.oneResult.queryResult().from();
    int numSearchHits = (int) Math.min(reducedQueryPhase.fetchHits - from, reducedQueryPhase.oneResult.size());
    // with collapsing we can have more fetch hits than sorted docs
    numSearchHits = Math.min(sortedDocs.length, numSearchHits);
    // merge hits
    List<SearchHit> hits = new ArrayList<>();
    if (!fetchResults.isEmpty()) {
        for (int i = 0; i < numSearchHits; i++) {
            ScoreDoc shardDoc = sortedDocs[i];
            QuerySearchResultProvider fetchResultProvider = fetchResultsArr.get(shardDoc.shardIndex);
            if (fetchResultProvider == null) {
                continue;
            }
            FetchSearchResult fetchResult = fetchResultProvider.fetchResult();
            int index = fetchResult.counterGetAndIncrement();
            if (index < fetchResult.hits().internalHits().length) {
                SearchHit searchHit = fetchResult.hits().internalHits()[index];
                searchHit.score(shardDoc.score);
                searchHit.shard(fetchResult.shardTarget());
                if (sorted) {
                    FieldDoc fieldDoc = (FieldDoc) shardDoc;
                    searchHit.sortValues(fieldDoc.fields, reducedQueryPhase.oneResult.sortValueFormats());
                    if (sortScoreIndex != -1) {
                        searchHit.score(((Number) fieldDoc.fields[sortScoreIndex]).floatValue());
                    }
                }
                hits.add(searchHit);
            }
        }
    }
    return new SearchHits(hits.toArray(new SearchHit[hits.size()]), reducedQueryPhase.totalHits, reducedQueryPhase.maxScore);
}
Also used: AtomicArray (org.elasticsearch.common.util.concurrent.AtomicArray), QuerySearchResultProvider (org.elasticsearch.search.query.QuerySearchResultProvider), SearchHit (org.elasticsearch.search.SearchHit), FieldDoc (org.apache.lucene.search.FieldDoc), FetchSearchResult (org.elasticsearch.search.fetch.FetchSearchResult), ArrayList (java.util.ArrayList), IntArrayList (com.carrotsearch.hppc.IntArrayList), CollapseTopFieldDocs (org.apache.lucene.search.grouping.CollapseTopFieldDocs), TopFieldDocs (org.apache.lucene.search.TopFieldDocs), ScoreDoc (org.apache.lucene.search.ScoreDoc), SearchHits (org.elasticsearch.search.SearchHits)
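The merge leans on two orderings at once: sortedDocs is already in global rank order, and each shard's fetch result holds its hits in the order that shard's docs appear in sortedDocs, so a simple per-shard counter (initCounter/counterGetAndIncrement above) selects the next hit without any lookup. A stripped-down sketch of that counter-based merge, with illustrative stand-in types rather than the real Elasticsearch classes:

import java.util.ArrayList;
import java.util.List;

class HitMergeSketch {

    static class ShardFetch {
        final String[] hits;
        int counter;                      // mirrors counterGetAndIncrement()
        ShardFetch(String... hits) { this.hits = hits; }
    }

    static List<String> merge(int[] sortedShardIndexes, ShardFetch[] fetchResults, int numHits) {
        List<String> merged = new ArrayList<>();
        for (int i = 0; i < Math.min(numHits, sortedShardIndexes.length); i++) {
            ShardFetch shard = fetchResults[sortedShardIndexes[i]];
            if (shard == null) {
                continue;                 // that shard returned no fetch result
            }
            int index = shard.counter++;  // next unconsumed hit from this shard
            if (index < shard.hits.length) {
                merged.add(shard.hits[index]);
            }
        }
        return merged;
    }
}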

Example 5 with AtomicArray

Use of org.elasticsearch.common.util.concurrent.AtomicArray in the elastic/elasticsearch project.

From class TransportMultiTermVectorsAction, method doExecute:

@Override
protected void doExecute(final MultiTermVectorsRequest request, final ActionListener<MultiTermVectorsResponse> listener) {
    ClusterState clusterState = clusterService.state();
    clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ);
    final AtomicArray<MultiTermVectorsItemResponse> responses = new AtomicArray<>(request.requests.size());
    Map<ShardId, MultiTermVectorsShardRequest> shardRequests = new HashMap<>();
    for (int i = 0; i < request.requests.size(); i++) {
        TermVectorsRequest termVectorsRequest = request.requests.get(i);
        termVectorsRequest.routing(clusterState.metaData().resolveIndexRouting(termVectorsRequest.parent(), termVectorsRequest.routing(), termVectorsRequest.index()));
        if (!clusterState.metaData().hasConcreteIndex(termVectorsRequest.index())) {
            responses.set(i, new MultiTermVectorsItemResponse(null, new MultiTermVectorsResponse.Failure(termVectorsRequest.index(), termVectorsRequest.type(), termVectorsRequest.id(), new IndexNotFoundException(termVectorsRequest.index()))));
            continue;
        }
        String concreteSingleIndex = indexNameExpressionResolver.concreteSingleIndex(clusterState, termVectorsRequest).getName();
        if (termVectorsRequest.routing() == null && clusterState.getMetaData().routingRequired(concreteSingleIndex, termVectorsRequest.type())) {
            responses.set(i, new MultiTermVectorsItemResponse(null, new MultiTermVectorsResponse.Failure(concreteSingleIndex, termVectorsRequest.type(), termVectorsRequest.id(), new IllegalArgumentException("routing is required for [" + concreteSingleIndex + "]/[" + termVectorsRequest.type() + "]/[" + termVectorsRequest.id() + "]"))));
            continue;
        }
        ShardId shardId = clusterService.operationRouting().shardId(clusterState, concreteSingleIndex, termVectorsRequest.id(), termVectorsRequest.routing());
        MultiTermVectorsShardRequest shardRequest = shardRequests.get(shardId);
        if (shardRequest == null) {
            shardRequest = new MultiTermVectorsShardRequest(shardId.getIndexName(), shardId.id());
            shardRequest.preference(request.preference);
            shardRequests.put(shardId, shardRequest);
        }
        shardRequest.add(i, termVectorsRequest);
    }
    if (shardRequests.size() == 0) {
        // only failures..
        listener.onResponse(new MultiTermVectorsResponse(responses.toArray(new MultiTermVectorsItemResponse[responses.length()])));
    }
    final AtomicInteger counter = new AtomicInteger(shardRequests.size());
    for (final MultiTermVectorsShardRequest shardRequest : shardRequests.values()) {
        shardAction.execute(shardRequest, new ActionListener<MultiTermVectorsShardResponse>() {

            @Override
            public void onResponse(MultiTermVectorsShardResponse response) {
                for (int i = 0; i < response.locations.size(); i++) {
                    responses.set(response.locations.get(i), new MultiTermVectorsItemResponse(response.responses.get(i), response.failures.get(i)));
                }
                if (counter.decrementAndGet() == 0) {
                    finishHim();
                }
            }

            @Override
            public void onFailure(Exception e) {
                // create failures for all relevant requests
                for (int i = 0; i < shardRequest.locations.size(); i++) {
                    TermVectorsRequest termVectorsRequest = shardRequest.requests.get(i);
                    responses.set(shardRequest.locations.get(i), new MultiTermVectorsItemResponse(null, new MultiTermVectorsResponse.Failure(shardRequest.index(), termVectorsRequest.type(), termVectorsRequest.id(), e)));
                }
                if (counter.decrementAndGet() == 0) {
                    finishHim();
                }
            }

            private void finishHim() {
                listener.onResponse(new MultiTermVectorsResponse(responses.toArray(new MultiTermVectorsItemResponse[responses.length()])));
            }
        });
    }
}
Also used: ClusterState (org.elasticsearch.cluster.ClusterState), AtomicArray (org.elasticsearch.common.util.concurrent.AtomicArray), HashMap (java.util.HashMap), IndexNotFoundException (org.elasticsearch.index.IndexNotFoundException), ShardId (org.elasticsearch.index.shard.ShardId), AtomicInteger (java.util.concurrent.atomic.AtomicInteger)
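Structurally this is a scatter/gather: per-item requests are grouped into one MultiTermVectorsShardRequest per target shard, failures are recorded immediately in their original AtomicArray slots, and the AtomicInteger counts shard responses down before the combined response is sent. A small sketch of just the group-by-shard step, assuming JDK collections and a hash-based stand-in for OperationRouting#shardId (the Item record and routing function are hypothetical):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ShardGroupingSketch {

    record Item(String index, String id, int slot) {}

    static Map<Integer, List<Item>> groupByShard(List<Item> items, int numberOfShards) {
        Map<Integer, List<Item>> byShard = new HashMap<>();
        for (Item item : items) {
            // stand-in for OperationRouting#shardId: route on the document id
            int shardId = Math.floorMod(item.id().hashCode(), numberOfShards);
            byShard.computeIfAbsent(shardId, k -> new ArrayList<>()).add(item);
        }
        // each entry becomes one per-shard bulk request; the slot index is kept so
        // responses and failures can be written back to their original positions
        return byShard;
    }
}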

Aggregations

AtomicArray (org.elasticsearch.common.util.concurrent.AtomicArray): 18
ScoreDoc (org.apache.lucene.search.ScoreDoc): 8
ArrayList (java.util.ArrayList): 6
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 6
Index (org.elasticsearch.index.Index): 6
SearchShardTarget (org.elasticsearch.search.SearchShardTarget): 6
TopDocs (org.apache.lucene.search.TopDocs): 5
ActionListener (org.elasticsearch.action.ActionListener): 5
DfsSearchResult (org.elasticsearch.search.dfs.DfsSearchResult): 5
IOException (java.io.IOException): 4
ClusterState (org.elasticsearch.cluster.ClusterState): 4
QuerySearchResult (org.elasticsearch.search.query.QuerySearchResult): 4
QuerySearchResultProvider (org.elasticsearch.search.query.QuerySearchResultProvider): 4
UncheckedIOException (java.io.UncheckedIOException): 3
HashMap (java.util.HashMap): 3
AtomicReference (java.util.concurrent.atomic.AtomicReference): 3
IndexNotFoundException (org.elasticsearch.index.IndexNotFoundException): 3
CompletionSuggestion (org.elasticsearch.search.suggest.completion.CompletionSuggestion): 3
IntArrayList (com.carrotsearch.hppc.IntArrayList): 2
List (java.util.List): 2