
Example 96 with Releasable

Use of org.opensearch.common.lease.Releasable in project OpenSearch by opensearch-project.

The class DeflateCompressor, method threadLocalOutputStream.

@Override
public OutputStream threadLocalOutputStream(OutputStream out) throws IOException {
    out.write(HEADER);
    final ReleasableReference<Deflater> current = deflaterForStreamRef.get();
    final Releasable releasable;
    final Deflater deflater;
    if (current.inUse) {
        // Nested compression streams should not happen but we still handle them safely by using a fresh Deflater
        deflater = new Deflater(LEVEL, true);
        releasable = deflater::end;
    } else {
        deflater = current.get();
        releasable = current;
    }
    final boolean syncFlush = true;
    DeflaterOutputStream deflaterOutputStream = new DeflaterOutputStream(out, deflater, BUFFER_SIZE, syncFlush);
    return new BufferedOutputStream(deflaterOutputStream, BUFFER_SIZE) {

        // Due to https://bugs.openjdk.java.net/browse/JDK-8054565 we can't rely on the buffered output stream to only close once
        // in code that has to support Java 8 so we manually manage a close flag for this stream.
        private boolean closed = false;

        @Override
        public void close() throws IOException {
            if (closed) {
                return;
            }
            closed = true;
            try {
                super.close();
            } finally {
                // important to release native memory
                releasable.close();
            }
        }
    };
}
Also used: Deflater (java.util.zip.Deflater), DeflaterOutputStream (java.util.zip.DeflaterOutputStream), Releasable (org.opensearch.common.lease.Releasable), BufferedOutputStream (java.io.BufferedOutputStream)
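The key idiom above is `releasable = deflater::end`: because Releasable is a single-method AutoCloseable, a method reference to any cleanup routine can stand in for one. The following self-contained sketch illustrates that idiom with only JDK classes; the `Releasable` interface here is a hypothetical stand-in for org.opensearch.common.lease.Releasable, and the round-trip method is illustrative, not part of DeflateCompressor.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ReleasableDeflateSketch {

    // Hypothetical stand-in for org.opensearch.common.lease.Releasable:
    // an AutoCloseable whose close() throws no checked exceptions.
    interface Releasable extends AutoCloseable {
        @Override
        void close();
    }

    // Compress-then-decompress round trip. The Deflater's native memory is
    // released through a Releasable built from a method reference, mirroring
    // the `releasable = deflater::end` line in the example above.
    static String roundTrip(String input) throws Exception {
        byte[] raw = input.getBytes(StandardCharsets.UTF_8);
        Deflater deflater = new Deflater(Deflater.BEST_SPEED, true);
        Releasable releasable = deflater::end;
        byte[] compressed = new byte[raw.length + 64];
        int compressedLen;
        try {
            deflater.setInput(raw);
            deflater.finish();
            compressedLen = deflater.deflate(compressed);
        } finally {
            releasable.close(); // important to release native memory
        }
        Inflater inflater = new Inflater(true);
        byte[] out = new byte[raw.length];
        try {
            inflater.setInput(compressed, 0, compressedLen);
            inflater.inflate(out);
        } finally {
            inflater.end();
        }
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello releasable"));
    }
}
```

The fresh-Deflater fallback in the example exists because the thread-local Deflater is stateful: handing the same instance to two nested streams would corrupt both, so an unexpected nested use pays for a new Deflater rather than risking that.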

Example 97 with Releasable

Use of org.opensearch.common.lease.Releasable in project OpenSearch by opensearch-project.

The class InternalEngine, method delete.

@Override
public DeleteResult delete(Delete delete) throws IOException {
    versionMap.enforceSafeAccess();
    assert Objects.equals(delete.uid().field(), IdFieldMapper.NAME) : delete.uid().field();
    assert assertIncomingSequenceNumber(delete.origin(), delete.seqNo());
    final DeleteResult deleteResult;
    int reservedDocs = 0;
    // NOTE: we don't throttle this when merges fall behind because delete-by-id does not create new segments:
    try (ReleasableLock ignored = readLock.acquire();
        Releasable ignored2 = versionMap.acquireLock(delete.uid().bytes())) {
        ensureOpen();
        lastWriteNanos = delete.startTime();
        final DeletionStrategy plan = deletionStrategyForOperation(delete);
        reservedDocs = plan.reservedDocs;
        if (plan.earlyResultOnPreflightError.isPresent()) {
            assert delete.origin() == Operation.Origin.PRIMARY : delete.origin();
            deleteResult = plan.earlyResultOnPreflightError.get();
        } else {
            // generate or register sequence number
            if (delete.origin() == Operation.Origin.PRIMARY) {
                delete = new Delete(delete.type(), delete.id(), delete.uid(), generateSeqNoForOperationOnPrimary(delete), delete.primaryTerm(), delete.version(), delete.versionType(), delete.origin(), delete.startTime(), delete.getIfSeqNo(), delete.getIfPrimaryTerm());
                advanceMaxSeqNoOfUpdatesOrDeletesOnPrimary(delete.seqNo());
            } else {
                markSeqNoAsSeen(delete.seqNo());
            }
            assert delete.seqNo() >= 0 : "ops should have an assigned seq no.; origin: " + delete.origin();
            if (plan.deleteFromLucene || plan.addStaleOpToLucene) {
                deleteResult = deleteInLucene(delete, plan);
                if (plan.deleteFromLucene) {
                    numDocDeletes.inc();
                    versionMap.putDeleteUnderLock(delete.uid().bytes(), new DeleteVersionValue(plan.versionOfDeletion, delete.seqNo(), delete.primaryTerm(), engineConfig.getThreadPool().relativeTimeInMillis()));
                }
            } else {
                deleteResult = new DeleteResult(plan.versionOfDeletion, delete.primaryTerm(), delete.seqNo(), plan.currentlyDeleted == false);
            }
        }
        if (delete.origin().isFromTranslog() == false && deleteResult.getResultType() == Result.Type.SUCCESS) {
            final Translog.Location location = translog.add(new Translog.Delete(delete, deleteResult));
            deleteResult.setTranslogLocation(location);
        }
        localCheckpointTracker.markSeqNoAsProcessed(deleteResult.getSeqNo());
        if (deleteResult.getTranslogLocation() == null) {
            // the op is coming from the translog (and is hence persisted already) or does not have a sequence number (version conflict)
            assert delete.origin().isFromTranslog() || deleteResult.getSeqNo() == SequenceNumbers.UNASSIGNED_SEQ_NO;
            localCheckpointTracker.markSeqNoAsPersisted(deleteResult.getSeqNo());
        }
        deleteResult.setTook(System.nanoTime() - delete.startTime());
        deleteResult.freeze();
    } catch (RuntimeException | IOException e) {
        try {
            maybeFailEngine("delete", e);
        } catch (Exception inner) {
            e.addSuppressed(inner);
        }
        throw e;
    } finally {
        releaseInFlightDocs(reservedDocs);
    }
    maybePruneDeletes();
    return deleteResult;
}
Also used: IOException (java.io.IOException), LongPoint (org.apache.lucene.document.LongPoint), ReleasableLock (org.opensearch.common.util.concurrent.ReleasableLock), AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), TranslogCorruptedException (org.opensearch.index.translog.TranslogCorruptedException), Translog (org.opensearch.index.translog.Translog), Releasable (org.opensearch.common.lease.Releasable)
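The delete method stacks two resources in one try-with-resources header: a ReleasableLock for the engine-wide read lock and a Releasable per-uid lock from the version map, released in reverse order on exit. A minimal sketch of that pattern, using a hypothetical per-key lock map in place of LiveVersionMap.acquireLock:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class KeyedLockSketch {

    // Hypothetical stand-in for org.opensearch.common.lease.Releasable.
    interface Releasable extends AutoCloseable {
        @Override
        void close();
    }

    // One ReentrantLock per key, handed back as a Releasable, loosely
    // analogous to versionMap.acquireLock(delete.uid().bytes()) above.
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    Releasable acquire(String key) {
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        return lock::unlock; // method reference doubles as the Releasable
    }

    public static void main(String[] args) {
        KeyedLockSketch map = new KeyedLockSketch();
        // Two Releasables stacked in one try-with-resources, closed in
        // reverse acquisition order, as in the delete(...) method above.
        try (Releasable shardLock = map.acquire("shard");
             Releasable docLock = map.acquire("doc-1")) {
            System.out.println("both locks held");
        }
        System.out.println("both locks released");
    }
}
```

Closing in reverse order matters when the inner resource may only be released while the outer one is still held; try-with-resources guarantees that ordering even when the body throws.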

Example 98 with Releasable

Use of org.opensearch.common.lease.Releasable in project OpenSearch by opensearch-project.

The class InternalEngine, method restoreVersionMapAndCheckpointTracker.

/**
 * Restores the live version map and local checkpoint of this engine using documents (including soft-deleted)
 * after the local checkpoint in the safe commit. This step ensures the live version map and checkpoint tracker
 * are in sync with the Lucene commit.
 */
private void restoreVersionMapAndCheckpointTracker(DirectoryReader directoryReader) throws IOException {
    final IndexSearcher searcher = new IndexSearcher(directoryReader);
    searcher.setQueryCache(null);
    final Query query = new BooleanQuery.Builder().add(LongPoint.newRangeQuery(SeqNoFieldMapper.NAME, getPersistedLocalCheckpoint() + 1, Long.MAX_VALUE), BooleanClause.Occur.MUST).add(new DocValuesFieldExistsQuery(SeqNoFieldMapper.PRIMARY_TERM_NAME), BooleanClause.Occur.MUST).build();
    final Weight weight = searcher.createWeight(searcher.rewrite(query), ScoreMode.COMPLETE_NO_SCORES, 1.0f);
    for (LeafReaderContext leaf : directoryReader.leaves()) {
        final Scorer scorer = weight.scorer(leaf);
        if (scorer == null) {
            continue;
        }
        final CombinedDocValues dv = new CombinedDocValues(leaf.reader());
        final IdOnlyFieldVisitor idFieldVisitor = new IdOnlyFieldVisitor();
        final DocIdSetIterator iterator = scorer.iterator();
        int docId;
        while ((docId = iterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
            final long primaryTerm = dv.docPrimaryTerm(docId);
            final long seqNo = dv.docSeqNo(docId);
            localCheckpointTracker.markSeqNoAsProcessed(seqNo);
            localCheckpointTracker.markSeqNoAsPersisted(seqNo);
            idFieldVisitor.reset();
            leaf.reader().document(docId, idFieldVisitor);
            if (idFieldVisitor.getId() == null) {
                assert dv.isTombstone(docId);
                continue;
            }
            final BytesRef uid = new Term(IdFieldMapper.NAME, Uid.encodeId(idFieldVisitor.getId())).bytes();
            try (Releasable ignored = versionMap.acquireLock(uid)) {
                final VersionValue curr = versionMap.getUnderLock(uid);
                if (curr == null || compareOpToVersionMapOnSeqNo(idFieldVisitor.getId(), seqNo, primaryTerm, curr) == OpVsLuceneDocStatus.OP_NEWER) {
                    if (dv.isTombstone(docId)) {
                        // use 0L for the start time so we can prune this delete tombstone quickly
                        // when the local checkpoint advances (i.e., after a recovery completed).
                        final long startTime = 0L;
                        versionMap.putDeleteUnderLock(uid, new DeleteVersionValue(dv.docVersion(docId), seqNo, primaryTerm, startTime));
                    } else {
                        versionMap.putIndexUnderLock(uid, new IndexVersionValue(null, dv.docVersion(docId), seqNo, primaryTerm));
                    }
                }
            }
        }
    }
    // remove live entries in the version map
    refresh("restore_version_map_and_checkpoint_tracker", SearcherScope.INTERNAL, true);
}
Also used: IndexSearcher (org.apache.lucene.search.IndexSearcher), Query (org.apache.lucene.search.Query), BooleanQuery (org.apache.lucene.search.BooleanQuery), TermQuery (org.apache.lucene.search.TermQuery), DocValuesFieldExistsQuery (org.apache.lucene.search.DocValuesFieldExistsQuery), Scorer (org.apache.lucene.search.Scorer), Term (org.apache.lucene.index.Term), IdOnlyFieldVisitor (org.opensearch.index.fieldvisitor.IdOnlyFieldVisitor), Weight (org.apache.lucene.search.Weight), LongPoint (org.apache.lucene.document.LongPoint), LeafReaderContext (org.apache.lucene.index.LeafReaderContext), Releasable (org.opensearch.common.lease.Releasable), DocIdSetIterator (org.apache.lucene.search.DocIdSetIterator), BytesRef (org.apache.lucene.util.BytesRef)
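The core of the restore loop is a last-writer-wins reconciliation: a Lucene document only overwrites the version-map entry for its uid when the comparison says the operation is newer. Stripped of Lucene and locking, that check reduces to a sketch like the following, where VersionValue is a hypothetical, much-reduced stand-in carrying only the sequence number:

```java
import java.util.HashMap;
import java.util.Map;

public class VersionMapRestoreSketch {

    // Hypothetical, much-reduced version value: just the sequence number.
    static final class VersionValue {
        final long seqNo;
        VersionValue(long seqNo) { this.seqNo = seqNo; }
    }

    private final Map<String, VersionValue> versionMap = new HashMap<>();

    // Mirrors the "only overwrite when the op is newer" check in the example:
    // an operation replaces the map entry only if its seqNo wins.
    void restore(String id, long seqNo) {
        VersionValue curr = versionMap.get(id);
        if (curr == null || seqNo > curr.seqNo) {
            versionMap.put(id, new VersionValue(seqNo));
        }
    }

    long seqNoOf(String id) {
        return versionMap.get(id).seqNo;
    }

    public static void main(String[] args) {
        VersionMapRestoreSketch sketch = new VersionMapRestoreSketch();
        sketch.restore("doc-1", 5);
        sketch.restore("doc-1", 3); // stale: ignored
        sketch.restore("doc-1", 9); // newer: wins
        System.out.println(sketch.seqNoOf("doc-1"));
    }
}
```

In the real method the comparison also consults the primary term and tombstone status, and every mutation happens under the per-uid Releasable lock shown in the example; this sketch keeps only the ordering decision.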

Example 99 with Releasable

Use of org.opensearch.common.lease.Releasable in project OpenSearch by opensearch-project.

The class InternalEngine, method get.

@Override
public GetResult get(Get get, BiFunction<String, SearcherScope, Engine.Searcher> searcherFactory) throws EngineException {
    assert Objects.equals(get.uid().field(), IdFieldMapper.NAME) : get.uid().field();
    try (ReleasableLock ignored = readLock.acquire()) {
        ensureOpen();
        SearcherScope scope;
        if (get.realtime()) {
            VersionValue versionValue = null;
            try (Releasable ignore = versionMap.acquireLock(get.uid().bytes())) {
                // we need to lock here to access the version map to do this truly in RT
                versionValue = getVersionFromMap(get.uid().bytes());
            }
            if (versionValue != null) {
                if (versionValue.isDelete()) {
                    return GetResult.NOT_EXISTS;
                }
                if (get.versionType().isVersionConflictForReads(versionValue.version, get.version())) {
                    throw new VersionConflictEngineException(shardId, get.id(), get.versionType().explainConflictForReads(versionValue.version, get.version()));
                }
                if (get.getIfSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO && (get.getIfSeqNo() != versionValue.seqNo || get.getIfPrimaryTerm() != versionValue.term)) {
                    throw new VersionConflictEngineException(shardId, get.id(), get.getIfSeqNo(), get.getIfPrimaryTerm(), versionValue.seqNo, versionValue.term);
                }
                if (get.isReadFromTranslog()) {
                    // the update call doesn't need the consistency since it's source only + _parent but parent can go away in 7.0
                    if (versionValue.getLocation() != null) {
                        try {
                            Translog.Operation operation = translog.readOperation(versionValue.getLocation());
                            if (operation != null) {
                                // in the case of an already-pruned translog generation we might get null here, though that is very unlikely
                                final Translog.Index index = (Translog.Index) operation;
                                TranslogLeafReader reader = new TranslogLeafReader(index);
                                return new GetResult(new Engine.Searcher("realtime_get", reader, IndexSearcher.getDefaultSimilarity(), null, IndexSearcher.getDefaultQueryCachingPolicy(), reader), new VersionsAndSeqNoResolver.DocIdAndVersion(0, index.version(), index.seqNo(), index.primaryTerm(), reader, 0), true);
                            }
                        } catch (IOException e) {
                            // lets check if the translog has failed with a tragic event
                            maybeFailEngine("realtime_get", e);
                            throw new EngineException(shardId, "failed to read operation from translog", e);
                        }
                    } else {
                        trackTranslogLocation.set(true);
                    }
                }
                assert versionValue.seqNo >= 0 : versionValue;
                refreshIfNeeded("realtime_get", versionValue.seqNo);
            }
            scope = SearcherScope.INTERNAL;
        } else {
            // we only expose what has been externally exposed in a point-in-time snapshot via an explicit refresh
            scope = SearcherScope.EXTERNAL;
        }
        // no version, get the version from the index, we know that we refresh on flush
        return getFromSearcher(get, searcherFactory, scope);
    }
}
Also used: VersionsAndSeqNoResolver (org.opensearch.common.lucene.uid.VersionsAndSeqNoResolver), IOException (java.io.IOException), ReleasableLock (org.opensearch.common.util.concurrent.ReleasableLock), Translog (org.opensearch.index.translog.Translog), Releasable (org.opensearch.common.lease.Releasable)
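What makes this get "realtime" is the lookup order: under the per-uid Releasable lock, the in-memory version map is consulted first, so an unrefreshed delete is visible immediately, while a non-realtime get goes straight to the searcher. The sketch below models that order with two plain maps; the class and method names are illustrative stand-ins, not the InternalEngine API, and an empty Optional in the in-memory map plays the role of a delete tombstone.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class RealtimeGetSketch {

    // Hypothetical stand-ins for the version map and the (possibly stale)
    // Lucene searcher consulted by getFromSearcher in the example.
    private final Map<String, Optional<String>> versionMap = new HashMap<>();
    private final Map<String, String> searcherView = new HashMap<>();

    void indexDoc(String id, String source) {
        versionMap.put(id, Optional.of(source));
        searcherView.put(id, source); // assume a refresh already made it visible
    }

    void deleteDoc(String id) {
        versionMap.put(id, Optional.empty()); // tombstone; searcher not yet refreshed
    }

    // Realtime get consults the in-memory map first, so deletes are visible
    // before a refresh; a tombstone means NOT_EXISTS even though the
    // searcher still returns the old document.
    Optional<String> get(String id, boolean realtime) {
        if (realtime && versionMap.containsKey(id)) {
            return versionMap.get(id);
        }
        return Optional.ofNullable(searcherView.get(id));
    }

    public static void main(String[] args) {
        RealtimeGetSketch engine = new RealtimeGetSketch();
        engine.indexDoc("doc-1", "{\"f\":1}");
        engine.deleteDoc("doc-1");
        System.out.println(engine.get("doc-1", true).isPresent());  // realtime sees the delete
        System.out.println(engine.get("doc-1", false).isPresent()); // stale searcher still has it
    }
}
```

The real method adds version-conflict checks, the read-from-translog fast path, and a targeted refreshIfNeeded before falling through to the searcher; the sketch keeps only the map-before-searcher ordering that the Releasable lock protects.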

Example 100 with Releasable

Use of org.opensearch.common.lease.Releasable in project OpenSearch by opensearch-project.

The class SearchService, method executeQueryPhase.

public void executeQueryPhase(InternalScrollSearchRequest request, SearchShardTask task, ActionListener<ScrollQuerySearchResult> listener) {
    final LegacyReaderContext readerContext = (LegacyReaderContext) findReaderContext(request.contextId(), request);
    final Releasable markAsUsed;
    try {
        markAsUsed = readerContext.markAsUsed(getScrollKeepAlive(request.scroll()));
    } catch (Exception e) {
        // We need to release the reader context of the scroll when we hit any exception (here the keep_alive can be too large)
        freeReaderContext(readerContext.id());
        throw e;
    }
    runAsync(getExecutor(readerContext.indexShard()), () -> {
        final ShardSearchRequest shardSearchRequest = readerContext.getShardSearchRequest(null);
        try (SearchContext searchContext = createContext(readerContext, shardSearchRequest, task, false);
            SearchOperationListenerExecutor executor = new SearchOperationListenerExecutor(searchContext)) {
            searchContext.searcher().setAggregatedDfs(readerContext.getAggregatedDfs(null));
            processScroll(request, readerContext, searchContext);
            queryPhase.execute(searchContext);
            executor.success();
            readerContext.setRescoreDocIds(searchContext.rescoreDocIds());
            return new ScrollQuerySearchResult(searchContext.queryResult(), searchContext.shardTarget());
        } catch (Exception e) {
            logger.trace("Query phase failed", e);
            // we handle the failure in the failure listener below
            throw e;
        }
    }, wrapFailureListener(listener, readerContext, markAsUsed));
}
Also used: ShardSearchRequest (org.opensearch.search.internal.ShardSearchRequest), LegacyReaderContext (org.opensearch.search.internal.LegacyReaderContext), SearchContext (org.opensearch.search.internal.SearchContext), Releasable (org.opensearch.common.lease.Releasable), ScrollQuerySearchResult (org.opensearch.search.query.ScrollQuerySearchResult), OpenSearchRejectedExecutionException (org.opensearch.common.util.concurrent.OpenSearchRejectedExecutionException), AggregationInitializationException (org.opensearch.search.aggregations.AggregationInitializationException), IOException (java.io.IOException), ExecutionException (java.util.concurrent.ExecutionException), OpenSearchException (org.opensearch.OpenSearchException), IndexNotFoundException (org.opensearch.index.IndexNotFoundException)
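Here the Releasable is a lease: markAsUsed pins the reader context for the duration of the scroll request, the catch block frees the context if even acquiring the lease fails, and the lease is released once the async work finishes. A compact sketch of that lifecycle, with a hypothetical ReaderContext counting its outstanding leases and the async hand-off simplified to a synchronous call:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ScrollLeaseSketch {

    // Hypothetical stand-in for org.opensearch.common.lease.Releasable.
    interface Releasable extends AutoCloseable {
        @Override
        void close();
    }

    // Hypothetical reader context: counts outstanding leases, the way
    // readerContext.markAsUsed(...) pins the context in the example.
    static class ReaderContext {
        final AtomicInteger activeLeases = new AtomicInteger();
        boolean freed = false;

        Releasable markAsUsed(long keepAliveMillis) {
            if (keepAliveMillis > 60_000) {
                throw new IllegalArgumentException("keep_alive too large");
            }
            activeLeases.incrementAndGet();
            return activeLeases::decrementAndGet; // releasing the lease
        }

        void free() {
            freed = true;
        }
    }

    // Mirrors the example: if taking the lease throws, free the context and
    // rethrow; otherwise run the work and release the lease afterwards
    // (here synchronously rather than via runAsync/wrapFailureListener).
    static String executeScroll(ReaderContext ctx, long keepAliveMillis, Runnable work) {
        final Releasable markAsUsed;
        try {
            markAsUsed = ctx.markAsUsed(keepAliveMillis);
        } catch (Exception e) {
            ctx.free();
            throw e;
        }
        try {
            work.run();
            return "ok";
        } finally {
            markAsUsed.close();
        }
    }

    public static void main(String[] args) {
        ReaderContext ctx = new ReaderContext();
        System.out.println(executeScroll(ctx, 1_000, () -> {}));
        System.out.println(ctx.activeLeases.get()); // lease released
    }
}
```

The asymmetry is deliberate: a failed acquisition means the scroll can never be served, so the whole context is freed, whereas a failure inside the work only drops the lease and leaves the context alive for the failure listener to handle.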

Aggregations

Releasable (org.opensearch.common.lease.Releasable): 161
ShardId (org.opensearch.index.shard.ShardId): 50
IOException (java.io.IOException): 45
CountDownLatch (java.util.concurrent.CountDownLatch): 36
Settings (org.opensearch.common.settings.Settings): 35
IndexingPressurePerShardStats (org.opensearch.index.stats.IndexingPressurePerShardStats): 32
ExecutionException (java.util.concurrent.ExecutionException): 30
ArrayList (java.util.ArrayList): 28
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 28
AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException): 27
OpenSearchException (org.opensearch.OpenSearchException): 25
ThreadPool (org.opensearch.threadpool.ThreadPool): 25
PlainActionFuture (org.opensearch.action.support.PlainActionFuture): 24
ActionListener (org.opensearch.action.ActionListener): 23
IndexRequest (org.opensearch.action.index.IndexRequest): 22
ShardRouting (org.opensearch.cluster.routing.ShardRouting): 22
IndexingPressureStats (org.opensearch.index.stats.IndexingPressureStats): 22
BrokenBarrierException (java.util.concurrent.BrokenBarrierException): 21
List (java.util.List): 20
Translog (org.opensearch.index.translog.Translog): 19