Example 1 with ReleasableLock

Use of org.opensearch.common.util.concurrent.ReleasableLock in project OpenSearch by opensearch-project.

From the class Cache, method invalidateAll().

/**
 * Invalidate all cache entries. A removal notification will be issued for invalidated entries with
 * {@link org.opensearch.common.cache.RemovalNotification.RemovalReason} INVALIDATED.
 */
public void invalidateAll() {
    Entry<K, V> h;
    boolean[] haveSegmentLock = new boolean[NUMBER_OF_SEGMENTS];
    try {
        // take every segment's write lock so no concurrent mutation can race the reset
        for (int i = 0; i < NUMBER_OF_SEGMENTS; i++) {
            segments[i].segmentLock.writeLock().lock();
            haveSegmentLock[i] = true;
        }
        try (ReleasableLock ignored = lruLock.acquire()) {
            // under the LRU lock, detach the list head, replace each segment's map,
            // and mark every entry DELETED so concurrent readers see a consistent state
            h = head;
            Arrays.stream(segments).forEach(segment -> segment.map = new HashMap<>());
            Entry<K, V> current = head;
            while (current != null) {
                current.state = State.DELETED;
                current = current.after;
            }
            head = tail = null;
            count = 0;
            weight = 0;
        }
    } finally {
        // release the segment locks in reverse acquisition order
        for (int i = NUMBER_OF_SEGMENTS - 1; i >= 0; i--) {
            if (haveSegmentLock[i]) {
                segments[i].segmentLock.writeLock().unlock();
            }
        }
    }
    // issue removal notifications outside of all locks so listeners never run under lock
    while (h != null) {
        removalListener.onRemoval(new RemovalNotification<>(h.key, h.value, RemovalNotification.RemovalReason.INVALIDATED));
        h = h.after;
    }
}
Also used: HashMap (java.util.HashMap), ReleasableLock (org.opensearch.common.util.concurrent.ReleasableLock)
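
Every example on this page leans on the same idiom: ReleasableLock wraps a java.util.concurrent.locks.Lock so that acquire() takes the lock and returns an AutoCloseable whose close() releases it, making try-with-resources the unlock mechanism. Below is a minimal sketch of that idea, assuming only the JDK; the real OpenSearch class additionally carries assertions that track whether the current thread holds the lock, which this sketch omits.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of the ReleasableLock idea: acquire() locks and returns
// this object, and close() unlocks, so a try-with-resources header
// guarantees the unlock on every exit path, including exceptions.
final class SimpleReleasableLock implements AutoCloseable {

    private final Lock lock;

    SimpleReleasableLock(Lock lock) {
        this.lock = lock;
    }

    SimpleReleasableLock acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock();
    }

    public static void main(String[] args) {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        SimpleReleasableLock readLock = new SimpleReleasableLock(rwl.readLock());
        try (SimpleReleasableLock ignored = readLock.acquire()) {
            System.out.println("holding the read lock");
        } // read lock released here, even if the body had thrown
    }
}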

Example 2 with ReleasableLock

Use of org.opensearch.common.util.concurrent.ReleasableLock in project OpenSearch by opensearch-project.

From the class InternalEngine, method flush().

@Override
public void flush(boolean force, boolean waitIfOngoing) throws EngineException {
    ensureOpen();
    if (force && waitIfOngoing == false) {
        assert false : "wait_if_ongoing must be true for a force flush: force=" + force + " wait_if_ongoing=" + waitIfOngoing;
        throw new IllegalArgumentException("wait_if_ongoing must be true for a force flush: force=" + force + " wait_if_ongoing=" + waitIfOngoing);
    }
    try (ReleasableLock lock = readLock.acquire()) {
        ensureOpen();
        if (flushLock.tryLock() == false) {
            // if we can't get the lock right away we block if needed otherwise barf
            if (waitIfOngoing == false) {
                return;
            }
            logger.trace("waiting for in-flight flush to finish");
            flushLock.lock();
            logger.trace("acquired flush lock after blocking");
        } else {
            logger.trace("acquired flush lock immediately");
        }
        try {
            // Only flush if (1) Lucene has uncommitted docs, or (2) forced by caller, or (3) the
            // newly created commit points to a different translog generation (can free translog),
            // or (4) the local checkpoint information in the last commit is stale, which slows down future recoveries.
            boolean hasUncommittedChanges = indexWriter.hasUncommittedChanges();
            boolean shouldPeriodicallyFlush = shouldPeriodicallyFlush();
            if (hasUncommittedChanges || force || shouldPeriodicallyFlush || getProcessedLocalCheckpoint() > Long.parseLong(lastCommittedSegmentInfos.userData.get(SequenceNumbers.LOCAL_CHECKPOINT_KEY))) {
                ensureCanFlush();
                try {
                    translog.rollGeneration();
                    logger.trace("starting commit for flush; commitTranslog=true");
                    commitIndexWriter(indexWriter, translog);
                    logger.trace("finished commit for flush");
                    // temporary debug logging to investigate a test failure (issue #32827); remove when the issue is resolved
                    logger.debug("new commit on flush, hasUncommittedChanges:{}, force:{}, shouldPeriodicallyFlush:{}", hasUncommittedChanges, force, shouldPeriodicallyFlush);
                    // we need to refresh in order to clear older version values
                    refresh("version_table_flush", SearcherScope.INTERNAL, true);
                    translog.trimUnreferencedReaders();
                } catch (AlreadyClosedException e) {
                    failOnTragicEvent(e);
                    throw e;
                } catch (Exception e) {
                    throw new FlushFailedEngineException(shardId, e);
                }
                refreshLastCommittedSegmentInfos();
            }
        } catch (FlushFailedEngineException ex) {
            maybeFailEngine("flush", ex);
            throw ex;
        } finally {
            flushLock.unlock();
        }
    }
    // We don't have to do this here; we do it defensively to make sure that even if wall clock time is misbehaving
    // (e.g., moves backwards) we will at least still sometimes prune deleted tombstones:
    if (engineConfig.isEnableGcDeletes()) {
        pruneDeletedTombstones();
    }
}
Also used: AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), ReleasableLock (org.opensearch.common.util.concurrent.ReleasableLock), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), IOException (java.io.IOException), TranslogCorruptedException (org.opensearch.index.translog.TranslogCorruptedException)
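
The flushLock handling above is a reusable shape: try a non-blocking acquire first, then either return early or block, depending on the caller's waitIfOngoing flag. Here is a standalone sketch of just that shape with a plain ReentrantLock; the class and method names are illustrative, not OpenSearch API.

import java.util.concurrent.locks.ReentrantLock;

// Sketch of the flush-lock pattern from Example 2: if another flush is
// already running, either piggyback on it (return immediately) or wait
// for it and then run our own, depending on waitIfOngoing.
final class FlushLockPattern {

    private final ReentrantLock flushLock = new ReentrantLock();

    void flush(boolean waitIfOngoing) {
        if (flushLock.tryLock() == false) {
            if (waitIfOngoing == false) {
                return; // an in-flight flush is good enough for this caller
            }
            flushLock.lock(); // block until the in-flight flush finishes
        }
        try {
            // ... the actual flush work would go here ...
        } finally {
            flushLock.unlock();
        }
    }
}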

Example 3 with ReleasableLock

Use of org.opensearch.common.util.concurrent.ReleasableLock in project OpenSearch by opensearch-project.

From the class InternalEngine, method trimOperationsFromTranslog().

@Override
public void trimOperationsFromTranslog(long belowTerm, long aboveSeqNo) throws EngineException {
    try (ReleasableLock lock = readLock.acquire()) {
        ensureOpen();
        translog.trimOperations(belowTerm, aboveSeqNo);
    } catch (AlreadyClosedException e) {
        failOnTragicEvent(e);
        throw e;
    } catch (Exception e) {
        try {
            failEngine("translog operations trimming failed", e);
        } catch (Exception inner) {
            e.addSuppressed(inner);
        }
        throw new EngineException(shardId, "failed to trim translog operations", e);
    }
}
Also used: AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), ReleasableLock (org.opensearch.common.util.concurrent.ReleasableLock), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), IOException (java.io.IOException), TranslogCorruptedException (org.opensearch.index.translog.TranslogCorruptedException)

Example 4 with ReleasableLock

Use of org.opensearch.common.util.concurrent.ReleasableLock in project OpenSearch by opensearch-project.

From the class InternalEngine, method rollTranslogGeneration().

@Override
public void rollTranslogGeneration() throws EngineException {
    try (ReleasableLock ignored = readLock.acquire()) {
        ensureOpen();
        translog.rollGeneration();
        translog.trimUnreferencedReaders();
    } catch (AlreadyClosedException e) {
        failOnTragicEvent(e);
        throw e;
    } catch (Exception e) {
        try {
            failEngine("translog trimming failed", e);
        } catch (Exception inner) {
            e.addSuppressed(inner);
        }
        throw new EngineException(shardId, "failed to roll translog", e);
    }
}
Also used: AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), ReleasableLock (org.opensearch.common.util.concurrent.ReleasableLock), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), IOException (java.io.IOException), TranslogCorruptedException (org.opensearch.index.translog.TranslogCorruptedException)
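
Examples 3 and 4 share one failure-handling skeleton: an AlreadyClosedException marks a tragic event and is rethrown unchanged, while any other exception first fails the engine, with secondary failures suppressed onto the original, and is then wrapped in a domain exception. A self-contained sketch of that skeleton follows; all names are illustrative, and IllegalStateException stands in for Lucene's AlreadyClosedException.

// Self-contained sketch of the failure-routing skeleton shared by
// Examples 3 and 4. Every name here is illustrative.
final class GuardedOperation {

    static final class EngineFailure extends RuntimeException {
        EngineFailure(String message, Throwable cause) {
            super(message, cause);
        }
    }

    void run(Runnable operation) {
        try {
            operation.run();
        } catch (IllegalStateException e) { // stand-in for AlreadyClosedException
            recordTragicEvent(e);
            throw e; // rethrow unchanged so callers see the original type
        } catch (Exception e) {
            try {
                failComponent(e);
            } catch (Exception inner) {
                e.addSuppressed(inner); // keep the original failure primary
            }
            throw new EngineFailure("guarded operation failed", e);
        }
    }

    private void recordTragicEvent(Exception e) {
        // mark the component as tragically closed
    }

    private void failComponent(Exception e) {
        // fail the component; may itself throw, hence the suppression above
    }
}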

Example 5 with ReleasableLock

Use of org.opensearch.common.util.concurrent.ReleasableLock in project OpenSearch by opensearch-project.

From the class InternalEngine, method index().

@Override
public IndexResult index(Index index) throws IOException {
    assert Objects.equals(index.uid().field(), IdFieldMapper.NAME) : index.uid().field();
    final boolean doThrottle = index.origin().isRecovery() == false;
    try (ReleasableLock releasableLock = readLock.acquire()) {
        ensureOpen();
        assert assertIncomingSequenceNumber(index.origin(), index.seqNo());
        int reservedDocs = 0;
        try (Releasable ignored = versionMap.acquireLock(index.uid().bytes());
            Releasable indexThrottle = doThrottle ? throttle.acquireThrottle() : () -> {
            }) {
            lastWriteNanos = index.startTime();
            /* A NOTE ABOUT APPEND ONLY OPTIMIZATIONS:
             * if we have an autoGeneratedID that comes into the engine we can potentially optimize
             * and just use addDocument instead of updateDocument and skip the entire version and index lookup across the board.
             * Yet, we have to deal with multiple document delivery; for this we use a property of the document that is added
             * to detect if it has potentially been added before. We use the document's timestamp for this since it's something
             * that:
             *  - doesn't change per document
             *  - is preserved in the transaction log
             *  - and is assigned before we start to index / replicate
             * NOTE: it's not important for this timestamp to be consistent across nodes etc.; it's just a number that is in the
             * common case increasing and can be used in the failure case when we retry and resend documents to establish a
             * happens-before relationship. For instance:
             *  - doc A has autoGeneratedIdTimestamp = 10, isRetry = false
             *  - doc B has autoGeneratedIdTimestamp = 9, isRetry = false
             *
             *  while both docs are in flight, we disconnect on one node, reconnect, and send doc A again
             *  - now doc A' has autoGeneratedIdTimestamp = 10, isRetry = true
             *
             *  if A' arrives on the shard first we update maxUnsafeAutoIdTimestamp to 10 and use updateDocument. All subsequent
             *  documents that arrive (A and B) will also use updateDocument since their timestamps are less than or equal to
             *  maxUnsafeAutoIdTimestamp. While this is not strictly needed for doc B it is just much simpler to implement since it
             *  will just de-optimize some doc in the worst case.
             *
             *  if A arrives on the shard first we use addDocument since maxUnsafeAutoIdTimestamp is < 10. A' will then just be
             *  skipped or will call updateDocument.
             */
            final IndexingStrategy plan = indexingStrategyForOperation(index);
            reservedDocs = plan.reservedDocs;
            final IndexResult indexResult;
            if (plan.earlyResultOnPreFlightError.isPresent()) {
                assert index.origin() == Operation.Origin.PRIMARY : index.origin();
                indexResult = plan.earlyResultOnPreFlightError.get();
                assert indexResult.getResultType() == Result.Type.FAILURE : indexResult.getResultType();
            } else {
                // generate or register sequence number
                if (index.origin() == Operation.Origin.PRIMARY) {
                    index = new Index(index.uid(), index.parsedDoc(), generateSeqNoForOperationOnPrimary(index), index.primaryTerm(), index.version(), index.versionType(), index.origin(), index.startTime(), index.getAutoGeneratedIdTimestamp(), index.isRetry(), index.getIfSeqNo(), index.getIfPrimaryTerm());
                    final boolean toAppend = plan.indexIntoLucene && plan.useLuceneUpdateDocument == false;
                    if (toAppend == false) {
                        advanceMaxSeqNoOfUpdatesOrDeletesOnPrimary(index.seqNo());
                    }
                } else {
                    markSeqNoAsSeen(index.seqNo());
                }
                assert index.seqNo() >= 0 : "ops should have an assigned seq no.; origin: " + index.origin();
                if (plan.indexIntoLucene || plan.addStaleOpToLucene) {
                    indexResult = indexIntoLucene(index, plan);
                } else {
                    indexResult = new IndexResult(plan.versionForIndexing, index.primaryTerm(), index.seqNo(), plan.currentNotFoundOrDeleted);
                }
            }
            if (index.origin().isFromTranslog() == false) {
                final Translog.Location location;
                if (indexResult.getResultType() == Result.Type.SUCCESS) {
                    location = translog.add(new Translog.Index(index, indexResult));
                } else if (indexResult.getSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO) {
                    // if we have document failure, record it as a no-op in the translog and Lucene with the generated seq_no
                    final NoOp noOp = new NoOp(indexResult.getSeqNo(), index.primaryTerm(), index.origin(), index.startTime(), indexResult.getFailure().toString());
                    location = innerNoOp(noOp).getTranslogLocation();
                } else {
                    location = null;
                }
                indexResult.setTranslogLocation(location);
            }
            if (plan.indexIntoLucene && indexResult.getResultType() == Result.Type.SUCCESS) {
                final Translog.Location translogLocation = trackTranslogLocation.get() ? indexResult.getTranslogLocation() : null;
                versionMap.maybePutIndexUnderLock(index.uid().bytes(), new IndexVersionValue(translogLocation, plan.versionForIndexing, index.seqNo(), index.primaryTerm()));
            }
            localCheckpointTracker.markSeqNoAsProcessed(indexResult.getSeqNo());
            if (indexResult.getTranslogLocation() == null) {
                // the op is coming from the translog (and is hence persisted already) or it does not have a sequence number
                assert index.origin().isFromTranslog() || indexResult.getSeqNo() == SequenceNumbers.UNASSIGNED_SEQ_NO;
                localCheckpointTracker.markSeqNoAsPersisted(indexResult.getSeqNo());
            }
            indexResult.setTook(System.nanoTime() - index.startTime());
            indexResult.freeze();
            return indexResult;
        } finally {
            releaseInFlightDocs(reservedDocs);
        }
    } catch (RuntimeException | IOException e) {
        try {
            if (e instanceof AlreadyClosedException == false && treatDocumentFailureAsTragicError(index)) {
                failEngine("index id[" + index.id() + "] origin[" + index.origin() + "] seq#[" + index.seqNo() + "]", e);
            } else {
                maybeFailEngine("index id[" + index.id() + "] origin[" + index.origin() + "] seq#[" + index.seqNo() + "]", e);
            }
        } catch (Exception inner) {
            e.addSuppressed(inner);
        }
        throw e;
    }
}
Also used: IOException (java.io.IOException), AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), ReleasableLock (org.opensearch.common.util.concurrent.ReleasableLock), LongPoint (org.apache.lucene.document.LongPoint), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), TranslogCorruptedException (org.opensearch.index.translog.TranslogCorruptedException), Translog (org.opensearch.index.translog.Translog), Releasable (org.opensearch.common.lease.Releasable)
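
The long append-only note in Example 5 reduces to one rule: a retry raises a high-water mark (maxUnsafeAutoIdTimestamp in the real engine), and any document whose auto-generated-ID timestamp is at or below that mark must take the updateDocument path. Below is a simplified model of just that rule; it is not the actual InternalEngine planning code, only an illustration of the invariant the comment describes.

import java.util.concurrent.atomic.AtomicLong;

// Simplified model of the append-only decision from the comment in Example 5.
// A retry raises the unsafe-timestamp watermark; a doc may take the pure
// addDocument path only if its timestamp is strictly above the watermark.
final class AppendOnlyPlanner {

    private final AtomicLong maxUnsafeAutoIdTimestamp = new AtomicLong(-1);

    /** Returns true if the doc may safely use addDocument (pure append). */
    boolean mayAppend(long autoGeneratedIdTimestamp, boolean isRetry) {
        if (isRetry) {
            // an earlier delivery may already be indexed, so every doc up to
            // this timestamp becomes unsafe for the append path
            maxUnsafeAutoIdTimestamp.accumulateAndGet(autoGeneratedIdTimestamp, Math::max);
            return false;
        }
        return autoGeneratedIdTimestamp > maxUnsafeAutoIdTimestamp.get();
    }

    public static void main(String[] args) {
        AppendOnlyPlanner planner = new AppendOnlyPlanner();
        System.out.println(planner.mayAppend(10, true));  // false: retry A' raises the watermark to 10
        System.out.println(planner.mayAppend(10, false)); // false: original A arrives after A'
        System.out.println(planner.mayAppend(9, false));  // false: B is below the watermark too
    }
}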

Aggregations

ReleasableLock (org.opensearch.common.util.concurrent.ReleasableLock): 27
IOException (java.io.IOException): 15
AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException): 12
LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException): 7
TranslogCorruptedException (org.opensearch.index.translog.TranslogCorruptedException): 7
ArrayList (java.util.ArrayList): 5
EOFException (java.io.EOFException): 4
Path (java.nio.file.Path): 4
HashMap (java.util.HashMap): 4
Releasable (org.opensearch.common.lease.Releasable): 4
MissingHistoryOperationsException (org.opensearch.index.engine.MissingHistoryOperationsException): 4
Translog (org.opensearch.index.translog.Translog): 4
Closeable (java.io.Closeable): 3
Arrays (java.util.Arrays): 3
Iterator (java.util.Iterator): 3
LongPoint (org.apache.lucene.document.LongPoint): 3
FileChannel (java.nio.channels.FileChannel): 2
Files (java.nio.file.Files): 2
StandardOpenOption (java.nio.file.StandardOpenOption): 2
Collections (java.util.Collections): 2