
Example 41 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the crate/crate project.

From the class InternalEngine, method segments:

@Override
public List<Segment> segments(boolean verbose) {
    try (ReleasableLock lock = readLock.acquire()) {
        Segment[] segmentsArr = getSegmentInfo(lastCommittedSegmentInfos, verbose);
        // fill in the merges flag
        Set<OnGoingMerge> onGoingMerges = mergeScheduler.onGoingMerges();
        for (OnGoingMerge onGoingMerge : onGoingMerges) {
            for (SegmentCommitInfo segmentInfoPerCommit : onGoingMerge.getMergedSegments()) {
                for (Segment segment : segmentsArr) {
                    if (segment.getName().equals(segmentInfoPerCommit.info.name)) {
                        segment.mergeId = onGoingMerge.getId();
                        break;
                    }
                }
            }
        }
        return Arrays.asList(segmentsArr);
    }
}
Also used: SegmentCommitInfo (org.apache.lucene.index.SegmentCommitInfo), OnGoingMerge (org.elasticsearch.index.merge.OnGoingMerge), ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock)
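All of these examples lean on the same idiom: ReleasableLock wraps a plain java.util.concurrent Lock so that acquire() returns an AutoCloseable, and try-with-resources guarantees the unlock even if the guarded block throws. A minimal sketch of such a wrapper, built only on the JDK (this is an illustration of the idiom, not the Elasticsearch class itself):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of a ReleasableLock-style wrapper: acquire() locks and returns the
// wrapper, so try-with-resources releases the lock on every exit path.
public class ReleasableLockSketch implements AutoCloseable {

    private final Lock lock;

    public ReleasableLockSketch(Lock lock) {
        this.lock = lock;
    }

    public ReleasableLockSketch acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock();
    }

    public static void main(String[] args) {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        ReleasableLockSketch readLock = new ReleasableLockSketch(rwl.readLock());
        try (ReleasableLockSketch ignored = readLock.acquire()) {
            // guarded section: the read lock is held here
        }
        // the read lock has been released once the try block exits
    }
}
```

This is why the snippets can name the resource `lock` or `ignored` and never call unlock explicitly: the release is structural, not manual.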

Example 42 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the crate/crate project.

From the class InternalEngine, method tryRenewSyncCommit:

final boolean tryRenewSyncCommit() {
    boolean renewed = false;
    try (ReleasableLock lock = writeLock.acquire()) {
        ensureOpen();
        ensureCanFlush();
        String syncId = lastCommittedSegmentInfos.getUserData().get(SYNC_COMMIT_ID);
        long localCheckpointOfLastCommit = Long.parseLong(lastCommittedSegmentInfos.userData.get(SequenceNumbers.LOCAL_CHECKPOINT_KEY));
        if (syncId != null && indexWriter.hasUncommittedChanges() && translog.estimateTotalOperationsFromMinSeq(localCheckpointOfLastCommit + 1) == 0) {
            logger.trace("start renewing sync commit [{}]", syncId);
            commitIndexWriter(indexWriter, translog, syncId);
            logger.debug("successfully sync committed. sync id [{}].", syncId);
            lastCommittedSegmentInfos = store.readLastCommittedSegmentsInfo();
            renewed = true;
        }
    } catch (IOException ex) {
        maybeFailEngine("renew sync commit", ex);
        throw new EngineException(shardId, "failed to renew sync commit", ex);
    }
    if (renewed) {
        // refresh outside of the write lock
        // we have to refresh internal reader here to ensure we release unreferenced segments.
        refresh("renew sync commit", SearcherScope.INTERNAL, true);
    }
    return renewed;
}
Also used: IOException (java.io.IOException), ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock)
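Note how tryRenewSyncCommit records its decision in a local flag inside the critical section and only calls refresh after the write lock has been released, keeping the locked region short. A stripped-down sketch of that shape, with illustrative names (a dirty flag and a counter) standing in for the InternalEngine state:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the "decide under the lock, act after releasing it" idiom from
// tryRenewSyncCommit. The dirty flag is a stand-in for the syncId/translog
// checks; refresh() is a stand-in for the engine refresh.
public class RenewOutsideLock {

    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    final AtomicInteger refreshCalls = new AtomicInteger();
    private boolean dirty = true;

    boolean tryRenew() {
        boolean renewed = false;
        rwl.writeLock().lock();
        try {
            if (dirty) {
                dirty = false;
                renewed = true;
            }
        } finally {
            rwl.writeLock().unlock();
        }
        if (renewed) {
            // refresh outside of the write lock, as in the original
            refresh();
        }
        return renewed;
    }

    private void refresh() {
        assert !rwl.isWriteLockedByCurrentThread() : "refresh must not run under the write lock";
        refreshCalls.incrementAndGet();
    }

    public static void main(String[] args) {
        new RenewOutsideLock().tryRenew();
    }
}
```

Running the potentially slow follow-up outside the lock avoids stalling writers and sidesteps re-entrancy surprises if the follow-up needs locks of its own.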

Example 43 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the crate/crate project.

From the class InternalEngine, method delete:

@Override
public DeleteResult delete(Delete delete) throws IOException {
    versionMap.enforceSafeAccess();
    assert Objects.equals(delete.uid().field(), IdFieldMapper.NAME) : delete.uid().field();
    assert assertIncomingSequenceNumber(delete.origin(), delete.seqNo());
    final DeleteResult deleteResult;
    // NOTE: we don't throttle this when merges fall behind because delete-by-id does not create new segments:
    try (ReleasableLock ignored = readLock.acquire();
        Releasable ignored2 = versionMap.acquireLock(delete.uid().bytes())) {
        ensureOpen();
        lastWriteNanos = delete.startTime();
        final DeletionStrategy plan = deletionStrategyForOperation(delete);
        if (plan.earlyResultOnPreflightError.isPresent()) {
            deleteResult = plan.earlyResultOnPreflightError.get();
        } else {
            // generate or register sequence number
            if (delete.origin() == Operation.Origin.PRIMARY) {
                delete = new Delete(delete.id(), delete.uid(), generateSeqNoForOperationOnPrimary(delete), delete.primaryTerm(), delete.version(), delete.versionType(), delete.origin(), delete.startTime(), delete.getIfSeqNo(), delete.getIfPrimaryTerm());
                advanceMaxSeqNoOfUpdatesOrDeletesOnPrimary(delete.seqNo());
            } else {
                markSeqNoAsSeen(delete.seqNo());
            }
            assert delete.seqNo() >= 0 : "ops should have an assigned seq no.; origin: " + delete.origin();
            if (plan.deleteFromLucene || plan.addStaleOpToLucene) {
                deleteResult = deleteInLucene(delete, plan);
            } else {
                deleteResult = new DeleteResult(plan.versionOfDeletion, delete.primaryTerm(), delete.seqNo(), plan.currentlyDeleted == false);
            }
        }
        if (delete.origin().isFromTranslog() == false && deleteResult.getResultType() == Result.Type.SUCCESS) {
            final Translog.Location location = translog.add(new Translog.Delete(delete, deleteResult));
            deleteResult.setTranslogLocation(location);
        }
        localCheckpointTracker.markSeqNoAsProcessed(deleteResult.getSeqNo());
        if (deleteResult.getTranslogLocation() == null) {
            // the op is coming from the translog (and is hence persisted already) or does not have a sequence number (version conflict)
            assert delete.origin().isFromTranslog() || deleteResult.getSeqNo() == SequenceNumbers.UNASSIGNED_SEQ_NO : "version conflict: delete operation not coming from translog should not have seqNo, but found [" + deleteResult.getSeqNo() + "]";
            localCheckpointTracker.markSeqNoAsPersisted(deleteResult.getSeqNo());
        }
        deleteResult.setTook(System.nanoTime() - delete.startTime());
        deleteResult.freeze();
    } catch (RuntimeException | IOException e) {
        try {
            maybeFailEngine("delete", e);
        } catch (Exception inner) {
            e.addSuppressed(inner);
        }
        throw e;
    }
    maybePruneDeletes();
    return deleteResult;
}
Also used: Releasable (org.elasticsearch.common.lease.Releasable), IOException (java.io.IOException), ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock), AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), TranslogCorruptedException (org.elasticsearch.index.translog.TranslogCorruptedException), Translog (org.elasticsearch.index.translog.Translog)
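The delete method acquires two releasables in a single try-with-resources (the engine-wide read lock, then a per-document lock from the version map), and try-with-resources guarantees they close in reverse acquisition order. A small self-contained sketch of that ordering, with a hypothetical Lockable wrapper standing in for both ReleasableLock and the version-map lock:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of delete()'s locking shape: two resources in one try-with-resources
// header, released in reverse order when the block exits. Lockable is an
// illustrative wrapper, not an Elasticsearch class.
public class TwoLockDemo {

    static final List<String> events = new ArrayList<>();

    static class Lockable implements AutoCloseable {
        private final String name;
        private final ReentrantLock lock = new ReentrantLock();

        Lockable(String name) { this.name = name; }

        Lockable acquire() {
            lock.lock();
            events.add("acquire " + name);
            return this;
        }

        @Override
        public void close() {
            events.add("release " + name);
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        Lockable engineLock = new Lockable("engine");
        Lockable uidLock = new Lockable("uid");
        try (Lockable ignored = engineLock.acquire();
             Lockable ignored2 = uidLock.acquire()) {
            events.add("delete");
        }
        // close order is uid first, then engine: the reverse of acquisition
    }
}
```

Reverse-order release matters here: the narrow per-document lock is dropped before the broad engine lock, mirroring proper nesting and avoiding a window where the inner lock outlives the outer one.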

Example 44 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the crate/crate project.

From the class Translog, method trimUnreferencedReaders:

/**
 * Trims unreferenced translog generations by asking {@link TranslogDeletionPolicy} for the minimum
 * required generation
 */
public void trimUnreferencedReaders() throws IOException {
    // first check under read lock if any readers can be trimmed
    try (ReleasableLock ignored = readLock.acquire()) {
        if (closed.get()) {
            // we're shut down, potentially due to some tragic event; don't delete anything
            return;
        }
        if (getMinReferencedGen() == getMinFileGeneration()) {
            return;
        }
    }
    // move most of the data to disk to reduce the time the write lock is held
    sync();
    try (ReleasableLock ignored = writeLock.acquire()) {
        if (closed.get()) {
            // we're shut down, potentially due to some tragic event; don't delete anything
            return;
        }
        final long minReferencedGen = getMinReferencedGen();
        for (Iterator<TranslogReader> iterator = readers.iterator(); iterator.hasNext(); ) {
            TranslogReader reader = iterator.next();
            if (reader.getGeneration() >= minReferencedGen) {
                break;
            }
            iterator.remove();
            IOUtils.closeWhileHandlingException(reader);
            final Path translogPath = reader.path();
            logger.trace("delete translog file [{}], not referenced and not current anymore", translogPath);
            // The checkpoint is used when opening the translog to know which files should be recovered from.
            // We now update the checkpoint to ignore the file we are going to remove.
            // Note that there is a provision in recoverFromFiles to allow for the case where we synced the checkpoint
            // but crashed before we could delete the file.
            // sync at once to make sure that there's at most one unreferenced generation.
            current.sync();
            deleteReaderFiles(reader);
        }
        assert readers.isEmpty() == false || current.generation == minReferencedGen : "all readers were cleaned but the minReferenceGen [" + minReferencedGen + "] is not the current writer's gen [" + current.generation + "]";
    } catch (final Exception ex) {
        closeOnTragicEvent(ex);
        throw ex;
    }
}
Also used: Path (java.nio.file.Path), ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock), AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), EOFException (java.io.EOFException), IOException (java.io.IOException)
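trimUnreferencedReaders is a two-phase pattern: a cheap bail-out check under the read lock, then the actual trimming under the write lock, re-checking the state because it may have changed between the two acquisitions. A simplified sketch under that assumption, where plain generation numbers stand in for the translog's reader objects:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the read-lock fast path / write-lock trim idiom from
// trimUnreferencedReaders. The minReferencedGen cutoff and generation
// numbers are simplified stand-ins for the translog's logic.
public class TwoPhaseTrim {

    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    final Deque<Long> generations = new ArrayDeque<>();

    TwoPhaseTrim(long... gens) {
        for (long g : gens) generations.addLast(g);
    }

    void trim(long minReferencedGen) {
        // phase 1: cheap check under the read lock, so the common
        // nothing-to-do case never contends on the write lock
        rwl.readLock().lock();
        try {
            if (generations.isEmpty() || generations.peekFirst() >= minReferencedGen) {
                return;
            }
        } finally {
            rwl.readLock().unlock();
        }
        // phase 2: trim under the write lock; the loop re-checks each
        // element because state may have changed between the two locks
        rwl.writeLock().lock();
        try {
            for (Iterator<Long> it = generations.iterator(); it.hasNext(); ) {
                if (it.next() >= minReferencedGen) {
                    break;
                }
                it.remove();
            }
        } finally {
            rwl.writeLock().unlock();
        }
    }
}
```

The original follows the same discipline, and additionally does the expensive sync() between the two phases so the write lock is held as briefly as possible.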

Example 45 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the crate/crate project.

From the class Translog, method trimOperations:

/**
 * Trims translog for terms of files below <code>belowTerm</code> and seq# above <code>aboveSeqNo</code>.
 * Effectively it moves max visible seq# {@link Checkpoint#trimmedAboveSeqNo} therefore {@link TranslogSnapshot} skips those operations.
 */
public void trimOperations(long belowTerm, long aboveSeqNo) throws IOException {
    assert aboveSeqNo >= SequenceNumbers.NO_OPS_PERFORMED : "aboveSeqNo has to be a valid sequence number";
    try (ReleasableLock lock = writeLock.acquire()) {
        ensureOpen();
        if (current.getPrimaryTerm() < belowTerm) {
            throw new IllegalArgumentException("Trimming the translog can only be done for terms lower than the current one. " + "Trim requested for term [ " + belowTerm + " ] , current is [ " + current.getPrimaryTerm() + " ]");
        }
        // we assume that the current translog generation doesn't have trimmable ops. Verify that.
        assert current.assertNoSeqAbove(belowTerm, aboveSeqNo);
        // update all existing readers (where necessary), as checkpoint and reader are immutable
        final List<TranslogReader> newReaders = new ArrayList<>(readers.size());
        try {
            for (TranslogReader reader : readers) {
                final TranslogReader newReader = reader.getPrimaryTerm() < belowTerm ? reader.closeIntoTrimmedReader(aboveSeqNo, getChannelFactory()) : reader;
                newReaders.add(newReader);
            }
        } catch (IOException e) {
            IOUtils.closeWhileHandlingException(newReaders);
            tragedy.setTragicException(e);
            closeOnTragicEvent(e);
            throw e;
        }
        this.readers.clear();
        this.readers.addAll(newReaders);
    }
}
Also used: ArrayList (java.util.ArrayList), IOException (java.io.IOException), ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock)

Aggregations

ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock)48 IOException (java.io.IOException)30 AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException)21 LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException)12 TranslogCorruptedException (org.elasticsearch.index.translog.TranslogCorruptedException)12 ArrayList (java.util.ArrayList)7 EOFException (java.io.EOFException)6 Path (java.nio.file.Path)6 Translog (org.elasticsearch.index.translog.Translog)6 IndexFormatTooOldException (org.apache.lucene.index.IndexFormatTooOldException)5 Releasable (org.elasticsearch.common.lease.Releasable)5 Closeable (java.io.Closeable)4 ReadWriteLock (java.util.concurrent.locks.ReadWriteLock)3 ReentrantReadWriteLock (java.util.concurrent.locks.ReentrantReadWriteLock)3 LongSupplier (java.util.function.LongSupplier)3 ParameterizedMessage (org.apache.logging.log4j.message.ParameterizedMessage)3 ReleasableBytesReference (org.elasticsearch.common.bytes.ReleasableBytesReference)3 ReleasableBytesStreamOutput (org.elasticsearch.common.io.stream.ReleasableBytesStreamOutput)3 FileChannel (java.nio.channels.FileChannel)2 Files (java.nio.file.Files)2