Example 1 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the elasticsearch project by elastic.

From the class Engine, method close:

@Override
public void close() throws IOException {
    if (isClosed.get() == false) {
        // don't acquire the write lock if we are already closed
        logger.debug("close now acquiring writeLock");
        try (ReleasableLock lock = writeLock.acquire()) {
            logger.debug("close acquired writeLock");
            closeNoLock("api");
        }
    }
}
Also used: ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock)
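The try-with-resources pattern above works because ReleasableLock wraps a plain java.util.concurrent.locks.Lock and makes acquire() return an AutoCloseable handle whose close() is the unlock. A minimal, self-contained sketch of the idea (the class SimpleReleasableLock here is hypothetical; the real Elasticsearch class additionally keeps per-thread assertion bookkeeping that this sketch omits):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified sketch of the ReleasableLock idea: wrap a Lock so that
// acquire() returns an AutoCloseable, letting try-with-resources
// guarantee the unlock on every exit path, including exceptions.
final class SimpleReleasableLock implements AutoCloseable {
    private final Lock lock;

    SimpleReleasableLock(Lock lock) {
        this.lock = lock;
    }

    // Block until the underlying lock is held, then return this object
    // so the caller's try-with-resources releases it.
    SimpleReleasableLock acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock();
    }
}

public class ReleasableLockDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        SimpleReleasableLock writeLock = new SimpleReleasableLock(rwl.writeLock());
        try (SimpleReleasableLock ignored = writeLock.acquire()) {
            System.out.println("held=" + rwl.isWriteLocked()); // held=true
        }
        System.out.println("held=" + rwl.isWriteLocked()); // held=false
    }
}
```

Because close() is the unlock, the lock is released even if closeNoLock throws, which is exactly what Engine.close relies on.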

Example 2 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the elasticsearch project by elastic.

From the class InternalEngine, method tryRenewSyncCommit:

final boolean tryRenewSyncCommit() {
    boolean renewed = false;
    try (ReleasableLock lock = writeLock.acquire()) {
        ensureOpen();
        ensureCanFlush();
        String syncId = lastCommittedSegmentInfos.getUserData().get(SYNC_COMMIT_ID);
        if (syncId != null && translog.totalOperations() == 0 && indexWriter.hasUncommittedChanges()) {
            logger.trace("start renewing sync commit [{}]", syncId);
            commitIndexWriter(indexWriter, translog, syncId);
            logger.debug("successfully sync committed. sync id [{}].", syncId);
            lastCommittedSegmentInfos = store.readLastCommittedSegmentsInfo();
            renewed = true;
        }
    } catch (IOException ex) {
        maybeFailEngine("renew sync commit", ex);
        throw new EngineException(shardId, "failed to renew sync commit", ex);
    }
    if (renewed) {
        // refresh outside of the write lock
        refresh("renew sync commit");
    }
    return renewed;
}
Also used: IOException (java.io.IOException), ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock)

Example 3 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the elasticsearch project by elastic.

From the class InternalEngine, method flush:

@Override
public CommitId flush(boolean force, boolean waitIfOngoing) throws EngineException {
    ensureOpen();
    final byte[] newCommitId;
    /*
     * Unfortunately the lock order is important here. We have to acquire the readlock first otherwise
     * if we are flushing at the end of the recovery while holding the write lock we can deadlock if:
     *  Thread 1: flushes via API and gets the flush lock but blocks on the readlock since Thread 2 has the writeLock
     *  Thread 2: flushes at the end of the recovery holding the writeLock and blocks on the flushLock owned by Thread 1
     */
    try (ReleasableLock lock = readLock.acquire()) {
        ensureOpen();
        if (flushLock.tryLock() == false) {
            // if we can't get the lock right away we block if needed otherwise barf
            if (waitIfOngoing) {
                logger.trace("waiting for in-flight flush to finish");
                flushLock.lock();
                logger.trace("acquired flush lock after blocking");
            } else {
                return new CommitId(lastCommittedSegmentInfos.getId());
            }
        } else {
            logger.trace("acquired flush lock immediately");
        }
        try {
            if (indexWriter.hasUncommittedChanges() || force) {
                ensureCanFlush();
                try {
                    translog.prepareCommit();
                    logger.trace("starting commit for flush; commitTranslog=true");
                    commitIndexWriter(indexWriter, translog, null);
                    logger.trace("finished commit for flush");
                    // we need to refresh in order to clear older version values
                    refresh("version_table_flush");
                    // after refresh documents can be retrieved from the index so we can now commit the translog
                    translog.commit();
                } catch (Exception e) {
                    throw new FlushFailedEngineException(shardId, e);
                }
                /*
                 * we have to inc-ref the store here since if the engine is closed by a tragic event
                 * we don't acquire the write lock and wait until we have exclusive access. This might also
                 * dec the store reference which can essentially close the store and unless we can inc the reference
                 * we can't use it.
                 */
                store.incRef();
                try {
                    // reread the last committed segment infos
                    lastCommittedSegmentInfos = store.readLastCommittedSegmentsInfo();
                } catch (Exception e) {
                    if (isClosed.get() == false) {
                        try {
                            logger.warn("failed to read latest segment infos on flush", e);
                        } catch (Exception inner) {
                            e.addSuppressed(inner);
                        }
                        if (Lucene.isCorruptionException(e)) {
                            throw new FlushFailedEngineException(shardId, e);
                        }
                    }
                } finally {
                    store.decRef();
                }
            }
            newCommitId = lastCommittedSegmentInfos.getId();
        } catch (FlushFailedEngineException ex) {
            maybeFailEngine("flush", ex);
            throw ex;
        } finally {
            flushLock.unlock();
        }
    }
    // We don't have to do this here; we do it defensively to make sure that even if wall clock time is misbehaving
    // (e.g., moves backwards) we will at least still sometimes prune deleted tombstones:
    if (engineConfig.isEnableGcDeletes()) {
        pruneDeletedTombstones();
    }
    return new CommitId(newCommitId);
}
Also used: ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock), AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), TranslogCorruptedException (org.elasticsearch.index.translog.TranslogCorruptedException), IOException (java.io.IOException), IndexFormatTooOldException (org.apache.lucene.index.IndexFormatTooOldException)
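The tryLock-then-maybe-block dance at the top of flush can be isolated into a small runnable sketch: try to take the flush lock without blocking so a concurrent flush is detected; block only if the caller asked to wait, otherwise bail out early. Names here are illustrative, and the commit work is reduced to a return value:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the flushLock pattern from the example above (hypothetical
// names): tryLock() first so a flush already in progress is detected
// without blocking; only block when waitIfOngoing is true.
public class FlushLockDemo {
    private static final ReentrantLock flushLock = new ReentrantLock();

    static String flush(boolean waitIfOngoing) {
        if (flushLock.tryLock() == false) {
            if (waitIfOngoing) {
                flushLock.lock();   // block until the in-flight flush finishes
            } else {
                return "skipped";   // a flush is already running; don't wait
            }
        }
        try {
            return "flushed";       // commit work would happen here
        } finally {
            flushLock.unlock();
        }
    }

    public static void main(String[] args) {
        // Uncontended case: tryLock succeeds immediately.
        System.out.println(flush(false)); // flushed
    }
}
```

Note that the early "skipped" return happens before the try/finally, so no unlock is attempted for a lock that was never acquired.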

Example 4 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the elasticsearch project by elastic.

From the class InternalEngine, method refresh:

@Override
public void refresh(String source) throws EngineException {
    // we obtain a read lock here, since we don't want a flush to happen while we are refreshing
    // since it flushes the index as well (though, in terms of concurrency, we are allowed to do it)
    try (ReleasableLock lock = readLock.acquire()) {
        ensureOpen();
        searcherManager.maybeRefreshBlocking();
    } catch (AlreadyClosedException e) {
        failOnTragicEvent(e);
        throw e;
    } catch (Exception e) {
        try {
            failEngine("refresh failed", e);
        } catch (Exception inner) {
            e.addSuppressed(inner);
        }
        throw new RefreshFailedEngineException(shardId, e);
    }
    // TODO: maybe we should just put a scheduled job in threadPool?
    // We check for pruning in each delete request, but we also prune here e.g. in case a delete burst comes in and then no more deletes
    // for a long time:
    maybePruneDeletedTombstones();
    versionMapRefreshPending.set(false);
    mergeScheduler.refreshConfig();
}
Also used: AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), TranslogCorruptedException (org.elasticsearch.index.translog.TranslogCorruptedException), IOException (java.io.IOException), IndexFormatTooOldException (org.apache.lucene.index.IndexFormatTooOldException)
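refresh also demonstrates a failure-handling idiom worth calling out: when the reaction to a failure (here failEngine) itself throws, the secondary exception is attached to the primary one via addSuppressed so neither stack trace is lost. A minimal sketch of that idiom with illustrative names:

```java
// Sketch of the addSuppressed idiom from the catch block above:
// the original failure stays the primary exception, and any exception
// thrown while handling it is recorded as suppressed rather than
// replacing it.
public class SuppressedDemo {
    public static void main(String[] args) {
        Exception primary = new Exception("refresh failed");
        try {
            // Stand-in for failEngine(...) itself failing.
            throw new IllegalStateException("failEngine also failed");
        } catch (Exception inner) {
            primary.addSuppressed(inner);
        }
        // Both failures are now carried by the one exception that
        // ultimately propagates to the caller.
        System.out.println(primary.getSuppressed().length);
    }
}
```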

Example 5 with ReleasableLock

Use of org.elasticsearch.common.util.concurrent.ReleasableLock in the elasticsearch project by elastic.

From the class InternalEngine, method writeIndexingBuffer:

@Override
public void writeIndexingBuffer() throws EngineException {
    // we obtain a read lock here, since we don't want a flush to happen while we are refreshing
    // since it flushes the index as well (though, in terms of concurrency, we are allowed to do it)
    try (ReleasableLock lock = readLock.acquire()) {
        ensureOpen();
        // TODO: it's not great that we secretly tie searcher visibility to "freeing up heap" here... really we should keep two
        // searcher managers, one for searching which is only refreshed by the schedule the user requested (refresh_interval, or invoking
        // refresh API), and another for version map interactions.  See #15768.
        final long versionMapBytes = versionMap.ramBytesUsedForRefresh();
        final long indexingBufferBytes = indexWriter.ramBytesUsed();
        final boolean useRefresh = versionMapRefreshPending.get() || (indexingBufferBytes / 4 < versionMapBytes);
        if (useRefresh) {
            // The version map is using > 25% of the indexing buffer, so we do a refresh so the version map also clears
            logger.debug("use refresh to write indexing buffer (heap size=[{}]), to also clear version map (heap size=[{}])", new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes));
            refresh("write indexing buffer");
        } else {
            // Most of our heap is used by the indexing buffer, so we do a cheaper (just writes segments, doesn't open a new searcher) IW.flush:
            logger.debug("use IndexWriter.flush to write indexing buffer (heap size=[{}]) since version map is small (heap size=[{}])", new ByteSizeValue(indexingBufferBytes), new ByteSizeValue(versionMapBytes));
            indexWriter.flush();
        }
    } catch (AlreadyClosedException e) {
        failOnTragicEvent(e);
        throw e;
    } catch (Exception e) {
        try {
            failEngine("writeIndexingBuffer failed", e);
        } catch (Exception inner) {
            e.addSuppressed(inner);
        }
        throw new RefreshFailedEngineException(shardId, e);
    }
}
Also used: ByteSizeValue (org.elasticsearch.common.unit.ByteSizeValue), AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException), ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock), LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException), TranslogCorruptedException (org.elasticsearch.index.translog.TranslogCorruptedException), IOException (java.io.IOException), IndexFormatTooOldException (org.apache.lucene.index.IndexFormatTooOldException)
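The choice in Example 5 between a full refresh and a plain IndexWriter.flush hinges on one comparison: refresh is preferred when the version map occupies more than a quarter of the indexing buffer's heap (indexingBufferBytes / 4 < versionMapBytes), or when a version-map refresh is already pending. The heuristic in isolation, with hypothetical byte counts for illustration:

```java
// The "version map > 25% of indexing buffer" heuristic from Example 5,
// extracted as a pure function. The numbers in main are made up.
public class RefreshHeuristicDemo {
    static boolean useRefresh(long indexingBufferBytes, long versionMapBytes, boolean refreshPending) {
        // Integer division mirrors the original expression: refresh when
        // the version map exceeds a quarter of the indexing buffer.
        return refreshPending || (indexingBufferBytes / 4 < versionMapBytes);
    }

    public static void main(String[] args) {
        System.out.println(useRefresh(100, 30, false)); // 100/4 = 25 < 30 -> true
        System.out.println(useRefresh(100, 20, false)); // 25 < 20 is false  -> false
    }
}
```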

Aggregations

ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock): 48 usages
IOException (java.io.IOException): 30 usages
AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException): 21 usages
LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException): 12 usages
TranslogCorruptedException (org.elasticsearch.index.translog.TranslogCorruptedException): 12 usages
ArrayList (java.util.ArrayList): 7 usages
EOFException (java.io.EOFException): 6 usages
Path (java.nio.file.Path): 6 usages
Translog (org.elasticsearch.index.translog.Translog): 6 usages
IndexFormatTooOldException (org.apache.lucene.index.IndexFormatTooOldException): 5 usages
Releasable (org.elasticsearch.common.lease.Releasable): 5 usages
Closeable (java.io.Closeable): 4 usages
ReadWriteLock (java.util.concurrent.locks.ReadWriteLock): 3 usages
ReentrantReadWriteLock (java.util.concurrent.locks.ReentrantReadWriteLock): 3 usages
LongSupplier (java.util.function.LongSupplier): 3 usages
ParameterizedMessage (org.apache.logging.log4j.message.ParameterizedMessage): 3 usages
ReleasableBytesReference (org.elasticsearch.common.bytes.ReleasableBytesReference): 3 usages
ReleasableBytesStreamOutput (org.elasticsearch.common.io.stream.ReleasableBytesStreamOutput): 3 usages
FileChannel (java.nio.channels.FileChannel): 2 usages
Files (java.nio.file.Files): 2 usages