Example 71 with AlreadyClosedException

Use of org.apache.lucene.store.AlreadyClosedException in project crate by crate.

The class Translog, method add.

/**
 * Adds an operation to the transaction log.
 *
 * @param operation the operation to add
 * @return the location of the operation in the translog
 * @throws IOException if adding the operation to the translog resulted in an I/O exception
 */
public Location add(final Operation operation) throws IOException {
    final ReleasableBytesStreamOutput out = new ReleasableBytesStreamOutput(bigArrays);
    boolean successfullySerialized = false;
    try {
        final long start = out.position();
        out.skip(Integer.BYTES);
        writeOperationNoSize(new BufferedChecksumStreamOutput(out), operation);
        final long end = out.position();
        final int operationSize = (int) (end - Integer.BYTES - start);
        out.seek(start);
        out.writeInt(operationSize);
        out.seek(end);
        successfullySerialized = true;
        try (ReleasableBytesReference bytes = new ReleasableBytesReference(out.bytes(), out);
            ReleasableLock ignored = readLock.acquire()) {
            ensureOpen();
            if (operation.primaryTerm() > current.getPrimaryTerm()) {
                assert false : "Operation term is newer than the current term; "
                    + "current term[" + current.getPrimaryTerm() + "], operation term[" + operation + "]";
                throw new IllegalArgumentException("Operation term is newer than the current term; "
                    + "current term[" + current.getPrimaryTerm() + "], operation term[" + operation + "]");
            }
            return current.add(bytes, operation.seqNo());
        }
    } catch (final AlreadyClosedException | IOException ex) {
        closeOnTragicEvent(ex);
        throw ex;
    } catch (final Exception ex) {
        closeOnTragicEvent(ex);
        throw new TranslogException(shardId, "Failed to write operation [" + operation + "]", ex);
    } finally {
        if (successfullySerialized == false) {
            Releasables.close(out);
        }
    }
}
Also used : ReleasableBytesReference(org.elasticsearch.common.bytes.ReleasableBytesReference) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException) IOException(java.io.IOException) EOFException(java.io.EOFException) ReleasableLock(org.elasticsearch.common.util.concurrent.ReleasableLock) ReleasableBytesStreamOutput(org.elasticsearch.common.io.stream.ReleasableBytesStreamOutput)
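The detail worth noting in this example is the size-prefix trick: Translog.add reserves Integer.BYTES at the start of the buffer, serializes the operation, then seeks back and backfills the length once the final size is known. Below is a minimal, self-contained sketch of the same trick using java.nio.ByteBuffer as a stand-in for the releasable stream; all names are illustrative, not part of the Crate codebase.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SizePrefixSketch {

    // Reserve space for the length, write the payload, then backfill the length.
    static ByteBuffer encodeWithSizePrefix(String payload) {
        byte[] body = payload.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(Integer.BYTES + body.length);
        final int start = buf.position();
        buf.position(start + Integer.BYTES);                    // like out.skip(Integer.BYTES)
        buf.put(body);                                          // like writeOperationNoSize(...)
        final int end = buf.position();
        buf.putInt(start, end - start - Integer.BYTES);         // like seek(start) + writeInt(size)
        buf.position(end);
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer encoded = encodeWithSizePrefix("operation-bytes");
        System.out.println("size prefix = " + encoded.getInt()); // prints 15
    }
}

Writing the payload before its length avoids a second serialization pass just to compute the size, at the cost of one backward seek.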

Example 72 with AlreadyClosedException

Use of org.apache.lucene.store.AlreadyClosedException in project crate by crate.

The class TranslogReader, method closeIntoTrimmedReader.

/**
 * Closes the current reader and creates a new one with a new checkpoint and the same file channel.
 */
TranslogReader closeIntoTrimmedReader(long aboveSeqNo, ChannelFactory channelFactory) throws IOException {
    if (closed.compareAndSet(false, true)) {
        Closeable toCloseOnFailure = channel;
        final TranslogReader newReader;
        try {
            if (aboveSeqNo < checkpoint.trimmedAboveSeqNo || aboveSeqNo < checkpoint.maxSeqNo && checkpoint.trimmedAboveSeqNo == SequenceNumbers.UNASSIGNED_SEQ_NO) {
                final Path checkpointFile = path.getParent().resolve(getCommitCheckpointFileName(checkpoint.generation));
                final Checkpoint newCheckpoint = new Checkpoint(checkpoint.offset, checkpoint.numOps, checkpoint.generation, checkpoint.minSeqNo, checkpoint.maxSeqNo, checkpoint.globalCheckpoint, checkpoint.minTranslogGeneration, aboveSeqNo);
                Checkpoint.write(channelFactory, checkpointFile, newCheckpoint, StandardOpenOption.WRITE);
                IOUtils.fsync(checkpointFile, false);
                IOUtils.fsync(checkpointFile.getParent(), true);
                newReader = new TranslogReader(newCheckpoint, channel, path, header);
            } else {
                newReader = new TranslogReader(checkpoint, channel, path, header);
            }
            toCloseOnFailure = null;
            return newReader;
        } finally {
            IOUtils.close(toCloseOnFailure);
        }
    } else {
        throw new AlreadyClosedException(toString() + " is already closed");
    }
}
Also used : Path(java.nio.file.Path) Closeable(java.io.Closeable) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException)
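A subtlety here is resource ownership: the old reader's file channel is handed to the new TranslogReader, so the channel must be closed only if constructing the replacement fails. A minimal sketch of that close-on-failure guard, with hypothetical names (the real code uses IOUtils.close, which tolerates null):

import java.io.Closeable;
import java.io.IOException;

final class CloseOnFailureSketch {

    static final class Wrapped implements Closeable {
        private final Closeable channel;

        Wrapped(Closeable channel) {
            this.channel = channel; // takes ownership of the channel
        }

        @Override
        public void close() throws IOException {
            channel.close();
        }
    }

    // Mirrors the guard above: close the channel only if wrapping it fails.
    static Wrapped wrapChannel(Closeable channel) throws IOException {
        Closeable toCloseOnFailure = channel;
        try {
            Wrapped wrapped = new Wrapped(channel); // may throw in real code
            toCloseOnFailure = null;                // success: ownership transferred
            return wrapped;
        } finally {
            if (toCloseOnFailure != null) {
                toCloseOnFailure.close();           // runs on the failure path only
            }
        }
    }
}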

Example 73 with AlreadyClosedException

Use of org.apache.lucene.store.AlreadyClosedException in project crate by crate.

The class ShardSegments, method buildShardSegment.

private Stream<ShardSegment> buildShardSegment(IndexShard indexShard) {
    try {
        List<Segment> segments = indexShard.segments(false);
        ShardId shardId = indexShard.shardId();
        return segments.stream().map(sgmt -> new ShardSegment(shardId.getId(), shardId.getIndexName(), sgmt, indexShard.routingEntry().primary()));
    } catch (AlreadyClosedException ignored) {
        return Stream.empty();
    }
}
Also used : ShardId(org.elasticsearch.index.shard.ShardId) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException) Segment(org.elasticsearch.index.engine.Segment)
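This example shows the common best-effort pattern: a shard that is concurrently closed simply contributes no rows instead of failing the whole listing. A stripped-down sketch of the same pattern, with a hypothetical SegmentSource interface standing in for IndexShard:

import java.util.List;
import java.util.stream.Stream;
import org.apache.lucene.store.AlreadyClosedException;

final class BestEffortSegments {

    interface SegmentSource {
        List<String> segments(); // may throw AlreadyClosedException (unchecked)
    }

    // Treat a concurrently closed shard as "no data" rather than an error,
    // so one closing shard cannot fail the table-wide result.
    static Stream<String> segmentsOrEmpty(SegmentSource source) {
        try {
            return source.segments().stream();
        } catch (AlreadyClosedException ignored) {
            return Stream.empty();
        }
    }
}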

Example 74 with AlreadyClosedException

Use of org.apache.lucene.store.AlreadyClosedException in project crate by crate.

The class ReservoirSampler, method getSamples.

private Samples getSamples(List<Reference> columns, int maxSamples, DocTableInfo docTable, Random random, Metadata metadata, CoordinatorTxnCtx coordinatorTxnCtx, List<Streamer> streamers, List<Engine.Searcher> searchersToRelease, RamAccounting ramAccounting) {
    ramAccounting.addBytes(DataTypes.LONG.fixedSize() * maxSamples);
    Reservoir<Long> fetchIdSamples = new Reservoir<>(maxSamples, random);
    ArrayList<DocIdToRow> docIdToRowsFunctionPerReader = new ArrayList<>();
    long totalNumDocs = 0;
    long totalSizeInBytes = 0;
    for (String index : docTable.concreteOpenIndices()) {
        var indexMetadata = metadata.index(index);
        if (indexMetadata == null) {
            continue;
        }
        var indexService = indicesService.indexService(indexMetadata.getIndex());
        if (indexService == null) {
            continue;
        }
        var mapperService = indexService.mapperService();
        FieldTypeLookup fieldTypeLookup = mapperService::fullName;
        var ctx = new DocInputFactory(nodeCtx, new LuceneReferenceResolver(indexService.index().getName(), fieldTypeLookup, docTable.partitionedByColumns())).getCtx(coordinatorTxnCtx);
        ctx.add(columns);
        List<Input<?>> inputs = ctx.topLevelInputs();
        List<? extends LuceneCollectorExpression<?>> expressions = ctx.expressions();
        CollectorContext collectorContext = new CollectorContext();
        for (LuceneCollectorExpression<?> expression : expressions) {
            expression.startCollect(collectorContext);
        }
        for (IndexShard indexShard : indexService) {
            if (!indexShard.routingEntry().primary()) {
                continue;
            }
            try {
                Engine.Searcher searcher = indexShard.acquireSearcher("update-table-statistics");
                searchersToRelease.add(searcher);
                totalNumDocs += searcher.getIndexReader().numDocs();
                totalSizeInBytes += indexShard.storeStats().getSizeInBytes();
                DocIdToRow docIdToRow = new DocIdToRow(searcher, inputs, expressions);
                docIdToRowsFunctionPerReader.add(docIdToRow);
                try {
                    // We do the sampling in two phases: first we collect the docIds,
                    // then we retrieve the column values for the sampled docIds.
                    // Two phases are used because reservoir sampling may overwrite
                    // previously seen items, and eagerly fetching values would cause
                    // unnecessary disk lookups.
                    var collector = new ReservoirCollector(fetchIdSamples, searchersToRelease.size() - 1);
                    searcher.search(new MatchAllDocsQuery(), collector);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            } catch (IllegalIndexShardStateException | AlreadyClosedException ignored) {
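                // Shard was concurrently closed or not yet ready; skip it and
                // sample the remaining shards instead of failing the operation.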
            }
        }
    }
    var rowAccounting = new RowCellsAccountingWithEstimators(Symbols.typeView(columns), ramAccounting, 0);
    ArrayList<Row> records = new ArrayList<>();
    for (long fetchId : fetchIdSamples.samples()) {
        int readerId = FetchId.decodeReaderId(fetchId);
        DocIdToRow docIdToRow = docIdToRowsFunctionPerReader.get(readerId);
        Object[] row = docIdToRow.apply(FetchId.decodeDocId(fetchId));
        try {
            rowAccounting.accountForAndMaybeBreak(row);
        } catch (CircuitBreakingException e) {
            LOGGER.info("Stopped gathering samples for `ANALYZE` operation because circuit breaker triggered. " + "Generating statistics with {} instead of {} records", records.size(), maxSamples);
            break;
        }
        records.add(new RowN(row));
    }
    return new Samples(records, streamers, totalNumDocs, totalSizeInBytes);
}
Also used : DocInputFactory(io.crate.execution.engine.collect.DocInputFactory) ArrayList(java.util.ArrayList) UncheckedIOException(java.io.UncheckedIOException) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException) RowCellsAccountingWithEstimators(io.crate.breaker.RowCellsAccountingWithEstimators) Input(io.crate.data.Input) LuceneReferenceResolver(io.crate.expression.reference.doc.lucene.LuceneReferenceResolver) CollectorContext(io.crate.expression.reference.doc.lucene.CollectorContext) Engine(org.elasticsearch.index.engine.Engine) IndexShard(org.elasticsearch.index.shard.IndexShard) IOException(java.io.IOException) MatchAllDocsQuery(org.apache.lucene.search.MatchAllDocsQuery) IllegalIndexShardStateException(org.elasticsearch.index.shard.IllegalIndexShardStateException) RowN(io.crate.data.RowN) FieldTypeLookup(io.crate.lucene.FieldTypeLookup) CircuitBreakingException(org.elasticsearch.common.breaker.CircuitBreakingException) Row(io.crate.data.Row)
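The two-phase design above hinges on how reservoir sampling behaves: later items may overwrite earlier picks, so fetching column values eagerly would waste disk reads on rows that never make it into the sample. Crate's Reservoir class is not shown here, but a minimal sketch of the classic Algorithm R it resembles (an assumption; the real class may differ in detail):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

final class ReservoirSketch {

    // Algorithm R: keeps a uniform random sample of up to maxSamples items
    // from a stream of unknown length, replacing earlier picks at random.
    static List<Long> sample(long[] stream, int maxSamples, Random random) {
        List<Long> reservoir = new ArrayList<>(maxSamples);
        long seen = 0;
        for (long item : stream) {
            seen++;
            if (reservoir.size() < maxSamples) {
                reservoir.add(item);                             // fill phase
            } else {
                long slot = (long) (random.nextDouble() * seen); // uniform in [0, seen)
                if (slot < maxSamples) {
                    reservoir.set((int) slot, item);             // overwrite a previous pick
                }
            }
        }
        return reservoir;
    }
}

Because any reservoir slot can be overwritten until the stream ends, collecting only fetchIds first and resolving them to rows afterwards, as getSamples does, touches the disk once per surviving sample rather than once per candidate.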

Example 75 with AlreadyClosedException

Use of org.apache.lucene.store.AlreadyClosedException in project crate by crate.

The class IndexShard, method resetEngineToGlobalCheckpoint.

/**
 * Rolls back the current engine to the safe commit, then replays the local translog up to the global checkpoint.
 */
void resetEngineToGlobalCheckpoint() throws IOException {
    assert Thread.holdsLock(mutex) == false : "resetting engine under mutex";
    assert getActiveOperationsCount() == OPERATIONS_BLOCKED : "resetting engine without blocking operations; active operations are [" + getActiveOperations() + ']';
    // persist the global checkpoint to disk
    sync();
    final SeqNoStats seqNoStats = seqNoStats();
    final TranslogStats translogStats = translogStats();
    // flush to make sure the latest commit, which will be opened by the read-only engine, includes all operations.
    flush(new FlushRequest().waitIfOngoing(true));
    SetOnce<Engine> newEngineReference = new SetOnce<>();
    final long globalCheckpoint = getLastKnownGlobalCheckpoint();
    assert globalCheckpoint == getLastSyncedGlobalCheckpoint();
    synchronized (engineMutex) {
        verifyNotClosed();
        // We must create both the new read-only engine and the new read-write engine
        // under engineMutex to ensure that snapshotStoreMetadata, acquireXXXCommit and
        // close work correctly.
        final Engine readOnlyEngine = new ReadOnlyEngine(newEngineConfig(replicationTracker), seqNoStats, translogStats, false, Function.identity()) {

            @Override
            public IndexCommitRef acquireLastIndexCommit(boolean flushFirst) {
                synchronized (engineMutex) {
                    if (newEngineReference.get() == null) {
                        throw new AlreadyClosedException("engine was closed");
                    }
                    // ignore flushFirst since we flushed above and we do not want to interfere with ongoing translog replay
                    return newEngineReference.get().acquireLastIndexCommit(false);
                }
            }

            @Override
            public IndexCommitRef acquireSafeIndexCommit() {
                synchronized (engineMutex) {
                    if (newEngineReference.get() == null) {
                        throw new AlreadyClosedException("engine was closed");
                    }
                    return newEngineReference.get().acquireSafeIndexCommit();
                }
            }

            @Override
            public void close() throws IOException {
                assert Thread.holdsLock(engineMutex);
                Engine newEngine = newEngineReference.get();
                if (newEngine == currentEngineReference.get()) {
                    // we successfully installed the new engine so do not close it.
                    newEngine = null;
                }
                IOUtils.close(super::close, newEngine);
            }
        };
        IOUtils.close(currentEngineReference.getAndSet(readOnlyEngine));
        newEngineReference.set(engineFactory.newReadWriteEngine(newEngineConfig(replicationTracker)));
        onNewEngine(newEngineReference.get());
    }
    final Engine.TranslogRecoveryRunner translogRunner = (engine, snapshot) -> runTranslogRecovery(engine, snapshot, Engine.Operation.Origin.LOCAL_RESET, () -> {
    // TODO: add dedicated recovery stats for the reset translog
    });
    newEngineReference.get().recoverFromTranslog(translogRunner, globalCheckpoint);
    newEngineReference.get().refresh("reset_engine");
    synchronized (engineMutex) {
        verifyNotClosed();
        IOUtils.close(currentEngineReference.getAndSet(newEngineReference.get()));
        // We set active because we are now writing operations to the engine; this way,
        // if we go idle after some time and become inactive, we still give sync'd flush a chance to run.
        active.set(true);
    }
    // time elapses after the engine is created above (pulling the config settings) until we set the engine reference, during
    // which settings changes could possibly have happened, so here we forcefully push any config changes to the new engine.
    onSettingsChanged();
}
Also used : SeqNoStats(org.elasticsearch.index.seqno.SeqNoStats) TranslogStats(org.elasticsearch.index.translog.TranslogStats) FlushRequest(org.elasticsearch.action.admin.indices.flush.FlushRequest) SetOnce(org.apache.lucene.util.SetOnce) ReadOnlyEngine(org.elasticsearch.index.engine.ReadOnlyEngine) Engine(org.elasticsearch.index.engine.Engine) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException) IOUtils(io.crate.common.io.IOUtils) IOException(java.io.IOException) Function(java.util.function.Function) Closeable(java.io.Closeable)
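The control flow in this example is subtle: a read-only placeholder engine is installed first so readers never observe a missing engine, the new read-write engine then recovers from the translog outside the mutex, and only afterwards is the reference swapped. A much-simplified sketch of that swap pattern follows; SimpleEngine is a hypothetical interface, and the real code additionally coordinates commit acquisition and close semantics.

import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.lucene.store.AlreadyClosedException;

final class EngineSwapSketch {

    interface SimpleEngine extends Closeable {
        String search(String query);
    }

    private final Object engineMutex = new Object();
    private final AtomicReference<SimpleEngine> current = new AtomicReference<>();

    // Swap in a replacement engine while readers keep going through current.get().
    void resetEngine(SimpleEngine placeholder, SimpleEngine replacement) throws IOException {
        synchronized (engineMutex) {
            SimpleEngine old = current.getAndSet(placeholder); // readers now hit the placeholder
            if (old != null) {
                old.close();
            }
        }
        // ... translog replay / recovery work happens here, outside the mutex ...
        synchronized (engineMutex) {
            SimpleEngine old = current.getAndSet(replacement); // install the recovered engine
            if (old != null && old != replacement) {
                old.close();
            }
        }
    }

    String search(String query) {
        SimpleEngine engine = current.get();
        if (engine == null) {
            throw new AlreadyClosedException("engine was closed");
        }
        return engine.search(query);
    }
}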

Aggregations

AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException): 79
IOException (java.io.IOException): 53
LockObtainFailedException (org.apache.lucene.store.LockObtainFailedException): 16
CountDownLatch (java.util.concurrent.CountDownLatch): 15
MockDirectoryWrapper (org.apache.lucene.store.MockDirectoryWrapper): 14
TranslogCorruptedException (org.elasticsearch.index.translog.TranslogCorruptedException): 13
MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer): 12
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 11
Document (org.apache.lucene.document.Document): 11
ElasticsearchException (org.elasticsearch.ElasticsearchException): 11
ReleasableLock (org.elasticsearch.common.util.concurrent.ReleasableLock): 10
UncheckedIOException (java.io.UncheckedIOException): 9
ParsedDocument (org.elasticsearch.index.mapper.ParsedDocument): 9
EOFException (java.io.EOFException): 8
ArrayList (java.util.ArrayList): 7
FileNotFoundException (java.io.FileNotFoundException): 6
FileAlreadyExistsException (java.nio.file.FileAlreadyExistsException): 6
NoSuchFileException (java.nio.file.NoSuchFileException): 6
BrokenBarrierException (java.util.concurrent.BrokenBarrierException): 6
CopyOnWriteArrayList (java.util.concurrent.CopyOnWriteArrayList): 6