
Example 1 with IndexCommit

Use of org.apache.lucene.index.IndexCommit in project elasticsearch by elastic.

From the class RecoverySourceHandler, method recoverToTarget.

/**
 * Performs the recovery from the local engine to the target.
 */
public RecoveryResponse recoverToTarget() throws IOException {
    try (Translog.View translogView = shard.acquireTranslogView()) {
        logger.trace("captured translog id [{}] for recovery", translogView.minTranslogGeneration());
        boolean isSequenceNumberBasedRecoveryPossible = request.startingSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO && isTranslogReadyForSequenceNumberBasedRecovery(translogView);
        if (isSequenceNumberBasedRecoveryPossible) {
            logger.trace("performing sequence numbers based recovery. starting at [{}]", request.startingSeqNo());
        } else {
            final IndexCommit phase1Snapshot;
            try {
                phase1Snapshot = shard.acquireIndexCommit(false);
            } catch (final Exception e) {
                IOUtils.closeWhileHandlingException(translogView);
                throw new RecoveryEngineException(shard.shardId(), 1, "snapshot failed", e);
            }
            try {
                phase1(phase1Snapshot, translogView);
            } catch (final Exception e) {
                throw new RecoveryEngineException(shard.shardId(), 1, "phase1 failed", e);
            } finally {
                try {
                    shard.releaseIndexCommit(phase1Snapshot);
                } catch (final IOException ex) {
                    logger.warn("releasing snapshot caused exception", ex);
                }
            }
        }
        try {
            prepareTargetForTranslog(translogView.totalOperations(), shard.segmentStats(false).getMaxUnsafeAutoIdTimestamp());
        } catch (final Exception e) {
            throw new RecoveryEngineException(shard.shardId(), 1, "prepare target for translog failed", e);
        }
        // engine was just started at the end of phase1
        if (shard.state() == IndexShardState.RELOCATED) {
            assert request.isPrimaryRelocation() == false : "recovery target should not retry primary relocation if previous attempt made it past finalization step";
            /*
             * The primary shard has been relocated while we copied files. This means that we can't guarantee any more that all
             * operations that were replicated during the file copy (when the target engine was not yet opened) will be present in the
             * local translog and thus will be resent on phase2. The reason is that an operation replicated by the target primary is
             * sent to the recovery target and the local shard (old primary) concurrently, meaning it may have arrived at the recovery
             * target before we opened the engine and is still in-flight on the local shard.
             *
             * Checking the relocated status here, after we opened the engine on the target, is safe because primary relocation waits
             * for all ongoing operations to complete and be fully replicated. Therefore all future operations by the new primary are
             * guaranteed to reach the target shard when its engine is open.
             */
            throw new IndexShardRelocatedException(request.shardId());
        }
        logger.trace("snapshot translog for recovery; current size is [{}]", translogView.totalOperations());
        try {
            phase2(isSequenceNumberBasedRecoveryPossible ? request.startingSeqNo() : SequenceNumbersService.UNASSIGNED_SEQ_NO, translogView.snapshot());
        } catch (Exception e) {
            throw new RecoveryEngineException(shard.shardId(), 2, "phase2 failed", e);
        }
        finalizeRecovery();
    }
    return response;
}
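The shape worth noticing above is the acquire/try/finally/release discipline around the commit snapshot. A minimal sketch of that contract, assuming you already hold an IndexShard reference named shard:

// Minimal sketch of the acquire/release contract used above.
// `shard` is assumed to be an IndexShard you already hold.
final IndexCommit commit = shard.acquireIndexCommit(false); // false: do not flush before snapshotting
try {
    // read files from the pinned commit point here; they cannot be
    // deleted by merges or later flushes while the commit is held
} finally {
    shard.releaseIndexCommit(commit); // always release, or the commit's files are retained forever
}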

Example 2 with IndexCommit

Use of org.apache.lucene.index.IndexCommit in project elasticsearch by elastic.

From the class RecoverySourceHandler, method phase1.

/**
 * Perform phase1 of the recovery operations. Once this {@link IndexCommit}
 * snapshot has been taken, no commit operations (files being fsync'd)
 * are effectively allowed on this index until all recovery phases are done.
 * <p>
 * Phase1 examines the segment files on the target node and copies over the
 * segments that are missing. Only segments that have the same size and
 * checksum can be reused.
 */
public void phase1(final IndexCommit snapshot, final Translog.View translogView) {
    cancellableThreads.checkForCancel();
    // Total size of segment files that are recovered
    long totalSize = 0;
    // Total size of segment files that were able to be re-used
    long existingTotalSize = 0;
    final Store store = shard.store();
    store.incRef();
    try {
        StopWatch stopWatch = new StopWatch().start();
        final Store.MetadataSnapshot recoverySourceMetadata;
        try {
            recoverySourceMetadata = store.getMetadata(snapshot);
        } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) {
            shard.failShard("recovery", ex);
            throw ex;
        }
        for (String name : snapshot.getFileNames()) {
            final StoreFileMetaData md = recoverySourceMetadata.get(name);
            if (md == null) {
                logger.info("Snapshot differs from actual index for file: {} meta: {}", name, recoverySourceMetadata.asMap());
                throw new CorruptIndexException("Snapshot differs from actual index - maybe index was removed; metadata has " + recoverySourceMetadata.asMap().size() + " files", name);
            }
        }
        // Generate a "diff" of all the identical, different, and missing
        // segment files on the target node, using the existing files on
        // the source node
        String recoverySourceSyncId = recoverySourceMetadata.getSyncId();
        String recoveryTargetSyncId = request.metadataSnapshot().getSyncId();
        final boolean recoverWithSyncId = recoverySourceSyncId != null && recoverySourceSyncId.equals(recoveryTargetSyncId);
        if (recoverWithSyncId) {
            final long numDocsTarget = request.metadataSnapshot().getNumDocs();
            final long numDocsSource = recoverySourceMetadata.getNumDocs();
            if (numDocsTarget != numDocsSource) {
                throw new IllegalStateException("try to recover " + request.shardId() + " from primary shard with sync id but number " + "of docs differ: " + numDocsSource + " (" + request.sourceNode().getName() + ", primary) vs " + numDocsTarget + " (" + request.targetNode().getName() + ")");
            }
            // we shortcut recovery here because we have nothing to copy. but we must still start the engine on the target.
            // so we don't return here
            logger.trace("skipping [phase1]- identical sync id [{}] found on both source and target", recoverySourceSyncId);
        } else {
            final Store.RecoveryDiff diff = recoverySourceMetadata.recoveryDiff(request.metadataSnapshot());
            for (StoreFileMetaData md : diff.identical) {
                response.phase1ExistingFileNames.add(md.name());
                response.phase1ExistingFileSizes.add(md.length());
                existingTotalSize += md.length();
                if (logger.isTraceEnabled()) {
                    logger.trace("recovery [phase1]: not recovering [{}], exist in local store and has checksum [{}]," + " size [{}]", md.name(), md.checksum(), md.length());
                }
                totalSize += md.length();
            }
            List<StoreFileMetaData> phase1Files = new ArrayList<>(diff.different.size() + diff.missing.size());
            phase1Files.addAll(diff.different);
            phase1Files.addAll(diff.missing);
            for (StoreFileMetaData md : phase1Files) {
                if (request.metadataSnapshot().asMap().containsKey(md.name())) {
                    logger.trace("recovery [phase1]: recovering [{}], exists in local store, but is different: remote [{}], local [{}]", md.name(), request.metadataSnapshot().asMap().get(md.name()), md);
                } else {
                    logger.trace("recovery [phase1]: recovering [{}], does not exist in remote", md.name());
                }
                response.phase1FileNames.add(md.name());
                response.phase1FileSizes.add(md.length());
                totalSize += md.length();
            }
            response.phase1TotalSize = totalSize;
            response.phase1ExistingTotalSize = existingTotalSize;
            logger.trace("recovery [phase1]: recovering_files [{}] with total_size [{}], reusing_files [{}] with total_size [{}]", response.phase1FileNames.size(), new ByteSizeValue(totalSize), response.phase1ExistingFileNames.size(), new ByteSizeValue(existingTotalSize));
            cancellableThreads.execute(() -> recoveryTarget.receiveFileInfo(response.phase1FileNames, response.phase1FileSizes, response.phase1ExistingFileNames, response.phase1ExistingFileSizes, translogView.totalOperations()));
            // wrap each file in a buffered stream that chunks it and sends it to the recovery target
            final Function<StoreFileMetaData, OutputStream> outputStreamFactories = md -> new BufferedOutputStream(new RecoveryOutputStream(md, translogView), chunkSizeInBytes);
            sendFiles(store, phase1Files.toArray(new StoreFileMetaData[phase1Files.size()]), outputStreamFactories);
            // once all files are sent, ask the target to clean its store: any files that
            // are not part of the recovered commit are deleted
            try {
                cancellableThreads.executeIO(() -> recoveryTarget.cleanFiles(translogView.totalOperations(), recoverySourceMetadata));
            } catch (RemoteTransportException | IOException targetException) {
                final IOException corruptIndexException;
                // the target reported corruption while cleaning up, so verify the source copy as well
                //   - maybe due to old segments without checksums or length-only checks
                if ((corruptIndexException = ExceptionsHelper.unwrapCorruption(targetException)) != null) {
                    try {
                        final Store.MetadataSnapshot recoverySourceMetadata1 = store.getMetadata(snapshot);
                        StoreFileMetaData[] metadata = StreamSupport.stream(recoverySourceMetadata1.spliterator(), false).toArray(size -> new StoreFileMetaData[size]);
                        ArrayUtil.timSort(metadata, (o1, o2) -> {
                            // check small files first
                            return Long.compare(o1.length(), o2.length());
                        });
                        for (StoreFileMetaData md : metadata) {
                            cancellableThreads.checkForCancel();
                            logger.debug("checking integrity for file {} after remove corruption exception", md);
                            if (store.checkIntegrityNoException(md) == false) {
                                // we are corrupted on the primary -- fail!
                                shard.failShard("recovery", corruptIndexException);
                                logger.warn("Corrupted file detected {} checksum mismatch", md);
                                throw corruptIndexException;
                            }
                        }
                    } catch (IOException ex) {
                        targetException.addSuppressed(ex);
                        throw targetException;
                    }
                    // corruption has happened on the way to replica
                    RemoteTransportException exception = new RemoteTransportException("File corruption occurred on recovery but " + "checksums are ok", null);
                    exception.addSuppressed(targetException);
                    logger.warn((org.apache.logging.log4j.util.Supplier<?>) () -> new ParameterizedMessage("{} Remote file corruption during finalization of recovery on node {}. local checksum OK", shard.shardId(), request.targetNode()), corruptIndexException);
                    throw exception;
                } else {
                    throw targetException;
                }
            }
        }
        logger.trace("recovery [phase1]: took [{}]", stopWatch.totalTime());
        response.phase1Time = stopWatch.totalTime().millis();
    } catch (Exception e) {
        throw new RecoverFilesRecoveryException(request.shardId(), response.phase1FileNames.size(), new ByteSizeValue(totalSize), e);
    } finally {
        store.decRef();
    }
}
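The reuse rule the javadoc describes (same size and checksum) is the heart of the diff. The following hypothetical helper is a sketch of that classification only, not the actual Store.recoveryDiff implementation; it assumes the StoreFileMetaData accessors used above:

// Hypothetical sketch of phase1's classification rule, not Store.recoveryDiff itself.
// A file may be reused on the target only if name, length, and checksum all match.
static void classify(Map<String, StoreFileMetaData> source,
                     Map<String, StoreFileMetaData> target,
                     List<StoreFileMetaData> identical,
                     List<StoreFileMetaData> toSend) {
    for (StoreFileMetaData src : source.values()) {
        StoreFileMetaData tgt = target.get(src.name());
        if (tgt != null && tgt.length() == src.length() && src.checksum().equals(tgt.checksum())) {
            identical.add(src); // reuse the target's existing copy
        } else {
            toSend.add(src);    // different or missing: copy it over in phase1
        }
    }
}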

Example 3 with IndexCommit

Use of org.apache.lucene.index.IndexCommit in project elasticsearch by elastic.

From the class SnapshotShardsService, method snapshot.

/**
 * Creates a shard snapshot.
 *
 * @param indexShard     the shard to snapshot
 * @param snapshot       the snapshot to create
 * @param indexId        the repository-side id of the index
 * @param snapshotStatus the snapshot status to update
 */
private void snapshot(final IndexShard indexShard, final Snapshot snapshot, final IndexId indexId, final IndexShardSnapshotStatus snapshotStatus) {
    Repository repository = snapshotsService.getRepositoriesService().repository(snapshot.getRepository());
    ShardId shardId = indexShard.shardId();
    if (!indexShard.routingEntry().primary()) {
        throw new IndexShardSnapshotFailedException(shardId, "snapshot should be performed only on primary");
    }
    if (indexShard.routingEntry().relocating()) {
        // do not snapshot when in the process of relocation of primaries so we won't get conflicts
        throw new IndexShardSnapshotFailedException(shardId, "cannot snapshot while relocating");
    }
    if (indexShard.state() == IndexShardState.CREATED || indexShard.state() == IndexShardState.RECOVERING) {
        // shard has just been created, or still recovering
        throw new IndexShardSnapshotFailedException(shardId, "shard didn't fully recover yet");
    }
    try {
        // we flush first to make sure we get the latest writes snapshotted
        IndexCommit snapshotIndexCommit = indexShard.acquireIndexCommit(true);
        try {
            repository.snapshotShard(indexShard, snapshot.getSnapshotId(), indexId, snapshotIndexCommit, snapshotStatus);
            if (logger.isDebugEnabled()) {
                StringBuilder sb = new StringBuilder();
                sb.append("    index    : version [").append(snapshotStatus.indexVersion()).append("], number_of_files [").append(snapshotStatus.numberOfFiles()).append("] with total_size [").append(new ByteSizeValue(snapshotStatus.totalSize())).append("]\n");
                logger.debug("snapshot ({}) completed to {}, took [{}]\n{}", snapshot, repository, TimeValue.timeValueMillis(snapshotStatus.time()), sb);
            }
        } finally {
            indexShard.releaseIndexCommit(snapshotIndexCommit);
        }
    } catch (SnapshotFailedEngineException e) {
        throw e;
    } catch (IndexShardSnapshotFailedException e) {
        throw e;
    } catch (Exception e) {
        throw new IndexShardSnapshotFailedException(shardId, "Failed to snapshot", e);
    }
}
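At the Lucene level, pinning a commit the way acquireIndexCommit does is what SnapshotDeletionPolicy provides. A standalone sketch of that mechanism, where the path and analyzer are placeholders and the surrounding method is assumed to declare throws IOException:

// Standalone sketch of the Lucene mechanism behind acquireIndexCommit:
// SnapshotDeletionPolicy pins a commit so its files survive later flushes and merges.
try (Directory dir = FSDirectory.open(Paths.get("/tmp/example-index"))) { // placeholder path
    SnapshotDeletionPolicy policy =
            new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
    IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer()) // placeholder analyzer
            .setIndexDeletionPolicy(policy);
    try (IndexWriter writer = new IndexWriter(dir, config)) {
        writer.commit();                          // ensure at least one commit exists
        IndexCommit snapshot = policy.snapshot(); // pin the most recent commit
        try {
            for (String file : snapshot.getFileNames()) {
                // copy `file` out of `dir` to the repository here
            }
        } finally {
            policy.release(snapshot);             // unpin; Lucene may now delete the files
        }
    }
}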

Example 4 with IndexCommit

Use of org.apache.lucene.index.IndexCommit in project elasticsearch by elastic.

From the class TruncateTranslogCommand, method execute.

@Override
protected void execute(Terminal terminal, OptionSet options, Environment env) throws Exception {
    boolean batch = options.has(batchMode);
    Path translogPath = getTranslogPath(options);
    Path idxLocation = translogPath.getParent().resolve("index");
    if (Files.exists(translogPath) == false || Files.isDirectory(translogPath) == false) {
        throw new ElasticsearchException("translog directory [" + translogPath + "], must exist and be a directory");
    }
    if (Files.exists(idxLocation) == false || Files.isDirectory(idxLocation) == false) {
        throw new ElasticsearchException("unable to find a shard at [" + idxLocation + "], which must exist and be a directory");
    }
    // Hold the lock open for the duration of the tool running
    try (Directory dir = FSDirectory.open(idxLocation, NativeFSLockFactory.INSTANCE);
        Lock writeLock = dir.obtainLock(IndexWriter.WRITE_LOCK_NAME)) {
        Set<Path> translogFiles;
        try {
            terminal.println("Checking existing translog files");
            translogFiles = filesInDirectory(translogPath);
        } catch (IOException e) {
            terminal.println("encountered IOException while listing directory, aborting...");
            throw new ElasticsearchException("failed to find existing translog files", e);
        }
        // Warn about ES being stopped and files being deleted
        warnAboutDeletingFiles(terminal, translogFiles, batch);
        List<IndexCommit> commits;
        try {
            terminal.println("Reading translog UUID information from Lucene commit from shard at [" + idxLocation + "]");
            commits = DirectoryReader.listCommits(dir);
        } catch (IndexNotFoundException infe) {
            throw new ElasticsearchException("unable to find a valid shard at [" + idxLocation + "]", infe);
        }
        // Retrieve the generation and UUID from the existing data
        Map<String, String> commitData = commits.get(commits.size() - 1).getUserData();
        String translogGeneration = commitData.get(Translog.TRANSLOG_GENERATION_KEY);
        String translogUUID = commitData.get(Translog.TRANSLOG_UUID_KEY);
        if (translogGeneration == null || translogUUID == null) {
            throw new ElasticsearchException("shard must have a valid translog generation and UUID but got: [{}] and: [{}]", translogGeneration, translogUUID);
        }
        terminal.println("Translog Generation: " + translogGeneration);
        terminal.println("Translog UUID      : " + translogUUID);
        Path tempEmptyCheckpoint = translogPath.resolve("temp-" + Translog.CHECKPOINT_FILE_NAME);
        Path realEmptyCheckpoint = translogPath.resolve(Translog.CHECKPOINT_FILE_NAME);
        Path tempEmptyTranslog = translogPath.resolve("temp-" + Translog.TRANSLOG_FILE_PREFIX + translogGeneration + Translog.TRANSLOG_FILE_SUFFIX);
        Path realEmptyTranslog = translogPath.resolve(Translog.TRANSLOG_FILE_PREFIX + translogGeneration + Translog.TRANSLOG_FILE_SUFFIX);
        // Write empty checkpoint and translog to empty files
        long gen = Long.parseLong(translogGeneration);
        int translogLen = writeEmptyTranslog(tempEmptyTranslog, translogUUID);
        writeEmptyCheckpoint(tempEmptyCheckpoint, translogLen, gen);
        terminal.println("Removing existing translog files");
        IOUtils.rm(translogFiles.toArray(new Path[] {}));
        terminal.println("Creating new empty checkpoint at [" + realEmptyCheckpoint + "]");
        Files.move(tempEmptyCheckpoint, realEmptyCheckpoint, StandardCopyOption.ATOMIC_MOVE);
        terminal.println("Creating new empty translog at [" + realEmptyTranslog + "]");
        Files.move(tempEmptyTranslog, realEmptyTranslog, StandardCopyOption.ATOMIC_MOVE);
        // Fsync the translog directory after rename
        IOUtils.fsync(translogPath, true);
    } catch (LockObtainFailedException lofe) {
        throw new ElasticsearchException("Failed to lock shard's directory at [" + idxLocation + "], is Elasticsearch still running?");
    }
    terminal.println("Done.");
}
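The commit-inspection step above generalizes: any Lucene index directory can be asked for its commit points and their user data. A minimal sketch, with a placeholder path:

// Minimal sketch of inspecting commit points, as the tool above does.
try (Directory dir = FSDirectory.open(Paths.get("/path/to/shard/index"))) { // placeholder path
    List<IndexCommit> commits = DirectoryReader.listCommits(dir); // sorted oldest to newest
    IndexCommit latest = commits.get(commits.size() - 1);
    Map<String, String> userData = latest.getUserData();          // e.g. the translog generation/UUID keys
    System.out.println("generation=" + latest.getGeneration() + ", userData=" + userData);
}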

Example 5 with IndexCommit

Use of org.apache.lucene.index.IndexCommit in project elasticsearch by elastic.

From the class IndexShardTests, method testAcquireIndexCommit.

public void testAcquireIndexCommit() throws IOException {
    final IndexShard shard = newStartedShard();
    int numDocs = randomInt(20);
    for (int i = 0; i < numDocs; i++) {
        indexDoc(shard, "type", "id_" + i);
    }
    final boolean flushFirst = randomBoolean();
    IndexCommit commit = shard.acquireIndexCommit(flushFirst);
    int moreDocs = randomInt(20);
    for (int i = 0; i < moreDocs; i++) {
        indexDoc(shard, "type", "id_" + numDocs + i);
    }
    flushShard(shard);
    // check that we can still read the commit that we captured
    try (IndexReader reader = DirectoryReader.open(commit)) {
        assertThat(reader.numDocs(), equalTo(flushFirst ? numDocs : 0));
    }
    shard.releaseIndexCommit(commit);
    flushShard(shard, true);
    // check that the released commit has been cleaned up and only the latest commit remains
    assertThat(DirectoryReader.listCommits(shard.store().directory()), hasSize(1));
    closeShards(shard);
}
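The assertion works because of what the pinned commit contains: with flushFirst == false the shard has not flushed yet, so the captured commit is the initial empty commit and a reader over it sees zero documents; with flushFirst == true the first numDocs documents are committed before the snapshot is taken. Opening a reader directly from a commit gives exactly that point-in-time view:

// Point-in-time read of a pinned commit; `commit` is assumed to be an
// IndexCommit still held via acquireIndexCommit, as in the test above.
try (IndexReader reader = DirectoryReader.open(commit)) {
    // sees only documents present in the commit, regardless of later flushes
    System.out.println("docs visible at commit: " + reader.numDocs());
}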

Aggregations

IndexCommit (org.apache.lucene.index.IndexCommit): 60 usages
IOException (java.io.IOException): 24 usages
ArrayList (java.util.ArrayList): 22 usages
AtomicLong (java.util.concurrent.atomic.AtomicLong): 11 usages
Directory (org.apache.lucene.store.Directory): 11 usages
Test (org.junit.Test): 10 usages
IndexWriter (org.apache.lucene.index.IndexWriter): 9 usages
Store (org.elasticsearch.index.store.Store): 9 usages
Translog (org.elasticsearch.index.translog.Translog): 8 usages
List (java.util.List): 7 usages
Map (java.util.Map): 7 usages
SolrException (org.apache.solr.common.SolrException): 7 usages
NoSuchFileException (java.nio.file.NoSuchFileException): 6 usages
HashMap (java.util.HashMap): 6 usages
LongPoint (org.apache.lucene.document.LongPoint): 6 usages
DirectoryReader (org.apache.lucene.index.DirectoryReader): 6 usages
IndexReader (org.apache.lucene.index.IndexReader): 6 usages
UncheckedIOException (java.io.UncheckedIOException): 5 usages
Collections (java.util.Collections): 5 usages
IndexSettings (org.elasticsearch.index.IndexSettings): 5 usages