
Example 86 with CorruptIndexException

Use of org.apache.lucene.index.CorruptIndexException in project crate by crate.

From the class ExceptionsHelperTests, the method testSuppressedCycle:

@Test
public void testSuppressedCycle() {
    // Two exceptions that suppress each other form a cycle.
    RuntimeException e1 = new RuntimeException();
    RuntimeException e2 = new RuntimeException();
    e1.addSuppressed(e2);
    e2.addSuppressed(e1);
    // Must terminate (and find no corruption) despite the cycle.
    ExceptionsHelper.unwrapCorruption(e1);
    final CorruptIndexException corruptIndexException = new CorruptIndexException("corrupt", "resource");
    // The corruption is found when it is the cause, even with the cyclic pair attached as suppressed.
    RuntimeException e3 = new RuntimeException(corruptIndexException);
    e3.addSuppressed(e1);
    assertThat(ExceptionsHelper.unwrapCorruption(e3), equalTo(corruptIndexException));
    // ...and also when it is attached as a suppressed exception rather than the cause.
    RuntimeException e4 = new RuntimeException(e1);
    e4.addSuppressed(corruptIndexException);
    assertThat(ExceptionsHelper.unwrapCorruption(e4), equalTo(corruptIndexException));
}
Also used : CorruptIndexException(org.apache.lucene.index.CorruptIndexException) Test(org.junit.Test)
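
The test relies on unwrapCorruption terminating even when suppressed exceptions form a cycle. As a minimal, self-contained sketch of how such a cycle-safe walk can be written (this is not the crate/Elasticsearch ExceptionsHelper implementation, and the class name is made up), one can track visited throwables in an identity set while descending into both the suppressed exceptions and the cause chain:

// Hypothetical helper, shown only to illustrate the cycle handling the test exercises;
// it is not the actual ExceptionsHelper implementation.
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;
import org.apache.lucene.index.CorruptIndexException;

final class CorruptionUnwrapExample {

    static CorruptIndexException unwrapCorruption(Throwable t) {
        return unwrap(t, Collections.newSetFromMap(new IdentityHashMap<>()));
    }

    private static CorruptIndexException unwrap(Throwable t, Set<Throwable> seen) {
        if (t == null || seen.add(t) == false) {
            // null or already visited: stop here, otherwise the e1 <-> e2 cycle above would loop forever
            return null;
        }
        if (t instanceof CorruptIndexException) {
            return (CorruptIndexException) t;
        }
        for (Throwable suppressed : t.getSuppressed()) {
            CorruptIndexException found = unwrap(suppressed, seen);
            if (found != null) {
                return found;
            }
        }
        return unwrap(t.getCause(), seen);
    }
}

With this sketch, the unwrapCorruption(e1) call in the first half of the test returns null, while the e3 and e4 cases return the wrapped CorruptIndexException, matching the assertions above.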

Example 87 with CorruptIndexException

Use of org.apache.lucene.index.CorruptIndexException in project crate by crate.

From the class TranslogHeader, the method read:

/**
 * Read a translog header from the given path and file channel
 */
static TranslogHeader read(final String translogUUID, final Path path, final FileChannel channel) throws IOException {
    try {
        // This input is intentionally not closed because closing it will close the FileChannel.
        final BufferedChecksumStreamInput in = new BufferedChecksumStreamInput(new InputStreamStreamInput(java.nio.channels.Channels.newInputStream(channel), channel.size()), path.toString());
        final int version;
        try {
            version = CodecUtil.checkHeader(new InputStreamDataInput(in), TRANSLOG_CODEC, VERSION_CHECKSUMS, VERSION_PRIMARY_TERM);
        } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException e) {
            tryReportOldVersionError(path, channel);
            throw new TranslogCorruptedException(path.toString(), "translog header corrupted", e);
        }
        if (version == VERSION_CHECKSUMS) {
            throw new IllegalStateException("pre-2.0 translog found [" + path + "]");
        }
        // Read the translogUUID
        final int uuidLen = in.readInt();
        if (uuidLen > channel.size()) {
            throw new TranslogCorruptedException(path.toString(), "UUID length can't be larger than the translog");
        }
        if (uuidLen <= 0) {
            throw new TranslogCorruptedException(path.toString(), "UUID length must be positive");
        }
        final BytesRef uuid = new BytesRef(uuidLen);
        uuid.length = uuidLen;
        in.read(uuid.bytes, uuid.offset, uuid.length);
        // Read the primary term
        final long primaryTerm;
        if (version == VERSION_PRIMARY_TERM) {
            primaryTerm = in.readLong();
        } else {
            // This can be dropped with 5.0 as BWC with older versions is not required anymore
            assert version == VERSION_CHECKPOINTS : "Unknown header version [" + version + "]";
            primaryTerm = UNASSIGNED_PRIMARY_TERM;
        }
        // Verify the checksum (can be always verified on >= 5.0 as version must be primary term based without BWC)
        if (version >= VERSION_PRIMARY_TERM) {
            Translog.verifyChecksum(in);
        }
        assert primaryTerm >= 0 : "Primary term must be non-negative [" + primaryTerm + "]; translog path [" + path + "]";
        final int headerSizeInBytes = headerSizeInBytes(version, uuid.length);
        assert channel.position() == headerSizeInBytes : "Header is not fully read; header size [" + headerSizeInBytes + "], position [" + channel.position() + "]";
        // verify UUID only after checksum, to ensure that UUID is not corrupted
        final BytesRef expectedUUID = new BytesRef(translogUUID);
        if (uuid.bytesEquals(expectedUUID) == false) {
            throw new TranslogCorruptedException(path.toString(), "expected shard UUID " + expectedUUID + " but got: " + uuid + " this translog file belongs to a different translog");
        }
        return new TranslogHeader(translogUUID, primaryTerm, headerSizeInBytes);
    } catch (EOFException e) {
        throw new TranslogCorruptedException(path.toString(), "translog header truncated", e);
    }
}
Also used : CorruptIndexException(org.apache.lucene.index.CorruptIndexException) InputStreamDataInput(org.apache.lucene.store.InputStreamDataInput) IndexFormatTooOldException(org.apache.lucene.index.IndexFormatTooOldException) EOFException(java.io.EOFException) IndexFormatTooNewException(org.apache.lucene.index.IndexFormatTooNewException) BytesRef(org.apache.lucene.util.BytesRef) InputStreamStreamInput(org.elasticsearch.common.io.stream.InputStreamStreamInput)
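
As a usage sketch (not taken from the crate sources), the caller opens the translog file itself and hands the channel to read, which validates the codec header, the UUID and, for current versions, the checksum, throwing TranslogCorruptedException when anything does not match. The wrapper class and method names below are only illustrative:

// Illustrative call site, assuming the package-private TranslogHeader.read(...) shown above is
// accessible from this class and that a translog file exists at the given path.
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class TranslogHeaderReadExample {

    static TranslogHeader readHeader(Path translogFile, String expectedTranslogUUID) throws IOException {
        try (FileChannel channel = FileChannel.open(translogFile, StandardOpenOption.READ)) {
            // On success the channel is positioned just past the header, so the caller can
            // continue reading translog operations from that point on.
            return TranslogHeader.read(expectedTranslogUUID, translogFile, channel);
        }
    }
}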

Example 88 with CorruptIndexException

Use of org.apache.lucene.index.CorruptIndexException in project crate by crate.

From the class Store, the method failIfCorrupted:

private static void failIfCorrupted(Directory directory) throws IOException {
    final String[] files = directory.listAll();
    List<CorruptIndexException> ex = new ArrayList<>();
    for (String file : files) {
        if (file.startsWith(CORRUPTED_MARKER_NAME_PREFIX)) {
            try (ChecksumIndexInput input = directory.openChecksumInput(file, IOContext.READONCE)) {
                CodecUtil.checkHeader(input, CODEC, CORRUPTED_MARKER_CODEC_VERSION, CORRUPTED_MARKER_CODEC_VERSION);
                final int size = input.readVInt();
                final byte[] buffer = new byte[size];
                input.readBytes(buffer, 0, buffer.length);
                StreamInput in = StreamInput.wrap(buffer);
                Exception t = in.readException();
                if (t instanceof CorruptIndexException) {
                    ex.add((CorruptIndexException) t);
                } else {
                    ex.add(new CorruptIndexException(t.getMessage(), "preexisting_corruption", t));
                }
                CodecUtil.checkFooter(input);
            }
        }
    }
    if (ex.isEmpty() == false) {
        ExceptionsHelper.rethrowAndSuppress(ex);
    }
}
Also used : ChecksumIndexInput(org.apache.lucene.store.ChecksumIndexInput) ArrayList(java.util.ArrayList) StreamInput(org.elasticsearch.common.io.stream.StreamInput) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) IndexNotFoundException(org.apache.lucene.index.IndexNotFoundException) NoSuchFileException(java.nio.file.NoSuchFileException) IndexFormatTooNewException(org.apache.lucene.index.IndexFormatTooNewException) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) ShardLockObtainFailedException(org.elasticsearch.env.ShardLockObtainFailedException) EOFException(java.io.EOFException) FileNotFoundException(java.io.FileNotFoundException) UncheckedIOException(java.io.UncheckedIOException) IOException(java.io.IOException) IndexFormatTooOldException(org.apache.lucene.index.IndexFormatTooOldException)
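
The read loop above is the mirror image of how a corruption marker gets written (in the real code that is Store#markStoreCorrupted). Below is a rough, hedged sketch of that write side; the codec name, the version constant and the marker file-name prefix are assumptions spelled out locally, because the originals are private to Store:

// Rough sketch of the write-side counterpart, assuming the layout the read loop above expects:
// a Lucene codec header, a vInt-length-prefixed serialized exception, then a codec footer.
// The constant values are assumptions; in crate/Elasticsearch they are private to Store.
import java.io.IOException;
import org.apache.lucene.codecs.CodecUtil;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.io.stream.BytesStreamOutput;

final class CorruptionMarkerWriteSketch {

    private static final String CODEC = "store";                 // assumed codec name
    private static final int CORRUPTED_MARKER_CODEC_VERSION = 2; // assumed version

    // markerFileName should start with the corrupted-marker prefix (CORRUPTED_MARKER_NAME_PREFIX
    // in the snippet above, e.g. "corrupted_" plus a UUID) so that failIfCorrupted picks it up.
    static void writeCorruptionMarker(Directory directory, String markerFileName,
                                      CorruptIndexException exception) throws IOException {
        try (IndexOutput output = directory.createOutput(markerFileName, IOContext.DEFAULT)) {
            CodecUtil.writeHeader(output, CODEC, CORRUPTED_MARKER_CODEC_VERSION);
            BytesStreamOutput out = new BytesStreamOutput();
            out.writeException(exception);          // Elasticsearch StreamOutput exception serialization
            BytesRef ref = out.bytes().toBytesRef();
            output.writeVInt(ref.length);           // length prefix read back via readVInt()
            output.writeBytes(ref.bytes, ref.offset, ref.length);
            CodecUtil.writeFooter(output);
        }
    }
}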

Example 89 with CorruptIndexException

Use of org.apache.lucene.index.CorruptIndexException in project crate by crate.

From the class ClusterDisruptionIT, the method testSendingShardFailure:

// simulate handling of sending shard failure during an isolation
@Test
public void testSendingShardFailure() throws Exception {
    List<String> nodes = startCluster(3);
    String masterNode = internalCluster().getMasterName();
    List<String> nonMasterNodes = nodes.stream().filter(node -> !node.equals(masterNode)).collect(Collectors.toList());
    String nonMasterNode = randomFrom(nonMasterNodes);
    execute("create table t (id int primary key, x string) clustered into 3 shards with (number_of_replicas = 2)");
    ensureGreen();
    String nonMasterNodeId = internalCluster().clusterService(nonMasterNode).localNode().getId();
    // fail a random shard
    ShardRouting failedShard = randomFrom(clusterService().state().getRoutingNodes().node(nonMasterNodeId).shardsWithState(ShardRoutingState.STARTED));
    ShardStateAction service = internalCluster().getInstance(ShardStateAction.class, nonMasterNode);
    CountDownLatch latch = new CountDownLatch(1);
    AtomicBoolean success = new AtomicBoolean();
    String isolatedNode = randomBoolean() ? masterNode : nonMasterNode;
    NetworkDisruption.TwoPartitions partitions = isolateNode(isolatedNode);
    // we cannot use the NetworkUnresponsive disruption type here as it will swallow the "shard failed" request, calling neither
    // onSuccess nor onFailure on the provided listener.
    NetworkLinkDisruptionType disruptionType = new NetworkDisruption.NetworkDisconnect();
    NetworkDisruption networkDisruption = new NetworkDisruption(partitions, disruptionType);
    setDisruptionScheme(networkDisruption);
    networkDisruption.startDisrupting();
    service.localShardFailed(failedShard, "simulated", new CorruptIndexException("simulated", (String) null), new ActionListener<>() {

        @Override
        public void onResponse(Void aVoid) {
            success.set(true);
            latch.countDown();
        }

        @Override
        public void onFailure(Exception e) {
            success.set(false);
            latch.countDown();
            assert false;
        }
    });
    if (isolatedNode.equals(nonMasterNode)) {
        assertNoMaster(nonMasterNode);
    } else {
        ensureStableCluster(2, nonMasterNode);
    }
    // heal the partition
    networkDisruption.removeAndEnsureHealthy(internalCluster());
    // the cluster should stabilize
    ensureStableCluster(3);
    latch.await();
    // the listener should be notified
    assertTrue(success.get());
    // the failed shard should be gone
    List<ShardRouting> shards = clusterService().state().getRoutingTable().allShards(toIndexName(sqlExecutor.getCurrentSchema(), "t", null));
    for (ShardRouting shard : shards) {
        assertThat(shard.allocationId(), not(equalTo(failedShard.allocationId())));
    }
}
Also used : ElasticsearchException(org.elasticsearch.ElasticsearchException) ShardRouting(org.elasticsearch.cluster.routing.ShardRouting) InternalTestCluster(org.elasticsearch.test.InternalTestCluster) NetworkDisruption(org.elasticsearch.test.disruption.NetworkDisruption) ConcurrentCollections(org.elasticsearch.common.util.concurrent.ConcurrentCollections) Matchers.not(org.hamcrest.Matchers.not) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) SQLIntegrationTestCase(io.crate.integrationtests.SQLIntegrationTestCase) ShardRoutingState(org.elasticsearch.cluster.routing.ShardRoutingState) ParameterizedMessage(org.apache.logging.log4j.message.ParameterizedMessage) AtomicReference(java.util.concurrent.atomic.AtomicReference) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) ArrayList(java.util.ArrayList) ClusterState(org.elasticsearch.cluster.ClusterState) Matchers.everyItem(org.hamcrest.Matchers.everyItem) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) IndicesService(org.elasticsearch.indices.IndicesService) Bridge(org.elasticsearch.test.disruption.NetworkDisruption.Bridge) ServiceDisruptionScheme(org.elasticsearch.test.disruption.ServiceDisruptionScheme) DuplicateKeyException(io.crate.exceptions.DuplicateKeyException) Matchers.greaterThanOrEqualTo(org.hamcrest.Matchers.greaterThanOrEqualTo) TestLogging(org.elasticsearch.test.junit.annotations.TestLogging) Semaphore(java.util.concurrent.Semaphore) IndexShard(org.elasticsearch.index.shard.IndexShard) NetworkLinkDisruptionType(org.elasticsearch.test.disruption.NetworkDisruption.NetworkLinkDisruptionType) Collection(java.util.Collection) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) Plugin(org.elasticsearch.plugins.Plugin) Set(java.util.Set) NoShardAvailableActionException(org.elasticsearch.action.NoShardAvailableActionException) Test(org.junit.Test) Collectors(java.util.stream.Collectors) Murmur3HashFunction(org.elasticsearch.cluster.routing.Murmur3HashFunction) TimeUnit(java.util.concurrent.TimeUnit) InternalSettingsPlugin(org.elasticsearch.test.InternalSettingsPlugin) CountDownLatch(java.util.concurrent.CountDownLatch) IndexShardTestCase(org.elasticsearch.index.shard.IndexShardTestCase) List(java.util.List) IndexParts.toIndexName(io.crate.metadata.IndexParts.toIndexName) ESIntegTestCase(org.elasticsearch.test.ESIntegTestCase) ShardStateAction(org.elasticsearch.cluster.action.shard.ShardStateAction) Matchers.equalTo(org.hamcrest.Matchers.equalTo) TimeValue(io.crate.common.unit.TimeValue) Matchers.is(org.hamcrest.Matchers.is) Collections(java.util.Collections) Matchers.in(org.hamcrest.Matchers.in) ActionListener(org.elasticsearch.action.ActionListener) CopyOnWriteArrayList(java.util.concurrent.CopyOnWriteArrayList) NetworkLinkDisruptionType(org.elasticsearch.test.disruption.NetworkDisruption.NetworkLinkDisruptionType) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) ShardStateAction(org.elasticsearch.cluster.action.shard.ShardStateAction) CountDownLatch(java.util.concurrent.CountDownLatch) ElasticsearchException(org.elasticsearch.ElasticsearchException) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) DuplicateKeyException(io.crate.exceptions.DuplicateKeyException) NoShardAvailableActionException(org.elasticsearch.action.NoShardAvailableActionException) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) ShardRouting(org.elasticsearch.cluster.routing.ShardRouting) NetworkDisruption(org.elasticsearch.test.disruption.NetworkDisruption) Test(org.junit.Test)
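
The anonymous listener in the middle of the test is plain latch-and-flag bookkeeping. For readers less familiar with the ActionListener API, the same bookkeeping can be expressed with ActionListener.wrap; this is only an illustration of the pattern, not a suggested change to the test:

// Equivalent listener built with ActionListener.wrap, shown for illustration only.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import org.elasticsearch.action.ActionListener;

final class ShardFailedListenerSketch {

    static ActionListener<Void> listener(AtomicBoolean success, CountDownLatch latch) {
        return ActionListener.wrap(
            ignored -> {            // the shard-failed request was acknowledged
                success.set(true);
                latch.countDown();
            },
            e -> {                  // should not happen in this test; record it and release the latch
                success.set(false);
                latch.countDown();
            });
    }
}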

Example 90 with CorruptIndexException

Use of org.apache.lucene.index.CorruptIndexException in project crate by crate.

From the class BlobStoreRepository, the method restoreShard:

@Override
public void restoreShard(Store store, SnapshotId snapshotId, IndexId indexId, ShardId snapshotShardId, RecoveryState recoveryState, ActionListener<Void> listener) {
    final ShardId shardId = store.shardId();
    final ActionListener<Void> restoreListener = ActionListener.delegateResponse(listener, (l, e) -> l.onFailure(new IndexShardRestoreFailedException(shardId, "failed to restore snapshot [" + snapshotId + "]", e)));
    final Executor executor = threadPool.executor(ThreadPool.Names.SNAPSHOT);
    final BlobContainer container = shardContainer(indexId, snapshotShardId);
    executor.execute(ActionRunnable.wrap(restoreListener, l -> {
        final BlobStoreIndexShardSnapshot snapshot = loadShardSnapshot(container, snapshotId);
        final SnapshotFiles snapshotFiles = new SnapshotFiles(snapshot.snapshot(), snapshot.indexFiles());
        new FileRestoreContext(metadata.name(), shardId, snapshotId, recoveryState) {

            @Override
            protected void restoreFiles(List<BlobStoreIndexShardSnapshot.FileInfo> filesToRecover, Store store, ActionListener<Void> listener) {
                if (filesToRecover.isEmpty()) {
                    listener.onResponse(null);
                } else {
                    // Start as many workers as fit into the snapshot pool at once at the most
                    int maxPoolSize = executor instanceof ThreadPoolExecutor ? ((ThreadPoolExecutor) executor).getMaximumPoolSize() : 1;
                    final int workers = Math.min(maxPoolSize, snapshotFiles.indexFiles().size());
                    final BlockingQueue<BlobStoreIndexShardSnapshot.FileInfo> files = new LinkedBlockingQueue<>(filesToRecover);
                    final ActionListener<Void> allFilesListener = fileQueueListener(files, workers, ActionListener.map(listener, v -> null));
                    // restore the files from the snapshot to the Lucene store
                    for (int i = 0; i < workers; ++i) {
                        executor.execute(ActionRunnable.run(allFilesListener, () -> {
                            store.incRef();
                            try {
                                BlobStoreIndexShardSnapshot.FileInfo fileToRecover;
                                while ((fileToRecover = files.poll(0L, TimeUnit.MILLISECONDS)) != null) {
                                    restoreFile(fileToRecover, store);
                                }
                            } finally {
                                store.decRef();
                            }
                        }));
                    }
                }
            }

            private void restoreFile(BlobStoreIndexShardSnapshot.FileInfo fileInfo, Store store) throws IOException {
                boolean success = false;
                try (InputStream stream = maybeRateLimit(new SlicedInputStream(fileInfo.numberOfParts()) {

                    @Override
                    protected InputStream openSlice(long slice) throws IOException {
                        return container.readBlob(fileInfo.partName(slice));
                    }
                }, restoreRateLimiter, restoreRateLimitingTimeInNanos)) {
                    try (IndexOutput indexOutput = store.createVerifyingOutput(fileInfo.physicalName(), fileInfo.metadata(), IOContext.DEFAULT)) {
                        final byte[] buffer = new byte[BUFFER_SIZE];
                        int length;
                        while ((length = stream.read(buffer)) > 0) {
                            indexOutput.writeBytes(buffer, 0, length);
                            recoveryState.getIndex().addRecoveredBytesToFile(fileInfo.physicalName(), length);
                        }
                        Store.verify(indexOutput);
                        indexOutput.close();
                        store.directory().sync(Collections.singleton(fileInfo.physicalName()));
                        success = true;
                    } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException ex) {
                        try {
                            store.markStoreCorrupted(ex);
                        } catch (IOException e) {
                            LOGGER.warn("store cannot be marked as corrupted", e);
                        }
                        throw ex;
                    } finally {
                        if (success == false) {
                            store.deleteQuiet(fileInfo.physicalName());
                        }
                    }
                }
            }
        }.restore(snapshotFiles, store, l);
    }));
}
Also used : ShardId(org.elasticsearch.index.shard.ShardId) SnapshotFiles(org.elasticsearch.index.snapshots.blobstore.SnapshotFiles) IndexShardSnapshotFailedException(org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException) ByteSizeUnit(org.elasticsearch.common.unit.ByteSizeUnit) IndexFormatTooNewException(org.apache.lucene.index.IndexFormatTooNewException) IndexMetadata(org.elasticsearch.cluster.metadata.IndexMetadata) AllocationService(org.elasticsearch.cluster.routing.allocation.AllocationService) ClusterState(org.elasticsearch.cluster.ClusterState) ClusterStateUpdateTask(org.elasticsearch.cluster.ClusterStateUpdateTask) Map(java.util.Map) BlobContainer(org.elasticsearch.common.blobstore.BlobContainer) RateLimitingInputStream(org.elasticsearch.index.snapshots.blobstore.RateLimitingInputStream) IOContext(org.apache.lucene.store.IOContext) InvalidArgumentException(io.crate.exceptions.InvalidArgumentException) SnapshotDeletionsInProgress(org.elasticsearch.cluster.SnapshotDeletionsInProgress) UUIDs(org.elasticsearch.common.UUIDs) Set(java.util.Set) BlockingQueue(java.util.concurrent.BlockingQueue) StandardCharsets(java.nio.charset.StandardCharsets) AbstractRunnable(org.elasticsearch.common.util.concurrent.AbstractRunnable) Stream(java.util.stream.Stream) Logger(org.apache.logging.log4j.Logger) InputStreamIndexInput(org.elasticsearch.common.lucene.store.InputStreamIndexInput) BlobStore(org.elasticsearch.common.blobstore.BlobStore) SnapshotException(org.elasticsearch.snapshots.SnapshotException) FileInfo.canonicalName(org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo.canonicalName) IndexCommit(org.apache.lucene.index.IndexCommit) XContentFactory(org.elasticsearch.common.xcontent.XContentFactory) SnapshotId(org.elasticsearch.snapshots.SnapshotId) Tuple(io.crate.common.collections.Tuple) ShardGenerations(org.elasticsearch.repositories.ShardGenerations) ClusterService(org.elasticsearch.cluster.service.ClusterService) SnapshotShardFailure(org.elasticsearch.snapshots.SnapshotShardFailure) BytesStreamOutput(org.elasticsearch.common.io.stream.BytesStreamOutput) LoggingDeprecationHandler(org.elasticsearch.common.xcontent.LoggingDeprecationHandler) ArrayList(java.util.ArrayList) BytesArray(org.elasticsearch.common.bytes.BytesArray) Metadata(org.elasticsearch.cluster.metadata.Metadata) DiscoveryNode(org.elasticsearch.cluster.node.DiscoveryNode) Store(org.elasticsearch.index.store.Store) Nullable(javax.annotation.Nullable) LongStream(java.util.stream.LongStream) IndexInput(org.apache.lucene.store.IndexInput) SetOnce(org.apache.lucene.util.SetOnce) Executor(java.util.concurrent.Executor) IOException(java.io.IOException) XContentParser(org.elasticsearch.common.xcontent.XContentParser) AtomicLong(java.util.concurrent.atomic.AtomicLong) CounterMetric(org.elasticsearch.common.metrics.CounterMetric) ActionListener(org.elasticsearch.action.ActionListener) FsBlobContainer(org.elasticsearch.common.blobstore.fs.FsBlobContainer) SnapshotMissingException(org.elasticsearch.snapshots.SnapshotMissingException) NoSuchFileException(java.nio.file.NoSuchFileException) ConcurrentSnapshotExecutionException(org.elasticsearch.snapshots.ConcurrentSnapshotExecutionException) SnapshotInfo(org.elasticsearch.snapshots.SnapshotInfo) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) StoreFileMetadata(org.elasticsearch.index.store.StoreFileMetadata) RepositoryMetadata(org.elasticsearch.cluster.metadata.RepositoryMetadata) Settings(org.elasticsearch.common.settings.Settings) 
Locale(java.util.Locale) Streams(org.elasticsearch.common.io.Streams) ThreadPool(org.elasticsearch.threadpool.ThreadPool) IndexShardRestoreFailedException(org.elasticsearch.index.snapshots.IndexShardRestoreFailedException) ActionRunnable(org.elasticsearch.action.ActionRunnable) StepListener(org.elasticsearch.action.StepListener) NamedXContentRegistry(org.elasticsearch.common.xcontent.NamedXContentRegistry) RepositoryException(org.elasticsearch.repositories.RepositoryException) ByteSizeValue(org.elasticsearch.common.unit.ByteSizeValue) NotXContentException(org.elasticsearch.common.compress.NotXContentException) Setting(org.elasticsearch.common.settings.Setting) Collection(java.util.Collection) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) BlobMetadata(org.elasticsearch.common.blobstore.BlobMetadata) BytesReference(org.elasticsearch.common.bytes.BytesReference) LinkedBlockingQueue(java.util.concurrent.LinkedBlockingQueue) Collectors(java.util.stream.Collectors) IndexShardSnapshotException(org.elasticsearch.index.snapshots.IndexShardSnapshotException) MapperService(org.elasticsearch.index.mapper.MapperService) List(java.util.List) BlobStoreIndexShardSnapshot(org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot) Version(org.elasticsearch.Version) RecoveryState(org.elasticsearch.indices.recovery.RecoveryState) RepositoryData(org.elasticsearch.repositories.RepositoryData) ThreadPoolExecutor(java.util.concurrent.ThreadPoolExecutor) XContentType(org.elasticsearch.common.xcontent.XContentType) IndexShardSnapshotStatus(org.elasticsearch.index.snapshots.IndexShardSnapshotStatus) Index(org.elasticsearch.index.Index) Lucene(org.elasticsearch.common.lucene.Lucene) ParameterizedMessage(org.apache.logging.log4j.message.ParameterizedMessage) IndexId(org.elasticsearch.repositories.IndexId) FilterInputStream(java.io.FilterInputStream) RepositoriesMetadata(org.elasticsearch.cluster.metadata.RepositoriesMetadata) RepositoryVerificationException(org.elasticsearch.repositories.RepositoryVerificationException) BlobPath(org.elasticsearch.common.blobstore.BlobPath) IndexOutput(org.apache.lucene.store.IndexOutput) Numbers(org.elasticsearch.common.Numbers) Repository(org.elasticsearch.repositories.Repository) SnapshotsService(org.elasticsearch.snapshots.SnapshotsService) GroupedActionListener(org.elasticsearch.action.support.GroupedActionListener) IndexFormatTooOldException(org.apache.lucene.index.IndexFormatTooOldException) AbstractLifecycleComponent(org.elasticsearch.common.component.AbstractLifecycleComponent) TimeUnit(java.util.concurrent.TimeUnit) Consumer(java.util.function.Consumer) ExceptionsHelper(org.elasticsearch.ExceptionsHelper) SlicedInputStream(org.elasticsearch.index.snapshots.blobstore.SlicedInputStream) SnapshotsInProgress(org.elasticsearch.cluster.SnapshotsInProgress) BlobStoreIndexShardSnapshots(org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshots) Collections(java.util.Collections) LogManager(org.apache.logging.log4j.LogManager) RepositoryOperation(org.elasticsearch.repositories.RepositoryOperation) Snapshot(org.elasticsearch.snapshots.Snapshot) RateLimiter(org.apache.lucene.store.RateLimiter) InputStream(java.io.InputStream) BlobStoreIndexShardSnapshot(org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot) IndexShardRestoreFailedException(org.elasticsearch.index.snapshots.IndexShardRestoreFailedException) BlobStore(org.elasticsearch.common.blobstore.BlobStore) Store(org.elasticsearch.index.store.Store) 
LinkedBlockingQueue(java.util.concurrent.LinkedBlockingQueue) ShardId(org.elasticsearch.index.shard.ShardId) SnapshotFiles(org.elasticsearch.index.snapshots.blobstore.SnapshotFiles) Executor(java.util.concurrent.Executor) ThreadPoolExecutor(java.util.concurrent.ThreadPoolExecutor) IndexFormatTooOldException(org.apache.lucene.index.IndexFormatTooOldException) ArrayList(java.util.ArrayList) List(java.util.List) RateLimitingInputStream(org.elasticsearch.index.snapshots.blobstore.RateLimitingInputStream) FilterInputStream(java.io.FilterInputStream) SlicedInputStream(org.elasticsearch.index.snapshots.blobstore.SlicedInputStream) InputStream(java.io.InputStream) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) IndexOutput(org.apache.lucene.store.IndexOutput) IOException(java.io.IOException) ActionListener(org.elasticsearch.action.ActionListener) GroupedActionListener(org.elasticsearch.action.support.GroupedActionListener) BlobContainer(org.elasticsearch.common.blobstore.BlobContainer) FsBlobContainer(org.elasticsearch.common.blobstore.fs.FsBlobContainer) ThreadPoolExecutor(java.util.concurrent.ThreadPoolExecutor) SlicedInputStream(org.elasticsearch.index.snapshots.blobstore.SlicedInputStream) IndexFormatTooNewException(org.apache.lucene.index.IndexFormatTooNewException)
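
Stripped of the Elasticsearch plumbing, the restoreFiles fan-out is a bounded worker pattern: enqueue every file, start min(poolSize, files) workers, and let each worker poll the queue until it is empty. A dependency-free sketch of just that pattern (names are illustrative; in the real code the grouped fileQueueListener, rather than a latch, detects completion):

// Stand-alone illustration of the bounded fan-out used in restoreFiles above,
// using plain java.util.concurrent types instead of ActionListener plumbing.
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

final class BoundedFanOutSketch {

    static <T> void drainWithWorkers(ExecutorService executor, int maxWorkers,
                                     List<T> items, Consumer<T> task) throws InterruptedException {
        final BlockingQueue<T> queue = new LinkedBlockingQueue<>(items);
        final int workers = Math.min(maxWorkers, items.size());
        final CountDownLatch done = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            executor.execute(() -> {
                try {
                    T item;
                    // poll() returns null once the queue is drained, which ends this worker
                    while ((item = queue.poll()) != null) {
                        task.accept(item);
                    }
                } finally {
                    done.countDown();
                }
            });
        }
        done.await(); // all workers finished
    }
}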

Aggregations

CorruptIndexException (org.apache.lucene.index.CorruptIndexException): 93
IndexFormatTooNewException (org.apache.lucene.index.IndexFormatTooNewException): 35
IndexFormatTooOldException (org.apache.lucene.index.IndexFormatTooOldException): 35
Directory (org.apache.lucene.store.Directory): 26
IOException (java.io.IOException): 25
ChecksumIndexInput (org.apache.lucene.store.ChecksumIndexInput): 24
IndexInput (org.apache.lucene.store.IndexInput): 24
IndexOutput (org.apache.lucene.store.IndexOutput): 23
ArrayList (java.util.ArrayList): 16
RAMDirectory (org.apache.lucene.store.RAMDirectory): 15
BytesRef (org.apache.lucene.util.BytesRef): 14
FileNotFoundException (java.io.FileNotFoundException): 12
ShardId (org.elasticsearch.index.shard.ShardId): 12
NoSuchFileException (java.nio.file.NoSuchFileException): 11
IOContext (org.apache.lucene.store.IOContext): 11
EOFException (java.io.EOFException): 10
HashMap (java.util.HashMap): 10
AlreadyClosedException (org.apache.lucene.store.AlreadyClosedException): 10
FilterDirectory (org.apache.lucene.store.FilterDirectory): 10
Document (org.apache.lucene.document.Document): 8