
Example 1 with EngineConfig

Use of org.elasticsearch.index.engine.EngineConfig in project elasticsearch by elastic.

In class IndexShard, the method internalPerformTranslogRecovery:

private void internalPerformTranslogRecovery(boolean skipTranslogRecovery, boolean indexExists, long maxUnsafeAutoIdTimestamp) throws IOException {
    if (state != IndexShardState.RECOVERING) {
        throw new IndexShardNotRecoveringException(shardId, state);
    }
    recoveryState.setStage(RecoveryState.Stage.VERIFY_INDEX);
    // also check here, before we apply the translog
    if (Booleans.isTrue(checkIndexOnStartup)) {
        try {
            checkIndex();
        } catch (IOException ex) {
            throw new RecoveryFailedException(recoveryState, "check index failed", ex);
        }
    }
    recoveryState.setStage(RecoveryState.Stage.TRANSLOG);
    final EngineConfig.OpenMode openMode;
    /* By default we recover an index and replay the translog, but if the index
     * doesn't exist we create everything from scratch. In that case we also
     * don't need to worry about skipTranslogRecovery, since there is no
     * translog on a non-existing index.
     * The skipTranslogRecovery flag is used for remote recovery, where the
     * translog isn't local but on the remote host, hence we can skip it.
     */
    if (indexExists == false) {
        openMode = EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG;
    } else if (skipTranslogRecovery) {
        openMode = EngineConfig.OpenMode.OPEN_INDEX_CREATE_TRANSLOG;
    } else {
        openMode = EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG;
    }
    final EngineConfig config = newEngineConfig(openMode, maxUnsafeAutoIdTimestamp);
    // we disable deletes since we allow operations to be executed against the shard while recovering,
    // but we need to make sure we don't lose deletes until we are done recovering
    config.setEnableGcDeletes(false);
    Engine newEngine = createNewEngine(config);
    verifyNotClosed();
    if (openMode == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) {
        // We set active because we are now writing operations to the engine; this way, if we go idle after some time and become inactive,
        // we still give sync'd flush a chance to run:
        active.set(true);
        newEngine.recoverFromTranslog();
    }
}
Also used: RecoveryFailedException (org.elasticsearch.indices.recovery.RecoveryFailedException), IOException (java.io.IOException), EngineConfig (org.elasticsearch.index.engine.EngineConfig), Engine (org.elasticsearch.index.engine.Engine)
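The three-way OpenMode decision in the method above is a small pure function. It can be sketched in isolation; this is a minimal sketch, the enum constants mirror EngineConfig.OpenMode, but the class name is made up for illustration and nothing here depends on Elasticsearch:

```java
// Standalone sketch of the OpenMode decision in internalPerformTranslogRecovery.
// OpenModeChooser is a hypothetical name; the enum mirrors EngineConfig.OpenMode.
public class OpenModeChooser {

    enum OpenMode { CREATE_INDEX_AND_TRANSLOG, OPEN_INDEX_CREATE_TRANSLOG, OPEN_INDEX_AND_TRANSLOG }

    static OpenMode choose(boolean indexExists, boolean skipTranslogRecovery) {
        if (!indexExists) {
            // no index on disk: create both index and translog from scratch
            return OpenMode.CREATE_INDEX_AND_TRANSLOG;
        } else if (skipTranslogRecovery) {
            // remote recovery: the translog lives on the remote host, so start a fresh one
            return OpenMode.OPEN_INDEX_CREATE_TRANSLOG;
        } else {
            // normal local recovery: open the index and replay the existing translog
            return OpenMode.OPEN_INDEX_AND_TRANSLOG;
        }
    }

    public static void main(String[] args) {
        System.out.println(choose(false, false)); // CREATE_INDEX_AND_TRANSLOG
        System.out.println(choose(true, true));   // OPEN_INDEX_CREATE_TRANSLOG
        System.out.println(choose(true, false));  // OPEN_INDEX_AND_TRANSLOG
    }
}
```

Note that when indexExists is false, skipTranslogRecovery is irrelevant, which is exactly the invariant the comment in the original method describes.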

Example 2 with EngineConfig

Use of org.elasticsearch.index.engine.EngineConfig in project crate by crate.

In class IndexingMemoryControllerTests, the method configWithRefreshListener:

EngineConfig configWithRefreshListener(EngineConfig config, ReferenceManager.RefreshListener listener) {
    final List<ReferenceManager.RefreshListener> internalRefreshListener = new ArrayList<>(config.getInternalRefreshListener());
    internalRefreshListener.add(listener);
    return new EngineConfig(
        config.getShardId(),
        config.getAllocationId(),
        config.getThreadPool(),
        config.getIndexSettings(),
        config.getStore(),
        config.getMergePolicy(),
        config.getAnalyzer(),
        new CodecService(null, logger),
        config.getEventListener(),
        config.getQueryCache(),
        config.getQueryCachingPolicy(),
        config.getTranslogConfig(),
        config.getFlushMergesAfter(),
        config.getExternalRefreshListener(),
        internalRefreshListener,
        config.getCircuitBreakerService(),
        config.getGlobalCheckpointSupplier(),
        config.retentionLeasesSupplier(),
        config.getPrimaryTermSupplier(),
        config.getTombstoneDocSupplier());
}
Also used: CodecService (org.elasticsearch.index.codec.CodecService), ArrayList (java.util.ArrayList), EngineConfig (org.elasticsearch.index.engine.EngineConfig)
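The helper above follows a copy-and-extend pattern: take a defensive copy of the existing listener list, append the new listener, and build a fresh config rather than mutating the original. A minimal sketch of that pattern, with hypothetical stand-in types (Config and Runnable listeners, not Elasticsearch classes):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for a config object carrying refresh listeners.
class Config {
    private final List<Runnable> listeners;

    Config(List<Runnable> listeners) {
        // store an unmodifiable copy so callers cannot mutate this config later
        this.listeners = Collections.unmodifiableList(new ArrayList<>(listeners));
    }

    List<Runnable> getListeners() { return listeners; }

    // Copy-and-extend: never mutate the existing config's list in place.
    Config withListener(Runnable extra) {
        List<Runnable> copy = new ArrayList<>(listeners);
        copy.add(extra);
        return new Config(copy);
    }
}

public class CopyOnWriteConfigDemo {
    public static void main(String[] args) {
        Config base = new Config(new ArrayList<>());
        Config extended = base.withListener(() -> System.out.println("refresh"));
        System.out.println(base.getListeners().size());     // 0: the original is untouched
        System.out.println(extended.getListeners().size()); // 1: the copy carries the new listener
    }
}
```

The same reasoning explains why configWithRefreshListener builds an entirely new EngineConfig: the config is treated as immutable, so adding a listener means constructing a replacement.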

Example 3 with EngineConfig

Use of org.elasticsearch.index.engine.EngineConfig in project elasticsearch by elastic.

In class RefreshListenersTests, the method setupListeners:

@Before
public void setupListeners() throws Exception {
    // Setup dependencies of the listeners
    maxListeners = randomIntBetween(1, 1000);
    listeners = new RefreshListeners(
        () -> maxListeners,
        () -> engine.refresh("too-many-listeners"),
        // Immediately run listeners rather than adding them to the listener thread pool
        // like IndexShard does, to simplify the test.
        Runnable::run,
        logger);
    // Now setup the InternalEngine which is much more complicated because we aren't mocking anything
    threadPool = new TestThreadPool(getTestName());
    IndexSettings indexSettings = IndexSettingsModule.newIndexSettings("index", Settings.EMPTY);
    ShardId shardId = new ShardId(new Index("index", "_na_"), 1);
    Directory directory = newDirectory();
    DirectoryService directoryService = new DirectoryService(shardId, indexSettings) {

        @Override
        public Directory newDirectory() throws IOException {
            return directory;
        }
    };
    store = new Store(shardId, indexSettings, directoryService, new DummyShardLock(shardId));
    IndexWriterConfig iwc = newIndexWriterConfig();
    TranslogConfig translogConfig = new TranslogConfig(shardId, createTempDir("translog"), indexSettings, BigArrays.NON_RECYCLING_INSTANCE);
    Engine.EventListener eventListener = new Engine.EventListener() {

        @Override
        public void onFailedEngine(String reason, @Nullable Exception e) {
            // we don't need to notify anybody in this test
        }
    };
    TranslogHandler translogHandler = new TranslogHandler(xContentRegistry(), shardId.getIndexName(), logger);
    EngineConfig config = new EngineConfig(
        EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG,
        shardId,
        threadPool,
        indexSettings,
        null,
        store,
        new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy()),
        newMergePolicy(),
        iwc.getAnalyzer(),
        iwc.getSimilarity(),
        new CodecService(null, logger),
        eventListener,
        translogHandler,
        IndexSearcher.getDefaultQueryCache(),
        IndexSearcher.getDefaultQueryCachingPolicy(),
        translogConfig,
        TimeValue.timeValueMinutes(5),
        listeners,
        IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP);
    engine = new InternalEngine(config);
    listeners.setTranslog(engine.getTranslog());
}
Also used: KeepOnlyLastCommitDeletionPolicy (org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy), TranslogConfig (org.elasticsearch.index.translog.TranslogConfig), IndexSettings (org.elasticsearch.index.IndexSettings), Store (org.elasticsearch.index.store.Store), Index (org.elasticsearch.index.Index), DirectoryService (org.elasticsearch.index.store.DirectoryService), TestThreadPool (org.elasticsearch.threadpool.TestThreadPool), IOException (java.io.IOException), SnapshotDeletionPolicy (org.apache.lucene.index.SnapshotDeletionPolicy), InternalEngine (org.elasticsearch.index.engine.InternalEngine), TranslogHandler (org.elasticsearch.index.engine.InternalEngineTests.TranslogHandler), CodecService (org.elasticsearch.index.codec.CodecService), DummyShardLock (org.elasticsearch.test.DummyShardLock), EngineConfig (org.elasticsearch.index.engine.EngineConfig), Nullable (org.elasticsearch.common.Nullable), Engine (org.elasticsearch.index.engine.Engine), Directory (org.apache.lucene.store.Directory), IndexWriterConfig (org.apache.lucene.index.IndexWriterConfig), Before (org.junit.Before)
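One detail worth noting in this test: it passes Runnable::run where IndexShard would pass a thread-pool executor, so refresh listeners fire synchronously on the calling thread and the test stays deterministic. The same trick works anywhere a java.util.concurrent.Executor is accepted; a minimal sketch (the class name is made up, nothing here is Elasticsearch-specific):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicBoolean;

public class SameThreadExecutorDemo {
    public static void main(String[] args) {
        // Runnable::run satisfies Executor and runs each task inline on the
        // caller's thread, which makes "asynchronous" code deterministic in tests.
        Executor sameThread = Runnable::run;

        AtomicBoolean ran = new AtomicBoolean(false);
        sameThread.execute(() -> ran.set(true));

        // the task has already completed by the time execute() returns
        System.out.println(ran.get()); // true
    }
}
```

The trade-off is that any blocking or reentrant behavior in the task now happens on the submitting thread, which is exactly why production code (like IndexShard) hands listeners to a real thread pool instead.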

Example 4 with EngineConfig

Use of org.elasticsearch.index.engine.EngineConfig in project elasticsearch by elastic.

In class RecoveryDuringReplicationTests, the method testWaitForPendingSeqNo:

@TestLogging("_root:DEBUG,org.elasticsearch.action.bulk:TRACE,org.elasticsearch.action.get:TRACE," + "org.elasticsearch.discovery:TRACE," + "org.elasticsearch.cluster.service:TRACE,org.elasticsearch.indices.recovery:TRACE," + "org.elasticsearch.indices.cluster:TRACE,org.elasticsearch.index.shard:TRACE," + "org.elasticsearch.index.seqno:TRACE")
public void testWaitForPendingSeqNo() throws Exception {
    IndexMetaData metaData = buildIndexMetaData(1);
    final int pendingDocs = randomIntBetween(1, 5);
    final AtomicReference<Semaphore> blockIndexingOnPrimary = new AtomicReference<>();
    final CountDownLatch blockedIndexers = new CountDownLatch(pendingDocs);
    try (ReplicationGroup shards = new ReplicationGroup(metaData) {

        @Override
        protected EngineFactory getEngineFactory(ShardRouting routing) {
            if (routing.primary()) {
                return new EngineFactory() {

                    @Override
                    public Engine newReadWriteEngine(EngineConfig config) {
                        return InternalEngineTests.createInternalEngine((directory, writerConfig) -> new IndexWriter(directory, writerConfig) {

                            @Override
                            public long addDocument(Iterable<? extends IndexableField> doc) throws IOException {
                                Semaphore block = blockIndexingOnPrimary.get();
                                if (block != null) {
                                    blockedIndexers.countDown();
                                    try {
                                        block.acquire();
                                    } catch (InterruptedException e) {
                                        throw new AssertionError("unexpectedly interrupted", e);
                                    }
                                }
                                return super.addDocument(doc);
                            }
                        }, null, config);
                    }

                    @Override
                    public Engine newReadOnlyEngine(EngineConfig config) {
                        throw new UnsupportedOperationException();
                    }
                };
            } else {
                return null;
            }
        }
    }) {
        shards.startAll();
        int docs = shards.indexDocs(randomIntBetween(1, 10));
        IndexShard replica = shards.getReplicas().get(0);
        shards.removeReplica(replica);
        closeShards(replica);
        docs += pendingDocs;
        final Semaphore pendingDocsSemaphore = new Semaphore(pendingDocs);
        blockIndexingOnPrimary.set(pendingDocsSemaphore);
        blockIndexingOnPrimary.get().acquire(pendingDocs);
        CountDownLatch pendingDocsDone = new CountDownLatch(pendingDocs);
        for (int i = 0; i < pendingDocs; i++) {
            final String id = "pending_" + i;
            threadPool.generic().submit(() -> {
                try {
                    shards.index(new IndexRequest(index.getName(), "type", id).source("{}", XContentType.JSON));
                } catch (Exception e) {
                    throw new AssertionError(e);
                } finally {
                    pendingDocsDone.countDown();
                }
            });
        }
        // wait for the pending ops to "hang"
        blockedIndexers.await();
        blockIndexingOnPrimary.set(null);
        // index some more
        docs += shards.indexDocs(randomInt(5));
        IndexShard newReplica = shards.addReplicaWithExistingPath(replica.shardPath(), replica.routingEntry().currentNodeId());
        CountDownLatch recoveryStart = new CountDownLatch(1);
        AtomicBoolean preparedForTranslog = new AtomicBoolean(false);
        final Future<Void> recoveryFuture = shards.asyncRecoverReplica(newReplica, (indexShard, node) -> {
            recoveryStart.countDown();
            return new RecoveryTarget(indexShard, node, recoveryListener, l -> {
            }) {

                @Override
                public void prepareForTranslogOperations(int totalTranslogOps, long maxUnsafeAutoIdTimestamp) throws IOException {
                    preparedForTranslog.set(true);
                    super.prepareForTranslogOperations(totalTranslogOps, maxUnsafeAutoIdTimestamp);
                }
            };
        });
        recoveryStart.await();
        for (int i = 0; i < pendingDocs; i++) {
            assertFalse((pendingDocs - i) + " pending operations, recovery should wait", preparedForTranslog.get());
            pendingDocsSemaphore.release();
        }
        pendingDocsDone.await();
        // now recovery can finish
        recoveryFuture.get();
        assertThat(newReplica.recoveryState().getIndex().fileDetails(), empty());
        assertThat(newReplica.recoveryState().getTranslog().recoveredOperations(), equalTo(docs));
        shards.assertAllEqual(docs);
    }
}
Also used: Semaphore (java.util.concurrent.Semaphore), RecoveryTarget (org.elasticsearch.indices.recovery.RecoveryTarget), IndexRequest (org.elasticsearch.action.index.IndexRequest), EngineFactory (org.elasticsearch.index.engine.EngineFactory), IndexShard (org.elasticsearch.index.shard.IndexShard), AtomicReference (java.util.concurrent.atomic.AtomicReference), IOException (java.io.IOException), CountDownLatch (java.util.concurrent.CountDownLatch), IndexMetaData (org.elasticsearch.cluster.metadata.IndexMetaData), AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean), IndexWriter (org.apache.lucene.index.IndexWriter), EngineConfig (org.elasticsearch.index.engine.EngineConfig), ShardRouting (org.elasticsearch.cluster.routing.ShardRouting), TestLogging (org.elasticsearch.test.junit.annotations.TestLogging)
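The blocking machinery in this test is a reusable concurrency pattern: drain a Semaphore up front so workers park on acquire(), use a CountDownLatch to observe that they have reached the gate, then release permits in a controlled way. A minimal sketch of just that pattern, outside Elasticsearch (class name is hypothetical):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;

public class BlockedWorkersDemo {
    public static void main(String[] args) throws InterruptedException {
        final int workers = 3;

        // Drain all permits up front so every worker blocks on acquire(),
        // mirroring blockIndexingOnPrimary.get().acquire(pendingDocs) in the test.
        Semaphore gate = new Semaphore(workers);
        gate.acquire(workers);

        CountDownLatch parked = new CountDownLatch(workers); // like blockedIndexers
        CountDownLatch done = new CountDownLatch(workers);   // like pendingDocsDone

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                parked.countDown();        // signal "about to block"
                try {
                    gate.acquire();        // hang here until a permit is released
                } catch (InterruptedException e) {
                    throw new AssertionError("unexpectedly interrupted", e);
                }
                done.countDown();
            }).start();
        }

        parked.await();                    // all workers are at (or approaching) the gate
        gate.release(workers);             // let them all through
        done.await();
        System.out.println("all workers released");
    }
}
```

Releasing permits one at a time (as the test does with pendingDocsSemaphore.release() in a loop) lets the test assert intermediate state, e.g. that recovery is still waiting while any operation remains pending.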

Example 5 with EngineConfig

Use of org.elasticsearch.index.engine.EngineConfig in project crate by crate.

In class IndexShard, the method innerOpenEngineAndTranslog:

private void innerOpenEngineAndTranslog(LongSupplier globalCheckpointSupplier) throws IOException {
    assert Thread.holdsLock(mutex) == false : "opening engine under mutex";
    if (state != IndexShardState.RECOVERING) {
        throw new IndexShardNotRecoveringException(shardId, state);
    }
    final EngineConfig config = newEngineConfig(globalCheckpointSupplier);
    // we disable deletes since we allow operations to be executed against the shard while recovering,
    // but we need to make sure we don't lose deletes until we are done recovering
    config.setEnableGcDeletes(false);
    updateRetentionLeasesOnReplica(loadRetentionLeases());
    assert recoveryState.getRecoverySource().expectEmptyRetentionLeases() == false || getRetentionLeases().leases().isEmpty() : "expected empty set of retention leases with recovery source [" + recoveryState.getRecoverySource() + "] but got " + getRetentionLeases();
    synchronized (engineMutex) {
        assert currentEngineReference.get() == null : "engine is running";
        verifyNotClosed();
        // we must create a new engine under mutex (see IndexShard#snapshotStoreMetadata).
        final Engine newEngine = engineFactory.newReadWriteEngine(config);
        onNewEngine(newEngine);
        currentEngineReference.set(newEngine);
        // We set active because we are now writing operations to the engine; this way,
        // if we go idle after some time and become inactive, we still give sync'd flush a chance to run.
        active.set(true);
    }
    // time elapses after the engine is created above (pulling the config settings) until we set the engine reference, during
    // which settings changes could possibly have happened, so here we forcefully push any config changes to the new engine.
    onSettingsChanged();
    assert assertSequenceNumbersInCommit();
    assert recoveryState.getStage() == RecoveryState.Stage.TRANSLOG : "TRANSLOG stage expected but was: " + recoveryState.getStage();
}
Also used: EngineConfig (org.elasticsearch.index.engine.EngineConfig), ReadOnlyEngine (org.elasticsearch.index.engine.ReadOnlyEngine), Engine (org.elasticsearch.index.engine.Engine)
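The publication pattern in innerOpenEngineAndTranslog (create the engine under a dedicated mutex after checking no engine is already running, publish it through an AtomicReference, then re-apply settings outside the critical section) can be sketched generically. EngineHolder, openEngine, and onSettingsChanged are hypothetical stand-ins, not Elasticsearch API:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Generic sketch of the publish-under-mutex pattern used by IndexShard.
public class EngineHolder<E> {
    private final Object engineMutex = new Object();
    private final AtomicReference<E> current = new AtomicReference<>();

    E openEngine(Supplier<E> factory) {
        final E engine;
        synchronized (engineMutex) {
            if (current.get() != null) {
                throw new IllegalStateException("engine is running");
            }
            engine = factory.get();   // create under the mutex, mirroring IndexShard
            current.set(engine);      // publish the reference before leaving the lock
        }
        // Settings could have changed between reading the config and publishing the
        // engine, so forcibly re-apply them once the reference is visible.
        onSettingsChanged(engine);
        return engine;
    }

    void onSettingsChanged(E engine) { /* push current settings to the engine */ }

    E get() { return current.get(); }

    public static void main(String[] args) {
        EngineHolder<String> holder = new EngineHolder<>();
        System.out.println(holder.openEngine(() -> "engine")); // engine
    }
}
```

The double-publication guard (throwing if current is non-null) corresponds to the assertion `currentEngineReference.get() == null : "engine is running"` in the original, and the trailing onSettingsChanged call mirrors the comment about settings changes racing with engine creation.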

Aggregations

EngineConfig (org.elasticsearch.index.engine.EngineConfig): 5
IOException (java.io.IOException): 3
Engine (org.elasticsearch.index.engine.Engine): 3
CodecService (org.elasticsearch.index.codec.CodecService): 2
ArrayList (java.util.ArrayList): 1
CountDownLatch (java.util.concurrent.CountDownLatch): 1
Semaphore (java.util.concurrent.Semaphore): 1
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 1
AtomicReference (java.util.concurrent.atomic.AtomicReference): 1
IndexWriter (org.apache.lucene.index.IndexWriter): 1
IndexWriterConfig (org.apache.lucene.index.IndexWriterConfig): 1
KeepOnlyLastCommitDeletionPolicy (org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy): 1
SnapshotDeletionPolicy (org.apache.lucene.index.SnapshotDeletionPolicy): 1
Directory (org.apache.lucene.store.Directory): 1
IndexRequest (org.elasticsearch.action.index.IndexRequest): 1
IndexMetaData (org.elasticsearch.cluster.metadata.IndexMetaData): 1
ShardRouting (org.elasticsearch.cluster.routing.ShardRouting): 1
Nullable (org.elasticsearch.common.Nullable): 1
Index (org.elasticsearch.index.Index): 1
IndexSettings (org.elasticsearch.index.IndexSettings): 1