
Example 1 with ReplicationTracker

Use of org.elasticsearch.index.seqno.ReplicationTracker in project crate by crate.

In the class EngineTestCase, the method config:

public EngineConfig config(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, ReferenceManager.RefreshListener externalRefreshListener, ReferenceManager.RefreshListener internalRefreshListener, @Nullable LongSupplier maybeGlobalCheckpointSupplier, @Nullable Supplier<RetentionLeases> maybeRetentionLeasesSupplier) {
    IndexWriterConfig iwc = newIndexWriterConfig();
    TranslogConfig translogConfig = new TranslogConfig(shardId, translogPath, indexSettings, BigArrays.NON_RECYCLING_INSTANCE);
    Engine.EventListener eventListener = new Engine.EventListener() {

        @Override
        public void onFailedEngine(String reason, @Nullable Exception e) {
        // we don't need to notify anybody in this test
        }
    };
    final List<ReferenceManager.RefreshListener> extRefreshListenerList = externalRefreshListener == null ? emptyList() : Collections.singletonList(externalRefreshListener);
    final List<ReferenceManager.RefreshListener> intRefreshListenerList = internalRefreshListener == null ? emptyList() : Collections.singletonList(internalRefreshListener);
    final LongSupplier globalCheckpointSupplier;
    final Supplier<RetentionLeases> retentionLeasesSupplier;
    if (maybeGlobalCheckpointSupplier == null) {
        assert maybeRetentionLeasesSupplier == null;
        final ReplicationTracker replicationTracker = new ReplicationTracker(shardId, allocationId.getId(), indexSettings, randomNonNegativeLong(), SequenceNumbers.NO_OPS_PERFORMED, update -> {
        }, () -> 0L, (leases, listener) -> {
        }, () -> SafeCommitInfo.EMPTY);
        globalCheckpointSupplier = replicationTracker;
        retentionLeasesSupplier = replicationTracker::getRetentionLeases;
    } else {
        assert maybeRetentionLeasesSupplier != null;
        globalCheckpointSupplier = maybeGlobalCheckpointSupplier;
        retentionLeasesSupplier = maybeRetentionLeasesSupplier;
    }
    return new EngineConfig(shardId, allocationId.getId(), threadPool, indexSettings, store, mergePolicy, iwc.getAnalyzer(), new CodecService(null, logger), eventListener, IndexSearcher.getDefaultQueryCache(), IndexSearcher.getDefaultQueryCachingPolicy(), translogConfig, TimeValue.timeValueMinutes(5), extRefreshListenerList, intRefreshListenerList, new NoneCircuitBreakerService(), globalCheckpointSupplier, retentionLeasesSupplier, primaryTerm, tombstoneDocSupplier());
}
Also used : TranslogConfig(org.elasticsearch.index.translog.TranslogConfig) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException) IOException(java.io.IOException) RetentionLeases(org.elasticsearch.index.seqno.RetentionLeases) ReplicationTracker(org.elasticsearch.index.seqno.ReplicationTracker) CodecService(org.elasticsearch.index.codec.CodecService) LongSupplier(java.util.function.LongSupplier) Nullable(javax.annotation.Nullable) LiveIndexWriterConfig(org.apache.lucene.index.LiveIndexWriterConfig) IndexWriterConfig(org.apache.lucene.index.IndexWriterConfig) NoneCircuitBreakerService(org.elasticsearch.indices.breaker.NoneCircuitBreakerService)
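
The interesting branch here is the fallback when no global checkpoint supplier is passed in: a single standalone ReplicationTracker then serves both as the LongSupplier for the global checkpoint and as the source of RetentionLeases. Below is a minimal sketch of that wiring with the positional lambda arguments labelled; shardId, allocationId and indexSettings are assumed to be the EngineTestCase fixtures used above, and the argument descriptions in the comments are informal labels, not the official parameter names.

    // Fallback wiring from the method above, with the constructor arguments annotated.
    // shardId, allocationId and indexSettings are EngineTestCase fixtures (assumed here).
    ReplicationTracker replicationTracker = new ReplicationTracker(
            shardId,
            allocationId.getId(),
            indexSettings,
            randomNonNegativeLong(),              // operation primary term (any non-negative value is fine in tests)
            SequenceNumbers.NO_OPS_PERFORMED,     // initial global checkpoint
            update -> {},                         // global checkpoint update callback: no-op in tests
            () -> 0L,                             // current-time supplier used for retention lease bookkeeping
            (leases, listener) -> {},             // retention lease sync callback: no-op in tests
            () -> SafeCommitInfo.EMPTY);          // safe commit info supplier
    // ReplicationTracker itself acts as the LongSupplier returning the global checkpoint,
    // so the same instance backs both suppliers handed to EngineConfig.
    LongSupplier globalCheckpointSupplier = replicationTracker;
    Supplier<RetentionLeases> retentionLeasesSupplier = replicationTracker::getRetentionLeases;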

Example 2 with ReplicationTracker

Use of org.elasticsearch.index.seqno.ReplicationTracker in project crate by crate.

In the class IndexShard, the method resetEngineToGlobalCheckpoint:

/**
 * Rollback the current engine to the safe commit, then replay local translog up to the global checkpoint.
 */
void resetEngineToGlobalCheckpoint() throws IOException {
    assert Thread.holdsLock(mutex) == false : "resetting engine under mutex";
    assert getActiveOperationsCount() == OPERATIONS_BLOCKED : "resetting engine without blocking operations; active operations are [" + getActiveOperations() + ']';
    // persist the global checkpoint to disk
    sync();
    final SeqNoStats seqNoStats = seqNoStats();
    final TranslogStats translogStats = translogStats();
    // flush to make sure the latest commit, which will be opened by the read-only engine, includes all operations.
    flush(new FlushRequest().waitIfOngoing(true));
    SetOnce<Engine> newEngineReference = new SetOnce<>();
    final long globalCheckpoint = getLastKnownGlobalCheckpoint();
    assert globalCheckpoint == getLastSyncedGlobalCheckpoint();
    synchronized (engineMutex) {
        verifyNotClosed();
        // we must create both the new read-only engine and the new read-write engine under engineMutex to ensure that
        // snapshotStoreMetadata, acquireXXXCommit and close work.
        final Engine readOnlyEngine = new ReadOnlyEngine(newEngineConfig(replicationTracker), seqNoStats, translogStats, false, Function.identity()) {

            @Override
            public IndexCommitRef acquireLastIndexCommit(boolean flushFirst) {
                synchronized (engineMutex) {
                    if (newEngineReference.get() == null) {
                        throw new AlreadyClosedException("engine was closed");
                    }
                    // ignore flushFirst since we flushed above and we do not want to interfere with ongoing translog replay
                    return newEngineReference.get().acquireLastIndexCommit(false);
                }
            }

            @Override
            public IndexCommitRef acquireSafeIndexCommit() {
                synchronized (engineMutex) {
                    if (newEngineReference.get() == null) {
                        throw new AlreadyClosedException("engine was closed");
                    }
                    return newEngineReference.get().acquireSafeIndexCommit();
                }
            }

            @Override
            public void close() throws IOException {
                assert Thread.holdsLock(engineMutex);
                Engine newEngine = newEngineReference.get();
                if (newEngine == currentEngineReference.get()) {
                    // we successfully installed the new engine so do not close it.
                    newEngine = null;
                }
                IOUtils.close(super::close, newEngine);
            }
        };
        IOUtils.close(currentEngineReference.getAndSet(readOnlyEngine));
        newEngineReference.set(engineFactory.newReadWriteEngine(newEngineConfig(replicationTracker)));
        onNewEngine(newEngineReference.get());
    }
    final Engine.TranslogRecoveryRunner translogRunner = (engine, snapshot) -> runTranslogRecovery(engine, snapshot, Engine.Operation.Origin.LOCAL_RESET, () -> {
    // TODO: add dedicated recovery stats for the reset translog
    });
    newEngineReference.get().recoverFromTranslog(translogRunner, globalCheckpoint);
    newEngineReference.get().refresh("reset_engine");
    synchronized (engineMutex) {
        verifyNotClosed();
        IOUtils.close(currentEngineReference.getAndSet(newEngineReference.get()));
        // We set active because we are now writing operations to the engine; this way,
        // if we go idle after some time and become inactive, we still give sync'd flush a chance to run.
        active.set(true);
    }
    // time elapses after the engine is created above (pulling the config settings) until we set the engine reference, during
    // which settings changes could possibly have happened, so here we forcefully push any config changes to the new engine.
    onSettingsChanged();
}
Also used : Query(org.apache.lucene.search.Query) UpgradeRequest(org.elasticsearch.action.admin.indices.upgrade.post.UpgradeRequest) LongSupplier(java.util.function.LongSupplier) BigArrays(org.elasticsearch.common.util.BigArrays) IndexMetadata(org.elasticsearch.cluster.metadata.IndexMetadata) Term(org.apache.lucene.index.Term) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException) RecoveryStats(org.elasticsearch.index.recovery.RecoveryStats) ReferenceManager(org.apache.lucene.search.ReferenceManager) SeqNoStats(org.elasticsearch.index.seqno.SeqNoStats) UsageTrackingQueryCachingPolicy(org.apache.lucene.search.UsageTrackingQueryCachingPolicy) EngineConfig(org.elasticsearch.index.engine.EngineConfig) WriteStateException(org.elasticsearch.gateway.WriteStateException) IndexNotFoundException(org.elasticsearch.index.IndexNotFoundException) Map(java.util.Map) ObjectLongMap(com.carrotsearch.hppc.ObjectLongMap) QueryCachingPolicy(org.apache.lucene.search.QueryCachingPolicy) CheckedRunnable(org.elasticsearch.common.CheckedRunnable) EnumSet(java.util.EnumSet) PeerRecoveryTargetService(org.elasticsearch.indices.recovery.PeerRecoveryTargetService) Set(java.util.Set) StandardCharsets(java.nio.charset.StandardCharsets) ClosedByInterruptException(java.nio.channels.ClosedByInterruptException) Booleans(io.crate.common.Booleans) CountDownLatch(java.util.concurrent.CountDownLatch) RecoverySource(org.elasticsearch.cluster.routing.RecoverySource) AbstractRunnable(org.elasticsearch.common.util.concurrent.AbstractRunnable) Exceptions(io.crate.exceptions.Exceptions) Logger(org.apache.logging.log4j.Logger) RestStatus(org.elasticsearch.rest.RestStatus) ReplicationTracker(org.elasticsearch.index.seqno.ReplicationTracker) CopyOnWriteArrayList(java.util.concurrent.CopyOnWriteArrayList) ThreadInterruptedException(org.apache.lucene.util.ThreadInterruptedException) IndexCommit(org.apache.lucene.index.IndexCommit) StoreStats(org.elasticsearch.index.store.StoreStats) Tuple(io.crate.common.collections.Tuple) RecoveryFailedException(org.elasticsearch.indices.recovery.RecoveryFailedException) BytesStreamOutput(org.elasticsearch.common.io.stream.BytesStreamOutput) IndexModule(org.elasticsearch.index.IndexModule) CodecService(org.elasticsearch.index.codec.CodecService) ArrayList(java.util.ArrayList) CircuitBreakerService(org.elasticsearch.indices.breaker.CircuitBreakerService) XContentHelper(org.elasticsearch.common.xcontent.XContentHelper) RetentionLease(org.elasticsearch.index.seqno.RetentionLease) RetentionLeases(org.elasticsearch.index.seqno.RetentionLeases) IndexCache(org.elasticsearch.index.cache.IndexCache) Store(org.elasticsearch.index.store.Store) BiConsumer(java.util.function.BiConsumer) StreamSupport(java.util.stream.StreamSupport) IndicesService(org.elasticsearch.indices.IndicesService) TranslogConfig(org.elasticsearch.index.translog.TranslogConfig) Nullable(javax.annotation.Nullable) EngineException(org.elasticsearch.index.engine.EngineException) SourceToParse(org.elasticsearch.index.mapper.SourceToParse) AsyncIOProcessor(org.elasticsearch.common.util.concurrent.AsyncIOProcessor) SequenceNumbers(org.elasticsearch.index.seqno.SequenceNumbers) SetOnce(org.apache.lucene.util.SetOnce) IdFieldMapper(org.elasticsearch.index.mapper.IdFieldMapper) IndexService(org.elasticsearch.index.IndexService) IOUtils(io.crate.common.io.IOUtils) IOException(java.io.IOException) ParsedDocument(org.elasticsearch.index.mapper.ParsedDocument) RepositoriesService(org.elasticsearch.repositories.RepositoriesService) 
Segment(org.elasticsearch.index.engine.Segment) AtomicLong(java.util.concurrent.atomic.AtomicLong) ReplicationResponse(org.elasticsearch.action.support.replication.ReplicationResponse) CounterMetric(org.elasticsearch.common.metrics.CounterMetric) ActionListener(org.elasticsearch.action.ActionListener) ElasticsearchException(org.elasticsearch.ElasticsearchException) SafeCommitInfo(org.elasticsearch.index.engine.SafeCommitInfo) TimeoutException(java.util.concurrent.TimeoutException) SnapshotRecoverySource(org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource) VersionType(org.elasticsearch.index.VersionType) StoreFileMetadata(org.elasticsearch.index.store.StoreFileMetadata) Settings(org.elasticsearch.common.settings.Settings) ResyncTask(org.elasticsearch.index.shard.PrimaryReplicaSyncer.ResyncTask) Locale(java.util.Locale) ThreadPool(org.elasticsearch.threadpool.ThreadPool) ActionRunnable(org.elasticsearch.action.ActionRunnable) Releasable(org.elasticsearch.common.lease.Releasable) ByteSizeValue(org.elasticsearch.common.unit.ByteSizeValue) RefreshFailedEngineException(org.elasticsearch.index.engine.RefreshFailedEngineException) CheckIndex(org.apache.lucene.index.CheckIndex) UNASSIGNED_SEQ_NO(org.elasticsearch.index.seqno.SequenceNumbers.UNASSIGNED_SEQ_NO) IndexShardRoutingTable(org.elasticsearch.cluster.routing.IndexShardRoutingTable) Collectors(java.util.stream.Collectors) SegmentInfos(org.apache.lucene.index.SegmentInfos) ReadOnlyEngine(org.elasticsearch.index.engine.ReadOnlyEngine) Engine(org.elasticsearch.index.engine.Engine) Objects(java.util.Objects) MapperService(org.elasticsearch.index.mapper.MapperService) TranslogStats(org.elasticsearch.index.translog.TranslogStats) List(java.util.List) Version(org.elasticsearch.Version) MeanMetric(org.elasticsearch.common.metrics.MeanMetric) RetentionLeaseStats(org.elasticsearch.index.seqno.RetentionLeaseStats) MappingMetadata(org.elasticsearch.cluster.metadata.MappingMetadata) IndicesClusterStateService(org.elasticsearch.indices.cluster.IndicesClusterStateService) RecoveryState(org.elasticsearch.indices.recovery.RecoveryState) RetentionLeaseSyncer(org.elasticsearch.index.seqno.RetentionLeaseSyncer) TimeValue(io.crate.common.unit.TimeValue) Optional(java.util.Optional) ShardRouting(org.elasticsearch.cluster.routing.ShardRouting) CommitStats(org.elasticsearch.index.engine.CommitStats) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) CheckedConsumer(org.elasticsearch.common.CheckedConsumer) Index(org.elasticsearch.index.Index) Lucene(org.elasticsearch.common.lucene.Lucene) AtomicReference(java.util.concurrent.atomic.AtomicReference) Function(java.util.function.Function) ParameterizedMessage(org.apache.logging.log4j.message.ParameterizedMessage) HashSet(java.util.HashSet) ForceMergeRequest(org.elasticsearch.action.admin.indices.forcemerge.ForceMergeRequest) RootObjectMapper(org.elasticsearch.index.mapper.RootObjectMapper) MetadataSnapshot(org.elasticsearch.index.store.Store.MetadataSnapshot) IndexSettings(org.elasticsearch.index.IndexSettings) Mapping(org.elasticsearch.index.mapper.Mapping) DocumentMapper(org.elasticsearch.index.mapper.DocumentMapper) PrintStream(java.io.PrintStream) Repository(org.elasticsearch.repositories.Repository) Uid(org.elasticsearch.index.mapper.Uid) RecoveryTarget(org.elasticsearch.indices.recovery.RecoveryTarget) EngineFactory(org.elasticsearch.index.engine.EngineFactory) IndexingMemoryController(org.elasticsearch.indices.IndexingMemoryController) TimeUnit(java.util.concurrent.TimeUnit) 
Consumer(java.util.function.Consumer) ExceptionsHelper(org.elasticsearch.ExceptionsHelper) FlushRequest(org.elasticsearch.action.admin.indices.flush.FlushRequest) Closeable(java.io.Closeable) Assertions(org.elasticsearch.Assertions) Translog(org.elasticsearch.index.translog.Translog) Collections(java.util.Collections) RunOnce(org.elasticsearch.common.util.concurrent.RunOnce)
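
Stripped of the ReadOnlyEngine plumbing, the method is a two-phase swap guarded by engineMutex: park the shard on a read-only view, build a fresh read-write engine, replay the local translog up to the global checkpoint outside the lock, and only then promote the new engine. The following is a condensed sketch of that control flow, reusing the IndexShard members from the snippet above (engineMutex, currentEngineReference, engineFactory, replicationTracker, readOnlyEngine and translogRunner are as defined there).

    // Condensed control flow of resetEngineToGlobalCheckpoint (fields as in the snippet above).
    final SetOnce<Engine> newEngineReference = new SetOnce<>();
    final long globalCheckpoint = getLastKnownGlobalCheckpoint();
    synchronized (engineMutex) {
        // Phase 1: swap in the read-only view and create the replacement read-write engine.
        IOUtils.close(currentEngineReference.getAndSet(readOnlyEngine));
        newEngineReference.set(engineFactory.newReadWriteEngine(newEngineConfig(replicationTracker)));
        onNewEngine(newEngineReference.get());
    }
    // As in the method above, translog replay up to the global checkpoint runs outside engineMutex.
    newEngineReference.get().recoverFromTranslog(translogRunner, globalCheckpoint);
    newEngineReference.get().refresh("reset_engine");
    synchronized (engineMutex) {
        // Phase 2: promote the recovered engine; the ReadOnlyEngine's close() skips the new engine.
        IOUtils.close(currentEngineReference.getAndSet(newEngineReference.get()));
    }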

Example 3 with ReplicationTracker

Use of org.elasticsearch.index.seqno.ReplicationTracker in project crate by crate.

In the class NoOpEngineTests, the method testTrimUnreferencedTranslogFiles:

@Test
public void testTrimUnreferencedTranslogFiles() throws Exception {
    final ReplicationTracker tracker = (ReplicationTracker) engine.config().getGlobalCheckpointSupplier();
    ShardRouting routing = TestShardRouting.newShardRouting("test", shardId.id(), "node", null, true, ShardRoutingState.STARTED, allocationId);
    IndexShardRoutingTable table = new IndexShardRoutingTable.Builder(shardId).addShard(routing).build();
    tracker.updateFromMaster(1L, Collections.singleton(allocationId.getId()), table);
    tracker.activatePrimaryMode(SequenceNumbers.NO_OPS_PERFORMED);
    engine.onSettingsChanged(TimeValue.MINUS_ONE, ByteSizeValue.ZERO, randomNonNegativeLong());
    final int numDocs = scaledRandomIntBetween(10, 3000);
    int totalTranslogOps = 0;
    for (int i = 0; i < numDocs; i++) {
        totalTranslogOps++;
        engine.index(indexForDoc(createParsedDoc(Integer.toString(i), null)));
        tracker.updateLocalCheckpoint(allocationId.getId(), i);
        if (rarely()) {
            totalTranslogOps = 0;
            engine.flush();
        }
        if (randomBoolean()) {
            engine.rollTranslogGeneration();
        }
    }
    // prevent translog from trimming so we can test trimUnreferencedFiles in NoOpEngine.
    final Translog.Snapshot snapshot = engine.getTranslog().newSnapshot();
    engine.flush(true, true);
    engine.close();
    final NoOpEngine noOpEngine = new NoOpEngine(noOpConfig(INDEX_SETTINGS, store, primaryTranslogDir, tracker));
    assertThat(noOpEngine.getTranslogStats().estimatedNumberOfOperations(), equalTo(totalTranslogOps));
    noOpEngine.trimUnreferencedTranslogFiles();
    assertThat(noOpEngine.getTranslogStats().estimatedNumberOfOperations(), equalTo(0));
    assertThat(noOpEngine.getTranslogStats().getUncommittedOperations(), equalTo(0));
    assertThat(noOpEngine.getTranslogStats().getTranslogSizeInBytes(), equalTo((long) Translog.DEFAULT_HEADER_SIZE_IN_BYTES));
    snapshot.close();
    noOpEngine.close();
}
Also used : IndexShardRoutingTable(org.elasticsearch.cluster.routing.IndexShardRoutingTable) ReplicationTracker(org.elasticsearch.index.seqno.ReplicationTracker) ShardRouting(org.elasticsearch.cluster.routing.ShardRouting) TestShardRouting(org.elasticsearch.cluster.routing.TestShardRouting) Translog(org.elasticsearch.index.translog.Translog) Test(org.junit.Test)
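
The first few lines of the test are a recurring bootstrap: before a standalone ReplicationTracker will compute checkpoints, it must be given a routing table "from the master" and be activated in primary mode. Here is that bootstrap on its own, annotated; tracker, shardId and allocationId are assumed to come from the test fixture as above.

    // Primary-mode bootstrap used by the NoOpEngine tests (fixtures assumed from the test class).
    ShardRouting routing = TestShardRouting.newShardRouting(
            "test", shardId.id(), "node", null, true, ShardRoutingState.STARTED, allocationId);
    IndexShardRoutingTable table = new IndexShardRoutingTable.Builder(shardId).addShard(routing).build();
    tracker.updateFromMaster(1L, Collections.singleton(allocationId.getId()), table); // apply cluster state version 1
    tracker.activatePrimaryMode(SequenceNumbers.NO_OPS_PERFORMED);                    // start acting as the primary
    // From here on, every indexed operation advances the local checkpoint explicitly:
    tracker.updateLocalCheckpoint(allocationId.getId(), 0L); // 0L stands in for the seqNo of the last operation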

Example 4 with ReplicationTracker

Use of org.elasticsearch.index.seqno.ReplicationTracker in project crate by crate.

In the class NoOpEngineTests, the method testNoopAfterRegularEngine:

@Test
public void testNoopAfterRegularEngine() throws IOException {
    int docs = randomIntBetween(1, 10);
    ReplicationTracker tracker = (ReplicationTracker) engine.config().getGlobalCheckpointSupplier();
    ShardRouting routing = TestShardRouting.newShardRouting("test", shardId.id(), "node", null, true, ShardRoutingState.STARTED, allocationId);
    IndexShardRoutingTable table = new IndexShardRoutingTable.Builder(shardId).addShard(routing).build();
    tracker.updateFromMaster(1L, Collections.singleton(allocationId.getId()), table);
    tracker.activatePrimaryMode(SequenceNumbers.NO_OPS_PERFORMED);
    for (int i = 0; i < docs; i++) {
        ParsedDocument doc = testParsedDocument("" + i, null, testDocumentWithTextField(), B_1, null);
        engine.index(indexForDoc(doc));
        tracker.updateLocalCheckpoint(allocationId.getId(), i);
    }
    engine.flush(true, true);
    long localCheckpoint = engine.getPersistedLocalCheckpoint();
    long maxSeqNo = engine.getSeqNoStats(100L).getMaxSeqNo();
    engine.close();
    final NoOpEngine noOpEngine = new NoOpEngine(noOpConfig(INDEX_SETTINGS, store, primaryTranslogDir, tracker));
    assertThat(noOpEngine.getPersistedLocalCheckpoint(), equalTo(localCheckpoint));
    assertThat(noOpEngine.getSeqNoStats(100L).getMaxSeqNo(), equalTo(maxSeqNo));
    try (Engine.IndexCommitRef ref = noOpEngine.acquireLastIndexCommit(false)) {
        try (IndexReader reader = DirectoryReader.open(ref.getIndexCommit())) {
            assertThat(reader.numDocs(), equalTo(docs));
        }
    }
    noOpEngine.close();
}
Also used : IndexShardRoutingTable(org.elasticsearch.cluster.routing.IndexShardRoutingTable) ParsedDocument(org.elasticsearch.index.mapper.ParsedDocument) ReplicationTracker(org.elasticsearch.index.seqno.ReplicationTracker) IndexReader(org.apache.lucene.index.IndexReader) ShardRouting(org.elasticsearch.cluster.routing.ShardRouting) TestShardRouting(org.elasticsearch.cluster.routing.TestShardRouting) Test(org.junit.Test)
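
The closing assertions capture the invariant being tested: a NoOpEngine opened over the same store and translog must report exactly the sequence-number state the closed engine left behind. A slight variation of that check as a try-with-resources sketch; noOpConfig, INDEX_SETTINGS, store, primaryTranslogDir and tracker are the test fixtures used above.

    // Capture the state of the live engine, close it, then verify the NoOpEngine reports the same state.
    long localCheckpoint = engine.getPersistedLocalCheckpoint();
    long maxSeqNo = engine.getSeqNoStats(100L).getMaxSeqNo();
    engine.close();
    // try-with-resources keeps the NoOpEngine from leaking if an assertion fails.
    try (NoOpEngine noOpEngine = new NoOpEngine(noOpConfig(INDEX_SETTINGS, store, primaryTranslogDir, tracker))) {
        assertThat(noOpEngine.getPersistedLocalCheckpoint(), equalTo(localCheckpoint));
        assertThat(noOpEngine.getSeqNoStats(100L).getMaxSeqNo(), equalTo(maxSeqNo));
    }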

Example 5 with ReplicationTracker

Use of org.elasticsearch.index.seqno.ReplicationTracker in project crate by crate.

In the class InternalEngineTests, the method testSeqNoAndCheckpoints:

@Test
public void testSeqNoAndCheckpoints() throws IOException, InterruptedException {
    final int opCount = randomIntBetween(1, 256);
    long primarySeqNo = SequenceNumbers.NO_OPS_PERFORMED;
    final String[] ids = new String[] { "1", "2", "3" };
    final Set<String> indexedIds = new HashSet<>();
    long localCheckpoint = SequenceNumbers.NO_OPS_PERFORMED;
    long replicaLocalCheckpoint = SequenceNumbers.NO_OPS_PERFORMED;
    final long globalCheckpoint;
    long maxSeqNo = SequenceNumbers.NO_OPS_PERFORMED;
    IOUtils.close(store, engine);
    store = createStore();
    InternalEngine initialEngine = null;
    try {
        initialEngine = createEngine(defaultSettings, store, createTempDir(), newLogMergePolicy(), null);
        final ShardRouting primary = TestShardRouting.newShardRouting("test", shardId.id(), "node1", null, true, ShardRoutingState.STARTED, allocationId);
        final ShardRouting initializingReplica = TestShardRouting.newShardRouting(shardId, "node2", false, ShardRoutingState.INITIALIZING);
        ReplicationTracker gcpTracker = (ReplicationTracker) initialEngine.config().getGlobalCheckpointSupplier();
        gcpTracker.updateFromMaster(1L, new HashSet<>(Collections.singletonList(primary.allocationId().getId())), new IndexShardRoutingTable.Builder(shardId).addShard(primary).build());
        gcpTracker.activatePrimaryMode(primarySeqNo);
        if (defaultSettings.isSoftDeleteEnabled()) {
            final CountDownLatch countDownLatch = new CountDownLatch(1);
            gcpTracker.addPeerRecoveryRetentionLease(initializingReplica.currentNodeId(), SequenceNumbers.NO_OPS_PERFORMED, ActionListener.wrap(countDownLatch::countDown));
            countDownLatch.await(5, TimeUnit.SECONDS);
        }
        gcpTracker.updateFromMaster(2L, new HashSet<>(Collections.singletonList(primary.allocationId().getId())), new IndexShardRoutingTable.Builder(shardId).addShard(primary).addShard(initializingReplica).build());
        gcpTracker.initiateTracking(initializingReplica.allocationId().getId());
        gcpTracker.markAllocationIdAsInSync(initializingReplica.allocationId().getId(), replicaLocalCheckpoint);
        final ShardRouting replica = initializingReplica.moveToStarted();
        gcpTracker.updateFromMaster(3L, new HashSet<>(Arrays.asList(primary.allocationId().getId(), replica.allocationId().getId())), new IndexShardRoutingTable.Builder(shardId).addShard(primary).addShard(replica).build());
        for (int op = 0; op < opCount; op++) {
            final String id;
            // mostly index, sometimes delete
            if (rarely() && indexedIds.isEmpty() == false) {
                // we have some docs indexed, so delete one of them
                id = randomFrom(indexedIds);
                final Engine.Delete delete = new Engine.Delete(id, newUid(id), UNASSIGNED_SEQ_NO, primaryTerm.get(), rarely() ? 100 : Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), UNASSIGNED_SEQ_NO, 0);
                final Engine.DeleteResult result = initialEngine.delete(delete);
                if (result.getResultType() == Engine.Result.Type.SUCCESS) {
                    assertThat(result.getSeqNo(), equalTo(primarySeqNo + 1));
                    assertThat(initialEngine.getSeqNoStats(-1).getMaxSeqNo(), equalTo(primarySeqNo + 1));
                    indexedIds.remove(id);
                    primarySeqNo++;
                } else {
                    assertThat(result.getSeqNo(), equalTo(UNASSIGNED_SEQ_NO));
                    assertThat(initialEngine.getSeqNoStats(-1).getMaxSeqNo(), equalTo(primarySeqNo));
                }
            } else {
                // index a document
                id = randomFrom(ids);
                ParsedDocument doc = testParsedDocument(id, null, testDocumentWithTextField(), SOURCE, null);
                final Engine.Index index = new Engine.Index(newUid(doc), doc, UNASSIGNED_SEQ_NO, primaryTerm.get(), rarely() ? 100 : Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false, UNASSIGNED_SEQ_NO, 0);
                final Engine.IndexResult result = initialEngine.index(index);
                if (result.getResultType() == Engine.Result.Type.SUCCESS) {
                    assertThat(result.getSeqNo(), equalTo(primarySeqNo + 1));
                    assertThat(initialEngine.getSeqNoStats(-1).getMaxSeqNo(), equalTo(primarySeqNo + 1));
                    indexedIds.add(id);
                    primarySeqNo++;
                } else {
                    assertThat(result.getSeqNo(), equalTo(UNASSIGNED_SEQ_NO));
                    assertThat(initialEngine.getSeqNoStats(-1).getMaxSeqNo(), equalTo(primarySeqNo));
                }
            }
            // to advance persisted local checkpoint
            initialEngine.syncTranslog();
            if (randomInt(10) < 3) {
                // only update rarely as we do it every doc
                replicaLocalCheckpoint = randomIntBetween(Math.toIntExact(replicaLocalCheckpoint), Math.toIntExact(primarySeqNo));
            }
            gcpTracker.updateLocalCheckpoint(primary.allocationId().getId(), initialEngine.getPersistedLocalCheckpoint());
            gcpTracker.updateLocalCheckpoint(initializingReplica.allocationId().getId(), replicaLocalCheckpoint);
            if (rarely()) {
                localCheckpoint = primarySeqNo;
                maxSeqNo = primarySeqNo;
                initialEngine.flush(true, true);
            }
        }
        logger.info("localcheckpoint {}, global {}", replicaLocalCheckpoint, primarySeqNo);
        globalCheckpoint = gcpTracker.getGlobalCheckpoint();
        assertEquals(primarySeqNo, initialEngine.getSeqNoStats(-1).getMaxSeqNo());
        assertEquals(primarySeqNo, initialEngine.getPersistedLocalCheckpoint());
        assertThat(globalCheckpoint, equalTo(replicaLocalCheckpoint));
        assertThat(Long.parseLong(initialEngine.commitStats().getUserData().get(SequenceNumbers.LOCAL_CHECKPOINT_KEY)), equalTo(localCheckpoint));
        // to guarantee the global checkpoint is written to the translog checkpoint
        initialEngine.getTranslog().sync();
        assertThat(initialEngine.getTranslog().getLastSyncedGlobalCheckpoint(), equalTo(globalCheckpoint));
        assertThat(Long.parseLong(initialEngine.commitStats().getUserData().get(SequenceNumbers.MAX_SEQ_NO)), equalTo(maxSeqNo));
    } finally {
        IOUtils.close(initialEngine);
    }
    try (InternalEngine recoveringEngine = new InternalEngine(initialEngine.config())) {
        recoveringEngine.recoverFromTranslog(translogHandler, Long.MAX_VALUE);
        assertEquals(primarySeqNo, recoveringEngine.getSeqNoStats(-1).getMaxSeqNo());
        assertThat(Long.parseLong(recoveringEngine.commitStats().getUserData().get(SequenceNumbers.LOCAL_CHECKPOINT_KEY)), equalTo(primarySeqNo));
        assertThat(recoveringEngine.getTranslog().getLastSyncedGlobalCheckpoint(), equalTo(globalCheckpoint));
        // after recovering from the translog, every operation we assigned a sequence number to should be in the commit
        assertThat(Long.parseLong(recoveringEngine.commitStats().getUserData().get(SequenceNumbers.MAX_SEQ_NO)), equalTo(primarySeqNo));
        assertThat(recoveringEngine.getProcessedLocalCheckpoint(), equalTo(primarySeqNo));
        assertThat(recoveringEngine.getPersistedLocalCheckpoint(), equalTo(primarySeqNo));
        assertThat(recoveringEngine.getSeqNoStats(-1).getMaxSeqNo(), equalTo(primarySeqNo));
        assertThat(generateNewSeqNo(recoveringEngine), equalTo(primarySeqNo + 1));
    }
}
Also used : IndexShardRoutingTable(org.elasticsearch.cluster.routing.IndexShardRoutingTable) Matchers.containsString(org.hamcrest.Matchers.containsString) CountDownLatch(java.util.concurrent.CountDownLatch) LongPoint(org.apache.lucene.document.LongPoint) ParsedDocument(org.elasticsearch.index.mapper.ParsedDocument) ReplicationTracker(org.elasticsearch.index.seqno.ReplicationTracker) TestShardRouting(org.elasticsearch.cluster.routing.TestShardRouting) ShardRouting(org.elasticsearch.cluster.routing.ShardRouting) HashSet(java.util.HashSet) Test(org.junit.Test)
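
The assertion assertThat(globalCheckpoint, equalTo(replicaLocalCheckpoint)) is the heart of the test: once the replica is marked in sync, the tracker's global checkpoint can only advance as far as the slowest in-sync copy. A tiny illustration of that rule, reusing gcpTracker, primary and initializingReplica from the test above; the checkpoint values 42 and 17 are made up for illustration.

    // The global checkpoint is bounded by the lowest local checkpoint among in-sync copies.
    // gcpTracker, primary and initializingReplica come from the test above; values are illustrative.
    gcpTracker.updateLocalCheckpoint(primary.allocationId().getId(), 42L);             // primary has persisted up to 42
    gcpTracker.updateLocalCheckpoint(initializingReplica.allocationId().getId(), 17L); // in-sync replica lags at 17
    assertThat(gcpTracker.getGlobalCheckpoint(), equalTo(17L));                        // held back by the replica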

Aggregations

ReplicationTracker (org.elasticsearch.index.seqno.ReplicationTracker): 5
IndexShardRoutingTable (org.elasticsearch.cluster.routing.IndexShardRoutingTable): 4
ShardRouting (org.elasticsearch.cluster.routing.ShardRouting): 4
TestShardRouting (org.elasticsearch.cluster.routing.TestShardRouting): 3
ParsedDocument (org.elasticsearch.index.mapper.ParsedDocument): 3
Test (org.junit.Test): 3
IOException (java.io.IOException): 2
HashSet (java.util.HashSet): 2
CountDownLatch (java.util.concurrent.CountDownLatch): 2
LongSupplier (java.util.function.LongSupplier): 2
Translog (org.elasticsearch.index.translog.Translog): 2
ObjectLongMap (com.carrotsearch.hppc.ObjectLongMap): 1
Booleans (io.crate.common.Booleans): 1
Tuple (io.crate.common.collections.Tuple): 1
IOUtils (io.crate.common.io.IOUtils): 1
TimeValue (io.crate.common.unit.TimeValue): 1
Exceptions (io.crate.exceptions.Exceptions): 1
Closeable (java.io.Closeable): 1
PrintStream (java.io.PrintStream): 1
ClosedByInterruptException (java.nio.channels.ClosedByInterruptException): 1