
Example 1 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in project OpenSearch by opensearch-project.

From the class AllocationIdIT, method testFailedRecoveryOnAllocateStalePrimaryRequiresAnotherAllocateStalePrimary.

public void testFailedRecoveryOnAllocateStalePrimaryRequiresAnotherAllocateStalePrimary() throws Exception {
    /*
         * The allocation id is set when the shard starts, while the historyUUID is adjusted only after recovery completes.
         *
         * If, during execution of AllocateStalePrimary, a proper allocation id were stored in the allocation id set and
         * recovery then failed, the shard restart would skip the stage where the historyUUID is changed.
         *
         * That would lead to a situation where the allocated stale primary and its replica belong to the same historyUUID,
         * so the replica would receive operations after the local checkpoint even though documents before the checkpoint
         * could be significantly different.
         *
         * Therefore, on AllocateStalePrimary we put in a fake allocation id (no real one could ever be generated like that),
         * and any failure during recovery requires an extra AllocateStalePrimary command to be executed.
         */
    // initial set up
    final String indexName = "index42";
    final String master = internalCluster().startMasterOnlyNode();
    String node1 = internalCluster().startNode();
    createIndex(
        indexName,
        Settings.builder()
            .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1)
            .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 1)
            .put(IndexSettings.INDEX_CHECK_ON_STARTUP.getKey(), "checksum")
            .build()
    );
    final int numDocs = indexDocs(indexName, "foo", "bar");
    final IndexSettings indexSettings = getIndexSettings(indexName, node1);
    final Set<String> allocationIds = getAllocationIds(indexName);
    final ShardId shardId = new ShardId(resolveIndex(indexName), 0);
    final Path indexPath = getIndexPath(node1, shardId);
    assertThat(allocationIds, hasSize(1));
    final String historyUUID = historyUUID(node1, indexName);
    String node2 = internalCluster().startNode();
    ensureGreen(indexName);
    internalCluster().assertSameDocIdsOnShards();
    // initial set up is done
    Settings node1DataPathSettings = internalCluster().dataPathSettings(node1);
    Settings node2DataPathSettings = internalCluster().dataPathSettings(node2);
    internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node1));
    // index more docs on node2; this marks node1's copy as stale
    int numExtraDocs = indexDocs(indexName, "foo", "bar2");
    assertHitCount(client(node2).prepareSearch(indexName).setQuery(matchAllQuery()).get(), numDocs + numExtraDocs);
    internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node2));
    // create fake corrupted marker on node1
    putFakeCorruptionMarker(indexSettings, shardId, indexPath);
    // according to the master, node1 is now out of sync
    node1 = internalCluster().startNode(node1DataPathSettings);
    // there is only a _stale_ primary copy
    checkNoValidShardCopy(indexName, shardId);
    // allocate stale primary
    client(node1).admin().cluster().prepareReroute().add(new AllocateStalePrimaryAllocationCommand(indexName, 0, node1, true)).get();
    // allocation fails due to corruption marker
    assertBusy(() -> {
        final ClusterState state = client().admin().cluster().prepareState().get().getState();
        final ShardRouting shardRouting = state.routingTable().index(indexName).shard(shardId.id()).primaryShard();
        assertThat(shardRouting.state(), equalTo(ShardRoutingState.UNASSIGNED));
        assertThat(shardRouting.unassignedInfo().getReason(), equalTo(UnassignedInfo.Reason.ALLOCATION_FAILED));
    });
    internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node1));
    try (Store store = new Store(shardId, indexSettings, new NIOFSDirectory(indexPath), new DummyShardLock(shardId))) {
        store.removeCorruptionMarker();
    }
    node1 = internalCluster().startNode(node1DataPathSettings);
    // index is red: no shard is allocated (the allocation id is a fake id that does not match anything)
    checkHealthStatus(indexName, ClusterHealthStatus.RED);
    checkNoValidShardCopy(indexName, shardId);
    // no valid shard copy exists; AllocateStalePrimary has to be invoked again
    client().admin().cluster().prepareReroute().add(new AllocateStalePrimaryAllocationCommand(indexName, 0, node1, true)).get();
    ensureYellow(indexName);
    // bring node2 back
    node2 = internalCluster().startNode(node2DataPathSettings);
    ensureGreen(indexName);
    assertThat(historyUUID(node1, indexName), not(equalTo(historyUUID)));
    assertThat(historyUUID(node1, indexName), equalTo(historyUUID(node2, indexName)));
    internalCluster().assertSameDocIdsOnShards();
}
Also used : ShardPath(org.opensearch.index.shard.ShardPath) Path(java.nio.file.Path) ClusterState(org.opensearch.cluster.ClusterState) NIOFSDirectory(org.apache.lucene.store.NIOFSDirectory) AllocateStalePrimaryAllocationCommand(org.opensearch.cluster.routing.allocation.command.AllocateStalePrimaryAllocationCommand) IndexSettings(org.opensearch.index.IndexSettings) Store(org.opensearch.index.store.Store) ShardId(org.opensearch.index.shard.ShardId) DummyShardLock(org.opensearch.test.DummyShardLock) Settings(org.opensearch.common.settings.Settings)

Example 2 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in project OpenSearch by opensearch-project.

From the class IndexShardIT, method testLockTryingToDelete.

public void testLockTryingToDelete() throws Exception {
    createIndex("test");
    ensureGreen();
    NodeEnvironment env = getInstanceFromNode(NodeEnvironment.class);
    ClusterService cs = getInstanceFromNode(ClusterService.class);
    final Index index = cs.state().metadata().index("test").getIndex();
    Path[] shardPaths = env.availableShardPaths(new ShardId(index, 0));
    logger.info("--> paths: [{}]", (Object) shardPaths);
    // Should not be able to acquire the lock because it's already open
    try {
        NodeEnvironment.acquireFSLockForPaths(IndexSettingsModule.newIndexSettings("test", Settings.EMPTY), shardPaths);
        fail("should not have been able to acquire the lock");
    } catch (LockObtainFailedException e) {
        assertTrue("msg: " + e.getMessage(), e.getMessage().contains("unable to acquire write.lock"));
    }
    // Use a dummy shard lock so the delete path runs as if the real
    // shard lock had been acquired (worst case: the lock could be
    // acquired and we would be green to delete the shard's directory)
    ShardLock sLock = new DummyShardLock(new ShardId(index, 0));
    try {
        env.deleteShardDirectoryUnderLock(sLock, IndexSettingsModule.newIndexSettings("test", Settings.EMPTY));
        fail("should not have been able to delete the directory");
    } catch (LockObtainFailedException e) {
        assertTrue("msg: " + e.getMessage(), e.getMessage().contains("unable to acquire write.lock"));
    }
}
Also used : Path(java.nio.file.Path) ClusterService(org.opensearch.cluster.service.ClusterService) NodeEnvironment(org.opensearch.env.NodeEnvironment) LockObtainFailedException(org.apache.lucene.store.LockObtainFailedException) Index(org.opensearch.index.Index) DummyShardLock(org.opensearch.test.DummyShardLock) ShardLock(org.opensearch.env.ShardLock)
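The common thread in these examples is that DummyShardLock stands in for a real node-level shard lock, so test code can construct lock-guarded objects (a Store, or a deletion routine) without touching the node environment's locking machinery. As a rough standalone analogue of that pattern (simplified, hypothetical class names; not the actual OpenSearch ShardLock/DummyShardLock implementation):

```java
// Simplified analogue of the ShardLock / DummyShardLock pattern used above.
// These are illustrative stand-ins, not the real OpenSearch classes.
abstract class ShardLock implements AutoCloseable {
    private final int shardId;

    ShardLock(int shardId) {
        this.shardId = shardId;
    }

    int getShardId() {
        return shardId;
    }

    @Override
    public void close() {
        closeInternal();
    }

    // Real implementations release a node-level lock here.
    protected abstract void closeInternal();
}

// Test-only lock that guards nothing, mirroring DummyShardLock:
// acquiring and releasing it has no effect on any shared state.
class NoOpShardLock extends ShardLock {
    NoOpShardLock(int shardId) {
        super(shardId);
    }

    @Override
    protected void closeInternal() {
        // deliberately a no-op
    }
}

public class DummyLockSketch {
    public static void main(String[] args) {
        try (ShardLock lock = new NoOpShardLock(0)) {
            if (lock.getShardId() != 0) {
                throw new AssertionError("unexpected shard id");
            }
        }
    }
}
```

Because closeInternal() guards nothing, a test can hand this lock to a lock-requiring constructor and operate directly on an on-disk index, which is exactly what the Store constructions above do with DummyShardLock.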

Example 3 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in project OpenSearch by opensearch-project.

From the class StoreTests, method testCleanupFromSnapshot.

public void testCleanupFromSnapshot() throws IOException {
    final ShardId shardId = new ShardId("index", "_na_", 1);
    Store store = new Store(shardId, INDEX_SETTINGS, StoreTests.newDirectory(random()), new DummyShardLock(shardId));
    // this time, use a random codec
    IndexWriterConfig indexWriterConfig = newIndexWriterConfig(random(), new MockAnalyzer(random())).setCodec(TestUtil.getDefaultCodec());
    // we keep all commits, which allows us to clean based on multiple snapshots
    indexWriterConfig.setIndexDeletionPolicy(NoDeletionPolicy.INSTANCE);
    IndexWriter writer = new IndexWriter(store.directory(), indexWriterConfig);
    int docs = 1 + random().nextInt(100);
    int numCommits = 0;
    for (int i = 0; i < docs; i++) {
        if (i > 0 && randomIntBetween(0, 10) == 0) {
            writer.commit();
            numCommits++;
        }
        Document doc = new Document();
        doc.add(new TextField("id", "" + i, random().nextBoolean() ? Field.Store.YES : Field.Store.NO));
        doc.add(new TextField("body", TestUtil.randomRealisticUnicodeString(random()), random().nextBoolean() ? Field.Store.YES : Field.Store.NO));
        doc.add(new SortedDocValuesField("dv", new BytesRef(TestUtil.randomRealisticUnicodeString(random()))));
        writer.addDocument(doc);
    }
    if (numCommits < 1) {
        writer.commit();
        Document doc = new Document();
        doc.add(new TextField("id", "" + docs++, random().nextBoolean() ? Field.Store.YES : Field.Store.NO));
        doc.add(new TextField("body", TestUtil.randomRealisticUnicodeString(random()), random().nextBoolean() ? Field.Store.YES : Field.Store.NO));
        doc.add(new SortedDocValuesField("dv", new BytesRef(TestUtil.randomRealisticUnicodeString(random()))));
        writer.addDocument(doc);
    }
    Store.MetadataSnapshot firstMeta = store.getMetadata(null);
    if (random().nextBoolean()) {
        for (int i = 0; i < docs; i++) {
            if (random().nextBoolean()) {
                Document doc = new Document();
                doc.add(new TextField("id", "" + i, random().nextBoolean() ? Field.Store.YES : Field.Store.NO));
                doc.add(new TextField("body", TestUtil.randomRealisticUnicodeString(random()), random().nextBoolean() ? Field.Store.YES : Field.Store.NO));
                writer.updateDocument(new Term("id", "" + i), doc);
            }
        }
    }
    writer.commit();
    writer.close();
    Store.MetadataSnapshot secondMeta = store.getMetadata(null);
    if (randomBoolean()) {
        store.cleanupAndVerify("test", firstMeta);
        String[] strings = store.directory().listAll();
        int numNotFound = 0;
        for (String file : strings) {
            if (file.startsWith("extra")) {
                continue;
            }
            assertTrue(firstMeta.contains(file) || file.equals("write.lock"));
            if (secondMeta.contains(file) == false) {
                numNotFound++;
            }
        }
        assertTrue("at least one file must not be in here since we have two commits", numNotFound > 0);
    } else {
        store.cleanupAndVerify("test", secondMeta);
        String[] strings = store.directory().listAll();
        int numNotFound = 0;
        for (String file : strings) {
            if (file.startsWith("extra")) {
                continue;
            }
            assertTrue(file, secondMeta.contains(file) || file.equals("write.lock"));
            if (firstMeta.contains(file) == false) {
                numNotFound++;
            }
        }
        assertTrue("at least one file must not be in here since we have two commits", numNotFound > 0);
    }
    deleteContent(store.directory());
    IOUtils.close(store);
}
Also used : Term(org.apache.lucene.index.Term) Matchers.containsString(org.hamcrest.Matchers.containsString) Document(org.apache.lucene.document.Document) ShardId(org.opensearch.index.shard.ShardId) MockAnalyzer(org.apache.lucene.analysis.MockAnalyzer) IndexWriter(org.apache.lucene.index.IndexWriter) SortedDocValuesField(org.apache.lucene.document.SortedDocValuesField) TextField(org.apache.lucene.document.TextField) DummyShardLock(org.opensearch.test.DummyShardLock) BytesRef(org.apache.lucene.util.BytesRef) IndexWriterConfig(org.apache.lucene.index.IndexWriterConfig)

Example 4 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in project OpenSearch by opensearch-project.

From the class StoreTests, method testOnCloseCallback.

public void testOnCloseCallback() throws IOException {
    final ShardId shardId = new ShardId(new Index(randomRealisticUnicodeOfCodepointLengthBetween(1, 10), "_na_"), randomIntBetween(0, 100));
    final AtomicInteger count = new AtomicInteger(0);
    final ShardLock lock = new DummyShardLock(shardId);
    Store store = new Store(shardId, INDEX_SETTINGS, StoreTests.newDirectory(random()), lock, theLock -> {
        assertEquals(shardId, theLock.getShardId());
        assertEquals(lock, theLock);
        count.incrementAndGet();
    });
    assertEquals(count.get(), 0);
    final int iters = randomIntBetween(1, 10);
    for (int i = 0; i < iters; i++) {
        store.close();
    }
    assertEquals(count.get(), 1);
}
Also used : ShardId(org.opensearch.index.shard.ShardId) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Index(org.opensearch.index.Index) DummyShardLock(org.opensearch.test.DummyShardLock) ShardLock(org.opensearch.env.ShardLock)
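testOnCloseCallback depends on Store running its onClose callback exactly once, no matter how many times close() is called. A minimal standalone sketch of that once-only close idiom (hypothetical OnceClosable class, not the real Store, which additionally reference-counts its readers before closing):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Once-only close pattern: however many times close() is invoked,
// the onClose callback runs exactly once. Illustrative sketch only.
public class OnceClosable implements AutoCloseable {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private final Runnable onClose;

    public OnceClosable(Runnable onClose) {
        this.onClose = onClose;
    }

    @Override
    public void close() {
        // compareAndSet guarantees only the first caller runs the callback,
        // even under concurrent close() calls
        if (closed.compareAndSet(false, true)) {
            onClose.run();
        }
    }

    public static void main(String[] args) {
        AtomicInteger count = new AtomicInteger();
        OnceClosable c = new OnceClosable(count::incrementAndGet);
        for (int i = 0; i < 5; i++) {
            c.close();
        }
        if (count.get() != 1) {
            throw new AssertionError("callback ran " + count.get() + " times");
        }
    }
}
```

This mirrors the assertion in the test above: five calls to store.close() still leave count at 1.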

Example 5 with DummyShardLock

Use of org.opensearch.test.DummyShardLock in project OpenSearch by opensearch-project.

From the class StoreTests, method testStoreStats.

public void testStoreStats() throws IOException {
    final ShardId shardId = new ShardId("index", "_na_", 1);
    Settings settings = Settings.builder().put(IndexMetadata.SETTING_VERSION_CREATED, org.opensearch.Version.CURRENT).put(Store.INDEX_STORE_STATS_REFRESH_INTERVAL_SETTING.getKey(), TimeValue.timeValueMinutes(0)).build();
    Store store = new Store(shardId, IndexSettingsModule.newIndexSettings("index", settings), StoreTests.newDirectory(random()), new DummyShardLock(shardId));
    long initialStoreSize = 0;
    for (String extraFiles : store.directory().listAll()) {
        assertTrue("expected extraFS file but got: " + extraFiles, extraFiles.startsWith("extra"));
        initialStoreSize += store.directory().fileLength(extraFiles);
    }
    final long reservedBytes = randomBoolean() ? StoreStats.UNKNOWN_RESERVED_BYTES : randomLongBetween(0L, Integer.MAX_VALUE);
    StoreStats stats = store.stats(reservedBytes);
    assertEquals(initialStoreSize, stats.getSize().getBytes());
    assertEquals(reservedBytes, stats.getReservedSize().getBytes());
    stats.add(null);
    assertEquals(initialStoreSize, stats.getSize().getBytes());
    assertEquals(reservedBytes, stats.getReservedSize().getBytes());
    final long otherStatsBytes = randomLongBetween(0L, Integer.MAX_VALUE);
    final long otherStatsReservedBytes = randomBoolean() ? StoreStats.UNKNOWN_RESERVED_BYTES : randomLongBetween(0L, Integer.MAX_VALUE);
    stats.add(new StoreStats(otherStatsBytes, otherStatsReservedBytes));
    assertEquals(initialStoreSize + otherStatsBytes, stats.getSize().getBytes());
    assertEquals(Math.max(reservedBytes, 0L) + Math.max(otherStatsReservedBytes, 0L), stats.getReservedSize().getBytes());
    Directory dir = store.directory();
    final long length;
    try (IndexOutput output = dir.createOutput("foo.bar", IOContext.DEFAULT)) {
        int iters = scaledRandomIntBetween(10, 100);
        for (int i = 0; i < iters; i++) {
            BytesRef bytesRef = new BytesRef(TestUtil.randomRealisticUnicodeString(random(), 10, 1024));
            output.writeBytes(bytesRef.bytes, bytesRef.offset, bytesRef.length);
        }
        length = output.getFilePointer();
    }
    assertTrue(numNonExtraFiles(store) > 0);
    stats = store.stats(0L);
    assertEquals(stats.getSizeInBytes(), length + initialStoreSize);
    deleteContent(store.directory());
    IOUtils.close(store);
}
Also used : ShardId(org.opensearch.index.shard.ShardId) IndexOutput(org.apache.lucene.store.IndexOutput) DummyShardLock(org.opensearch.test.DummyShardLock) Matchers.containsString(org.hamcrest.Matchers.containsString) Settings(org.opensearch.common.settings.Settings) IndexSettings(org.opensearch.index.IndexSettings) BytesRef(org.apache.lucene.util.BytesRef) Directory(org.apache.lucene.store.Directory) ByteBuffersDirectory(org.apache.lucene.store.ByteBuffersDirectory) FilterDirectory(org.apache.lucene.store.FilterDirectory) NIOFSDirectory(org.apache.lucene.store.NIOFSDirectory)

Aggregations

DummyShardLock (org.opensearch.test.DummyShardLock): 20 uses
ShardId (org.opensearch.index.shard.ShardId): 16 uses
Matchers.containsString (org.hamcrest.Matchers.containsString): 10 uses
IndexWriter (org.apache.lucene.index.IndexWriter): 8 uses
BytesRef (org.apache.lucene.util.BytesRef): 8 uses
IndexSettings (org.opensearch.index.IndexSettings): 8 uses
Document (org.apache.lucene.document.Document): 7 uses
TextField (org.apache.lucene.document.TextField): 7 uses
IndexWriterConfig (org.apache.lucene.index.IndexWriterConfig): 7 uses
Directory (org.apache.lucene.store.Directory): 7 uses
IOException (java.io.IOException): 6 uses
MockAnalyzer (org.apache.lucene.analysis.MockAnalyzer): 6 uses
SortedDocValuesField (org.apache.lucene.document.SortedDocValuesField): 6 uses
Term (org.apache.lucene.index.Term): 6 uses
Path (java.nio.file.Path): 5 uses
FilterDirectory (org.apache.lucene.store.FilterDirectory): 5 uses
Settings (org.opensearch.common.settings.Settings): 5 uses
Store (org.opensearch.index.store.Store): 5 uses
ArrayList (java.util.ArrayList): 4 uses
StringField (org.apache.lucene.document.StringField): 4 uses