Example 21 with SnapshotId

Use of org.elasticsearch.snapshots.SnapshotId in the elasticsearch project by elastic.

From the class PrimaryShardAllocatorTests, method getRestoreRoutingAllocation.

private RoutingAllocation getRestoreRoutingAllocation(AllocationDeciders allocationDeciders, String... allocIds) {
    MetaData metaData = MetaData.builder()
        .put(IndexMetaData.builder(shardId.getIndexName()).settings(settings(Version.CURRENT))
            .numberOfShards(1).numberOfReplicas(0).putInSyncAllocationIds(0, Sets.newHashSet(allocIds)))
        .build();
    final Snapshot snapshot = new Snapshot("test", new SnapshotId("test", UUIDs.randomBase64UUID()));
    RoutingTable routingTable = RoutingTable.builder()
        .addAsRestore(metaData.index(shardId.getIndex()),
            new SnapshotRecoverySource(snapshot, Version.CURRENT, shardId.getIndexName()))
        .build();
    ClusterState state = ClusterState.builder(org.elasticsearch.cluster.ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY))
        .metaData(metaData).routingTable(routingTable)
        .nodes(DiscoveryNodes.builder().add(node1).add(node2).add(node3)).build();
    return new RoutingAllocation(allocationDeciders, new RoutingNodes(state, false), state, null, System.nanoTime(), false);
}
Also used : Snapshot(org.elasticsearch.snapshots.Snapshot) SnapshotId(org.elasticsearch.snapshots.SnapshotId) ClusterState(org.elasticsearch.cluster.ClusterState) SnapshotRecoverySource(org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource) RoutingTable(org.elasticsearch.cluster.routing.RoutingTable) RoutingNodes(org.elasticsearch.cluster.routing.RoutingNodes) MetaData(org.elasticsearch.cluster.metadata.MetaData) IndexMetaData(org.elasticsearch.cluster.metadata.IndexMetaData) RoutingAllocation(org.elasticsearch.cluster.routing.allocation.RoutingAllocation)
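As a quick orientation for the examples on this page, here is a minimal sketch (using only constructors that appear in the examples; the repository, snapshot, and index names are placeholders) of how a SnapshotId pairs a human-readable snapshot name with a UUID, how a Snapshot ties it to a repository, and how a SnapshotRecoverySource then describes a restore:

// Sketch only; "my-repo", "my-snapshot", and "my-index" are placeholder names.
SnapshotId snapshotId = new SnapshotId("my-snapshot", UUIDs.randomBase64UUID());
Snapshot snapshot = new Snapshot("my-repo", snapshotId);
// The recovery source carries the snapshot, the version it was taken with, and the index to restore.
SnapshotRecoverySource recoverySource = new SnapshotRecoverySource(snapshot, Version.CURRENT, "my-index");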

Example 22 with SnapshotId

Use of org.elasticsearch.snapshots.SnapshotId in the elasticsearch project by elastic.

From the class IndexShardTests, method testRestoreShard.

public void testRestoreShard() throws IOException {
    final IndexShard source = newStartedShard(true);
    IndexShard target = newStartedShard(true);
    indexDoc(source, "test", "0");
    if (randomBoolean()) {
        source.refresh("test");
    }
    indexDoc(target, "test", "1");
    target.refresh("test");
    assertDocs(target, new Uid("test", "1"));
    // only flush source
    flushShard(source);
    final ShardRouting origRouting = target.routingEntry();
    ShardRouting routing = ShardRoutingHelper.reinitPrimary(origRouting);
    final Snapshot snapshot = new Snapshot("foo", new SnapshotId("bar", UUIDs.randomBase64UUID()));
    routing = ShardRoutingHelper.newWithRestoreSource(routing, new RecoverySource.SnapshotRecoverySource(snapshot, Version.CURRENT, "test"));
    target = reinitShard(target, routing);
    Store sourceStore = source.store();
    Store targetStore = target.store();
    DiscoveryNode localNode = new DiscoveryNode("foo", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT);
    target.markAsRecovering("store", new RecoveryState(routing, localNode, null));
    assertTrue(target.restoreFromRepository(new RestoreOnlyRepository("test") {

        @Override
        public void restoreShard(IndexShard shard, SnapshotId snapshotId, Version version, IndexId indexId, ShardId snapshotShardId, RecoveryState recoveryState) {
            try {
                cleanLuceneIndex(targetStore.directory());
                for (String file : sourceStore.directory().listAll()) {
                    if (file.equals("write.lock") || file.startsWith("extra")) {
                        continue;
                    }
                    targetStore.directory().copyFrom(sourceStore.directory(), file, file, IOContext.DEFAULT);
                }
            } catch (Exception ex) {
                throw new RuntimeException(ex);
            }
        }
    }));
    target.updateRoutingEntry(routing.moveToStarted());
    assertDocs(target, new Uid("test", "0"));
    closeShards(source, target);
}
Also used : IndexId(org.elasticsearch.repositories.IndexId) DiscoveryNode(org.elasticsearch.cluster.node.DiscoveryNode) Store(org.elasticsearch.index.store.Store) Matchers.containsString(org.hamcrest.Matchers.containsString) AlreadyClosedException(org.apache.lucene.store.AlreadyClosedException) EngineException(org.elasticsearch.index.engine.EngineException) IOException(java.io.IOException) BrokenBarrierException(java.util.concurrent.BrokenBarrierException) ExecutionException(java.util.concurrent.ExecutionException) CorruptIndexException(org.apache.lucene.index.CorruptIndexException) Uid(org.elasticsearch.index.mapper.Uid) Snapshot(org.elasticsearch.snapshots.Snapshot) SnapshotId(org.elasticsearch.snapshots.SnapshotId) Version(org.elasticsearch.Version) TestShardRouting(org.elasticsearch.cluster.routing.TestShardRouting) ShardRouting(org.elasticsearch.cluster.routing.ShardRouting) RecoveryState(org.elasticsearch.indices.recovery.RecoveryState)
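The restore in this test never touches a real repository: the anonymous RestoreOnlyRepository wipes the target's Lucene directory and copies the source's segment files over, skipping Lucene's write lock and any files whose names start with "extra". A small helper capturing that skip rule might look like the following (hypothetical, not part of the test):

// Hypothetical helper mirroring the filter used in the copy loop above.
private static boolean shouldCopy(String fileName) {
    return fileName.equals("write.lock") == false && fileName.startsWith("extra") == false;
}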

Example 23 with SnapshotId

Use of org.elasticsearch.snapshots.SnapshotId in the elasticsearch project by elastic.

From the class RepositoryUpgradabilityIT, method testRepositoryWorksWithCrossVersions.

/**
 * This tests that a repository can inter-operate with snapshots that both have and don't have a UUID,
 * namely when a repository was created in an older version with snapshots created in the old format
 * (only snapshot name, no UUID) and then the repository is loaded into newer versions where subsequent
 * snapshots have a name and a UUID.
 */
public void testRepositoryWorksWithCrossVersions() throws Exception {
    final List<String> repoVersions = listRepoVersions();
    // run the test for each supported version
    for (final String version : repoVersions) {
        final String repoName = "test-repo-" + version;
        logger.info("-->  creating repository [{}] for version [{}]", repoName, version);
        createRepository(version, repoName);
        logger.info("--> get the snapshots");
        final String originalIndex = "index-" + version;
        final Set<String> indices = Sets.newHashSet(originalIndex);
        final Set<SnapshotInfo> snapshotInfos = Sets.newHashSet(getSnapshots(repoName));
        assertThat(snapshotInfos.size(), equalTo(1));
        SnapshotInfo originalSnapshot = snapshotInfos.iterator().next();
        if (Version.fromString(version).before(Version.V_5_0_0_alpha1)) {
            assertThat(originalSnapshot.snapshotId(), equalTo(new SnapshotId("test_1", "test_1")));
        } else {
            assertThat(originalSnapshot.snapshotId().getName(), equalTo("test_1"));
            // it's a random UUID now
            assertNotNull(originalSnapshot.snapshotId().getUUID());
        }
        assertThat(Sets.newHashSet(originalSnapshot.indices()), equalTo(indices));
        logger.info("--> restore the original snapshot");
        final Set<String> restoredIndices = Sets.newHashSet(restoreSnapshot(repoName, originalSnapshot.snapshotId().getName()));
        assertThat(restoredIndices, equalTo(indices));
        // make sure it has documents
        for (final String searchIdx : restoredIndices) {
            assertThat(client().prepareSearch(searchIdx).setSize(0).get().getHits().getTotalHits(), greaterThan(0L));
        }
        // delete so we can restore again later
        deleteIndices(restoredIndices);
        final String snapshotName2 = "test_2";
        logger.info("--> take a new snapshot of the old index");
        final int addedDocSize = 10;
        for (int i = 0; i < addedDocSize; i++) {
            index(originalIndex, "doc", Integer.toString(i), "foo", "new-bar-" + i);
        }
        refresh();
        snapshotInfos.add(createSnapshot(repoName, snapshotName2));
        logger.info("--> get the snapshots with the newly created snapshot [{}]", snapshotName2);
        Set<SnapshotInfo> snapshotInfosFromRepo = Sets.newHashSet(getSnapshots(repoName));
        assertThat(snapshotInfosFromRepo, equalTo(snapshotInfos));
        snapshotInfosFromRepo.forEach(snapshotInfo -> {
            assertThat(Sets.newHashSet(snapshotInfo.indices()), equalTo(indices));
        });
        final String snapshotName3 = "test_3";
        final String indexName2 = "index2";
        logger.info("--> take a new snapshot with a new index");
        createIndex(indexName2);
        indices.add(indexName2);
        for (int i = 0; i < addedDocSize; i++) {
            index(indexName2, "doc", Integer.toString(i), "foo", "new-bar-" + i);
        }
        refresh();
        snapshotInfos.add(createSnapshot(repoName, snapshotName3));
        logger.info("--> get the snapshots with the newly created snapshot [{}]", snapshotName3);
        snapshotInfosFromRepo = Sets.newHashSet(getSnapshots(repoName));
        assertThat(snapshotInfosFromRepo, equalTo(snapshotInfos));
        snapshotInfosFromRepo.forEach(snapshotInfo -> {
            if (snapshotInfo.snapshotId().getName().equals(snapshotName3)) {
                assertThat(Sets.newHashSet(snapshotInfo.indices()), equalTo(indices));
            } else {
                assertThat(Sets.newHashSet(snapshotInfo.indices()), equalTo(Sets.newHashSet(originalIndex)));
            }
        });
        // clean up indices
        deleteIndices(indices);
        logger.info("--> restore the old snapshot again");
        Set<String> oldRestoredIndices = Sets.newHashSet(restoreSnapshot(repoName, originalSnapshot.snapshotId().getName()));
        assertThat(oldRestoredIndices, equalTo(Sets.newHashSet(originalIndex)));
        for (final String searchIdx : oldRestoredIndices) {
            assertThat(client().prepareSearch(searchIdx).setSize(0).get().getHits().getTotalHits(), greaterThanOrEqualTo((long) addedDocSize));
        }
        deleteIndices(oldRestoredIndices);
        logger.info("--> restore the new snapshot");
        Set<String> newSnapshotIndices = Sets.newHashSet(restoreSnapshot(repoName, snapshotName3));
        assertThat(newSnapshotIndices, equalTo(Sets.newHashSet(originalIndex, indexName2)));
        for (final String searchIdx : newSnapshotIndices) {
            assertThat(client().prepareSearch(searchIdx).setSize(0).get().getHits().getTotalHits(), greaterThanOrEqualTo((long) addedDocSize));
        }
        // clean up indices before starting again
        deleteIndices(newSnapshotIndices);
    }
}
Also used : SnapshotId(org.elasticsearch.snapshots.SnapshotId) SnapshotInfo(org.elasticsearch.snapshots.SnapshotInfo)
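The version branch in this test reflects the format change described in the Javadoc: snapshots written by pre-5.0 repositories stored only a name, so the name doubles as the UUID when the SnapshotId is rebuilt, while newer snapshots get a random base64 UUID. A minimal sketch of the two shapes, assuming that convention and mirroring the assertions above rather than any production API:

SnapshotId legacyId = new SnapshotId("test_1", "test_1");                  // old format: uuid == name
SnapshotId modernId = new SnapshotId("test_1", UUIDs.randomBase64UUID());  // new format: random uuid
boolean looksLegacy = legacyId.getUUID().equals(legacyId.getName());       // heuristic, assumption only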

Example 24 with SnapshotId

Use of org.elasticsearch.snapshots.SnapshotId in the elasticsearch project by elastic.

From the class NodeVersionAllocationDeciderTests, method testRestoreDoesNotAllocateSnapshotOnOlderNodes.

public void testRestoreDoesNotAllocateSnapshotOnOlderNodes() {
    final DiscoveryNode newNode = new DiscoveryNode("newNode", buildNewFakeTransportAddress(), emptyMap(), MASTER_DATA_ROLES, Version.CURRENT);
    final DiscoveryNode oldNode1 = new DiscoveryNode("oldNode1", buildNewFakeTransportAddress(), emptyMap(), MASTER_DATA_ROLES, VersionUtils.getPreviousVersion());
    final DiscoveryNode oldNode2 = new DiscoveryNode("oldNode2", buildNewFakeTransportAddress(), emptyMap(), MASTER_DATA_ROLES, VersionUtils.getPreviousVersion());
    int numberOfShards = randomIntBetween(1, 3);
    final IndexMetaData.Builder indexMetaData = IndexMetaData.builder("test").settings(settings(Version.CURRENT)).numberOfShards(numberOfShards).numberOfReplicas(randomIntBetween(0, 3));
    for (int i = 0; i < numberOfShards; i++) {
        indexMetaData.putInSyncAllocationIds(i, Collections.singleton("_test_"));
    }
    MetaData metaData = MetaData.builder().put(indexMetaData).build();
    ClusterState state = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY))
        .metaData(metaData)
        .routingTable(RoutingTable.builder()
            .addAsRestore(metaData.index("test"),
                new SnapshotRecoverySource(new Snapshot("rep1", new SnapshotId("snp1", UUIDs.randomBase64UUID())),
                    Version.CURRENT, "test"))
            .build())
        .nodes(DiscoveryNodes.builder().add(newNode).add(oldNode1).add(oldNode2)).build();
    AllocationDeciders allocationDeciders = new AllocationDeciders(Settings.EMPTY, Arrays.asList(
        new ReplicaAfterPrimaryActiveAllocationDecider(Settings.EMPTY), new NodeVersionAllocationDecider(Settings.EMPTY)));
    AllocationService strategy = new MockAllocationService(Settings.EMPTY, allocationDeciders,
        new TestGatewayAllocator(), new BalancedShardsAllocator(Settings.EMPTY), EmptyClusterInfoService.INSTANCE);
    state = strategy.reroute(state, new AllocationCommands(), true, false).getClusterState();
    // Make sure that primary shards are only allocated on the new node
    for (int i = 0; i < numberOfShards; i++) {
        assertEquals("newNode", state.routingTable().index("test").getShards().get(i).primaryShard().currentNodeId());
    }
}
Also used : TestGatewayAllocator(org.elasticsearch.test.gateway.TestGatewayAllocator) ClusterState(org.elasticsearch.cluster.ClusterState) DiscoveryNode(org.elasticsearch.cluster.node.DiscoveryNode) BalancedShardsAllocator(org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator) AllocationDeciders(org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders) ReplicaAfterPrimaryActiveAllocationDecider(org.elasticsearch.cluster.routing.allocation.decider.ReplicaAfterPrimaryActiveAllocationDecider) IndexMetaData(org.elasticsearch.cluster.metadata.IndexMetaData) AllocationCommands(org.elasticsearch.cluster.routing.allocation.command.AllocationCommands) Snapshot(org.elasticsearch.snapshots.Snapshot) SnapshotId(org.elasticsearch.snapshots.SnapshotId) SnapshotRecoverySource(org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource) MetaData(org.elasticsearch.cluster.metadata.MetaData) IndexMetaData(org.elasticsearch.cluster.metadata.IndexMetaData) NodeVersionAllocationDecider(org.elasticsearch.cluster.routing.allocation.decider.NodeVersionAllocationDecider)
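The expectation behind this test is that a shard being restored from a snapshot taken at some version must not be allocated to a node running an older version; since the snapshot above is recorded at Version.CURRENT, only newNode qualifies for the primaries. A conceptual sketch of that rule (assumed semantics, not NodeVersionAllocationDecider's actual code):

// Conceptual check only; the real decider works on RoutingAllocation state.
static boolean canRestoreOn(Version nodeVersion, Version snapshotVersion) {
    return nodeVersion.onOrAfter(snapshotVersion);
}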

Example 25 with SnapshotId

Use of org.elasticsearch.snapshots.SnapshotId in the elasticsearch project by elastic.

From the class ClusterSerializationTests, method testSnapshotDeletionsInProgressSerialization.

public void testSnapshotDeletionsInProgressSerialization() throws Exception {
    boolean includeRestore = randomBoolean();
    ClusterState.Builder builder = ClusterState.builder(ClusterState.EMPTY_STATE)
        .putCustom(SnapshotDeletionsInProgress.TYPE, SnapshotDeletionsInProgress.newInstance(
            new SnapshotDeletionsInProgress.Entry(new Snapshot("repo1", new SnapshotId("snap1", UUIDs.randomBase64UUID())),
                randomNonNegativeLong(), randomNonNegativeLong())));
    if (includeRestore) {
        builder.putCustom(RestoreInProgress.TYPE, new RestoreInProgress(new RestoreInProgress.Entry(
            new Snapshot("repo2", new SnapshotId("snap2", UUIDs.randomBase64UUID())),
            RestoreInProgress.State.STARTED, Collections.singletonList("index_name"), ImmutableOpenMap.of())));
    }
    ClusterState clusterState = builder.incrementVersion().build();
    Diff<ClusterState> diffs = clusterState.diff(ClusterState.EMPTY_STATE);
    // serialize with current version
    BytesStreamOutput outStream = new BytesStreamOutput();
    diffs.writeTo(outStream);
    StreamInput inStream = outStream.bytes().streamInput();
    inStream = new NamedWriteableAwareStreamInput(inStream, new NamedWriteableRegistry(ClusterModule.getNamedWriteables()));
    Diff<ClusterState> serializedDiffs = ClusterState.readDiffFrom(inStream, clusterState.nodes().getLocalNode());
    ClusterState stateAfterDiffs = serializedDiffs.apply(ClusterState.EMPTY_STATE);
    assertThat(stateAfterDiffs.custom(RestoreInProgress.TYPE), includeRestore ? notNullValue() : nullValue());
    assertThat(stateAfterDiffs.custom(SnapshotDeletionsInProgress.TYPE), notNullValue());
    // serialize with old version
    outStream = new BytesStreamOutput();
    outStream.setVersion(Version.CURRENT.minimumCompatibilityVersion());
    diffs.writeTo(outStream);
    inStream = outStream.bytes().streamInput();
    inStream = new NamedWriteableAwareStreamInput(inStream, new NamedWriteableRegistry(ClusterModule.getNamedWriteables()));
    serializedDiffs = ClusterState.readDiffFrom(inStream, clusterState.nodes().getLocalNode());
    stateAfterDiffs = serializedDiffs.apply(ClusterState.EMPTY_STATE);
    assertThat(stateAfterDiffs.custom(RestoreInProgress.TYPE), includeRestore ? notNullValue() : nullValue());
    assertThat(stateAfterDiffs.custom(SnapshotDeletionsInProgress.TYPE), nullValue());
    // remove the custom and try serializing again with old version
    clusterState = ClusterState.builder(clusterState).removeCustom(SnapshotDeletionsInProgress.TYPE).incrementVersion().build();
    outStream = new BytesStreamOutput();
    diffs.writeTo(outStream);
    inStream = outStream.bytes().streamInput();
    inStream = new NamedWriteableAwareStreamInput(inStream, new NamedWriteableRegistry(ClusterModule.getNamedWriteables()));
    serializedDiffs = ClusterState.readDiffFrom(inStream, clusterState.nodes().getLocalNode());
    stateAfterDiffs = serializedDiffs.apply(stateAfterDiffs);
    assertThat(stateAfterDiffs.custom(RestoreInProgress.TYPE), includeRestore ? notNullValue() : nullValue());
    assertThat(stateAfterDiffs.custom(SnapshotDeletionsInProgress.TYPE), nullValue());
}
Also used : NamedWriteableRegistry(org.elasticsearch.common.io.stream.NamedWriteableRegistry) Snapshot(org.elasticsearch.snapshots.Snapshot) SnapshotId(org.elasticsearch.snapshots.SnapshotId) ClusterState(org.elasticsearch.cluster.ClusterState) RestoreInProgress(org.elasticsearch.cluster.RestoreInProgress) NamedWriteableAwareStreamInput(org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput) StreamInput(org.elasticsearch.common.io.stream.StreamInput) NamedWriteableAwareStreamInput(org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput) BytesStreamOutput(org.elasticsearch.common.io.stream.BytesStreamOutput)
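The key move in this test is serializing the same cluster state diff twice, once for the current wire version and once for the minimum compatible version; targeting the older version on the output stream is what the assertions rely on to show that SnapshotDeletionsInProgress is dropped while RestoreInProgress survives. A minimal sketch of that step, following the test above:

// Version-targeted serialization, as used in the test.
BytesStreamOutput out = new BytesStreamOutput();
out.setVersion(Version.CURRENT.minimumCompatibilityVersion()); // pretend the reader is an older node
diffs.writeTo(out);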

Aggregations

SnapshotId (org.elasticsearch.snapshots.SnapshotId): 27
Snapshot (org.elasticsearch.snapshots.Snapshot): 13
ArrayList (java.util.ArrayList): 11
ClusterState (org.elasticsearch.cluster.ClusterState): 9
IndexMetaData (org.elasticsearch.cluster.metadata.IndexMetaData): 7
MetaData (org.elasticsearch.cluster.metadata.MetaData): 7
SnapshotRecoverySource (org.elasticsearch.cluster.routing.RecoverySource.SnapshotRecoverySource): 6
IndexId (org.elasticsearch.repositories.IndexId): 5
IOException (java.io.IOException): 4
HashSet (java.util.HashSet): 4
RoutingTable (org.elasticsearch.cluster.routing.RoutingTable): 4
RepositoryData (org.elasticsearch.repositories.RepositoryData): 4
SnapshotInfo (org.elasticsearch.snapshots.SnapshotInfo): 4
HashMap (java.util.HashMap): 3
LinkedHashSet (java.util.LinkedHashSet): 3
Set (java.util.Set): 3
DiscoveryNode (org.elasticsearch.cluster.node.DiscoveryNode): 3
IntHashSet (com.carrotsearch.hppc.IntHashSet): 2
Arrays (java.util.Arrays): 2
Collections (java.util.Collections): 2