
Example 1 with ReplicationFactor

Use of org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor in the Apache Ozone project.

From the class RatisPipelineProvider, the create method:

@Override
public synchronized Pipeline create(RatisReplicationConfig replicationConfig) throws IOException {
    if (exceedPipelineNumberLimit(replicationConfig)) {
        throw new SCMException("Ratis pipeline number meets the limit: " + pipelineNumberLimit + " replicationConfig : " + replicationConfig, SCMException.ResultCodes.FAILED_TO_FIND_SUITABLE_NODE);
    }
    List<DatanodeDetails> dns;
    final ReplicationFactor factor = replicationConfig.getReplicationFactor();
    switch (factor) {
        case ONE:
            // Factor ONE: pick nodes not already used by a pipeline of this
            // replication config and with enough free space.
            dns = pickNodesNotUsed(replicationConfig, minRatisVolumeSizeBytes, containerSizeBytes);
            break;
        case THREE:
            // Factor THREE: delegate datanode selection to the configured placement policy.
            dns = placementPolicy.chooseDatanodes(null, null, factor.getNumber(),
                minRatisVolumeSizeBytes, containerSizeBytes);
            break;
        default:
            throw new IllegalStateException("Unknown factor: " + factor.name());
    }
    DatanodeDetails suggestedLeader = leaderChoosePolicy.chooseLeader(dns);
    Pipeline pipeline = Pipeline.newBuilder()
        .setId(PipelineID.randomId())
        .setState(PipelineState.ALLOCATED)
        .setReplicationConfig(RatisReplicationConfig.getInstance(factor))
        .setNodes(dns)
        .setSuggestedLeaderId(suggestedLeader != null ? suggestedLeader.getUuid() : null)
        .build();
    // Send command to datanodes to create pipeline
    final CreatePipelineCommand createCommand = suggestedLeader != null
        ? new CreatePipelineCommand(pipeline.getId(), pipeline.getType(), factor, dns, suggestedLeader)
        : new CreatePipelineCommand(pipeline.getId(), pipeline.getType(), factor, dns);
    createCommand.setTerm(scmContext.getTermOfLeader());
    dns.forEach(node -> {
        LOG.info("Sending CreatePipelineCommand for pipeline:{} to datanode:{}", pipeline.getId(), node.getUuidString());
        eventPublisher.fireEvent(SCMEvents.DATANODE_COMMAND, new CommandForDatanode<>(node.getUuid(), createCommand));
    });
    return pipeline;
}
Also used: ReplicationFactor (org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor), DatanodeDetails (org.apache.hadoop.hdds.protocol.DatanodeDetails), CreatePipelineCommand (org.apache.hadoop.ozone.protocol.commands.CreatePipelineCommand), SCMException (org.apache.hadoop.hdds.scm.exceptions.SCMException)
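
For context, here is a minimal, self-contained sketch of the ReplicationFactor round trip that create() relies on: building a RatisReplicationConfig from the protobuf enum and reading the factor back. The import path for RatisReplicationConfig is an assumption; every call shown appears in the excerpt above.

import org.apache.hadoop.hdds.client.RatisReplicationConfig;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;

public class ReplicationFactorSketch {
    public static void main(String[] args) {
        // Build a Ratis replication config from the protobuf enum value ...
        RatisReplicationConfig config = RatisReplicationConfig.getInstance(ReplicationFactor.THREE);
        // ... and read the factor back, as create() does via getReplicationFactor().
        ReplicationFactor factor = config.getReplicationFactor();
        // getNumber() yields the replica count (3 here), as used when choosing datanodes.
        System.out.println("factor = " + factor.name() + ", replicas = " + factor.getNumber());
    }
}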

Example 2 with ReplicationFactor

Use of org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor in the Apache Ozone project.

From the class TestRatisPipelineProvider, the testCreatePipelinesWhenNotEnoughSpace method:

@Test
public void testCreatePipelinesWhenNotEnoughSpace() throws Exception {
    String expectedErrorSubstring = "Unable to find enough" + " nodes that meet the space requirement";
    // Use large enough container or metadata sizes that no node will have
    // enough space to hold one.
    OzoneConfiguration largeContainerConf = new OzoneConfiguration();
    largeContainerConf.set(OZONE_SCM_CONTAINER_SIZE, "100TB");
    init(1, largeContainerConf);
    for (ReplicationFactor factor : ReplicationFactor.values()) {
        try {
            provider.create(RatisReplicationConfig.getInstance(factor));
            Assert.fail("Expected SCMException for large container size with " + "replication factor " + factor.toString());
        } catch (SCMException ex) {
            Assert.assertTrue(ex.getMessage().contains(expectedErrorSubstring));
        }
    }
    OzoneConfiguration largeMetadataConf = new OzoneConfiguration();
    largeMetadataConf.set(OZONE_DATANODE_RATIS_VOLUME_FREE_SPACE_MIN, "100TB");
    init(1, largeMetadataConf);
    for (ReplicationFactor factor : ReplicationFactor.values()) {
        try {
            provider.create(RatisReplicationConfig.getInstance(factor));
            Assert.fail("Expected SCMException for large metadata size with " + "replication factor " + factor.toString());
        } catch (SCMException ex) {
            Assert.assertTrue(ex.getMessage().contains(expectedErrorSubstring));
        }
    }
    cleanup();
}
Also used: ReplicationFactor (org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor), OzoneConfiguration (org.apache.hadoop.hdds.conf.OzoneConfiguration), SCMException (org.apache.hadoop.hdds.scm.exceptions.SCMException), Test (org.junit.Test)
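
As a side note, the fail-then-catch pattern in this test can also be expressed with Assert.assertThrows (available since JUnit 4.13). A minimal sketch, assuming the same provider fixture and error message that the excerpt uses but does not fully show:

// Sketch only: `provider` is the fixture created by init() in TestRatisPipelineProvider.
for (ReplicationFactor factor : ReplicationFactor.values()) {
    SCMException ex = Assert.assertThrows(SCMException.class,
        () -> provider.create(RatisReplicationConfig.getInstance(factor)));
    Assert.assertTrue(ex.getMessage()
        .contains("Unable to find enough nodes that meet the space requirement"));
}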

Aggregations

ReplicationFactor (org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor): 2 usages
SCMException (org.apache.hadoop.hdds.scm.exceptions.SCMException): 2 usages
OzoneConfiguration (org.apache.hadoop.hdds.conf.OzoneConfiguration): 1 usage
DatanodeDetails (org.apache.hadoop.hdds.protocol.DatanodeDetails): 1 usage
CreatePipelineCommand (org.apache.hadoop.ozone.protocol.commands.CreatePipelineCommand): 1 usage
Test (org.junit.Test): 1 usage