
Example 1 with MiniOzoneCluster

Use of org.apache.hadoop.ozone.MiniOzoneCluster in project ozone by apache.

From class TestOzoneContainer, method testCloseContainer.

@Test
public void testCloseContainer() throws Exception {
    MiniOzoneCluster cluster = null;
    XceiverClientGrpc client = null;
    ContainerProtos.ContainerCommandResponseProto response;
    ContainerProtos.ContainerCommandRequestProto writeChunkRequest, putBlockRequest, request;
    try {
        OzoneConfiguration conf = newOzoneConfiguration();
        conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, tempFolder.getRoot().getPath());
        client = createClientForTesting(conf);
        cluster = MiniOzoneCluster.newBuilder(conf).setRandomContainerPort(false).build();
        cluster.waitForClusterToBeReady();
        client.connect();
        long containerID = ContainerTestHelper.getTestContainerID();
        createContainerForTesting(client, containerID);
        writeChunkRequest = writeChunkForContainer(client, containerID, 1024);
        putBlockRequest = ContainerTestHelper.getPutBlockRequest(client.getPipeline(), writeChunkRequest.getWriteChunk());
        // Put block before closing.
        response = client.sendCommand(putBlockRequest);
        Assert.assertNotNull(response);
        Assert.assertEquals(ContainerProtos.Result.SUCCESS, response.getResult());
        // Close the container.
        request = ContainerTestHelper.getCloseContainer(client.getPipeline(), containerID);
        response = client.sendCommand(request);
        Assert.assertNotNull(response);
        Assert.assertEquals(ContainerProtos.Result.SUCCESS, response.getResult());
        // Assert that none of the write operations work after close.
        // Write chunks should fail now.
        response = client.sendCommand(writeChunkRequest);
        Assert.assertNotNull(response);
        Assert.assertEquals(ContainerProtos.Result.CLOSED_CONTAINER_IO, response.getResult());
        // Read chunk must work on a closed container.
        request = ContainerTestHelper.getReadChunkRequest(client.getPipeline(), writeChunkRequest.getWriteChunk());
        response = client.sendCommand(request);
        Assert.assertNotNull(response);
        Assert.assertEquals(ContainerProtos.Result.SUCCESS, response.getResult());
        // Put block will fail on a closed container.
        response = client.sendCommand(putBlockRequest);
        Assert.assertNotNull(response);
        Assert.assertEquals(ContainerProtos.Result.CLOSED_CONTAINER_IO, response.getResult());
        // Get block must work on the closed container.
        request = ContainerTestHelper.getBlockRequest(client.getPipeline(), putBlockRequest.getPutBlock());
        response = client.sendCommand(request);
        int chunksCount = putBlockRequest.getPutBlock().getBlockData().getChunksCount();
        ContainerTestHelper.verifyGetBlock(request, response, chunksCount);
        // Delete block must fail on the closed container.
        request = ContainerTestHelper.getDeleteBlockRequest(client.getPipeline(), putBlockRequest.getPutBlock());
        response = client.sendCommand(request);
        Assert.assertNotNull(response);
        Assert.assertEquals(ContainerProtos.Result.CLOSED_CONTAINER_IO, response.getResult());
    } finally {
        if (client != null) {
            client.close();
        }
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
Also used: ContainerProtos (org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos), MiniOzoneCluster (org.apache.hadoop.ozone.MiniOzoneCluster), XceiverClientGrpc (org.apache.hadoop.hdds.scm.XceiverClientGrpc), OzoneConfiguration (org.apache.hadoop.hdds.conf.OzoneConfiguration), Test (org.junit.Test)
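
The assertNotNull/assertEquals pair above repeats after every sendCommand call. As a minimal sketch (this helper is not part of the example; the name assertResult is hypothetical), the pattern could be factored into one utility:

private static void assertResult(ContainerProtos.ContainerCommandResponseProto response, ContainerProtos.Result expected) {
    // Hypothetical helper: every command in the test expects a non-null
    // response carrying a specific result code.
    Assert.assertNotNull(response);
    Assert.assertEquals(expected, response.getResult());
}

Each assertion pair above would then collapse to a single call such as assertResult(response, ContainerProtos.Result.SUCCESS) or assertResult(response, ContainerProtos.Result.CLOSED_CONTAINER_IO).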

Example 2 with MiniOzoneCluster

Use of org.apache.hadoop.ozone.MiniOzoneCluster in project ozone by apache.

From class TestOzoneContainer, method testBothGetandPutSmallFile.

@Test
public void testBothGetandPutSmallFile() throws Exception {
    MiniOzoneCluster cluster = null;
    XceiverClientGrpc client = null;
    try {
        OzoneConfiguration conf = newOzoneConfiguration();
        conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, tempFolder.getRoot().getPath());
        client = createClientForTesting(conf);
        cluster = MiniOzoneCluster.newBuilder(conf).setRandomContainerPort(false).build();
        cluster.waitForClusterToBeReady();
        long containerID = ContainerTestHelper.getTestContainerID();
        runTestBothGetandPutSmallFile(containerID, client);
    } finally {
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
Also used: MiniOzoneCluster (org.apache.hadoop.ozone.MiniOzoneCluster), XceiverClientGrpc (org.apache.hadoop.hdds.scm.XceiverClientGrpc), OzoneConfiguration (org.apache.hadoop.hdds.conf.OzoneConfiguration), Test (org.junit.Test)

Example 3 with MiniOzoneCluster

Use of org.apache.hadoop.ozone.MiniOzoneCluster in project ozone by apache.

From class TestOzoneFileInterfaces, method init.

public void init() throws Exception {
    OzoneConfiguration conf = getOzoneConfiguration();
    conf.set(OMConfigKeys.OZONE_DEFAULT_BUCKET_LAYOUT, BucketLayout.LEGACY.name());
    MiniOzoneCluster newCluster = MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3).build();
    newCluster.waitForClusterToBeReady();
    setCluster(newCluster);
}
Also used: MiniOzoneCluster (org.apache.hadoop.ozone.MiniOzoneCluster), OzoneConfiguration (org.apache.hadoop.hdds.conf.OzoneConfiguration)
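
init() only builds the cluster and hands it to setCluster(); the matching teardown is not shown in this example. A minimal sketch of what it would look like, assuming the test class also exposes a getCluster() accessor (that accessor and the method name teardown are assumptions, not part of the source):

public void teardown() {
    // Assumed counterpart to init(): release the MiniOzoneCluster built above.
    MiniOzoneCluster cluster = getCluster();
    if (cluster != null) {
        cluster.shutdown();
    }
}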

Example 4 with MiniOzoneCluster

Use of org.apache.hadoop.ozone.MiniOzoneCluster in project ozone by apache.

From class TestOzoneContainer, method testOzoneContainerViaDataNode.

@Test
public void testOzoneContainerViaDataNode() throws Exception {
    MiniOzoneCluster cluster = null;
    try {
        long containerID = ContainerTestHelper.getTestContainerID();
        OzoneConfiguration conf = newOzoneConfiguration();
        // Start the ozone container via datanode creation.
        Pipeline pipeline = MockPipeline.createSingleNodePipeline();
        conf.setInt(OzoneConfigKeys.DFS_CONTAINER_IPC_PORT, pipeline.getFirstNode().getPort(DatanodeDetails.Port.Name.STANDALONE).getValue());
        cluster = MiniOzoneCluster.newBuilder(conf).setRandomContainerPort(false).build();
        cluster.waitForClusterToBeReady();
        // This client talks to ozone container via datanode.
        XceiverClientGrpc client = new XceiverClientGrpc(pipeline, conf);
        runTestOzoneContainerViaDataNode(containerID, client);
    } finally {
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
Also used: MiniOzoneCluster (org.apache.hadoop.ozone.MiniOzoneCluster), XceiverClientGrpc (org.apache.hadoop.hdds.scm.XceiverClientGrpc), OzoneConfiguration (org.apache.hadoop.hdds.conf.OzoneConfiguration), MockPipeline (org.apache.hadoop.hdds.scm.pipeline.MockPipeline), Pipeline (org.apache.hadoop.hdds.scm.pipeline.Pipeline), Test (org.junit.Test)

Example 5 with MiniOzoneCluster

Use of org.apache.hadoop.ozone.MiniOzoneCluster in project ozone by apache.

From class TestOzoneContainer, method testOzoneContainerStart.

@Test
public void testOzoneContainerStart() throws Exception {
    OzoneConfiguration conf = newOzoneConfiguration();
    MiniOzoneCluster cluster = null;
    OzoneContainer container = null;
    try {
        cluster = MiniOzoneCluster.newBuilder(conf).build();
        cluster.waitForClusterToBeReady();
        Pipeline pipeline = MockPipeline.createSingleNodePipeline();
        conf.set(HDDS_DATANODE_DIR_KEY, tempFolder.getRoot().getPath());
        conf.setInt(OzoneConfigKeys.DFS_CONTAINER_IPC_PORT, pipeline.getFirstNode().getPort(DatanodeDetails.Port.Name.STANDALONE).getValue());
        conf.setBoolean(OzoneConfigKeys.DFS_CONTAINER_IPC_RANDOM_PORT, false);
        DatanodeDetails datanodeDetails = randomDatanodeDetails();
        StateContext context = Mockito.mock(StateContext.class);
        DatanodeStateMachine dsm = Mockito.mock(DatanodeStateMachine.class);
        Mockito.when(dsm.getDatanodeDetails()).thenReturn(datanodeDetails);
        Mockito.when(context.getParent()).thenReturn(dsm);
        container = new OzoneContainer(datanodeDetails, conf, context, null);
        String clusterId = UUID.randomUUID().toString();
        container.start(clusterId);
        // A second start() on an already running container must not throw.
        try {
            container.start(clusterId);
        } catch (Exception e) {
            Assert.fail();
        }
        container.stop();
        // Likewise, a second stop() on an already stopped container must not throw.
        try {
            container.stop();
        } catch (Exception e) {
            Assert.fail();
        }
    } finally {
        if (container != null) {
            container.stop();
        }
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
Also used: MockDatanodeDetails.randomDatanodeDetails (org.apache.hadoop.hdds.protocol.MockDatanodeDetails.randomDatanodeDetails), DatanodeDetails (org.apache.hadoop.hdds.protocol.DatanodeDetails), MiniOzoneCluster (org.apache.hadoop.ozone.MiniOzoneCluster), StateContext (org.apache.hadoop.ozone.container.common.statemachine.StateContext), OzoneConfiguration (org.apache.hadoop.hdds.conf.OzoneConfiguration), DatanodeStateMachine (org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine), MockPipeline (org.apache.hadoop.hdds.scm.pipeline.MockPipeline), Pipeline (org.apache.hadoop.hdds.scm.pipeline.Pipeline), Test (org.junit.Test)

Aggregations

OzoneConfiguration (org.apache.hadoop.hdds.conf.OzoneConfiguration): 9
MiniOzoneCluster (org.apache.hadoop.ozone.MiniOzoneCluster): 9
Test (org.junit.Test): 8
XceiverClientGrpc (org.apache.hadoop.hdds.scm.XceiverClientGrpc): 6
MockPipeline (org.apache.hadoop.hdds.scm.pipeline.MockPipeline): 3
Pipeline (org.apache.hadoop.hdds.scm.pipeline.Pipeline): 3
DatanodeDetails (org.apache.hadoop.hdds.protocol.DatanodeDetails): 2
MockDatanodeDetails.randomDatanodeDetails (org.apache.hadoop.hdds.protocol.MockDatanodeDetails.randomDatanodeDetails): 2
ContainerProtos (org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos): 2
DatanodeStateMachine (org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine): 2
StateContext (org.apache.hadoop.ozone.container.common.statemachine.StateContext): 2
IOException (java.io.IOException): 1
AccessControlException (org.apache.hadoop.security.AccessControlException): 1
UserGroupInformation (org.apache.hadoop.security.UserGroupInformation): 1
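
Across all five examples the MiniOzoneCluster lifecycle is the same: build an OzoneConfiguration, construct the cluster through the builder, wait for it to become ready, and shut it down in a finally block. A standalone distillation of that pattern, using only builder and lifecycle methods that appear above (the class name, main method, and datanode count are illustrative scaffolding, not taken from any single example):

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.MiniOzoneCluster;

public class MiniOzoneClusterLifecycleSketch {
    public static void main(String[] args) throws Exception {
        OzoneConfiguration conf = new OzoneConfiguration();
        MiniOzoneCluster cluster = null;
        try {
            // Build a local cluster and block until it reports ready.
            cluster = MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3).build();
            cluster.waitForClusterToBeReady();
            // ... exercise the cluster here, e.g. through an XceiverClientGrpc ...
        } finally {
            // Mirror the finally blocks above: always shut the cluster down.
            if (cluster != null) {
                cluster.shutdown();
            }
        }
    }
}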