
Example 26 with DiskBalancerDataNode

Use of org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerDataNode in project hadoop by apache.

From the class TestPlanner, the method testGreedyPlannerThresholdTest:

@Test
public void testGreedyPlannerThresholdTest() throws Exception {
    NullConnector nullConnector = new NullConnector();
    DiskBalancerCluster cluster = new DiskBalancerCluster(nullConnector);
    DiskBalancerDataNode node = new DiskBalancerDataNode(UUID.randomUUID().toString());
    DiskBalancerVolume volume1 = createVolume("volume100", 1000, 100);
    DiskBalancerVolume volume2 = createVolume("volume0-1", 300, 0);
    DiskBalancerVolume volume3 = createVolume("volume0-2", 300, 0);
    node.addVolume(volume1);
    node.addVolume(volume2);
    node.addVolume(volume3);
    nullConnector.addNode(node);
    cluster.readClusterInfo();
    Assert.assertEquals(1, cluster.getNodes().size());
    GreedyPlanner planner = new GreedyPlanner(10.0f, node);
    NodePlan plan = new NodePlan(node.getDataNodeName(), node.getDataNodePort());
    planner.balanceVolumeSet(node, node.getVolumeSets().get("SSD"), plan);
    // We should see NO moves since the data on volume100 is within the
    // threshold value that we pass, which is 10%.
    assertEquals(0, plan.getVolumeSetPlans().size());
    // For this new planner we pass 1% as the threshold value,
    // so the planner must move data if possible.
    GreedyPlanner newPlanner = new GreedyPlanner(1.0f, node);
    NodePlan newPlan = new NodePlan(node.getDataNodeName(), node.getDataNodePort());
    newPlanner.balanceVolumeSet(node, node.getVolumeSets().get("SSD"), newPlan);
    assertEquals(2, newPlan.getVolumeSetPlans().size());
    // Move size should be about 18.8 GB per destination disk.
    // Here is how the math works out:
    // TotalCapacity = 1000 + 300 + 300 = 1600 GB
    // TotalUsed = 100 GB
    // Expected data% on each disk = 100 / 1600 = 0.0625
    // On disk volume0-1: 300 * 0.0625 = 18.75 GB -- we round it up in the
    // display string, hence 18.8 GB. It is the same on volume0-2, since the
    // two are equal-sized disks with the same used capacity.
    Step step = newPlan.getVolumeSetPlans().get(0);
    assertEquals("volume100", step.getSourceVolume().getPath());
    assertTrue(step.getSizeString(step.getBytesToMove()).matches("18\\.[678] G"));
    step = newPlan.getVolumeSetPlans().get(1);
    assertEquals("volume100", step.getSourceVolume().getPath());
    assertTrue(step.getSizeString(step.getBytesToMove()).matches("18\\.[678] G"));
}
Also used: NodePlan (org.apache.hadoop.hdfs.server.diskbalancer.planner.NodePlan), DiskBalancerCluster (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster), NullConnector (org.apache.hadoop.hdfs.server.diskbalancer.connectors.NullConnector), DiskBalancerVolume (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume), GreedyPlanner (org.apache.hadoop.hdfs.server.diskbalancer.planner.GreedyPlanner), Step (org.apache.hadoop.hdfs.server.diskbalancer.planner.Step), DiskBalancerDataNode (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerDataNode), Test (org.junit.Test)
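
The comment math above is worth seeing end to end. Below is a minimal, self-contained sketch of the arithmetic behind the asserted "18.[678] G" strings; the class and variable names are illustrative, not planner internals, and it assumes 1 GB = 2^30 bytes as in Hadoop's human-readable size strings.

// Illustrative arithmetic only; not part of TestPlanner.
public class ThresholdMathSketch {
    public static void main(String[] args) {
        final long GB = 1L << 30;
        long totalCapacity = (1000 + 300 + 300) * GB; // 1600 GB across three disks
        long totalUsed = 100 * GB;                    // all of it on volume100
        double idealRatio = (double) totalUsed / totalCapacity; // 0.0625
        // Each empty 300 GB disk should end up holding capacity * idealRatio.
        double perDiskTarget = 300 * GB * idealRatio; // 18.75 GB, displayed as "18.8 G"
        System.out.printf("ideal ratio = %.4f, per-disk move = %.2f GB%n",
            idealRatio, perDiskTarget / GB);
    }
}

volume100 sits at 10% used against the 6.25% ideal, a deviation of 3.75 points: tolerated by the 10% threshold (zero moves), but not by the 1% threshold (two moves of about 18.75 GB each), which is exactly what the two assertions check.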

Example 27 with DiskBalancerDataNode

Use of org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerDataNode in project hadoop by apache.

From the class TestPlanner, the method testGreedyPlannerMoveFromSingleDisk:

@Test
public void testGreedyPlannerMoveFromSingleDisk() throws Exception {
    NullConnector nullConnector = new NullConnector();
    DiskBalancerCluster cluster = new DiskBalancerCluster(nullConnector);
    DiskBalancerDataNode node = new DiskBalancerDataNode(UUID.randomUUID().toString());
    // All disks have the same capacity; all the data sits on volume100.
    DiskBalancerVolume volume1 = createVolume("volume100", 200, 100);
    DiskBalancerVolume volume2 = createVolume("volume0-1", 200, 0);
    DiskBalancerVolume volume3 = createVolume("volume0-2", 200, 0);
    node.addVolume(volume1);
    node.addVolume(volume2);
    node.addVolume(volume3);
    nullConnector.addNode(node);
    cluster.readClusterInfo();
    Assert.assertEquals(1, cluster.getNodes().size());
    GreedyPlanner planner = new GreedyPlanner(10.0f, node);
    NodePlan plan = new NodePlan(node.getDataNodeName(), node.getDataNodePort());
    planner.balanceVolumeSet(node, node.getVolumeSets().get("SSD"), plan);
    // We should see 2 move plans. One from volume100 to volume0-1
    // and another from volume100 to volume0-2
    assertEquals(2, plan.getVolumeSetPlans().size());
    Step step = plan.getVolumeSetPlans().get(0);
    assertEquals("volume100", step.getSourceVolume().getPath());
    assertTrue(step.getSizeString(step.getBytesToMove()).matches("33\\.[234] G"));
    step = plan.getVolumeSetPlans().get(1);
    assertEquals("volume100", step.getSourceVolume().getPath());
    assertTrue(step.getSizeString(step.getBytesToMove()).matches("33\\.[234] G"));
}
Also used: NodePlan (org.apache.hadoop.hdfs.server.diskbalancer.planner.NodePlan), DiskBalancerCluster (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster), NullConnector (org.apache.hadoop.hdfs.server.diskbalancer.connectors.NullConnector), DiskBalancerVolume (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume), GreedyPlanner (org.apache.hadoop.hdfs.server.diskbalancer.planner.GreedyPlanner), Step (org.apache.hadoop.hdfs.server.diskbalancer.planner.Step), DiskBalancerDataNode (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerDataNode), Test (org.junit.Test)
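
Both tests above call a createVolume helper that this excerpt does not show. Here is a minimal sketch of what it presumably does, assuming the capacity and used arguments are in GB and that DiskBalancerVolume exposes plain setters for UUID, path, storage type, capacity, and used bytes; treat the setter names as assumptions inferred from the data model, not verbatim TestPlanner code.

// Hypothetical reconstruction of the helper the tests rely on.
private DiskBalancerVolume createVolume(String path, int capacityInGB, int usedInGB) {
    final long GB = 1L << 30;
    DiskBalancerVolume volume = new DiskBalancerVolume();
    volume.setUuid(UUID.randomUUID().toString());
    volume.setPath(path);
    // The tests look up node.getVolumeSets().get("SSD"), so the volumes
    // presumably carry the SSD storage type.
    volume.setStorageType("SSD");
    volume.setCapacity(capacityInGB * GB);
    volume.setUsed(usedInGB * GB);
    return volume;
}

With such a helper, createVolume("volume100", 200, 100) yields a 200 GB SSD volume that is 50% used. The asserted "33.[234] G" then follows from the same arithmetic as before: 600 GB total capacity, 100 GB total used, an ideal ratio of 1/6, so each empty 200 GB disk should receive about 200 / 6 = 33.33 GB.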

Aggregations

DiskBalancerDataNode (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerDataNode): 27 usages
Test (org.junit.Test): 19 usages
DiskBalancerVolume (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume): 16 usages
DiskBalancerCluster (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster): 13 usages
NodePlan (org.apache.hadoop.hdfs.server.diskbalancer.planner.NodePlan): 12 usages
GreedyPlanner (org.apache.hadoop.hdfs.server.diskbalancer.planner.GreedyPlanner): 11 usages
NullConnector (org.apache.hadoop.hdfs.server.diskbalancer.connectors.NullConnector): 10 usages
DiskBalancerVolumeSet (org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolumeSet): 7 usages
Step (org.apache.hadoop.hdfs.server.diskbalancer.planner.Step): 5 usages
ClusterConnector (org.apache.hadoop.hdfs.server.diskbalancer.connectors.ClusterConnector): 3 usages
LinkedList (java.util.LinkedList): 2 usages
DiskBalancerException (org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException): 2 usages
URI (java.net.URI): 1 usage
StrBuilder (org.apache.commons.lang.text.StrBuilder): 1 usage
Configuration (org.apache.hadoop.conf.Configuration): 1 usage
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream): 1 usage
Path (org.apache.hadoop.fs.Path): 1 usage
StorageType (org.apache.hadoop.fs.StorageType): 1 usage
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 1 usage
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 1 usage