
Example 1 with DecommissionManager

Use of org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager in project hadoop by apache.

From the class TestDecommission, method testBlocksPerInterval.

@Test(timeout = 120000)
public void testBlocksPerInterval() throws Exception {
    org.apache.log4j.Logger.getLogger(DecommissionManager.class).setLevel(Level.TRACE);
    // Turn the blocks per interval way down
    getConf().setInt(DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_BLOCKS_PER_INTERVAL_KEY, 3);
    // Disable the normal monitor runs
    getConf().setInt(DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_INTERVAL_KEY, Integer.MAX_VALUE);
    startCluster(1, 3);
    final FileSystem fs = getCluster().getFileSystem();
    final DatanodeManager datanodeManager = getCluster().getNamesystem().getBlockManager().getDatanodeManager();
    final DecommissionManager decomManager = datanodeManager.getDecomManager();
    // Write a 3 block file, so each node has one block. Should scan 3 nodes.
    DFSTestUtil.createFile(fs, new Path("/file1"), 64, (short) 3, 0xBAD1DEA);
    doDecomCheck(datanodeManager, decomManager, 3);
    // Write another file, should only scan two
    DFSTestUtil.createFile(fs, new Path("/file2"), 64, (short) 3, 0xBAD1DEA);
    doDecomCheck(datanodeManager, decomManager, 2);
    // One more file, should only scan 1
    DFSTestUtil.createFile(fs, new Path("/file3"), 64, (short) 3, 0xBAD1DEA);
    doDecomCheck(datanodeManager, decomManager, 1);
    // blocks on each DN now exceeds limit, still scan at least one node
    DFSTestUtil.createFile(fs, new Path("/file4"), 64, (short) 3, 0xBAD1DEA);
    doDecomCheck(datanodeManager, decomManager, 1);
}
Also used: DecommissionManager (org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager), Path (org.apache.hadoop.fs.Path), DatanodeManager (org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager), FileSystem (org.apache.hadoop.fs.FileSystem), Test (org.junit.Test)
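
The test above relies on a doDecomCheck helper that is not reproduced on this page. A minimal sketch of what such a helper might look like, assuming DecommissionManager exposes getNumNodesChecked() for tests and that putNodeInService(...) is the counterpart of the takeNodeOutofService(...) helper used in Example 2 (both names are assumptions taken from the surrounding test class, not shown here):

private void doDecomCheck(DatanodeManager datanodeManager,
    DecommissionManager decomManager, int expectedNumCheckedNodes)
    throws Exception {
    // Start decommissioning every datanode in the mini cluster
    ArrayList<DatanodeInfo> decommissionedNodes = Lists.newArrayList();
    for (DataNode d : getCluster().getDataNodes()) {
        DatanodeInfo dn = takeNodeOutofService(0, d.getDatanodeUuid(), 0,
            decommissionedNodes, AdminStates.DECOMMISSION_INPROGRESS);
        decommissionedNodes.add(dn);
    }
    // Run one monitor pass and verify how many nodes were scanned
    BlockManagerTestUtil.recheckDecommissionState(datanodeManager);
    assertEquals("Unexpected # of nodes checked", expectedNumCheckedNodes,
        decomManager.getNumNodesChecked());
    // Recommission all nodes so the next check starts from a clean state
    for (DatanodeInfo dn : decommissionedNodes) {
        putNodeInService(0, dn);
    }
}

Because DFS_NAMENODE_DECOMMISSION_BLOCKS_PER_INTERVAL_KEY is set to 3 and each file adds one replica per datanode, each monitor pass can scan fewer nodes as the per-node block count grows, which is what the expected counts 3, 2, 1, 1 in the test assert.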

Example 2 with DecommissionManager

Use of org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager in project hadoop by apache.

From the class TestDecommission, method testPendingNodes.

@Test(timeout = 120000)
public void testPendingNodes() throws Exception {
    org.apache.log4j.Logger.getLogger(DecommissionManager.class).setLevel(Level.TRACE);
    // Only allow one node to be decom'd at a time
    getConf().setInt(DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_MAX_CONCURRENT_TRACKED_NODES, 1);
    // Disable the normal monitor runs
    getConf().setInt(DFSConfigKeys.DFS_NAMENODE_DECOMMISSION_INTERVAL_KEY, Integer.MAX_VALUE);
    startCluster(1, 3);
    final FileSystem fs = getCluster().getFileSystem();
    final DatanodeManager datanodeManager = getCluster().getNamesystem().getBlockManager().getDatanodeManager();
    final DecommissionManager decomManager = datanodeManager.getDecomManager();
    // Keep a file open to prevent decom from progressing
    HdfsDataOutputStream open1 = (HdfsDataOutputStream) fs.create(new Path("/openFile1"), (short) 3);
    // Flush and trigger block reports so the block definitely shows up on NN
    open1.write(123);
    open1.hflush();
    for (DataNode d : getCluster().getDataNodes()) {
        DataNodeTestUtils.triggerBlockReport(d);
    }
    // Decom two nodes, so one is still alive
    ArrayList<DatanodeInfo> decommissionedNodes = Lists.newArrayList();
    for (int i = 0; i < 2; i++) {
        final DataNode d = getCluster().getDataNodes().get(i);
        DatanodeInfo dn = takeNodeOutofService(0, d.getDatanodeUuid(), 0, decommissionedNodes, AdminStates.DECOMMISSION_INPROGRESS);
        decommissionedNodes.add(dn);
    }
    for (int i = 2; i >= 0; i--) {
        assertTrackedAndPending(decomManager, 0, i);
        BlockManagerTestUtil.recheckDecommissionState(datanodeManager);
    }
    // Close file, try to decom the last node, should get stuck in tracked
    open1.close();
    final DataNode d = getCluster().getDataNodes().get(2);
    DatanodeInfo dn = takeNodeOutofService(0, d.getDatanodeUuid(), 0, decommissionedNodes, AdminStates.DECOMMISSION_INPROGRESS);
    decommissionedNodes.add(dn);
    BlockManagerTestUtil.recheckDecommissionState(datanodeManager);
    assertTrackedAndPending(decomManager, 1, 0);
}
Also used: DecommissionManager (org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager), Path (org.apache.hadoop.fs.Path), DatanodeManager (org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager), DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo), DataNode (org.apache.hadoop.hdfs.server.datanode.DataNode), FileSystem (org.apache.hadoop.fs.FileSystem), HdfsDataOutputStream (org.apache.hadoop.hdfs.client.HdfsDataOutputStream), Test (org.junit.Test)
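
assertTrackedAndPending is likewise a private helper of the test class that is not shown here. A minimal sketch, assuming DecommissionManager exposes getNumTrackedNodes() and getNumPendingNodes() accessors for tests:

private void assertTrackedAndPending(DecommissionManager decomManager,
    int tracked, int pending) {
    // "Tracked" nodes are actively scanned by the decommission monitor;
    // "pending" nodes are queued because the concurrent tracked limit was hit.
    assertEquals("Unexpected number of tracked nodes", tracked,
        decomManager.getNumTrackedNodes());
    assertEquals("Unexpected number of pending nodes", pending,
        decomManager.getNumPendingNodes());
}

With DFS_NAMENODE_DECOMMISSION_MAX_CONCURRENT_TRACKED_NODES set to 1, each recheck promotes at most one queued node, which is why the loop in the test expects the pending count to drop from 2 to 0 one node at a time before the final node ends up tracked.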

Aggregations

FileSystem (org.apache.hadoop.fs.FileSystem): 2 examples
Path (org.apache.hadoop.fs.Path): 2 examples
DatanodeManager (org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager): 2 examples
DecommissionManager (org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager): 2 examples
Test (org.junit.Test): 2 examples
HdfsDataOutputStream (org.apache.hadoop.hdfs.client.HdfsDataOutputStream): 1 example
DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo): 1 example
DataNode (org.apache.hadoop.hdfs.server.datanode.DataNode): 1 example