Example 6 with NumberReplicas

Use of org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas in the Apache Hadoop project.

From the class TestReadOnlySharedStorage, method testNormalReplicaOffline:

/**
   * Verify that the NameNode is still able to use <tt>READ_ONLY_SHARED</tt> replicas even 
   * when the single NORMAL replica is offline (and the effective replication count is 0).
   */
@Test
public void testNormalReplicaOffline() throws Exception {
    // Stop the datanode hosting the NORMAL replica
    cluster.stopDataNode(normalDataNode.getXferAddr());
    // Force NameNode to detect that the datanode is down
    BlockManagerTestUtil.noticeDeadDatanode(cluster.getNameNode(), normalDataNode.getXferAddr());
    // The live replica count should now be zero (since the NORMAL replica is offline)
    NumberReplicas numberReplicas = blockManager.countNodes(storedBlock);
    assertThat(numberReplicas.liveReplicas(), is(0));
    // The block should be reported as under-replicated
    BlockManagerTestUtil.updateState(blockManager);
    assertThat(blockManager.getUnderReplicatedBlocksCount(), is(1L));
    // The BlockManager should be able to heal the replication count back to 1
    // by triggering an inter-datanode replication from one of the READ_ONLY_SHARED replicas
    BlockManagerTestUtil.computeAllPendingWork(blockManager);
    DFSTestUtil.waitForReplication(cluster, extendedBlock, 1, 1, 0);
    // There should now be 2 *locations* for the block, and 1 *replica*
    assertThat(getLocatedBlock().getLocations().length, is(2));
    validateNumberReplicas(1);
}
Also used: NumberReplicas (org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas), Test (org.junit.Test)
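The test above hinges on the distinction `countNodes` draws between *locations* and *replicas*: a `READ_ONLY_SHARED` storage is a location a client can read from, but it never counts toward the live-replica tally. The following is a hypothetical, self-contained sketch of that accounting, assuming a simplified replica-state model; the enum, record, and class names here are illustrative stand-ins, not Hadoop's real API.

```java
import java.util.List;

// Hypothetical, simplified stand-in for BlockManager.countNodes(): classify
// each replica of a block by storage state and expose the tallies the test
// asserts on (liveReplicas, corruptReplicas). Not Hadoop's actual types.
public class ReplicaCountSketch {

    enum State { NORMAL_LIVE, NORMAL_OFFLINE, READ_ONLY_SHARED, CORRUPT }

    record Replica(String datanode, State state) {}

    static final class Counts {
        int live;
        int corrupt;
        int readOnly;
    }

    // Only NORMAL replicas on live datanodes count toward the effective
    // replication factor; READ_ONLY_SHARED replicas are locations, not
    // replicas, and an offline NORMAL replica is neither live nor corrupt.
    static Counts countNodes(List<Replica> replicas) {
        Counts c = new Counts();
        for (Replica r : replicas) {
            switch (r.state()) {
                case NORMAL_LIVE -> c.live++;
                case CORRUPT -> c.corrupt++;
                case READ_ONLY_SHARED -> c.readOnly++;
                case NORMAL_OFFLINE -> { /* dropped from both tallies */ }
            }
        }
        return c;
    }

    public static void main(String[] args) {
        // Mirrors testNormalReplicaOffline: the lone NORMAL replica is down,
        // so the live count drops to 0 even though a READ_ONLY_SHARED
        // location still exists for readers.
        List<Replica> replicas = List.of(
                new Replica("dn1", State.NORMAL_OFFLINE),
                new Replica("dn2", State.READ_ONLY_SHARED));
        Counts c = countNodes(replicas);
        System.out.println(c.live + " live, " + c.corrupt + " corrupt, "
                + c.readOnly + " read-only");
    }
}
```

Under this model, re-replicating from the read-only location (as `computeAllPendingWork` triggers in the test) would add a `NORMAL_LIVE` entry and restore the live count to 1, giving the "2 locations, 1 replica" end state the assertions check.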

Example 7 with NumberReplicas

Use of org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas in the Apache Hadoop project.

From the class TestReadOnlySharedStorage, method testReadOnlyReplicaCorrupt:

/**
   * Verify that corrupt <tt>READ_ONLY_SHARED</tt> replicas aren't counted 
   * towards the corrupt replicas total.
   */
@Test
public void testReadOnlyReplicaCorrupt() throws Exception {
    // "Corrupt" a READ_ONLY_SHARED replica by reporting it as a bad replica
    client.reportBadBlocks(new LocatedBlock[] { new LocatedBlock(extendedBlock, new DatanodeInfo[] { readOnlyDataNode }) });
    // There should now be only 1 *location* for the block, as the READ_ONLY_SHARED replica is corrupt
    waitForLocations(1);
    // However, the corrupt READ_ONLY_SHARED replica should *not* affect the overall corrupt replicas count
    NumberReplicas numberReplicas = blockManager.countNodes(storedBlock);
    assertThat(numberReplicas.corruptReplicas(), is(0));
}
Also used: DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo), LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock), NumberReplicas (org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas), Test (org.junit.Test)
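The invariant this test checks can be stated as a single predicate: a replica reported bad is dropped from the block's locations, but it only raises `corruptReplicas()` if it sat on storage that counted toward replication in the first place. The helper below is a hypothetical illustration of that rule, assuming a simplified storage-type enum; it is not Hadoop's actual corruption-handling code.

```java
// Hypothetical helper illustrating the invariant testReadOnlyReplicaCorrupt
// asserts: a replica reported bad on READ_ONLY_SHARED storage is removed
// from the block's locations but does not increment the corrupt-replica
// tally, because it never counted toward replication. StorageType here is
// a simplified stand-in for Hadoop's enum.
public class CorruptFilterSketch {

    enum StorageType { DISK, READ_ONLY_SHARED }

    // Only replicas on writable storage can push corruptReplicas() up.
    static boolean countsAsCorrupt(StorageType type, boolean reportedBad) {
        return reportedBad && type != StorageType.READ_ONLY_SHARED;
    }

    public static void main(String[] args) {
        // The scenario from the test: the bad READ_ONLY_SHARED replica is
        // filtered out of the corrupt count...
        System.out.println(countsAsCorrupt(StorageType.READ_ONLY_SHARED, true));
        // ...whereas a bad replica on ordinary DISK storage would count.
        System.out.println(countsAsCorrupt(StorageType.DISK, true));
    }
}
```

This keeps `corruptReplicas()` at 0 in the test even though `waitForLocations(1)` confirms the bad replica was pruned from the location list.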

Aggregations

NumberReplicas (org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas): 7
Test (org.junit.Test): 4
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 3
BlockInfo (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo): 3
IOException (java.io.IOException): 2
DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo): 2
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock): 2
BlockInfoStriped (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped): 2
BlockManager (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager): 2
DataNode (org.apache.hadoop.hdfs.server.datanode.DataNode): 2
FileNotFoundException (java.io.FileNotFoundException): 1
BitSet (java.util.BitSet): 1
Path (org.apache.hadoop.fs.Path): 1
UnresolvedLinkException (org.apache.hadoop.fs.UnresolvedLinkException): 1
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 1
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 1
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 1
Block (org.apache.hadoop.hdfs.protocol.Block): 1
DatanodeID (org.apache.hadoop.hdfs.protocol.DatanodeID): 1
LocatedBlocks (org.apache.hadoop.hdfs.protocol.LocatedBlocks): 1