Example 81 with Block

Use of org.apache.hadoop.hdfs.protocol.Block in project hadoop by apache.

From class TestReadStripedFileWithDecoding, method testInvalidateBlock: deletes a striped file while the hosting DataNode's heartbeats are disabled and verifies the block is tracked for invalidation on the NameNode.

@Test
public void testInvalidateBlock() throws IOException {
    final Path file = new Path("/invalidate");
    final int length = 10;
    final byte[] bytes = StripedFileTestUtil.generateBytes(length);
    DFSTestUtil.writeFile(fs, file, bytes);
    int dnIndex = findFirstDataNode(file, cellSize * dataBlocks);
    Assert.assertNotEquals(-1, dnIndex);
    LocatedStripedBlock slb = (LocatedStripedBlock) fs.getClient().getLocatedBlocks(file.toString(), 0, cellSize * dataBlocks).get(0);
    final LocatedBlock[] blks = StripedBlockUtil.parseStripedBlockGroup(slb, cellSize, dataBlocks, parityBlocks);
    final Block b = blks[0].getBlock().getLocalBlock();
    DataNode dn = cluster.getDataNodes().get(dnIndex);
    // Disable heartbeats from the DN so that the invalidated block record is
    // kept on the NameNode until the heartbeat expires and the NN marks the DN as dead.
    DataNodeTestUtils.setHeartbeatsDisabledForTests(dn, true);
    try {
        // delete the file
        fs.delete(file, true);
        // check the block is added to invalidateBlocks
        final FSNamesystem fsn = cluster.getNamesystem();
        final BlockManager bm = fsn.getBlockManager();
        DatanodeDescriptor dnd = NameNodeAdapter.getDatanode(fsn, dn.getDatanodeId());
        Assert.assertTrue(bm.containsInvalidateBlock(blks[0].getLocations()[0], b) || dnd.containsInvalidateBlock(b));
    } finally {
        DataNodeTestUtils.setHeartbeatsDisabledForTests(dn, false);
    }
}
Also used: Path (org.apache.hadoop.fs.Path) DatanodeDescriptor (org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor) LocatedStripedBlock (org.apache.hadoop.hdfs.protocol.LocatedStripedBlock) DataNode (org.apache.hadoop.hdfs.server.datanode.DataNode) BlockManager (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager) LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock) ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock) Block (org.apache.hadoop.hdfs.protocol.Block) FSNamesystem (org.apache.hadoop.hdfs.server.namenode.FSNamesystem) Test (org.junit.Test)
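
The test relies on a findFirstDataNode helper that is not shown here. A minimal sketch of such a helper, assuming the test fixture's fs and cluster fields, org.apache.hadoop.fs.BlockLocation, and matching DataNodes by their transfer port (the matching strategy is an assumption for illustration, not the verbatim Hadoop source):

private int findFirstDataNode(Path file, long length) throws IOException {
    // Look up the first block of the file and take the first replica's
    // name (host:port), then find the MiniDFSCluster DataNode whose
    // transfer port appears in that name.
    BlockLocation[] locs = fs.getFileBlockLocations(file, 0, length);
    String name = locs[0].getNames()[0];
    int dnIndex = 0;
    for (DataNode dn : cluster.getDataNodes()) {
        if (name.contains(Integer.toString(dn.getXferPort()))) {
            return dnIndex;
        }
        dnIndex++;
    }
    // No match found; the caller asserts against -1.
    return -1;
}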

Example 82 with Block

Use of org.apache.hadoop.hdfs.protocol.Block in project hadoop by apache.

From class TestPBHelper, method getBlockWithLocations: a helper that builds a BlockWithLocations (optionally wrapped as a StripedBlockWithLocations) for the protobuf conversion tests.

private static BlockWithLocations getBlockWithLocations(int bid, boolean isStriped) {
    final String[] datanodeUuids = { "dn1", "dn2", "dn3" };
    final String[] storageIDs = { "s1", "s2", "s3" };
    final StorageType[] storageTypes = { StorageType.DISK, StorageType.DISK, StorageType.DISK };
    final byte[] indices = { 0, 1, 2 };
    final short dataBlkNum = 6;
    BlockWithLocations blkLocs = new BlockWithLocations(new Block(bid, 0, 1), datanodeUuids, storageIDs, storageTypes);
    if (isStriped) {
        // Wrap with striping metadata: the block index stored on each
        // storage and the number of data blocks in the group.
        blkLocs = new StripedBlockWithLocations(blkLocs, indices, dataBlkNum,
                StripedFileTestUtil.getDefaultECPolicy().getCellSize());
    }
    return blkLocs;
}
Also used: StorageType (org.apache.hadoop.fs.StorageType) StripedBlockWithLocations (org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations.StripedBlockWithLocations) BlockWithLocations (org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations.BlockWithLocations) Block (org.apache.hadoop.hdfs.protocol.Block) ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock) RecoveringBlock (org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand.RecoveringBlock) LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock) ByteString (com.google.protobuf.ByteString)
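
For context, the conversion tests call this helper and round-trip the result through the protobuf form, along these lines (a sketch; the PBHelper.convert overloads for BlocksWithLocations are assumed from the surrounding TestPBHelper code rather than shown above):

BlockWithLocations locs = getBlockWithLocations(1, false);
// Wrap, convert to protobuf, convert back, and compare the block.
BlocksWithLocationsProto proto = PBHelper.convert(
        new BlocksWithLocations(new BlockWithLocations[] { locs }));
BlockWithLocations roundTripped = PBHelper.convert(proto).getBlocks()[0];
assertEquals(locs.getBlock(), roundTripped.getBlock());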

Example 83 with Block

Use of org.apache.hadoop.hdfs.protocol.Block in project hadoop by apache.

From class TestPBHelper, method testConvertBlock: verifies that a Block survives a round trip through its protobuf representation.

@Test
public void testConvertBlock() {
    Block b = new Block(1, 100, 3);
    BlockProto bProto = PBHelperClient.convert(b);
    Block b2 = PBHelperClient.convert(bProto);
    assertEquals(b, b2);
}
Also used: BlockProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto) ExtendedBlockProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ExtendedBlockProto) LocatedBlockProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto) RecoveringBlockProto (org.apache.hadoop.hdfs.protocol.proto.HdfsServerProtos.RecoveringBlockProto) Block (org.apache.hadoop.hdfs.protocol.Block) ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock) RecoveringBlock (org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand.RecoveringBlock) LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock) Test (org.junit.Test)
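
The three constructor arguments are the block ID, the length in bytes, and the generation stamp. An equivalent field-by-field check using Block's standard accessors (nesting the two convert overloads shown above):

Block b2 = PBHelperClient.convert(PBHelperClient.convert(new Block(1, 100, 3)));
assertEquals(1, b2.getBlockId());
assertEquals(100, b2.getNumBytes());
assertEquals(3, b2.getGenerationStamp());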

Example 84 with Block

Use of org.apache.hadoop.hdfs.protocol.Block in project hadoop by apache.

From class TestBlockInfoStriped, method testWrite: verifies that serializing a BlockInfoStriped writes exactly the three longs of the underlying Block.

@Test
public void testWrite() {
    long blkID = 1;
    long numBytes = 1;
    long generationStamp = 1;
    // The expected serialized form is exactly three longs: block ID,
    // length in bytes, and generation stamp.
    ByteBuffer byteBuffer = ByteBuffer.allocate(Long.SIZE / Byte.SIZE * 3);
    byteBuffer.putLong(blkID).putLong(numBytes).putLong(generationStamp);
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    DataOutput out = new DataOutputStream(byteStream);
    BlockInfoStriped blk = new BlockInfoStriped(new Block(blkID, numBytes, generationStamp), testECPolicy);
    try {
        blk.write(out);
    } catch (Exception ex) {
        fail("testWrite error:" + ex.getMessage());
    }
    assertEquals(byteBuffer.array().length, byteStream.toByteArray().length);
    assertArrayEquals(byteBuffer.array(), byteStream.toByteArray());
}
Also used: DataOutput (java.io.DataOutput) DataOutputStream (java.io.DataOutputStream) Block (org.apache.hadoop.hdfs.protocol.Block) ByteArrayOutputStream (java.io.ByteArrayOutputStream) ByteBuffer (java.nio.ByteBuffer) Test (org.junit.Test)
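
A complementary check (a sketch, assuming Block's Writable readFields mirrors the three-long layout asserted above; uses java.io.ByteArrayInputStream, java.io.DataInput, and java.io.DataInputStream, with the enclosing test declaring throws IOException):

DataInput in = new DataInputStream(
        new ByteArrayInputStream(byteStream.toByteArray()));
// Deserialize into a fresh Block and compare field by field.
Block read = new Block();
read.readFields(in);
assertEquals(blkID, read.getBlockId());
assertEquals(numBytes, read.getNumBytes());
assertEquals(generationStamp, read.getGenerationStamp());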

Example 85 with Block

Use of org.apache.hadoop.hdfs.protocol.Block in project hadoop by apache.

From class TestBlockInfoStriped, method testAddStorageWithDifferentBlockGroup: verifies that addStorage rejects a reported block from a different block group with an IllegalArgumentException.

@Test(expected = IllegalArgumentException.class)
public void testAddStorageWithDifferentBlockGroup() {
    DatanodeStorageInfo storage = DFSTestUtil.createDatanodeStorageInfo("storageID", "127.0.0.1");
    // A block whose ID (BASE_ID + 100) lies outside the fixture's block group.
    BlockInfo diffGroup = new BlockInfoStriped(new Block(BASE_ID + 100), testECPolicy);
    // Adding a storage for a block from a different group must throw
    // the IllegalArgumentException expected by the annotation above.
    info.addStorage(storage, diffGroup);
}
Also used: Block (org.apache.hadoop.hdfs.protocol.Block) Test (org.junit.Test)
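
For contrast, a reported block whose ID falls inside the same block group would be accepted (a sketch; BASE_ID, info, and testECPolicy are the test fixture's fields):

DatanodeStorageInfo storage2 = DFSTestUtil.createDatanodeStorageInfo("storageID2", "127.0.0.2");
// Internal block 1 of the same group shares the group's base ID, so no exception.
info.addStorage(storage2, new Block(BASE_ID + 1));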

Aggregations

Usage counts across the indexed examples:

Block (org.apache.hadoop.hdfs.protocol.Block): 155
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 79
Test (org.junit.Test): 77
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock): 74
Path (org.apache.hadoop.fs.Path): 28
LocatedStripedBlock (org.apache.hadoop.hdfs.protocol.LocatedStripedBlock): 26
IOException (java.io.IOException): 24
BlockInfo (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo): 22
Configuration (org.apache.hadoop.conf.Configuration): 20
ReceivedDeletedBlockInfo (org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo): 18
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 17
BlockInfoContiguous (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous): 17
CachedBlock (org.apache.hadoop.hdfs.server.namenode.CachedBlock): 17
BlockInfoStriped (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped): 15
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 14
ArrayList (java.util.ArrayList): 12
RecoveringBlock (org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand.RecoveringBlock): 11
DatanodeStorage (org.apache.hadoop.hdfs.server.protocol.DatanodeStorage): 11
FsPermission (org.apache.hadoop.fs.permission.FsPermission): 10
DatanodeRegistration (org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration): 10