Example 11 with BlockReportContext

Use of org.apache.hadoop.hdfs.server.protocol.BlockReportContext in project hadoop by apache.

The class TestBlockHasMultipleReplicasOnSameDN, method testBlockHasMultipleReplicasOnSameDN.

/**
 * Verify NameNode behavior when a given DN reports multiple replicas
 * of a given block.
 */
@Test
public void testBlockHasMultipleReplicasOnSameDN() throws IOException {
    String filename = makeFileName(GenericTestUtils.getMethodName());
    Path filePath = new Path(filename);
    // Write out a file with a few blocks.
    DFSTestUtil.createFile(fs, filePath, BLOCK_SIZE, BLOCK_SIZE * NUM_BLOCKS, BLOCK_SIZE, NUM_DATANODES, seed);
    // Get the block list for the file with the block locations.
    LocatedBlocks locatedBlocks = client.getLocatedBlocks(filePath.toString(), 0, BLOCK_SIZE * NUM_BLOCKS);
    // Generate a fake block report from one of the DataNodes, such
    // that it reports one copy of each block on either storage.
    DataNode dn = cluster.getDataNodes().get(0);
    DatanodeRegistration dnReg = dn.getDNRegistrationForBP(bpid);
    StorageBlockReport[] reports = new StorageBlockReport[cluster.getStoragesPerDatanode()];
    ArrayList<ReplicaInfo> blocks = new ArrayList<>();
    for (LocatedBlock locatedBlock : locatedBlocks.getLocatedBlocks()) {
        Block localBlock = locatedBlock.getBlock().getLocalBlock();
        blocks.add(new FinalizedReplica(localBlock, null, null));
    }
    Collections.sort(blocks);
    try (FsDatasetSpi.FsVolumeReferences volumes = dn.getFSDataset().getFsVolumeReferences()) {
        BlockListAsLongs bll = BlockListAsLongs.encode(blocks);
        for (int i = 0; i < cluster.getStoragesPerDatanode(); ++i) {
            DatanodeStorage dns = new DatanodeStorage(volumes.get(i).getStorageID());
            reports[i] = new StorageBlockReport(dns, bll);
        }
    }
    // Should not assert!
    cluster.getNameNodeRpc().blockReport(dnReg, bpid, reports, new BlockReportContext(1, 0, System.nanoTime(), 0L, true));
    // Get the block locations once again.
    locatedBlocks = client.getLocatedBlocks(filename, 0, BLOCK_SIZE * NUM_BLOCKS);
    // Make sure that each block has two replicas, one on each DataNode.
    for (LocatedBlock locatedBlock : locatedBlocks.getLocatedBlocks()) {
        DatanodeInfo[] locations = locatedBlock.getLocations();
        assertThat(locations.length, is((int) NUM_DATANODES));
        assertThat(locations[0].getDatanodeUuid(), not(locations[1].getDatanodeUuid()));
    }
}
Also used: Path(org.apache.hadoop.fs.Path) DatanodeInfo(org.apache.hadoop.hdfs.protocol.DatanodeInfo) LocatedBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlocks) FsDatasetSpi(org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi) StorageBlockReport(org.apache.hadoop.hdfs.server.protocol.StorageBlockReport) ArrayList(java.util.ArrayList) LocatedBlock(org.apache.hadoop.hdfs.protocol.LocatedBlock) DatanodeRegistration(org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration) BlockReportContext(org.apache.hadoop.hdfs.server.protocol.BlockReportContext) DatanodeStorage(org.apache.hadoop.hdfs.server.protocol.DatanodeStorage) BlockListAsLongs(org.apache.hadoop.hdfs.protocol.BlockListAsLongs) Block(org.apache.hadoop.hdfs.protocol.Block) Test(org.junit.Test)
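
The five positional arguments passed to BlockReportContext above correspond, in the class's constructor, to totalRpcs, curRpc, reportId, leaseId, and sorted. A minimal annotated sketch of the same call; the per-argument notes are our reading of the HDFS source, not part of this example:

BlockReportContext context = new BlockReportContext(
        1,                  // totalRpcs: the whole report is sent in a single RPC
        0,                  // curRpc: zero-based index of this RPC within the report
        System.nanoTime(),  // reportId: id correlating the RPCs of one report
        0L,                 // leaseId: 0L means no full block report lease is held
        true);              // sorted: replicas are sorted, as Collections.sort guarantees above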

Example 12 with BlockReportContext

Use of org.apache.hadoop.hdfs.server.protocol.BlockReportContext in project hadoop by apache.

The class TestLargeBlockReport, method testBlockReportExceedsLengthLimit.

@Test
public void testBlockReportExceedsLengthLimit() throws Exception {
    initCluster();
    // Create a large enough report that we expect it will go beyond the RPC
    // server's length validation, and also protobuf length validation.
    StorageBlockReport[] reports = createReports(6000000);
    try {
        nnProxy.blockReport(bpRegistration, bpId, reports, new BlockReportContext(1, 0, reportId, fullBrLeaseId, sorted));
        fail("Should have failed because of the too long RPC data length");
    } catch (Exception e) {
        // Expected. We can't reliably assert anything about the exception type
        // or the message. The NameNode just disconnects, and the details are
        // buried in the NameNode log.
    }
}
Also used: BlockReportContext(org.apache.hadoop.hdfs.server.protocol.BlockReportContext) StorageBlockReport(org.apache.hadoop.hdfs.server.protocol.StorageBlockReport) Test(org.junit.Test)
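
The createReports helper is private to TestLargeBlockReport and not shown on this page. A plausible sketch, assuming the BlockListAsLongs.encode and FinalizedReplica APIs from Example 11 and a single fake storage; the real helper may build its report differently:

private static StorageBlockReport[] createReports(int numBlocks) {
    // Build numBlocks synthetic finalized replicas; the block id doubles as
    // a unique value, while length and generation stamp stay at 0.
    List<FinalizedReplica> blocks = new ArrayList<>(numBlocks);
    for (int i = 0; i < numBlocks; i++) {
        blocks.add(new FinalizedReplica(new Block(i, 0, 0), null, null));
    }
    // Encode the replica list into the compact long-array wire format and
    // attach it to one fake storage, yielding a single-storage report.
    BlockListAsLongs bll = BlockListAsLongs.encode(blocks);
    DatanodeStorage storage = new DatanodeStorage(DatanodeStorage.generateUuid());
    return new StorageBlockReport[] { new StorageBlockReport(storage, bll) };
}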

Example 13 with BlockReportContext

Use of org.apache.hadoop.hdfs.server.protocol.BlockReportContext in project hadoop by apache.

The class TestLargeBlockReport, method testBlockReportSucceedsWithLargerLengthLimit.

@Test
public void testBlockReportSucceedsWithLargerLengthLimit() throws Exception {
    // 128 MB
    conf.setInt(IPC_MAXIMUM_DATA_LENGTH, 128 * 1024 * 1024);
    initCluster();
    StorageBlockReport[] reports = createReports(6000000);
    nnProxy.blockReport(bpRegistration, bpId, reports, new BlockReportContext(1, 0, reportId, fullBrLeaseId, sorted));
}
Also used: BlockReportContext(org.apache.hadoop.hdfs.server.protocol.BlockReportContext) StorageBlockReport(org.apache.hadoop.hdfs.server.protocol.StorageBlockReport) Test(org.junit.Test)
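
IPC_MAXIMUM_DATA_LENGTH names the ipc.maximum.data.length property, which caps the byte length of a single RPC request (64 MB by default), so the 6,000,000-block report from the previous example only gets through once the cap is raised to 128 MB before the cluster starts. A minimal sketch of that setup, assuming the usual Configuration and CommonConfigurationKeys imports:

// ipc.maximum.data.length defaults to 64 MB; a report for ~6M replicas
// encodes to more than that, so the NameNode's RPC server rejects it
// unless the limit is raised before the NameNode comes up.
Configuration conf = new Configuration();
conf.setInt(CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH, 128 * 1024 * 1024);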

Aggregations

BlockReportContext (org.apache.hadoop.hdfs.server.protocol.BlockReportContext): 13
Test (org.junit.Test): 10
StorageBlockReport (org.apache.hadoop.hdfs.server.protocol.StorageBlockReport): 8
DatanodeStorage (org.apache.hadoop.hdfs.server.protocol.DatanodeStorage): 7
ArrayList (java.util.ArrayList): 5
DatanodeRegistration (org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration): 5
Block (org.apache.hadoop.hdfs.protocol.Block): 4
BlockListAsLongs (org.apache.hadoop.hdfs.protocol.BlockListAsLongs): 4
ReceivedDeletedBlockInfo (org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo): 4
AtomicReference (java.util.concurrent.atomic.AtomicReference): 3
Configuration (org.apache.hadoop.conf.Configuration): 3
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 3
StorageReceivedDeletedBlocks (org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks): 3
IOException (java.io.IOException): 2
Semaphore (java.util.concurrent.Semaphore): 2
Path (org.apache.hadoop.fs.Path): 2
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 2
DatanodeID (org.apache.hadoop.hdfs.protocol.DatanodeID): 2
DataNode (org.apache.hadoop.hdfs.server.datanode.DataNode): 2
FinalizedReplica (org.apache.hadoop.hdfs.server.datanode.FinalizedReplica): 2