Example 21 with BlockListAsLongs

Use of org.apache.hadoop.hdfs.protocol.BlockListAsLongs in project hadoop by apache.

From class TestBlockHasMultipleReplicasOnSameDN, method testBlockHasMultipleReplicasOnSameDN:

/**
   * Verify NameNode behavior when a given DN reports multiple replicas
   * of a given block.
   */
@Test
public void testBlockHasMultipleReplicasOnSameDN() throws IOException {
    String filename = makeFileName(GenericTestUtils.getMethodName());
    Path filePath = new Path(filename);
    // Write out a file with a few blocks.
    DFSTestUtil.createFile(fs, filePath, BLOCK_SIZE, BLOCK_SIZE * NUM_BLOCKS, BLOCK_SIZE, NUM_DATANODES, seed);
    // Get the block list for the file with the block locations.
    LocatedBlocks locatedBlocks = client.getLocatedBlocks(filePath.toString(), 0, BLOCK_SIZE * NUM_BLOCKS);
    // Generate a fake block report from one of the DataNodes, such
    // that it reports one copy of each block on either storage.
    DataNode dn = cluster.getDataNodes().get(0);
    DatanodeRegistration dnReg = dn.getDNRegistrationForBP(bpid);
    StorageBlockReport[] reports = new StorageBlockReport[cluster.getStoragesPerDatanode()];
    ArrayList<ReplicaInfo> blocks = new ArrayList<>();
    for (LocatedBlock locatedBlock : locatedBlocks.getLocatedBlocks()) {
        Block localBlock = locatedBlock.getBlock().getLocalBlock();
        blocks.add(new FinalizedReplica(localBlock, null, null));
    }
    Collections.sort(blocks);
    try (FsDatasetSpi.FsVolumeReferences volumes = dn.getFSDataset().getFsVolumeReferences()) {
        BlockListAsLongs bll = BlockListAsLongs.encode(blocks);
        for (int i = 0; i < cluster.getStoragesPerDatanode(); ++i) {
            DatanodeStorage dns = new DatanodeStorage(volumes.get(i).getStorageID());
            reports[i] = new StorageBlockReport(dns, bll);
        }
    }
    // Should not assert!
    cluster.getNameNodeRpc().blockReport(dnReg, bpid, reports, new BlockReportContext(1, 0, System.nanoTime(), 0L, true));
    // Get the block locations once again.
    locatedBlocks = client.getLocatedBlocks(filename, 0, BLOCK_SIZE * NUM_BLOCKS);
    // Make sure that each block has two replicas, one on each DataNode.
    for (LocatedBlock locatedBlock : locatedBlocks.getLocatedBlocks()) {
        DatanodeInfo[] locations = locatedBlock.getLocations();
        assertThat(locations.length, is((int) NUM_DATANODES));
        assertThat(locations[0].getDatanodeUuid(), not(locations[1].getDatanodeUuid()));
    }
}
Also used: Path(org.apache.hadoop.fs.Path), DatanodeInfo(org.apache.hadoop.hdfs.protocol.DatanodeInfo), LocatedBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlocks), FsDatasetSpi(org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi), StorageBlockReport(org.apache.hadoop.hdfs.server.protocol.StorageBlockReport), ArrayList(java.util.ArrayList), LocatedBlock(org.apache.hadoop.hdfs.protocol.LocatedBlock), DatanodeRegistration(org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration), BlockReportContext(org.apache.hadoop.hdfs.server.protocol.BlockReportContext), DatanodeStorage(org.apache.hadoop.hdfs.server.protocol.DatanodeStorage), BlockListAsLongs(org.apache.hadoop.hdfs.protocol.BlockListAsLongs), Block(org.apache.hadoop.hdfs.protocol.Block), Test(org.junit.Test)
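The invariant this test exercises is that a DataNode reporting the same block on several storages still counts as a single replica per DataNode. A minimal standalone sketch of that deduplication idea (this is an illustrative model, not Hadoop's BlockManager):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: tracks, per block ID, the set of DataNode UUIDs
// holding a replica. Reporting the same (block, datanode) pair from two
// storages leaves exactly one replica for that DataNode.
public class ReplicaDedupSketch {
    // blockId -> DataNode UUIDs that reported a replica of it
    private final Map<Long, Set<String>> replicas = new HashMap<>();

    // Process one per-storage block report from the given DataNode.
    public void processReport(String dnUuid, long[] blockIds) {
        for (long id : blockIds) {
            // HashSet.add is idempotent, so repeated storages are no-ops.
            replicas.computeIfAbsent(id, k -> new HashSet<>()).add(dnUuid);
        }
    }

    // Number of distinct DataNodes holding the block.
    public int replicaCount(long blockId) {
        return replicas.getOrDefault(blockId, Collections.emptySet()).size();
    }
}
```

With this model, two storage reports of block 101 from the same DataNode yield a replica count of 1, which mirrors the assertion above that the two reported locations have distinct datanode UUIDs.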

Example 22 with BlockListAsLongs

Use of org.apache.hadoop.hdfs.protocol.BlockListAsLongs in project hadoop by apache.

From class TestDnRespectsBlockReportSplitThreshold, method verifyCapturedArguments:

private void verifyCapturedArguments(ArgumentCaptor<StorageBlockReport[]> captor, int expectedReportsPerCall, int expectedTotalBlockCount) {
    List<StorageBlockReport[]> listOfReports = captor.getAllValues();
    int numBlocksReported = 0;
    for (StorageBlockReport[] reports : listOfReports) {
        assertThat(reports.length, is(expectedReportsPerCall));
        for (StorageBlockReport report : reports) {
            BlockListAsLongs blockList = report.getBlocks();
            numBlocksReported += blockList.getNumberOfBlocks();
        }
    }
    assert (numBlocksReported >= expectedTotalBlockCount);
}
Also used: StorageBlockReport(org.apache.hadoop.hdfs.server.protocol.StorageBlockReport), BlockListAsLongs(org.apache.hadoop.hdfs.protocol.BlockListAsLongs)
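The `expectedReportsPerCall` parameter reflects the split-threshold behavior under test: when the DataNode's total block count stays at or below `dfs.blockreport.split.threshold`, all per-storage reports travel in one RPC; above it, each storage gets its own RPC. A hedged sketch of that decision (illustrative names, not the actual DataNode code):

```java
// Hypothetical model of the block-report split decision behind
// dfs.blockreport.split.threshold.
public class SplitThresholdSketch {
    // Number of blockReport RPC calls the DataNode would issue.
    public static int numRpcCalls(int totalBlocks, int threshold, int numStorages) {
        return totalBlocks <= threshold ? 1 : numStorages;
    }

    // Number of StorageBlockReport entries carried by each RPC call;
    // this is what verifyCapturedArguments() checks per captured call.
    public static int reportsPerCall(int totalBlocks, int threshold, int numStorages) {
        return totalBlocks <= threshold ? numStorages : 1;
    }
}
```

Either way, the product of calls and reports-per-call covers every storage once, so summing `getNumberOfBlocks()` across all captured reports should reach the expected total.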

Example 23 with BlockListAsLongs

Use of org.apache.hadoop.hdfs.protocol.BlockListAsLongs in project hadoop by apache.

From class TestLargeBlockReport, method createReports:

/**
   * Creates storage block reports, consisting of a single report with the
   * requested number of blocks.  The block data is fake, because the tests just
   * need to validate that the messages can pass correctly.  This intentionally
   * uses the old-style decoding method as a helper.  The test needs to cover
   * the new-style encoding technique.  Passing through that code path here
   * would trigger an exception before the test is ready to deal with it.
   *
   * @param numBlocks requested number of blocks
   * @return storage block reports
   */
private StorageBlockReport[] createReports(int numBlocks) {
    int longsPerBlock = 3;
    int blockListSize = 2 + numBlocks * longsPerBlock;
    List<Long> longs = new ArrayList<Long>(blockListSize);
    longs.add(Long.valueOf(numBlocks));
    longs.add(0L);
    for (int i = 0; i < blockListSize; ++i) {
        longs.add(Long.valueOf(i));
    }
    BlockListAsLongs blockList = BlockListAsLongs.decodeLongs(longs);
    StorageBlockReport[] reports = new StorageBlockReport[] { new StorageBlockReport(dnStorage, blockList) };
    return reports;
}
Also used: ArrayList(java.util.ArrayList), BlockListAsLongs(org.apache.hadoop.hdfs.protocol.BlockListAsLongs), StorageBlockReport(org.apache.hadoop.hdfs.server.protocol.StorageBlockReport)
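The magic numbers in `createReports` follow the legacy long-encoded layout that `decodeLongs` consumes: two header longs (the finalized-block count, then the under-construction count) followed by three longs per finalized block. A self-contained sketch of that layout, assuming the id/length/generation-stamp triple ordering (this models the format, it is not the Hadoop class itself):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the legacy "block list as longs" layout: 2 header longs,
// then 3 longs per finalized block {blockId, numBytes, genStamp}.
public class LongListLayoutSketch {
    static final int LONGS_PER_FINALIZED_BLOCK = 3;

    public static List<Long> encodeFinalized(long[][] blocks) {
        List<Long> longs =
            new ArrayList<>(2 + blocks.length * LONGS_PER_FINALIZED_BLOCK);
        longs.add((long) blocks.length); // header: finalized block count
        longs.add(0L);                   // header: no under-construction blocks
        for (long[] b : blocks) {        // each entry: {blockId, numBytes, genStamp}
            longs.add(b[0]);
            longs.add(b[1]);
            longs.add(b[2]);
        }
        return longs;
    }
}
```

This also explains `blockListSize = 2 + numBlocks * longsPerBlock` in the test above; the test fills the list with sequential fake values since only the message size, not the block data, matters there.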

Aggregations

BlockListAsLongs (org.apache.hadoop.hdfs.protocol.BlockListAsLongs): 23 uses
DatanodeStorage (org.apache.hadoop.hdfs.server.protocol.DatanodeStorage): 12 uses
Test (org.junit.Test): 11 uses
ArrayList (java.util.ArrayList): 8 uses
Map (java.util.Map): 8 uses
StorageBlockReport (org.apache.hadoop.hdfs.server.protocol.StorageBlockReport): 8 uses
Path (org.apache.hadoop.fs.Path): 7 uses
Block (org.apache.hadoop.hdfs.protocol.Block): 7 uses
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock): 5 uses
CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString): 4 uses
Matchers.anyString (org.mockito.Matchers.anyString): 4 uses
IOException (java.io.IOException): 3 uses
BlockReportReplica (org.apache.hadoop.hdfs.protocol.BlockListAsLongs.BlockReportReplica): 3 uses
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 3 uses
BlockReportContext (org.apache.hadoop.hdfs.server.protocol.BlockReportContext): 3 uses
AutoCloseableLock (org.apache.hadoop.util.AutoCloseableLock): 3 uses
ServiceException (com.google.protobuf.ServiceException): 2 uses
HashMap (java.util.HashMap): 2 uses
HashSet (java.util.HashSet): 2 uses
Configuration (org.apache.hadoop.conf.Configuration): 2 uses