
Example 1 with FoldedTreeSet

Use of org.apache.hadoop.hdfs.util.FoldedTreeSet in project hadoop by apache.

The snippet below is the processReport method of the BlockManager class. It falls back to a FoldedTreeSet to sort a full block report on the NameNode whenever the reporting DataNode did not send the report pre-sorted.

private Collection<Block> processReport(final DatanodeStorageInfo storageInfo, final BlockListAsLongs report, BlockReportContext context) throws IOException {
    // Normal case:
    // Modify the (block-->datanode) map, according to the difference
    // between the old and new block report.
    //
    Collection<BlockInfoToAdd> toAdd = new LinkedList<>();
    Collection<BlockInfo> toRemove = new TreeSet<>();
    Collection<Block> toInvalidate = new LinkedList<>();
    Collection<BlockToMarkCorrupt> toCorrupt = new LinkedList<>();
    Collection<StatefulBlockInfo> toUC = new LinkedList<>();
    boolean sorted = false;
    String strBlockReportId = "";
    if (context != null) {
        sorted = context.isSorted();
        strBlockReportId = Long.toHexString(context.getReportId());
    }
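    // Older DataNodes may send the full block report unsorted; in that case
    // fold the replicas into a FoldedTreeSet so reportDiffSorted below can
    // walk them in sorted order.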
    Iterable<BlockReportReplica> sortedReport;
    if (!sorted) {
        blockLog.warn("BLOCK* processReport 0x{}: Report from the DataNode ({}) " + "is unsorted. This will cause overhead on the NameNode " + "which needs to sort the Full BR. Please update the " + "DataNode to the same version of Hadoop HDFS as the " + "NameNode ({}).", strBlockReportId, storageInfo.getDatanodeDescriptor().getDatanodeUuid(), VersionInfo.getVersion());
        Set<BlockReportReplica> set = new FoldedTreeSet<>();
        for (BlockReportReplica iblk : report) {
            set.add(new BlockReportReplica(iblk));
        }
        sortedReport = set;
    } else {
        sortedReport = report;
    }
    reportDiffSorted(storageInfo, sortedReport, toAdd, toRemove, toInvalidate, toCorrupt, toUC);
    DatanodeDescriptor node = storageInfo.getDatanodeDescriptor();
    // Process the blocks on each queue
    for (StatefulBlockInfo b : toUC) {
        addStoredBlockUnderConstruction(b, storageInfo);
    }
    for (BlockInfo b : toRemove) {
        removeStoredBlock(b, node);
    }
    int numBlocksLogged = 0;
    for (BlockInfoToAdd b : toAdd) {
        addStoredBlock(b.stored, b.reported, storageInfo, null, numBlocksLogged < maxNumBlocksToLog);
        numBlocksLogged++;
    }
    if (numBlocksLogged > maxNumBlocksToLog) {
        blockLog.info("BLOCK* processReport 0x{}: logged info for {} of {} " + "reported.", strBlockReportId, maxNumBlocksToLog, numBlocksLogged);
    }
    for (Block b : toInvalidate) {
        addToInvalidates(b, node);
    }
    for (BlockToMarkCorrupt b : toCorrupt) {
        markBlockAsCorrupt(b, storageInfo, node);
    }
    return toInvalidate;
}
Also used : BlockReportReplica(org.apache.hadoop.hdfs.protocol.BlockListAsLongs.BlockReportReplica) FoldedTreeSet(org.apache.hadoop.hdfs.util.FoldedTreeSet) LinkedList(java.util.LinkedList) TreeSet(java.util.TreeSet) ReportedBlockInfo(org.apache.hadoop.hdfs.server.blockmanagement.PendingDataNodeMessages.ReportedBlockInfo) ReceivedDeletedBlockInfo(org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo) Block(org.apache.hadoop.hdfs.protocol.Block) CachedBlock(org.apache.hadoop.hdfs.server.namenode.CachedBlock) LocatedBlock(org.apache.hadoop.hdfs.protocol.LocatedBlock) LocatedStripedBlock(org.apache.hadoop.hdfs.protocol.LocatedStripedBlock) ExtendedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock)
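
For readers unfamiliar with FoldedTreeSet, here is a minimal, self-contained sketch of the same fallback pattern outside of BlockManager: copying an unsorted iterable into a FoldedTreeSet so that later code can iterate it in sorted order. The element type (Long), the sample values, and the class name FoldedTreeSetSketch are illustrative assumptions, not part of the Hadoop sources above.

import org.apache.hadoop.hdfs.util.FoldedTreeSet;

import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class FoldedTreeSetSketch {
    public static void main(String[] args) {
        // Stand-in for an "unsorted report" arriving from an older DataNode.
        List<Long> unsortedReport = Arrays.asList(42L, 7L, 19L, 3L);

        // Fold the elements into a FoldedTreeSet. Like TreeSet it keeps the
        // elements sorted, so downstream code can rely on ordered iteration.
        Set<Long> sorted = new FoldedTreeSet<>();
        for (Long blockId : unsortedReport) {
            sorted.add(blockId);
        }

        // Iteration is now in ascending order: 3, 7, 19, 42.
        for (Long blockId : sorted) {
            System.out.println(blockId);
        }
    }
}

As in the Hadoop snippet above, the no-argument constructor appears to rely on the elements' natural ordering (BlockReportReplica is Comparable via Block); a Comparator-based constructor would be the alternative for types without one.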

Aggregations

LinkedList (java.util.LinkedList) 1
TreeSet (java.util.TreeSet) 1
Block (org.apache.hadoop.hdfs.protocol.Block) 1
BlockReportReplica (org.apache.hadoop.hdfs.protocol.BlockListAsLongs.BlockReportReplica) 1
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock) 1
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock) 1
LocatedStripedBlock (org.apache.hadoop.hdfs.protocol.LocatedStripedBlock) 1
ReportedBlockInfo (org.apache.hadoop.hdfs.server.blockmanagement.PendingDataNodeMessages.ReportedBlockInfo) 1
CachedBlock (org.apache.hadoop.hdfs.server.namenode.CachedBlock) 1
ReceivedDeletedBlockInfo (org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo) 1
FoldedTreeSet (org.apache.hadoop.hdfs.util.FoldedTreeSet) 1