
Example 6 with BlockType

Use of org.apache.hadoop.hdfs.protocol.BlockType in project hadoop by apache.

Class FSDirWriteFileOp, method validateAddBlock.

/**
   * Part I of getAdditionalBlock().
   * Analyze the state of the file under read lock to determine whether the
   * client can add a new block: detect potential retries and lease
   * mismatches, and check that the penultimate block has reached minimal
   * replication.
   *
   * Generate target DataNode locations for the new block,
   * but do not create the new block yet.
   */
static ValidateAddBlockResult validateAddBlock(
        FSNamesystem fsn, FSPermissionChecker pc, String src, long fileId,
        String clientName, ExtendedBlock previous, LocatedBlock[] onRetryBlock)
        throws IOException {
    final long blockSize;
    final short numTargets;
    final byte storagePolicyID;
    String clientMachine;
    final BlockType blockType;
    INodesInPath iip = fsn.dir.resolvePath(pc, src, fileId);
    FileState fileState = analyzeFileState(fsn, iip, fileId, clientName, previous, onRetryBlock);
    if (onRetryBlock[0] != null && onRetryBlock[0].getLocations().length > 0) {
        // This is a retry: analyzeFileState() found that the last block was
        // already allocated with locations, so reuse it instead of
        // generating new targets.
        return null;
    }
    final INodeFile pendingFile = fileState.inode;
    if (!fsn.checkFileProgress(src, pendingFile, false)) {
        throw new NotReplicatedYetException("Not replicated yet: " + src);
    }
    if (pendingFile.getBlocks().length >= fsn.maxBlocksPerFile) {
        throw new IOException("File has reached the limit on maximum number of"
            + " blocks (" + DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY
            + "): " + pendingFile.getBlocks().length + " >= "
            + fsn.maxBlocksPerFile);
    }
    blockSize = pendingFile.getPreferredBlockSize();
    clientMachine = pendingFile.getFileUnderConstructionFeature().getClientMachine();
    blockType = pendingFile.getBlockType();
    ErasureCodingPolicy ecPolicy = null;
    if (blockType == BlockType.STRIPED) {
        ecPolicy = FSDirErasureCodingOp.unprotectedGetErasureCodingPolicy(fsn, iip);
        // A striped block group needs one target per data unit plus one per
        // parity unit of the file's erasure coding schema.
        numTargets = (short) (ecPolicy.getSchema().getNumDataUnits()
            + ecPolicy.getSchema().getNumParityUnits());
    } else {
        numTargets = pendingFile.getFileReplication();
    }
    storagePolicyID = pendingFile.getStoragePolicyID();
    return new ValidateAddBlockResult(blockSize, numTargets, storagePolicyID, clientMachine, blockType);
}
Also used: BlockType (org.apache.hadoop.hdfs.protocol.BlockType), ErasureCodingPolicy (org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy), IOException (java.io.IOException)
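
The branch on blockType is the heart of this method: for striped (erasure-coded) files, the number of target DataNodes is derived from the EC schema rather than from the file's replication factor. Below is a minimal, self-contained sketch of that calculation; EcSchema is a hypothetical stand-in for the real ErasureCodingPolicy schema, and the 6 data / 3 parity numbers assume the common RS-6-3 policy rather than values read from a real cluster.

public class TargetCountSketch {

    // Hypothetical stand-in for the EC schema: only the two counts
    // that validateAddBlock() actually reads.
    record EcSchema(int numDataUnits, int numParityUnits) {}

    static short numTargets(boolean striped, EcSchema schema, short replication) {
        if (striped) {
            // One DataNode per data unit plus one per parity unit.
            return (short) (schema.numDataUnits() + schema.numParityUnits());
        }
        // Contiguous files need one target per replica.
        return replication;
    }

    public static void main(String[] args) {
        EcSchema rs63 = new EcSchema(6, 3);                      // RS-6-3
        System.out.println(numTargets(true, rs63, (short) 3));   // 9
        System.out.println(numTargets(false, rs63, (short) 3));  // 3
    }
}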

Example 7 with BlockType

Use of org.apache.hadoop.hdfs.protocol.BlockType in project hadoop by apache.

Class TestPBHelper, method testConvertBlockType.

@Test
public void testConvertBlockType() {
    BlockType bContiguous = BlockType.CONTIGUOUS;
    BlockTypeProto bContiguousProto = PBHelperClient.convert(bContiguous);
    BlockType bContiguous2 = PBHelperClient.convert(bContiguousProto);
    assertEquals(bContiguous, bContiguous2);
    BlockType bStriped = BlockType.STRIPED;
    BlockTypeProto bStripedProto = PBHelperClient.convert(bStriped);
    BlockType bStriped2 = PBHelperClient.convert(bStripedProto);
    assertEquals(bStriped, bStriped2);
}
Also used: BlockTypeProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockTypeProto), BlockType (org.apache.hadoop.hdfs.protocol.BlockType), Test (org.junit.Test)
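
The test asserts that converting to the protobuf type and back is the identity for both enum values. A plausible shape for such converters is an exhaustive mapping in each direction; the sketch below reproduces that pattern with local stand-in enums, not the real BlockType and generated HdfsProtos.BlockTypeProto classes.

public class BlockTypeRoundTripSketch {

    // Local stand-ins for org.apache.hadoop.hdfs.protocol.BlockType and the
    // generated HdfsProtos.BlockTypeProto; the real classes live in Hadoop.
    enum BlockType { CONTIGUOUS, STRIPED }
    enum BlockTypeProto { CONTIGUOUS, STRIPED }

    static BlockTypeProto toProto(BlockType type) {
        return switch (type) {
            case CONTIGUOUS -> BlockTypeProto.CONTIGUOUS;
            case STRIPED -> BlockTypeProto.STRIPED;
        };
    }

    static BlockType fromProto(BlockTypeProto proto) {
        return switch (proto) {
            case CONTIGUOUS -> BlockType.CONTIGUOUS;
            case STRIPED -> BlockType.STRIPED;
        };
    }

    public static void main(String[] args) {
        // The round trip must be the identity for every value, which is
        // exactly what testConvertBlockType() checks for the real classes.
        for (BlockType t : BlockType.values()) {
            if (fromProto(toProto(t)) != t) {
                throw new AssertionError("round trip broke for " + t);
            }
        }
        System.out.println("round trip ok");
    }
}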

Example 8 with BlockType

Use of org.apache.hadoop.hdfs.protocol.BlockType in project hadoop by apache.

Class TestFSImage, method testBlockTypeProtoDefaultsToContiguous.

@Test
public void testBlockTypeProtoDefaultsToContiguous() throws Exception {
    INodeSection.INodeFile.Builder builder = INodeSection.INodeFile.newBuilder();
    INodeSection.INodeFile inodeFile = builder.build();
    BlockType defaultBlockType = PBHelperClient.convert(inodeFile.getBlockType());
    assertEquals(defaultBlockType, BlockType.CONTIGUOUS);
}
Also used: INodeSection (org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection), BlockType (org.apache.hadoop.hdfs.protocol.BlockType), Test (org.junit.Test)
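
The default is what makes the field backward compatible: an fsimage written before blockType existed carries no value for it, and protobuf reads an absent enum field as the value numbered 0, which hdfs.proto assigns to CONTIGUOUS, so such files load as ordinary contiguous files. The loader below is a hypothetical illustration of that rule, not the generated HdfsProtos code.

public class ProtoDefaultSketch {

    enum BlockTypeProto { CONTIGUOUS, STRIPED }

    // Protobuf enums travel as small integers on the wire; hdfs.proto numbers
    // CONTIGUOUS = 0 and STRIPED = 1. An unset field is never serialized and
    // is read back as 0, i.e. CONTIGUOUS.
    static BlockTypeProto fromWireNumber(int number) {
        return switch (number) {
            case 0 -> BlockTypeProto.CONTIGUOUS;
            case 1 -> BlockTypeProto.STRIPED;
            default -> throw new IllegalArgumentException("unknown value " + number);
        };
    }

    public static void main(String[] args) {
        int absentField = 0;  // what a reader sees when the field was never set
        System.out.println(fromWireNumber(absentField));  // CONTIGUOUS
    }
}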

Aggregations

BlockType (org.apache.hadoop.hdfs.protocol.BlockType): 8
ErasureCodingPolicy (org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy): 3
IOException (java.io.IOException): 2
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 2
Test (org.junit.Test): 2
ArrayList (java.util.ArrayList): 1
Block (org.apache.hadoop.hdfs.protocol.Block): 1
DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo): 1
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock): 1
FSLimitException (org.apache.hadoop.hdfs.protocol.FSLimitException): 1
BlockTypeProto (org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockTypeProto): 1
BlockInfo (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo): 1
DatanodeManager (org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager): 1
DatanodeStorageInfo (org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo): 1
INodeSection (org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection): 1
Node (org.apache.hadoop.net.Node): 1