Use of org.apache.hadoop.hdfs.protocol.BlockType in project hadoop by apache.
From the class FSDirWriteFileOp, the method validateAddBlock:
/**
* Part I of getAdditionalBlock().
* Analyze the state of the file under read lock to determine if the client
* can add a new block, detect potential retries, lease mismatches,
* and minimal replication of the penultimate block.
*
* Generate target DataNode locations for the new block,
* but do not create the new block yet.
*/
static ValidateAddBlockResult validateAddBlock(FSNamesystem fsn,
    FSPermissionChecker pc, String src, long fileId, String clientName,
    ExtendedBlock previous, LocatedBlock[] onRetryBlock) throws IOException {
  final long blockSize;
  final short numTargets;
  final byte storagePolicyID;
  String clientMachine;
  final BlockType blockType;

  INodesInPath iip = fsn.dir.resolvePath(pc, src, fileId);
  FileState fileState = analyzeFileState(fsn, iip, fileId, clientName,
      previous, onRetryBlock);
  if (onRetryBlock[0] != null && onRetryBlock[0].getLocations().length > 0) {
    // Use the last block if it has locations.
    return null;
  }

  final INodeFile pendingFile = fileState.inode;
  if (!fsn.checkFileProgress(src, pendingFile, false)) {
    throw new NotReplicatedYetException("Not replicated yet: " + src);
  }
  if (pendingFile.getBlocks().length >= fsn.maxBlocksPerFile) {
    throw new IOException("File has reached the limit on maximum number of"
        + " blocks (" + DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY
        + "): " + pendingFile.getBlocks().length + " >= "
        + fsn.maxBlocksPerFile);
  }

  blockSize = pendingFile.getPreferredBlockSize();
  clientMachine = pendingFile.getFileUnderConstructionFeature()
      .getClientMachine();
  blockType = pendingFile.getBlockType();
  ErasureCodingPolicy ecPolicy = null;
  if (blockType == BlockType.STRIPED) {
    ecPolicy = FSDirErasureCodingOp.unprotectedGetErasureCodingPolicy(fsn, iip);
    numTargets = (short) (ecPolicy.getSchema().getNumDataUnits()
        + ecPolicy.getSchema().getNumParityUnits());
  } else {
    numTargets = pendingFile.getFileReplication();
  }
  storagePolicyID = pendingFile.getStoragePolicyID();
  return new ValidateAddBlockResult(blockSize, numTargets, storagePolicyID,
      clientMachine, blockType);
}
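In the STRIPED branch above, the target count is the full erasure coding group width rather than a replication factor. A minimal sketch of that arithmetic, assuming the SystemErasureCodingPolicies registry and its built-in RS-6-3-1024k policy that ship with Hadoop 3 (that class name and the policy ID constant are the assumptions here):

import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.protocol.SystemErasureCodingPolicies;

public class StripedTargetCountSketch {
  public static void main(String[] args) {
    // Look up the default Reed-Solomon (6 data, 3 parity) policy by ID.
    ErasureCodingPolicy rs63 = SystemErasureCodingPolicies.getByID(
        SystemErasureCodingPolicies.RS_6_3_POLICY_ID);
    // Same arithmetic as the STRIPED branch in validateAddBlock: a striped
    // block group needs one DataNode per data unit and per parity unit.
    short numTargets = (short) (rs63.getSchema().getNumDataUnits()
        + rs63.getSchema().getNumParityUnits());
    System.out.println("targets for RS-6-3: " + numTargets); // prints 9
  }
}

So a single RS(6,3) block group spans nine DataNodes, where a contiguous file with the default replication factor would ask for only three targets.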
Use of org.apache.hadoop.hdfs.protocol.BlockType in project hadoop by apache.
From the class TestPBHelper, the method testConvertBlockType:
@Test
public void testConvertBlockType() {
  BlockType bContiguous = BlockType.CONTIGUOUS;
  BlockTypeProto bContiguousProto = PBHelperClient.convert(bContiguous);
  BlockType bContiguous2 = PBHelperClient.convert(bContiguousProto);
  assertEquals(bContiguous, bContiguous2);

  BlockType bStriped = BlockType.STRIPED;
  BlockTypeProto bStripedProto = PBHelperClient.convert(bStriped);
  BlockType bStriped2 = PBHelperClient.convert(bStripedProto);
  assertEquals(bStriped, bStriped2);
}
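The converters under test are pure enum-to-enum mappings in each direction, which is why the round-trips asserted above are lossless. A sketch of what such a pair can look like (PBHelperClient actually exposes both directions as overloads named convert; the toProto/fromProto names here are hypothetical, chosen for readability):

import org.apache.hadoop.hdfs.protocol.BlockType;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockTypeProto;

final class BlockTypeConversionSketch {
  // Java enum to protobuf enum.
  static BlockTypeProto toProto(BlockType type) {
    switch (type) {
      case CONTIGUOUS: return BlockTypeProto.CONTIGUOUS;
      case STRIPED:    return BlockTypeProto.STRIPED;
      default:
        throw new IllegalArgumentException("Unexpected block type: " + type);
    }
  }

  // Protobuf enum back to the Java enum.
  static BlockType fromProto(BlockTypeProto proto) {
    switch (proto) {
      case CONTIGUOUS: return BlockType.CONTIGUOUS;
      case STRIPED:    return BlockType.STRIPED;
      default:
        throw new IllegalArgumentException("Unexpected block type: " + proto);
    }
  }
}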
Use of org.apache.hadoop.hdfs.protocol.BlockType in project hadoop by apache.
From the class TestFSImage, the method testBlockTypeProtoDefaultsToContiguous:
@Test
public void testBlockTypeProtoDefaultsToContiguous() throws Exception {
  INodeSection.INodeFile.Builder builder = INodeSection.INodeFile.newBuilder();
  INodeSection.INodeFile inodeFile = builder.build();
  BlockType defaultBlockType = PBHelperClient.convert(inodeFile.getBlockType());
  assertEquals(BlockType.CONTIGUOUS, defaultBlockType);
}
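The CONTIGUOUS default falls out of protocol buffer semantics: reading an unset enum field returns the first value declared in the enum, and BlockTypeProto declares CONTIGUOUS first. A hypothetical companion test for the explicit-set case (the setBlockType builder method is assumed from the protobuf-generated getter used above):

@Test
public void testBlockTypeProtoExplicitStriped() throws Exception {
  // Explicitly setting the field overrides the CONTIGUOUS default that the
  // unset builder produced in the test above.
  INodeSection.INodeFile inodeFile = INodeSection.INodeFile.newBuilder()
      .setBlockType(BlockTypeProto.STRIPED)
      .build();
  BlockType blockType = PBHelperClient.convert(inodeFile.getBlockType());
  assertEquals(BlockType.STRIPED, blockType);
}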