Use of org.apache.hadoop.hdfs.server.datanode.ReplicaInfo in project hadoop by apache.
In class FsDatasetImplTestUtils, the method createReplicaWaitingToBeRecovered:
@Override
public Replica createReplicaWaitingToBeRecovered(FsVolumeSpi volume, ExtendedBlock eb)
    throws IOException {
  FsVolumeImpl vol = (FsVolumeImpl) volume;
  final String bpid = eb.getBlockPoolId();
  final Block block = eb.getLocalBlock();
  ReplicaInfo rwbr = new ReplicaBuilder(ReplicaState.RWR)
      .setBlock(eb.getLocalBlock())
      .setFsVolume(volume)
      .setDirectoryToUse(vol.createRbwFile(bpid, block).getParentFile())
      .build();
  dataset.volumeMap.add(bpid, rwbr);
  return rwbr;
}
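A hedged usage sketch follows; it assumes a test that already holds this test-utils object (here utils), a FsVolumeSpi (volume), and an ExtendedBlock (eb). Those names and the assertion are illustrative, not taken from the Hadoop sources.

// Illustrative test fragment (not from the Hadoop sources); utils, volume and eb
// are assumed to be set up elsewhere in the test.
Replica rwr = utils.createReplicaWaitingToBeRecovered(volume, eb);
// The helper registers the replica in the dataset's volumeMap, so the dataset
// should now see it in the replica-waiting-to-be-recovered (RWR) state.
assertEquals(ReplicaState.RWR, rwr.getState());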
Use of org.apache.hadoop.hdfs.server.datanode.ReplicaInfo in project hadoop by apache.
In class FsDatasetImplTestUtils, the method changeStoredGenerationStamp:
@Override
public void changeStoredGenerationStamp(ExtendedBlock block, long newGenStamp)
    throws IOException {
  ReplicaInfo r = dataset.getReplicaInfo(block);
  File blockFile = new File(r.getBlockURI());
  File metaFile = FsDatasetUtil.findMetaFile(blockFile);
  File newMetaFile = new File(
      DatanodeUtil.getMetaName(blockFile.getAbsolutePath(), newGenStamp));
  Files.move(metaFile.toPath(), newMetaFile.toPath(),
      StandardCopyOption.ATOMIC_MOVE);
}
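A hedged usage sketch, assuming a test that holds the same test-utils object (utils) and an ExtendedBlock (block); the choice of new generation stamp is illustrative.

// Illustrative test fragment (not from the Hadoop sources): bump the stored
// generation stamp on disk without touching the block data.
long newGenStamp = block.getGenerationStamp() + 1;
utils.changeStoredGenerationStamp(block, newGenStamp);
// The meta file blk_<id>_<oldGS>.meta is atomically renamed to
// blk_<id>_<newGS>.meta; the block file itself is left unchanged.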
Use of org.apache.hadoop.hdfs.server.datanode.ReplicaInfo in project hadoop by apache.
In class FsDatasetImplTestUtils, the method getMaterializedReplica:
/**
 * Return a materialized replica from the FsDatasetImpl.
 */
@Override
public MaterializedReplica getMaterializedReplica(ExtendedBlock block)
    throws ReplicaNotFoundException {
  File blockFile;
  try {
    ReplicaInfo r = dataset.getReplicaInfo(block);
    blockFile = new File(r.getBlockURI());
  } catch (IOException e) {
    LOG.error("Block file for " + block + " does not exist:", e);
    throw new ReplicaNotFoundException(block);
  }
  File metaFile = FsDatasetUtil.getMetaFile(blockFile, block.getGenerationStamp());
  return new FsDatasetImplMaterializedReplica(blockFile, metaFile);
}
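A hedged usage sketch, assuming the same test-utils object (utils) and an ExtendedBlock (block); corruptData() and corruptMeta() are taken to be MaterializedReplica helpers for damaging the on-disk files, which is an assumption about the surrounding test API rather than part of the snippet above.

// Illustrative test fragment (not from the Hadoop sources): look up the on-disk
// replica for a block and damage it to exercise error handling in a test.
MaterializedReplica replica = utils.getMaterializedReplica(block);
replica.corruptData(); // assumed helper: overwrites bytes in the block file
replica.corruptMeta(); // assumed helper: overwrites bytes in the checksum file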