
Example 1 with LocalReplicaInPipeline

Use of org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline in project hadoop by apache.

The class FsVolumeImpl, method append:

public ReplicaInPipeline append(String bpid, ReplicaInfo replicaInfo, long newGS, long estimateBlockLen) throws IOException {
    long bytesReserved = estimateBlockLen - replicaInfo.getNumBytes();
    if (getAvailable() < bytesReserved) {
        throw new DiskOutOfSpaceException("Insufficient space for appending to " + replicaInfo);
    }
    assert replicaInfo.getVolume() == this : "The volume of the replica should be the same as this volume";
    // construct an RBW replica with the new GS
    File newBlkFile = new File(getRbwDir(bpid), replicaInfo.getBlockName());
    LocalReplicaInPipeline newReplicaInfo = new ReplicaBuilder(ReplicaState.RBW)
        .setBlockId(replicaInfo.getBlockId())
        .setLength(replicaInfo.getNumBytes())
        .setGenerationStamp(newGS)
        .setFsVolume(this)
        .setDirectoryToUse(newBlkFile.getParentFile())
        .setWriterThread(Thread.currentThread())
        .setBytesToReserve(bytesReserved)
        .buildLocalReplicaInPipeline();
    // load last checksum and datalen
    LocalReplica localReplica = (LocalReplica) replicaInfo;
    byte[] lastChunkChecksum = loadLastPartialChunkChecksum(localReplica.getBlockFile(), localReplica.getMetaFile());
    newReplicaInfo.setLastChecksumAndDataLen(replicaInfo.getNumBytes(), lastChunkChecksum);
    // rename the block and meta files into the rbw directory
    newReplicaInfo.moveReplicaFrom(replicaInfo, newBlkFile);
    reserveSpaceForReplica(bytesReserved);
    return newReplicaInfo;
}
Also used : DiskOutOfSpaceException(org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException) LocalReplica(org.apache.hadoop.hdfs.server.datanode.LocalReplica) ReplicaBuilder(org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder) RandomAccessFile(java.io.RandomAccessFile) File(java.io.File) LocalReplicaInPipeline(org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline)
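A hypothetical caller sketch for this append path (not from the Hadoop source): the volume and the finalized replica are assumed to come from the dataset's replica map, and the extra 4096 bytes are an illustrative estimate of how much the client will append.

import java.io.IOException;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;

class AppendSketch {
    // Hypothetical helper: reopen a finalized replica for append under a
    // bumped generation stamp. The length estimate only affects how much
    // space the volume reserves up front.
    static ReplicaInPipeline reopenForAppend(FsVolumeImpl volume, String bpid,
            ReplicaInfo replica, long newGS) throws IOException {
        long estimatedBlockLen = replica.getNumBytes() + 4096;
        return volume.append(bpid, replica, newGS, estimatedBlockLen);
    }
}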

Example 2 with LocalReplicaInPipeline

Use of org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline in project hadoop by apache.

The class FsVolumeImpl, method updateRURCopyOnTruncate:

public ReplicaInPipeline updateRURCopyOnTruncate(ReplicaInfo rur, String bpid, long newBlockId, long recoveryId, long newlength) throws IOException {
    rur.breakHardLinksIfNeeded();
    File[] copiedReplicaFiles = copyReplicaWithNewBlockIdAndGS(rur, bpid, newBlockId, recoveryId);
    File blockFile = copiedReplicaFiles[1];
    File metaFile = copiedReplicaFiles[0];
    LocalReplica.truncateBlock(rur.getVolume(), blockFile, metaFile, rur.getNumBytes(), newlength, fileIoProvider);
    LocalReplicaInPipeline newReplicaInfo = new ReplicaBuilder(ReplicaState.RBW)
        .setBlockId(newBlockId)
        .setGenerationStamp(recoveryId)
        .setFsVolume(this)
        .setDirectoryToUse(blockFile.getParentFile())
        .setBytesToReserve(newlength)
        .buildLocalReplicaInPipeline();
    // In theory, this rbw replica needs to reload the last chunk checksum,
    // but it is immediately converted to finalized state within the same
    // lock, so no need to update it.
    return newReplicaInfo;
}
Also used : ReplicaBuilder(org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder) RandomAccessFile(java.io.RandomAccessFile) File(java.io.File) LocalReplicaInPipeline(org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline)
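This method serves copy-on-truncate block recovery: the original replica is left untouched while a truncated copy is created under a new block ID. A minimal sketch of a hypothetical caller (only the updateRURCopyOnTruncate signature is taken from the example above):

import java.io.IOException;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;

class TruncateRecoverySketch {
    // Hypothetical wrapper: request a truncated copy of a replica under
    // recovery. Copy-on-truncate only ever shrinks a replica, so reject
    // anything else before touching the volume.
    static ReplicaInPipeline copyTruncated(FsVolumeImpl volume, ReplicaInfo rur,
            String bpid, long newBlockId, long recoveryId, long newLength)
            throws IOException {
        if (newLength > rur.getNumBytes()) {
            throw new IOException("new length " + newLength
                + " exceeds replica length " + rur.getNumBytes());
        }
        return volume.updateRURCopyOnTruncate(rur, bpid, newBlockId,
            recoveryId, newLength);
    }
}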

Example 3 with LocalReplicaInPipeline

Use of org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline in project hadoop by apache.

The class FsVolumeImpl, method createTemporary:

public ReplicaInPipeline createTemporary(ExtendedBlock b) throws IOException {
    // create a temporary file to hold block in the designated volume
    File f = createTmpFile(b.getBlockPoolId(), b.getLocalBlock());
    LocalReplicaInPipeline newReplicaInfo = new ReplicaBuilder(ReplicaState.TEMPORARY)
        .setBlockId(b.getBlockId())
        .setGenerationStamp(b.getGenerationStamp())
        .setDirectoryToUse(f.getParentFile())
        .setBytesToReserve(b.getLocalBlock().getNumBytes())
        .setFsVolume(this)
        .buildLocalReplicaInPipeline();
    return newReplicaInfo;
}
Also used : ReplicaBuilder(org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder) RandomAccessFile(java.io.RandomAccessFile) File(java.io.File) LocalReplicaInPipeline(org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline)
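TEMPORARY replicas back writes that do not come from a client pipeline, such as replication transfers from another datanode; they stay invisible to readers until converted or discarded. A minimal sketch, assuming the caller already holds the volume and block (the helper name is made up):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;

class TemporarySketch {
    // Hypothetical helper: stage a TEMPORARY replica on the given volume,
    // e.g. as the destination of a block transfer between datanodes.
    static ReplicaInPipeline stageTemporary(FsVolumeImpl volume, ExtendedBlock b)
            throws IOException {
        return volume.createTemporary(b);
    }
}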

Example 4 with LocalReplicaInPipeline

Use of org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline in project hadoop by apache.

The class FsVolumeImpl, method convertTemporaryToRbw:

public ReplicaInPipeline convertTemporaryToRbw(ExtendedBlock b, ReplicaInfo temp) throws IOException {
    final long blockId = b.getBlockId();
    final long expectedGs = b.getGenerationStamp();
    final long visible = b.getNumBytes();
    final long numBytes = temp.getNumBytes();
    // move block files to the rbw directory
    BlockPoolSlice bpslice = getBlockPoolSlice(b.getBlockPoolId());
    final File dest = FsDatasetImpl.moveBlockFiles(b.getLocalBlock(), temp, bpslice.getRbwDir());
    // create RBW
    final LocalReplicaInPipeline rbw = new ReplicaBuilder(ReplicaState.RBW)
        .setBlockId(blockId)
        .setLength(numBytes)
        .setGenerationStamp(expectedGs)
        .setFsVolume(this)
        .setDirectoryToUse(dest.getParentFile())
        .setWriterThread(Thread.currentThread())
        .setBytesToReserve(0)
        .buildLocalReplicaInPipeline();
    rbw.setBytesAcked(visible);
    // load last checksum and datalen
    final File destMeta = FsDatasetUtil.getMetaFile(dest, b.getGenerationStamp());
    byte[] lastChunkChecksum = loadLastPartialChunkChecksum(dest, destMeta);
    rbw.setLastChecksumAndDataLen(numBytes, lastChunkChecksum);
    return rbw;
}
Also used : ReplicaBuilder(org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder) RandomAccessFile(java.io.RandomAccessFile) File(java.io.File) LocalReplicaInPipeline(org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline)
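This pairs with createTemporary above: once a client pipeline takes over a staged block, the TEMPORARY replica is promoted to RBW so acked bytes become readable. A hypothetical end-to-end sketch (in the real datanode, FsDatasetImpl looks the temporary replica up in its replica map rather than threading it through like this):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;

class PromoteSketch {
    // Hypothetical flow: stage a block as TEMPORARY, write into it, then
    // promote it to RBW. The concrete replica type is LocalReplicaInPipeline,
    // which is a ReplicaInfo, so the downcast below holds here.
    static ReplicaInPipeline stageThenPromote(FsVolumeImpl volume, ExtendedBlock b)
            throws IOException {
        ReplicaInPipeline temp = volume.createTemporary(b);
        // ... block data would be written to the temporary file here ...
        return volume.convertTemporaryToRbw(b, (ReplicaInfo) temp);
    }
}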

Example 5 with LocalReplicaInPipeline

Use of org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline in project hadoop by apache.

The class FsVolumeImpl, method createRbw:

public ReplicaInPipeline createRbw(ExtendedBlock b) throws IOException {
    File f = createRbwFile(b.getBlockPoolId(), b.getLocalBlock());
    LocalReplicaInPipeline newReplicaInfo = new ReplicaBuilder(ReplicaState.RBW)
        .setBlockId(b.getBlockId())
        .setGenerationStamp(b.getGenerationStamp())
        .setFsVolume(this)
        .setDirectoryToUse(f.getParentFile())
        .setBytesToReserve(b.getNumBytes())
        .buildLocalReplicaInPipeline();
    return newReplicaInfo;
}
Also used : ReplicaBuilder(org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder) RandomAccessFile(java.io.RandomAccessFile) File(java.io.File) LocalReplicaInPipeline(org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline)
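createRbw is the ordinary entry point for client writes: unlike append, there is no existing data, so the builder reserves the block's full expected length and starts with zero bytes acked. A minimal caller sketch under the same assumptions as the earlier ones:

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;

class CreateRbwSketch {
    // Hypothetical helper: open an RBW replica for a fresh client write.
    // Nothing is acked yet, so readers see none of the block's bytes.
    static ReplicaInPipeline startWrite(FsVolumeImpl volume, ExtendedBlock b)
            throws IOException {
        return volume.createRbw(b);
    }
}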

Aggregations

LocalReplicaInPipeline (org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline): 6 uses
File (java.io.File): 5 uses
RandomAccessFile (java.io.RandomAccessFile): 5 uses
ReplicaBuilder (org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder): 5 uses
LocalReplica (org.apache.hadoop.hdfs.server.datanode.LocalReplica): 1 use
DiskOutOfSpaceException (org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException): 1 use