Example 11 with ReplicaBuilder

Use of org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder in project hadoop by apache.

From the class FsDatasetImpl, the method initReplicaRecoveryImpl:

static ReplicaRecoveryInfo initReplicaRecoveryImpl(String bpid, ReplicaMap map, Block block, long recoveryId) throws IOException, MustStopExistingWriter {
    final ReplicaInfo replica = map.get(bpid, block.getBlockId());
    LOG.info("initReplicaRecovery: " + block + ", recoveryId=" + recoveryId + ", replica=" + replica);
    //check replica
    if (replica == null) {
        return null;
    }
    //stop writer if there is any
    if (replica.getState() == ReplicaState.TEMPORARY || replica.getState() == ReplicaState.RBW) {
        final ReplicaInPipeline rip = (ReplicaInPipeline) replica;
        if (!rip.attemptToSetWriter(null, Thread.currentThread())) {
            throw new MustStopExistingWriter(rip);
        }
        //check replica bytes on disk.
        if (replica.getBytesOnDisk() < replica.getVisibleLength()) {
            throw new IOException("THIS IS NOT SUPPOSED TO HAPPEN:" + " getBytesOnDisk() < getVisibleLength(), rip=" + replica);
        }
        //check the replica's files
        checkReplicaFiles(replica);
    }
    //check generation stamp
    if (replica.getGenerationStamp() < block.getGenerationStamp()) {
        throw new IOException("replica.getGenerationStamp() < block.getGenerationStamp(), block=" + block + ", replica=" + replica);
    }
    //check recovery id
    if (replica.getGenerationStamp() >= recoveryId) {
        throw new IOException("THIS IS NOT SUPPOSED TO HAPPEN:" + " replica.getGenerationStamp() >= recoveryId = " + recoveryId + ", block=" + block + ", replica=" + replica);
    }
    //check RUR
    final ReplicaInfo rur;
    if (replica.getState() == ReplicaState.RUR) {
        rur = replica;
        if (rur.getRecoveryID() >= recoveryId) {
            throw new RecoveryInProgressException("rur.getRecoveryID() >= recoveryId = " + recoveryId + ", block=" + block + ", rur=" + rur);
        }
        final long oldRecoveryID = rur.getRecoveryID();
        rur.setRecoveryID(recoveryId);
        LOG.info("initReplicaRecovery: update recovery id for " + block + " from " + oldRecoveryID + " to " + recoveryId);
    } else {
        rur = new ReplicaBuilder(ReplicaState.RUR)
                .from(replica)
                .setRecoveryId(recoveryId)
                .build();
        map.add(bpid, rur);
        LOG.info("initReplicaRecovery: changing replica state for " + block + " from " + replica.getState() + " to " + rur.getState());
    }
    return rur.createInfo();
}
Also used: ReplicaInfo (org.apache.hadoop.hdfs.server.datanode.ReplicaInfo), ReplicaBuilder (org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder), IOException (java.io.IOException), MultipleIOException (org.apache.hadoop.io.MultipleIOException), ReplicaInPipeline (org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline), RecoveryInProgressException (org.apache.hadoop.hdfs.protocol.RecoveryInProgressException)
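
For context, initReplicaRecoveryImpl does not block when a TEMPORARY or RBW replica still has an active writer; it throws MustStopExistingWriter so the caller can stop the writer thread and retry. A minimal sketch of such a retry loop follows; the xceiverStopTimeout parameter is an assumption for illustration, not part of the example above.

static ReplicaRecoveryInfo initReplicaRecovery(String bpid, ReplicaMap map, Block block, long recoveryId, long xceiverStopTimeout) throws IOException {
    while (true) {
        try {
            // May throw MustStopExistingWriter if a TEMPORARY or RBW
            // replica still has an active writer thread.
            return initReplicaRecoveryImpl(bpid, map, block, recoveryId);
        } catch (MustStopExistingWriter e) {
            // Stop the offending writer, then retry the recovery.
            // The timeout value passed here is an assumption.
            e.getReplicaInPipeline().stopWriter(xceiverStopTimeout);
        }
    }
}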

Example 12 with ReplicaBuilder

Use of org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder in project hadoop by apache.

From the class FsDatasetImplTestUtils, the method createReplicaWaitingToBeRecovered:

@Override
public Replica createReplicaWaitingToBeRecovered(FsVolumeSpi volume, ExtendedBlock eb) throws IOException {
    FsVolumeImpl vol = (FsVolumeImpl) volume;
    final String bpid = eb.getBlockPoolId();
    final Block block = eb.getLocalBlock();
    ReplicaInfo rwbr = new ReplicaBuilder(ReplicaState.RWR)
            .setBlock(eb.getLocalBlock())
            .setFsVolume(volume)
            .setDirectoryToUse(vol.createRbwFile(bpid, block).getParentFile())
            .build();
    dataset.volumeMap.add(bpid, rwbr);
    return rwbr;
}
Also used: ReplicaInfo (org.apache.hadoop.hdfs.server.datanode.ReplicaInfo), ReplicaBuilder (org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder), ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock), Block (org.apache.hadoop.hdfs.protocol.Block)
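
Both examples follow the same ReplicaBuilder pattern: pick the target ReplicaState in the constructor, populate the block fields directly or copy them from an existing replica with from(...), and call build(). The sketch below restates that pattern in isolation; the helper method and every variable in it are hypothetical placeholders, not taken from the examples above.

// A minimal sketch of the ReplicaBuilder pattern; the method name and all
// parameters are hypothetical placeholders supplied by the caller.
static ReplicaInfo buildFinalizedReplica(long blockId, long genStamp, long numBytes, FsVolumeSpi vol, File blockDir) {
    return new ReplicaBuilder(ReplicaState.FINALIZED)
            .setBlockId(blockId)
            .setGenerationStamp(genStamp)
            .setLength(numBytes)
            .setFsVolume(vol)
            .setDirectoryToUse(blockDir)
            .build();
}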

Aggregations

ReplicaBuilder (org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder): 12 uses
File (java.io.File): 10 uses
RandomAccessFile (java.io.RandomAccessFile): 9 uses
ReplicaInfo (org.apache.hadoop.hdfs.server.datanode.ReplicaInfo): 6 uses
LocalReplicaInPipeline (org.apache.hadoop.hdfs.server.datanode.LocalReplicaInPipeline): 5 uses
IOException (java.io.IOException): 2 uses
Block (org.apache.hadoop.hdfs.protocol.Block): 2 uses
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock): 2 uses
MultipleIOException (org.apache.hadoop.io.MultipleIOException): 2 uses
FileNotFoundException (java.io.FileNotFoundException): 1 use
Scanner (java.util.Scanner): 1 use
RecoveryInProgressException (org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): 1 use
FileIoProvider (org.apache.hadoop.hdfs.server.datanode.FileIoProvider): 1 use
LocalReplica (org.apache.hadoop.hdfs.server.datanode.LocalReplica): 1 use
ReplicaInPipeline (org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline): 1 use
RecoveringBlock (org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand.RecoveringBlock): 1 use
AutoCloseableLock (org.apache.hadoop.util.AutoCloseableLock): 1 use
DiskOutOfSpaceException (org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException): 1 use