Example 41 with AutoCloseableLock

Use of org.apache.hadoop.util.AutoCloseableLock in project hadoop by apache.

The class FsDatasetImpl, method onCompleteLazyPersist.

@Override
public void onCompleteLazyPersist(String bpId, long blockId, long creationTime, File[] savedFiles, FsVolumeImpl targetVolume) {
    try (AutoCloseableLock lock = datasetLock.acquire()) {
        ramDiskReplicaTracker.recordEndLazyPersist(bpId, blockId, savedFiles);
        targetVolume.incDfsUsedAndNumBlocks(bpId, savedFiles[0].length() + savedFiles[1].length());
        // Update metrics (ignore the metadata file size)
        datanode.getMetrics().incrRamDiskBlocksLazyPersisted();
        datanode.getMetrics().incrRamDiskBytesLazyPersisted(savedFiles[1].length());
        datanode.getMetrics().addRamDiskBlocksLazyPersistWindowMs(Time.monotonicNow() - creationTime);
        if (LOG.isDebugEnabled()) {
            LOG.debug("LazyWriter: Finish persisting RamDisk block: " + " block pool Id: " + bpId + " block id: " + blockId + " to block file " + savedFiles[1] + " and meta file " + savedFiles[0] + " on target volume " + targetVolume);
        }
    }
}
Also used : AutoCloseableLock(org.apache.hadoop.util.AutoCloseableLock)
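
The example above is the standard AutoCloseableLock idiom: acquire() blocks until the underlying lock is held and returns the lock object itself, so try-with-resources releases it on every exit path, including exceptions. A minimal self-contained sketch of the same idiom (the Counter class and its value field are illustrative, not Hadoop code):

import org.apache.hadoop.util.AutoCloseableLock;

public class Counter {

    private final AutoCloseableLock lock = new AutoCloseableLock();

    private long value;

    public void increment() {
        // acquire() locks and returns the lock itself, so the
        // try-with-resources statement releases it automatically,
        // even if the body throws.
        try (AutoCloseableLock l = lock.acquire()) {
            value++;
        }
    }
}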

Example 42 with AutoCloseableLock

Use of org.apache.hadoop.util.AutoCloseableLock in project hadoop by apache.

The class FsDatasetImpl, method moveBlockAcrossVolumes.

/**
   * Moves a given block from one volume to another. This is used by the disk
   * balancer.
   *
   * @param block       - ExtendedBlock
   * @param destination - Destination volume
   * @return Old replica info
   */
@Override
public ReplicaInfo moveBlockAcrossVolumes(ExtendedBlock block, FsVolumeSpi destination) throws IOException {
    ReplicaInfo replicaInfo = getReplicaInfo(block);
    if (replicaInfo.getState() != ReplicaState.FINALIZED) {
        throw new ReplicaNotFoundException(ReplicaNotFoundException.UNFINALIZED_REPLICA + block);
    }
    FsVolumeReference volumeRef = null;
    try (AutoCloseableLock lock = datasetLock.acquire()) {
        volumeRef = destination.obtainReference();
    }
    try {
        moveBlock(block, replicaInfo, volumeRef);
    } finally {
        if (volumeRef != null) {
            volumeRef.close();
        }
    }
    return replicaInfo;
}
Also used : FsVolumeReference(org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference) ReplicaInfo(org.apache.hadoop.hdfs.server.datanode.ReplicaInfo) ReplicaNotFoundException(org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException) AutoCloseableLock(org.apache.hadoop.util.AutoCloseableLock)
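
Note the shape of this method: the dataset lock is held only long enough to obtain a counted reference to the destination volume, and the actual copy in moveBlock runs outside the lock, with the reference released in finally. A minimal sketch of that pattern (RefUnderLock and CountedRef are illustrative, not Hadoop classes; CountedRef stands in for FsVolumeReference):

import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.util.AutoCloseableLock;

public class RefUnderLock {

    private final AutoCloseableLock lock = new AutoCloseableLock();

    private final AtomicInteger inUse = new AtomicInteger();

    class CountedRef implements AutoCloseable {
        CountedRef() { inUse.incrementAndGet(); }
        @Override public void close() { inUse.decrementAndGet(); }
    }

    void moveLikeOperation() {
        CountedRef ref = null;
        // Hold the lock only while taking the reference; while the
        // count is non-zero the resource cannot be torn down.
        try (AutoCloseableLock l = lock.acquire()) {
            ref = new CountedRef();
        }
        try {
            // The long-running work (the block copy, in the example
            // above) happens here, without holding the lock.
        } finally {
            if (ref != null) {
                ref.close();
            }
        }
    }
}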

Example 43 with AutoCloseableLock

Use of org.apache.hadoop.util.AutoCloseableLock in project hadoop by apache.

The class BlockPoolSlice, method readReplicasFromCache.

private boolean readReplicasFromCache(ReplicaMap volumeMap, final RamDiskReplicaTracker lazyWriteReplicaMap) {
    ReplicaMap tmpReplicaMap = new ReplicaMap(new AutoCloseableLock());
    File replicaFile = new File(currentDir, REPLICA_CACHE_FILE);
    // Check whether the file exists or not.
    if (!replicaFile.exists()) {
        LOG.info("Replica Cache file: " + replicaFile.getPath() + " doesn't exist ");
        return false;
    }
    long fileLastModifiedTime = replicaFile.lastModified();
    if (System.currentTimeMillis() > fileLastModifiedTime + replicaCacheExpiry) {
        LOG.info("Replica Cache file: " + replicaFile.getPath() + " has gone stale");
        // Just to make findbugs happy
        if (!replicaFile.delete()) {
            LOG.info("Replica Cache file: " + replicaFile.getPath() + " cannot be deleted");
        }
        return false;
    }
    FileInputStream inputStream = null;
    try {
        inputStream = fileIoProvider.getFileInputStream(volume, replicaFile);
        BlockListAsLongs blocksList = BlockListAsLongs.readFrom(inputStream, maxDataLength);
        if (blocksList == null) {
            return false;
        }
        for (BlockReportReplica replica : blocksList) {
            switch(replica.getState()) {
                case FINALIZED:
                    addReplicaToReplicasMap(replica, tmpReplicaMap, lazyWriteReplicaMap, true);
                    break;
                case RUR:
                case RBW:
                case RWR:
                    addReplicaToReplicasMap(replica, tmpReplicaMap, lazyWriteReplicaMap, false);
                    break;
                default:
                    break;
            }
        }
        inputStream.close();
        // Now it is safe to add the replicas into volumeMap. In case of any
        // exception during parsing of this cache file, fall back
        // to scan all the files on disk.
        for (Iterator<ReplicaInfo> iter = tmpReplicaMap.replicas(bpid).iterator(); iter.hasNext(); ) {
            ReplicaInfo info = iter.next();
            // We use a lightweight GSet to store replicaInfo; we need to remove
            // it from one GSet before adding it to another.
            iter.remove();
            volumeMap.add(bpid, info);
        }
        LOG.info("Successfully read replica from cache file : " + replicaFile.getPath());
        return true;
    } catch (Exception e) {
        // On any exception, revert to reading from disk:
        // log the error and return false.
        LOG.info("Exception occurred while reading the replicas cache file: " + replicaFile.getPath(), e);
        return false;
    } finally {
        if (!fileIoProvider.delete(volume, replicaFile)) {
            LOG.info("Failed to delete replica cache file: " + replicaFile.getPath());
        }
        // close the inputStream
        IOUtils.closeStream(inputStream);
    }
}
Also used : ReplicaInfo(org.apache.hadoop.hdfs.server.datanode.ReplicaInfo) AutoCloseableLock(org.apache.hadoop.util.AutoCloseableLock) BlockListAsLongs(org.apache.hadoop.hdfs.protocol.BlockListAsLongs) BlockReportReplica(org.apache.hadoop.hdfs.protocol.BlockListAsLongs.BlockReportReplica) RandomAccessFile(java.io.RandomAccessFile) File(java.io.File) FileInputStream(java.io.FileInputStream) IOException(java.io.IOException) FileNotFoundException(java.io.FileNotFoundException) DiskErrorException(org.apache.hadoop.util.DiskChecker.DiskErrorException)
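
One detail worth pulling out is the freshness test near the top of the method: the cache file is trusted only while its last-modified time plus the configured expiry window is still ahead of the current clock; otherwise the file is deleted and the datanode falls back to a disk scan. A small sketch of that check alone, using hypothetical names (ReplicaCacheCheck, isFresh, expiryMs are not Hadoop identifiers):

import java.io.File;

public class ReplicaCacheCheck {

    // A cache file is usable only if it exists and has not aged past
    // the expiry window, mirroring the lastModified() test above.
    static boolean isFresh(File cacheFile, long expiryMs) {
        return cacheFile.exists()
            && System.currentTimeMillis() <= cacheFile.lastModified() + expiryMs;
    }
}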

Example 44 with AutoCloseableLock

Use of org.apache.hadoop.util.AutoCloseableLock in project hadoop by apache.

The class ReplicaMap, method add.

/**
   * Add a replica's meta information into the map 
   * 
   * @param bpid block pool id
   * @param replicaInfo a replica's meta information
   * @return previous meta information of the replica
   * @throws IllegalArgumentException if the input parameter is null
   */
ReplicaInfo add(String bpid, ReplicaInfo replicaInfo) {
    checkBlockPool(bpid);
    checkBlock(replicaInfo);
    try (AutoCloseableLock l = lock.acquire()) {
        FoldedTreeSet<ReplicaInfo> set = map.get(bpid);
        if (set == null) {
            // Add an entry for block pool if it does not exist already
            set = new FoldedTreeSet<>();
            map.put(bpid, set);
        }
        return set.addOrReplace(replicaInfo);
    }
}
Also used : ReplicaInfo(org.apache.hadoop.hdfs.server.datanode.ReplicaInfo) AutoCloseableLock(org.apache.hadoop.util.AutoCloseableLock)
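
The locking pattern here is the same as in the other examples: every mutation of the shared per-block-pool map happens inside the AutoCloseableLock, and the lazily created per-pool collection means callers never need a separate "create block pool" step. A minimal sketch of that structure under stated assumptions (TinyReplicaMap is hypothetical; FoldedTreeSet is replaced by a plain HashMap keyed by block id, and String stands in for ReplicaInfo):

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.util.AutoCloseableLock;

public class TinyReplicaMap {

    private final AutoCloseableLock lock = new AutoCloseableLock();

    private final Map<String, Map<Long, String>> map = new HashMap<>();

    String add(String bpid, long blockId, String replica) {
        try (AutoCloseableLock l = lock.acquire()) {
            // Lazily create the per-pool map, as ReplicaMap.add does
            // for its FoldedTreeSet.
            Map<Long, String> perPool =
                map.computeIfAbsent(bpid, k -> new HashMap<>());
            // Like FoldedTreeSet.addOrReplace, put() returns the previous
            // entry for this block id, or null if there was none.
            return perPool.put(blockId, replica);
        }
    }
}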

Aggregations

AutoCloseableLock (org.apache.hadoop.util.AutoCloseableLock) : 44
ReplicaInfo (org.apache.hadoop.hdfs.server.datanode.ReplicaInfo) : 27
IOException (java.io.IOException) : 23
MultipleIOException (org.apache.hadoop.io.MultipleIOException) : 17
FsVolumeReference (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference) : 10
ReplicaInPipeline (org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline) : 9
ReplicaNotFoundException (org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException) : 8
File (java.io.File) : 7
ReplicaHandler (org.apache.hadoop.hdfs.server.datanode.ReplicaHandler) : 5
FsVolumeSpi (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi) : 5
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock) : 4
ReplicaAlreadyExistsException (org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException) : 4
RecoveringBlock (org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand.RecoveringBlock) : 4
DatanodeStorage (org.apache.hadoop.hdfs.server.protocol.DatanodeStorage) : 4
ArrayList (java.util.ArrayList) : 3
HashMap (java.util.HashMap) : 3
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap) : 3
Block (org.apache.hadoop.hdfs.protocol.Block) : 3
BlockListAsLongs (org.apache.hadoop.hdfs.protocol.BlockListAsLongs) : 3
FsDatasetSpi (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi) : 3