
Example 1 with FsVolumeReference

Use of org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference in project hadoop by apache.

From class DatasetVolumeChecker, method checkVolume.

/**
   * Check a single volume asynchronously, returning a {@link ListenableFuture}
   * that can be used to retrieve the final result.
   *
   * If the volume cannot be referenced then it is already closed and
   * cannot be checked. No error is propagated to the callback.
   *
   * @param volume the volume that is to be checked.
   * @param callback callback to be invoked when the volume check completes.
   * @return true if the check was scheduled and the callback will be invoked.
   *         false otherwise.
   */
public boolean checkVolume(final FsVolumeSpi volume, Callback callback) {
    if (volume == null) {
        LOG.debug("Cannot schedule check on null volume");
        return false;
    }
    FsVolumeReference volumeReference;
    try {
        volumeReference = volume.obtainReference();
    } catch (ClosedChannelException e) {
        // The volume has already been closed.
        return false;
    }
    Optional<ListenableFuture<VolumeCheckResult>> olf =
        delegateChecker.schedule(volume, IGNORED_CONTEXT);
    if (olf.isPresent()) {
        numVolumeChecks.incrementAndGet();
        Futures.addCallback(olf.get(),
            new ResultHandler(volumeReference, new HashSet<>(), new HashSet<>(),
                new AtomicLong(1), callback));
        return true;
    } else {
        // The check could not be scheduled (e.g. one is already in progress
        // for this volume), so release the reference we took.
        IOUtils.cleanup(null, volumeReference);
    }
    return false;
}
Also used: ClosedChannelException (java.nio.channels.ClosedChannelException), AtomicLong (java.util.concurrent.atomic.AtomicLong), FsVolumeReference (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference), ListenableFuture (com.google.common.util.concurrent.ListenableFuture), HashSet (java.util.HashSet)
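
As a usage illustration, here is a minimal, hypothetical caller for checkVolume(). The two-set Callback shape follows DatasetVolumeChecker's callback interface, but the checker construction, the helper method, and the output are assumptions for the sketch, not code from the Hadoop tree.

import java.util.Set;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
import org.apache.hadoop.util.Timer;

public class VolumeCheckSketch {
    // Hypothetical helper: schedule an async health check on one volume.
    static void scheduleCheck(FsVolumeSpi volume) throws Exception {
        DatasetVolumeChecker checker =
            new DatasetVolumeChecker(new HdfsConfiguration(), new Timer());
        boolean scheduled = checker.checkVolume(volume,
            (Set<FsVolumeSpi> healthy, Set<FsVolumeSpi> failed) -> {
                // Runs once the asynchronous check completes.
                System.out.println("healthy=" + healthy.size()
                    + ", failed=" + failed.size());
            });
        if (!scheduled) {
            // Volume was null, already closed, or could not be scheduled.
            System.out.println("Check was not scheduled.");
        }
    }
}

Note that a false return does not distinguish a closed volume from a throttled check; callers that need to react differently must track volume state themselves.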

Example 2 with FsVolumeReference

Use of org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference in project hadoop by apache.

From class TestFsVolumeList, method testGetNextVolumeWithClosedVolume.

@Test(timeout = 30000)
public void testGetNextVolumeWithClosedVolume() throws IOException {
    FsVolumeList volumeList = new FsVolumeList(
        Collections.<VolumeFailureInfo>emptyList(), blockScanner, blockChooser);
    final List<FsVolumeImpl> volumes = new ArrayList<>();
    for (int i = 0; i < 3; i++) {
        File curDir = new File(baseDir, "nextvolume-" + i);
        curDir.mkdirs();
        FsVolumeImpl volume = new FsVolumeImplBuilder()
            .setConf(conf)
            .setDataset(dataset)
            .setStorageID("storage-id")
            .setStorageDirectory(
                new StorageDirectory(StorageLocation.parse(curDir.getPath())))
            .build();
        volume.setCapacityForTesting(1024 * 1024 * 1024);
        volumes.add(volume);
        volumeList.addVolume(volume.obtainReference());
    }
    // Close the second volume.
    volumes.get(1).setClosed();
    try {
        GenericTestUtils.waitFor(new Supplier<Boolean>() {

            @Override
            public Boolean get() {
                return volumes.get(1).checkClosed();
            }
        }, 100, 3000);
    } catch (TimeoutException e) {
        fail("timed out while waiting for volume to be removed.");
    } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
    }
    for (int i = 0; i < 10; i++) {
        try (FsVolumeReference ref = volumeList.getNextVolume(StorageType.DEFAULT, 128)) {
            // The closed second volume (index 1) must never be chosen.
            assertNotEquals(ref.getVolume(), volumes.get(1));
        }
    }
}
Also used: ArrayList (java.util.ArrayList), StorageDirectory (org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory), FsVolumeReference (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference), File (java.io.File), TimeoutException (java.util.concurrent.TimeoutException), Test (org.junit.Test)
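
The try-with-resources block at the end of the test is the idiomatic way to hold an FsVolumeReference. Here is a minimal sketch of that pattern in isolation, including the exception a closed volume produces; the surrounding method and what it does with the volume are illustrative assumptions:

import java.io.IOException;
import java.nio.channels.ClosedChannelException;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

public class ReferencePatternSketch {
    // Pin the volume open for the duration of the work; close() releases
    // the reference even if the work throws.
    static void withVolume(FsVolumeSpi volume) {
        try (FsVolumeReference ref = volume.obtainReference()) {
            FsVolumeSpi v = ref.getVolume();
            // ... use v while the reference keeps it from being removed ...
        } catch (ClosedChannelException e) {
            // Volume was closed before we could pin it; skip it.
        } catch (IOException e) {
            // close() declares IOException; handle or propagate as needed.
        }
    }
}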

Example 3 with FsVolumeReference

Use of org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference in project hadoop by apache.

From class TestDatasetVolumeCheckerTimeout, method makeSlowVolume.

static FsVolumeSpi makeSlowVolume() throws Exception {
    final FsVolumeSpi volume = mock(FsVolumeSpi.class);
    final FsVolumeReference reference = mock(FsVolumeReference.class);
    final StorageLocation location = mock(StorageLocation.class);
    when(reference.getVolume()).thenReturn(volume);
    when(volume.obtainReference()).thenReturn(reference);
    when(volume.getStorageLocation()).thenReturn(location);
    when(volume.check(anyObject())).thenAnswer(new Answer<VolumeCheckResult>() {

        @Override
        public VolumeCheckResult answer(InvocationOnMock invocationOnMock) throws Throwable {
            // Block on the test-held lock; by the time the test releases it,
            // the disk check has already timed out.
            lock.lock();
            lock.unlock();
            return VolumeCheckResult.HEALTHY;
        }
    });
    return volume;
}
Also used: FsVolumeReference (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference), InvocationOnMock (org.mockito.invocation.InvocationOnMock), FsVolumeSpi (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi), StorageLocation (org.apache.hadoop.hdfs.server.datanode.StorageLocation)
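
The lock.lock(); lock.unlock(); pair in the mock only blocks if another thread already holds the lock. The sketch below shows how the surrounding test presumably drives the timeout; the lock field matches the snippet above, but the driver method and the sleep duration are assumptions:

import java.util.concurrent.locks.ReentrantLock;

public class SlowVolumeDriverSketch {
    // Shared with the mocked check() above: while the test thread holds
    // this lock, the mocked check() blocks inside answer().
    static final ReentrantLock lock = new ReentrantLock();

    static void driveTimeout(Runnable scheduleCheck) throws InterruptedException {
        lock.lock();             // make the mocked check() block
        try {
            scheduleCheck.run(); // start the async volume check
            Thread.sleep(1000);  // assumed to exceed the configured check timeout
        } finally {
            lock.unlock();       // let the stuck check() return HEALTHY
        }
    }
}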

Example 4 with FsVolumeReference

Use of org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference in project hadoop by apache.

From class FsDatasetImpl, method getVolumeInfo.

private Collection<VolumeInfo> getVolumeInfo() {
    Collection<VolumeInfo> info = new ArrayList<VolumeInfo>();
    for (FsVolumeImpl volume : volumes.getVolumes()) {
        long used = 0;
        long free = 0;
        try (FsVolumeReference ref = volume.obtainReference()) {
            used = volume.getDfsUsed();
            free = volume.getAvailable();
        } catch (ClosedChannelException e) {
            // Volume was closed or removed concurrently; skip it.
            continue;
        } catch (IOException e) {
            // Report zero usage for a volume we cannot read, but keep it listed.
            LOG.warn(e.getMessage());
            used = 0;
            free = 0;
        }
        info.add(new VolumeInfo(volume, used, free));
    }
    return info;
}
Also used: ClosedChannelException (java.nio.channels.ClosedChannelException), FsVolumeReference (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference), ArrayList (java.util.ArrayList), IOException (java.io.IOException), MultipleIOException (org.apache.hadoop.io.MultipleIOException)
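
The same reference-pinned iteration works for any per-volume read. Here is a sketch that sums usage across volumes instead of collecting VolumeInfo objects; it assumes the same volumes field and FsVolumeImpl accessors as the method above:

private long[] getTotalUsage() {
    long totalUsed = 0;
    long totalFree = 0;
    for (FsVolumeImpl volume : volumes.getVolumes()) {
        try (FsVolumeReference ref = volume.obtainReference()) {
            totalUsed += volume.getDfsUsed();
            totalFree += volume.getAvailable();
        } catch (ClosedChannelException e) {
            // Volume removed concurrently; skip it.
            continue;
        } catch (IOException e) {
            // Treat an unreadable volume as contributing nothing.
        }
    }
    return new long[] { totalUsed, totalFree };
}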

Example 5 with FsVolumeReference

Use of org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference in project hadoop by apache.

From class FsDatasetImpl, method moveBlockAcrossStorage.

/**
   * Move block files from one storage to another storage.
   * @return the old ReplicaInfo of the moved block.
   * @throws IOException if the replica is missing, unfinalized, corrupt,
   *         on transient storage, or the move itself fails.
   */
@Override
public ReplicaInfo moveBlockAcrossStorage(ExtendedBlock block, StorageType targetStorageType) throws IOException {
    ReplicaInfo replicaInfo = getReplicaInfo(block);
    if (replicaInfo.getState() != ReplicaState.FINALIZED) {
        throw new ReplicaNotFoundException(ReplicaNotFoundException.UNFINALIZED_REPLICA + block);
    }
    if (replicaInfo.getNumBytes() != block.getNumBytes()) {
        throw new IOException("Corrupted replica " + replicaInfo
            + " with a length of " + replicaInfo.getNumBytes()
            + "; expected length is " + block.getNumBytes());
    }
    if (replicaInfo.getVolume().getStorageType() == targetStorageType) {
        throw new ReplicaAlreadyExistsException("Replica " + replicaInfo + " already exists on storage " + targetStorageType);
    }
    if (replicaInfo.isOnTransientStorage()) {
        // Block movement from RAM_DISK will be done by LazyPersist mechanism
        throw new IOException("Replica " + replicaInfo + " cannot be moved from storageType : " + replicaInfo.getVolume().getStorageType());
    }
    FsVolumeReference volumeRef = null;
    try (AutoCloseableLock lock = datasetLock.acquire()) {
        volumeRef = volumes.getNextVolume(targetStorageType, block.getNumBytes());
    }
    try {
        moveBlock(block, replicaInfo, volumeRef);
    } finally {
        if (volumeRef != null) {
            volumeRef.close();
        }
    }
    // Return the old replica so the caller can replace it and reschedule scanning.
    return replicaInfo;
}
Also used: FsVolumeReference (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference), ReplicaInfo (org.apache.hadoop.hdfs.server.datanode.ReplicaInfo), ReplicaNotFoundException (org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException), AutoCloseableLock (org.apache.hadoop.util.AutoCloseableLock), ReplicaAlreadyExistsException (org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException), IOException (java.io.IOException), MultipleIOException (org.apache.hadoop.io.MultipleIOException)
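
For context, here is a hypothetical caller that migrates one finalized replica to ARCHIVE storage via the method above. The signature matches the snippet, but the helper method, the dataset and block parameters, and the output are assumptions for the sketch:

import java.io.IOException;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl;

public class MoveReplicaSketch {
    // Hypothetical helper: move one replica's files to ARCHIVE storage.
    static void moveToArchive(FsDatasetImpl dataset, ExtendedBlock block) {
        try {
            ReplicaInfo old =
                dataset.moveBlockAcrossStorage(block, StorageType.ARCHIVE);
            System.out.println("Moved " + block + " off "
                + old.getVolume().getStorageType());
        } catch (ReplicaAlreadyExistsException e) {
            // Replica is already on the target storage type; nothing to do.
        } catch (IOException e) {
            // Unfinalized, corrupt, or transient replica, or no target volume.
            System.err.println("Move failed for " + block + ": " + e);
        }
    }
}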

Aggregations

FsVolumeReference (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference): 20 usages
IOException (java.io.IOException): 12 usages
AutoCloseableLock (org.apache.hadoop.util.AutoCloseableLock): 10 usages
MultipleIOException (org.apache.hadoop.io.MultipleIOException): 9 usages
ReplicaInfo (org.apache.hadoop.hdfs.server.datanode.ReplicaInfo): 7 usages
ClosedChannelException (java.nio.channels.ClosedChannelException): 6 usages
ReplicaHandler (org.apache.hadoop.hdfs.server.datanode.ReplicaHandler): 5 usages
ArrayList (java.util.ArrayList): 4 usages
ReplicaInPipeline (org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline): 4 usages
ReplicaNotFoundException (org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException): 4 usages
ReplicaAlreadyExistsException (org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException): 3 usages
FsVolumeSpi (org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi): 3 usages
ListenableFuture (com.google.common.util.concurrent.ListenableFuture): 2 usages
File (java.io.File): 2 usages
HashSet (java.util.HashSet): 2 usages
CopyOnWriteArrayList (java.util.concurrent.CopyOnWriteArrayList): 2 usages
AtomicLong (java.util.concurrent.atomic.AtomicLong): 2 usages
StorageDirectory (org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory): 2 usages
StorageLocation (org.apache.hadoop.hdfs.server.datanode.StorageLocation): 2 usages
Test (org.junit.Test): 2 usages