
Example 1 with FsStatus

Use of org.apache.hadoop.fs.FsStatus in the Apache Hadoop project.

From the class DFSAdmin, method report:

/**
   * Gives a report on how the FileSystem is doing.
   * @exception IOException if the filesystem does not exist.
   */
public void report(String[] argv, int i) throws IOException {
    DistributedFileSystem dfs = getDFS();
    FsStatus ds = dfs.getStatus();
    long capacity = ds.getCapacity();
    long used = ds.getUsed();
    long remaining = ds.getRemaining();
    long bytesInFuture = dfs.getBytesWithFutureGenerationStamps();
    long presentCapacity = used + remaining;
    boolean mode = dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_GET);
    if (mode) {
        System.out.println("Safe mode is ON");
        if (bytesInFuture > 0) {
            System.out.println("\nWARNING: ");
            System.out.println("Name node has detected blocks with generation " + "stamps in future.");
            System.out.println("Forcing exit from safemode will cause " + bytesInFuture + " byte(s) to be deleted.");
            System.out.println("If you are sure that the NameNode was started with" + " the correct metadata files then you may proceed with " + "'-safemode forceExit'\n");
        }
    }
    System.out.println("Configured Capacity: " + capacity + " (" + StringUtils.byteDesc(capacity) + ")");
    System.out.println("Present Capacity: " + presentCapacity + " (" + StringUtils.byteDesc(presentCapacity) + ")");
    System.out.println("DFS Remaining: " + remaining + " (" + StringUtils.byteDesc(remaining) + ")");
    System.out.println("DFS Used: " + used + " (" + StringUtils.byteDesc(used) + ")");
    System.out.println("DFS Used%: " + StringUtils.formatPercent(used / (double) presentCapacity, 2));
    /* These counts are not always up to date. They are updated after an
     * iteration over an internal list, typically within a few seconds to
     * minutes. Use "-metaSave" to list all such blocks with accurate
     * counts.
     */
    System.out.println("Under replicated blocks: " + dfs.getUnderReplicatedBlocksCount());
    System.out.println("Blocks with corrupt replicas: " + dfs.getCorruptBlocksCount());
    System.out.println("Missing blocks: " + dfs.getMissingBlocksCount());
    System.out.println("Missing blocks (with replication factor 1): " + dfs.getMissingReplOneBlocksCount());
    System.out.println("Pending deletion blocks: " + dfs.getPendingDeletionBlocksCount());
    System.out.println();
    System.out.println("-------------------------------------------------");
    // Parse arguments for filtering the node list
    List<String> args = Arrays.asList(argv);
    // Truncate already handled arguments before parsing report()-specific ones
    args = new ArrayList<String>(args.subList(i, args.size()));
    final boolean listLive = StringUtils.popOption("-live", args);
    final boolean listDead = StringUtils.popOption("-dead", args);
    final boolean listDecommissioning = StringUtils.popOption("-decommissioning", args);
    // If no filter flags are found, then list all DN types
    boolean listAll = (!listLive && !listDead && !listDecommissioning);
    if (listAll || listLive) {
        DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
        if (live.length > 0 || listLive) {
            System.out.println("Live datanodes (" + live.length + "):\n");
        }
        if (live.length > 0) {
            for (DatanodeInfo dn : live) {
                System.out.println(dn.getDatanodeReport());
                System.out.println();
            }
        }
    }
    if (listAll || listDead) {
        DatanodeInfo[] dead = dfs.getDataNodeStats(DatanodeReportType.DEAD);
        if (dead.length > 0 || listDead) {
            System.out.println("Dead datanodes (" + dead.length + "):\n");
        }
        if (dead.length > 0) {
            for (DatanodeInfo dn : dead) {
                System.out.println(dn.getDatanodeReport());
                System.out.println();
            }
        }
    }
    if (listAll || listDecommissioning) {
        DatanodeInfo[] decom = dfs.getDataNodeStats(DatanodeReportType.DECOMMISSIONING);
        if (decom.length > 0 || listDecommissioning) {
            System.out.println("Decommissioning datanodes (" + decom.length + "):\n");
        }
        if (decom.length > 0) {
            for (DatanodeInfo dn : decom) {
                System.out.println(dn.getDatanodeReport());
                System.out.println();
            }
        }
    }
}
Also used: DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), FsStatus (org.apache.hadoop.fs.FsStatus)
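The report above prints each raw byte count alongside a human-readable size produced by StringUtils.byteDesc. The sketch below shows that kind of formatting as a self-contained class; it is a hypothetical simplification for illustration, not Hadoop's actual implementation (which differs in rounding and unit handling).

```java
public class ByteDesc {
    // Human-readable byte formatting in the spirit of Hadoop's
    // StringUtils.byteDesc. Hypothetical simplification: binary units,
    // two decimal places, Locale.ROOT so the decimal separator is stable.
    static String byteDesc(long len) {
        if (len >= (1L << 40)) return String.format(java.util.Locale.ROOT, "%.2f TB", (double) len / (1L << 40));
        if (len >= (1L << 30)) return String.format(java.util.Locale.ROOT, "%.2f GB", (double) len / (1L << 30));
        if (len >= (1L << 20)) return String.format(java.util.Locale.ROOT, "%.2f MB", (double) len / (1L << 20));
        if (len >= (1L << 10)) return String.format(java.util.Locale.ROOT, "%.2f KB", (double) len / (1L << 10));
        return len + " B";
    }

    public static void main(String[] args) {
        // Mirrors the "Configured Capacity: <bytes> (<readable>)" style above.
        long capacity = 1073741824L;
        System.out.println("Configured Capacity: " + capacity + " (" + byteDesc(capacity) + ")");
    }
}
```

With this sketch, byteDesc(1073741824L) yields "1.00 GB" and byteDesc(512L) yields "512 B".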

Example 2 with FsStatus

Use of org.apache.hadoop.fs.FsStatus in the Apache Hadoop project.

From the class ViewFileSystemBaseTest, method testViewFileSystemUtil:

@Test(expected = NotInMountpointException.class)
public void testViewFileSystemUtil() throws Exception {
    Configuration newConf = new Configuration(conf);
    FileSystem fileSystem = FileSystem.get(FsConstants.LOCAL_FS_URI, newConf);
    Assert.assertFalse("Unexpected FileSystem: " + fileSystem, ViewFileSystemUtil.isViewFileSystem(fileSystem));
    fileSystem = FileSystem.get(FsConstants.VIEWFS_URI, newConf);
    Assert.assertTrue("Unexpected FileSystem: " + fileSystem, ViewFileSystemUtil.isViewFileSystem(fileSystem));
    // Case 1: Verify FsStatus of root path returns all MountPoints status.
    Map<MountPoint, FsStatus> mountPointFsStatusMap = ViewFileSystemUtil.getStatus(fileSystem, InodeTree.SlashPath);
    Assert.assertEquals(getExpectedMountPoints(), mountPointFsStatusMap.size());
    // Case 2: Verify FsStatus of an internal dir returns all
    // MountPoints status.
    mountPointFsStatusMap = ViewFileSystemUtil.getStatus(fileSystem, new Path("/internalDir"));
    Assert.assertEquals(getExpectedMountPoints(), mountPointFsStatusMap.size());
    // Case 3: Verify FsStatus of a matching MountPoint returns exactly
    // the corresponding MountPoint status.
    mountPointFsStatusMap = ViewFileSystemUtil.getStatus(fileSystem, new Path("/user"));
    Assert.assertEquals(1, mountPointFsStatusMap.size());
    for (Entry<MountPoint, FsStatus> entry : mountPointFsStatusMap.entrySet()) {
        Assert.assertEquals("/user", entry.getKey().getMountedOnPath().toString());
    }
    // Case 4: Verify FsStatus of a path over a MountPoint returns the
    // corresponding MountPoint status.
    mountPointFsStatusMap = ViewFileSystemUtil.getStatus(fileSystem, new Path("/user/cloud"));
    Assert.assertEquals(1, mountPointFsStatusMap.size());
    for (Entry<MountPoint, FsStatus> entry : mountPointFsStatusMap.entrySet()) {
        Assert.assertEquals("/user", entry.getKey().getMountedOnPath().toString());
    }
    // Case 5: Verify FsStatus of any level of an internal dir
    // returns all MountPoints status.
    mountPointFsStatusMap = ViewFileSystemUtil.getStatus(fileSystem, new Path("/internalDir/internalDir2"));
    Assert.assertEquals(getExpectedMountPoints(), mountPointFsStatusMap.size());
    // Case 6: Verify FsStatus of a MountPoint URI returns
    // the MountPoint status.
    mountPointFsStatusMap = ViewFileSystemUtil.getStatus(fileSystem, new Path("viewfs:/user/"));
    Assert.assertEquals(1, mountPointFsStatusMap.size());
    for (Entry<MountPoint, FsStatus> entry : mountPointFsStatusMap.entrySet()) {
        Assert.assertEquals("/user", entry.getKey().getMountedOnPath().toString());
    }
    // Case 7: Verify FsStatus of a non MountPoint path throws exception
    ViewFileSystemUtil.getStatus(fileSystem, new Path("/non-existing"));
}
Also used: MountPoint (org.apache.hadoop.fs.viewfs.ViewFileSystem.MountPoint), Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), FileSystem (org.apache.hadoop.fs.FileSystem), FsStatus (org.apache.hadoop.fs.FsStatus), Test (org.junit.Test)

Example 3 with FsStatus

Use of org.apache.hadoop.fs.FsStatus in the Apache Hadoop project.

From the class ITestS3A, method testS3AStatus:

@Test
public void testS3AStatus() throws Exception {
    FsStatus fsStatus = fc.getFsStatus(null);
    assertNotNull(fsStatus);
    assertTrue("Used capacity should be non-negative: " + fsStatus.getUsed(), fsStatus.getUsed() >= 0);
    assertTrue("Remaining capacity should be non-negative: " + fsStatus.getRemaining(), fsStatus.getRemaining() >= 0);
    assertTrue("Capacity should be non-negative: " + fsStatus.getCapacity(), fsStatus.getCapacity() >= 0);
}
Also used: FsStatus (org.apache.hadoop.fs.FsStatus), Test (org.junit.Test)

Example 4 with FsStatus

Use of org.apache.hadoop.fs.FsStatus in the Apache Hadoop project.

From the class ViewFileSystemUtil, method getStatus:

/**
   * Get FsStatus for all ViewFsMountPoints matching path for the given
   * ViewFileSystem.
   *
   * Say ViewFileSystem has following mount points configured
   *  (1) hdfs://NN0_host:port/sales mounted on /dept/sales
   *  (2) hdfs://NN1_host:port/marketing mounted on /dept/marketing
   *  (3) hdfs://NN2_host:port/eng_usa mounted on /dept/eng/usa
   *  (4) hdfs://NN3_host:port/eng_asia mounted on /dept/eng/asia
   *
   * For the above config, here is a sample list of paths and their matching
   * mount points while getting FsStatus
   *
   *  Path                  Description                      Matching MountPoint
   *
   *  "/"                   Root ViewFileSystem lists all    (1), (2), (3), (4)
   *                         mount points.
   *
   *  "/dept"               Not a mount point, but a valid   (1), (2), (3), (4)
   *                         internal dir in the mount tree
   *                         and resolved down to "/" path.
   *
   *  "/dept/sales"         Matches a mount point            (1)
   *
   *  "/dept/sales/india"   Path is over a valid mount point (1)
   *                         and resolved down to
   *                         "/dept/sales"
   *
   *  "/dept/eng"           Not a mount point, but a valid   (1), (2), (3), (4)
   *                         internal dir in the mount tree
   *                         and resolved down to "/" path.
   *
   *  "/erp"                Doesn't match, lead to, or lie   None
   *                         over any valid mount point.
   *
   *
   * @param fileSystem - ViewFileSystem on which mount point exists
   * @param path - URI for which FsStatus is requested
   * @return Map of ViewFsMountPoint and FsStatus
   */
public static Map<MountPoint, FsStatus> getStatus(FileSystem fileSystem, Path path) throws IOException {
    if (!isViewFileSystem(fileSystem)) {
        throw new UnsupportedFileSystemException("FileSystem '" + fileSystem.getUri() + "' is not a ViewFileSystem.");
    }
    ViewFileSystem viewFileSystem = (ViewFileSystem) fileSystem;
    String viewFsUriPath = viewFileSystem.getUriPath(path);
    boolean isPathOverMountPoint = false;
    boolean isPathLeadingToMountPoint = false;
    boolean isPathIncludesAllMountPoint = false;
    Map<MountPoint, FsStatus> mountPointMap = new HashMap<>();
    for (MountPoint mountPoint : viewFileSystem.getMountPoints()) {
        String[] mountPointPathComponents = InodeTree.breakIntoPathComponents(mountPoint.getMountedOnPath().toString());
        String[] incomingPathComponents = InodeTree.breakIntoPathComponents(viewFsUriPath);
        int pathCompIndex;
        for (pathCompIndex = 0; pathCompIndex < mountPointPathComponents.length && pathCompIndex < incomingPathComponents.length; pathCompIndex++) {
            if (!mountPointPathComponents[pathCompIndex].equals(incomingPathComponents[pathCompIndex])) {
                break;
            }
        }
        if (pathCompIndex >= mountPointPathComponents.length) {
            // Path matches or is over a valid mount point
            isPathOverMountPoint = true;
            mountPointMap.clear();
            updateMountPointFsStatus(viewFileSystem, mountPointMap, mountPoint, new Path(viewFsUriPath));
            break;
        } else {
            if (pathCompIndex > 1) {
                // Path is in the mount tree
                isPathLeadingToMountPoint = true;
            } else if (incomingPathComponents.length <= 1) {
                // Special case of "/" path
                isPathIncludesAllMountPoint = true;
            }
            updateMountPointFsStatus(viewFileSystem, mountPointMap, mountPoint, mountPoint.getMountedOnPath());
        }
    }
    if (!isPathOverMountPoint && !isPathLeadingToMountPoint && !isPathIncludesAllMountPoint) {
        throw new NotInMountpointException(path, "getStatus");
    }
    return mountPointMap;
}
Also used: MountPoint (org.apache.hadoop.fs.viewfs.ViewFileSystem.MountPoint), Path (org.apache.hadoop.fs.Path), HashMap (java.util.HashMap), UnsupportedFileSystemException (org.apache.hadoop.fs.UnsupportedFileSystemException), FsStatus (org.apache.hadoop.fs.FsStatus)
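The matching loop in getStatus decides that a path "matches or is over" a mount point when every component of the mount point's path is a prefix of the incoming path's components. The following standalone sketch reproduces just that comparison, using a plain String.split in place of InodeTree.breakIntoPathComponents (the real helper may handle edge cases such as the bare "/" path differently).

```java
public class MountMatch {
    // Approximation of InodeTree.breakIntoPathComponents: split on "/".
    // For an absolute path the first element is the empty root component.
    static String[] components(String path) {
        return path.split("/");
    }

    // True when the incoming path matches the mount point exactly or
    // descends below it, i.e. every mount point component was consumed
    // during the component-by-component comparison.
    static boolean isOverMountPoint(String mountedOnPath, String incomingPath) {
        String[] mp = components(mountedOnPath);
        String[] in = components(incomingPath);
        int i = 0;
        while (i < mp.length && i < in.length && mp[i].equals(in[i])) {
            i++;
        }
        return i >= mp.length;
    }

    public static void main(String[] args) {
        // Cases from the getStatus javadoc's mount table:
        System.out.println(isOverMountPoint("/dept/sales", "/dept/sales/india")); // true
        System.out.println(isOverMountPoint("/dept/sales", "/dept"));             // false
    }
}
```

This mirrors rows of the table in the javadoc: "/dept/sales/india" is over mount point (1), while "/dept" consumes only part of each mount point path and so falls into the "leading to a mount point" branch instead.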

Example 5 with FsStatus

Use of org.apache.hadoop.fs.FsStatus in the Apache Hadoop project.

From the class ViewFileSystemUtil, method updateMountPointFsStatus:

/**
   * Update the FsStatus map entry for the given mount point.
   *
   * @param viewFileSystem ViewFileSystem containing the mount point
   * @param mountPointMap map of mount points to their FsStatus
   * @param mountPoint mount point whose entry is updated
   * @param path path whose FsStatus is queried
   */
private static void updateMountPointFsStatus(final ViewFileSystem viewFileSystem, final Map<MountPoint, FsStatus> mountPointMap, final MountPoint mountPoint, final Path path) throws IOException {
    FsStatus fsStatus = viewFileSystem.getStatus(path);
    mountPointMap.put(mountPoint, fsStatus);
}
Also used: FsStatus (org.apache.hadoop.fs.FsStatus)

Aggregations

FsStatus (org.apache.hadoop.fs.FsStatus) — 9
Path (org.apache.hadoop.fs.Path) — 4
Test (org.junit.Test) — 3
IOException (java.io.IOException) — 2
Configuration (org.apache.hadoop.conf.Configuration) — 2
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream) — 2
MountPoint (org.apache.hadoop.fs.viewfs.ViewFileSystem.MountPoint) — 2
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem) — 2
VisibleForTesting (com.google.common.annotations.VisibleForTesting) — 1
DataOutputStream (java.io.DataOutputStream) — 1
ArrayList (java.util.ArrayList) — 1
HashMap (java.util.HashMap) — 1
CreateFlag (org.apache.hadoop.fs.CreateFlag) — 1
FileContext (org.apache.hadoop.fs.FileContext) — 1
FileSystem (org.apache.hadoop.fs.FileSystem) — 1
UnsupportedFileSystemException (org.apache.hadoop.fs.UnsupportedFileSystemException) — 1
FsPermission (org.apache.hadoop.fs.permission.FsPermission) — 1
DFSClient (org.apache.hadoop.hdfs.DFSClient) — 1
DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo) — 1
RemoteException (org.apache.hadoop.ipc.RemoteException) — 1