
Example 1 with DirectoryListingStartAfterNotFoundException

Use of org.apache.hadoop.fs.DirectoryListingStartAfterNotFoundException in project hadoop by apache.

From the class FSDirStatAndListingOp, the method getListingInt:

static DirectoryListing getListingInt(FSDirectory fsd, final String srcArg, byte[] startAfter, boolean needLocation) throws IOException {
    final FSPermissionChecker pc = fsd.getPermissionChecker();
    final INodesInPath iip = fsd.resolvePath(pc, srcArg, DirOp.READ);
    // common case so avoid any unnecessary processing unless required.
    if (startAfter.length > 0 && startAfter[0] == Path.SEPARATOR_CHAR) {
        final String startAfterString = DFSUtil.bytes2String(startAfter);
        if (FSDirectory.isReservedName(startAfterString)) {
            try {
                byte[][] components = INode.getPathComponents(startAfterString);
                components = FSDirectory.resolveComponents(components, fsd);
                startAfter = components[components.length - 1];
            } catch (IOException e) {
                // Possibly the inode is deleted
                throw new DirectoryListingStartAfterNotFoundException("Can't find startAfter " + startAfterString);
            }
        }
    }
    boolean isSuperUser = true;
    if (fsd.isPermissionEnabled()) {
        if (iip.getLastINode() != null && iip.getLastINode().isDirectory()) {
            fsd.checkPathAccess(pc, iip, FsAction.READ_EXECUTE);
        }
        isSuperUser = pc.isSuperUser();
    }
    return getListing(fsd, iip, startAfter, needLocation, isSuperUser);
}
Also used: DirectoryListingStartAfterNotFoundException (org.apache.hadoop.fs.DirectoryListingStartAfterNotFoundException), IOException (java.io.IOException)
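The method above treats startAfter as a resume token for paginated listing: entries strictly after the token are returned, and a token that can no longer be resolved (for example, because the entry was deleted between pages) is reported via DirectoryListingStartAfterNotFoundException. As a rough, self-contained sketch of that contract (plain Java, no Hadoop classes; the class and method names here are hypothetical, and it deliberately simplifies HDFS's behavior, which only throws for unresolvable reserved-inode tokens):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.TreeSet;

// Hypothetical model of "startAfter" pagination over a sorted
// directory: each call returns entries strictly after the token,
// and a stale non-empty token is rejected, loosely mirroring what
// DirectoryListingStartAfterNotFoundException signals in HDFS.
public class StartAfterListing {
    private final TreeSet<String> entries = new TreeSet<>();

    public StartAfterListing(Collection<String> names) {
        entries.addAll(names);
    }

    // List up to `limit` entries strictly after `startAfter`
    // ("" means start from the beginning). Throws if a non-empty
    // token names an entry that no longer exists.
    public List<String> listAfter(String startAfter, int limit) {
        if (!startAfter.isEmpty() && !entries.contains(startAfter)) {
            throw new NoSuchElementException(
                "Can't find startAfter " + startAfter);
        }
        List<String> page = new ArrayList<>();
        // tailSet(token, false) yields entries strictly greater
        // than the token, which is exactly the "resume" semantics.
        for (String name : entries.tailSet(startAfter, false)) {
            if (page.size() == limit) {
                break;
            }
            page.add(name);
        }
        return page;
    }

    public static void main(String[] args) {
        StartAfterListing dir =
            new StartAfterListing(Arrays.asList("f1", "f2", "f3"));
        System.out.println(dir.listAfter("", 2));   // first page: [f1, f2]
        System.out.println(dir.listAfter("f2", 2)); // next page: [f3]
        dir.entries.remove("f2");
        try {
            dir.listAfter("f2", 2);
        } catch (NoSuchElementException e) {
            System.out.println("stale token: " + e.getMessage());
        }
    }
}
```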

Example 2 with DirectoryListingStartAfterNotFoundException

Use of org.apache.hadoop.fs.DirectoryListingStartAfterNotFoundException in project hadoop by apache.

From the class TestINodeFile, the test method testFilesInGetListingOps:

@Test
public void testFilesInGetListingOps() throws Exception {
    final Configuration conf = new Configuration();
    MiniDFSCluster cluster = null;
    try {
        cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        cluster.waitActive();
        final DistributedFileSystem hdfs = cluster.getFileSystem();
        final FSDirectory fsdir = cluster.getNamesystem().getFSDirectory();
        hdfs.mkdirs(new Path("/tmp"));
        DFSTestUtil.createFile(hdfs, new Path("/tmp/f1"), 0, (short) 1, 0);
        DFSTestUtil.createFile(hdfs, new Path("/tmp/f2"), 0, (short) 1, 0);
        DFSTestUtil.createFile(hdfs, new Path("/tmp/f3"), 0, (short) 1, 0);
        DirectoryListing dl = cluster.getNameNodeRpc().getListing("/tmp", HdfsFileStatus.EMPTY_NAME, false);
        assertTrue(dl.getPartialListing().length == 3);
        String f2 = "f2";
        dl = cluster.getNameNodeRpc().getListing("/tmp", f2.getBytes(), false);
        assertTrue(dl.getPartialListing().length == 1);
        INode f2INode = fsdir.getINode("/tmp/f2");
        String f2InodePath = "/.reserved/.inodes/" + f2INode.getId();
        dl = cluster.getNameNodeRpc().getListing("/tmp", f2InodePath.getBytes(), false);
        assertTrue(dl.getPartialListing().length == 1);
        // Test the deleted startAfter file
        hdfs.delete(new Path("/tmp/f2"), false);
        try {
            dl = cluster.getNameNodeRpc().getListing("/tmp", f2InodePath.getBytes(), false);
            fail("Didn't get exception for the deleted startAfter token.");
        } catch (IOException e) {
            assertTrue(e instanceof DirectoryListingStartAfterNotFoundException);
        }
    } finally {
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
Also used: Path (org.apache.hadoop.fs.Path), DirectoryListing (org.apache.hadoop.hdfs.protocol.DirectoryListing), MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), Configuration (org.apache.hadoop.conf.Configuration), DirectoryListingStartAfterNotFoundException (org.apache.hadoop.fs.DirectoryListingStartAfterNotFoundException), IOException (java.io.IOException), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), Test (org.junit.Test)
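The test passes a reserved inode path of the form "/.reserved/.inodes/&lt;id&gt;" as the startAfter token, which Example 1 resolves back to the entry's current name before listing. As a rough, self-contained sketch of that resolution step (plain Java, no Hadoop classes; the class and method names are hypothetical and the id-to-name map stands in for the real inode tree):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Hypothetical model of reserved-inode-path resolution: a token like
// "/.reserved/.inodes/<id>" is translated back to the entry's current
// name, and an unknown id (e.g. a deleted inode) is rejected the way
// DirectoryListingStartAfterNotFoundException rejects it in Example 1.
public class ReservedTokenResolver {
    private static final String PREFIX = "/.reserved/.inodes/";

    private final Map<Long, String> inodeNames = new HashMap<>();

    public void add(long id, String name) {
        inodeNames.put(id, name);
    }

    // Resolve a startAfter token: plain names pass through unchanged,
    // reserved inode paths map to the current name or throw.
    public String resolve(String startAfter) {
        if (!startAfter.startsWith(PREFIX)) {
            return startAfter;
        }
        long id = Long.parseLong(startAfter.substring(PREFIX.length()));
        String name = inodeNames.get(id);
        if (name == null) {
            // Possibly the inode was deleted, as in the test above.
            throw new NoSuchElementException(
                "Can't find startAfter " + startAfter);
        }
        return name;
    }

    public static void main(String[] args) {
        ReservedTokenResolver r = new ReservedTokenResolver();
        r.add(16388L, "f2");
        System.out.println(r.resolve("f2"));                     // f2
        System.out.println(r.resolve("/.reserved/.inodes/16388")); // f2
        try {
            r.resolve("/.reserved/.inodes/999");
        } catch (NoSuchElementException e) {
            System.out.println("deleted: " + e.getMessage());
        }
    }
}
```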

Aggregations

IOException (java.io.IOException): 2 uses
DirectoryListingStartAfterNotFoundException (org.apache.hadoop.fs.DirectoryListingStartAfterNotFoundException): 2 uses
Configuration (org.apache.hadoop.conf.Configuration): 1 use
Path (org.apache.hadoop.fs.Path): 1 use
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 1 use
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 1 use
DirectoryListing (org.apache.hadoop.hdfs.protocol.DirectoryListing): 1 use
Test (org.junit.Test): 1 use