
Example 6 with ParentNotDirectoryException

Use of org.apache.hadoop.fs.ParentNotDirectoryException in project hadoop by apache.

From the class TestSwiftFileSystemBasicOps, method testCreateDirWithFileParent:

@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testCreateDirWithFileParent() throws Throwable {
    Path path = new Path("/test/CreateDirWithFileParent");
    Path child = new Path(path, "subdir/child");
    fs.mkdirs(path.getParent());
    try {
        // create the parent path as a regular file, so the mkdirs(child) below must fail
        writeTextFile(fs, path, "parent", true);
        try {
            fs.mkdirs(child);
        } catch (ParentNotDirectoryException expected) {
            LOG.debug("Expected Exception", expected);
        }
    } finally {
        fs.delete(path, true);
    }
}
Also used : Path(org.apache.hadoop.fs.Path) ParentNotDirectoryException(org.apache.hadoop.fs.ParentNotDirectoryException) Test(org.junit.Test)
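The pattern this Swift test exercises can be reproduced with plain java.nio (a stdlib-only sketch; the class and method names here are invented for illustration, not Hadoop APIs): creating a directory beneath an existing regular file is rejected with an IOException, the local-filesystem analogue of ParentNotDirectoryException.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirUnderFileDemo {
    // Returns true when creating a directory beneath a regular file is
    // rejected, mirroring what the Swift test above expects from Hadoop.
    static boolean mkdirUnderFileFails() throws IOException {
        Path tmp = Files.createTempDirectory("pnd-demo");
        Path fileParent = Files.createFile(tmp.resolve("parent"));
        try {
            Files.createDirectories(fileParent.resolve("subdir").resolve("child"));
            return false; // unexpected: the filesystem allowed it
        } catch (IOException expected) {
            return true;  // rejected, like ParentNotDirectoryException in HDFS
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("rejected: " + mkdirUnderFileFails());
    }
}
```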

Example 7 with ParentNotDirectoryException

Use of org.apache.hadoop.fs.ParentNotDirectoryException in project cdap by caskdata.

From the class MasterServiceMain, method createDirectory:

private void createDirectory(FileContext fileContext, String path) {
    try {
        org.apache.hadoop.fs.Path fPath = new org.apache.hadoop.fs.Path(path);
        boolean dirExists = checkDirectoryExists(fileContext, fPath);
        if (!dirExists) {
            FsPermission permission = new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL);
            // FileContext applies (permission AND (NOT umask)) as the effective permission;
            // the default umask is 022, so to get 777 we must set the umask to 000
            fileContext.setUMask(new FsPermission(FsAction.NONE, FsAction.NONE, FsAction.NONE));
            fileContext.mkdir(fPath, permission, true);
        }
    } catch (FileAlreadyExistsException e) {
        // should not happen, since we only create the dir when it does not already exist
    } catch (AccessControlException | ParentNotDirectoryException | FileNotFoundException e) {
        // just log the exception
        LOG.error("Exception while trying to create directory at {}", path, e);
    } catch (IOException e) {
        throw Throwables.propagate(e);
    }
}
Also used : Path(java.nio.file.Path) FileAlreadyExistsException(org.apache.hadoop.fs.FileAlreadyExistsException) FileNotFoundException(java.io.FileNotFoundException) AccessControlException(org.apache.hadoop.security.AccessControlException) IOException(java.io.IOException) ParentNotDirectoryException(org.apache.hadoop.fs.ParentNotDirectoryException) FsPermission(org.apache.hadoop.fs.permission.FsPermission)
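The umask comment in createDirectory boils down to one bitwise formula. A minimal sketch of that arithmetic (the class and method names are invented for illustration):

```java
public class UmaskDemo {
    // effective mode = requested & ~umask, as the comment in createDirectory notes
    static int effective(int requested, int umask) {
        return requested & ~umask;
    }

    public static void main(String[] args) {
        System.out.printf("%o%n", effective(0777, 0022)); // default umask trims group/other write
        System.out.printf("%o%n", effective(0777, 0000)); // umask 000 leaves 777 intact
    }
}
```

This is why the CDAP code clears the umask before calling mkdir: with the default 022, a requested 777 would silently become 755.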

Example 8 with ParentNotDirectoryException

Use of org.apache.hadoop.fs.ParentNotDirectoryException in project hadoop by apache.

From the class FTPFileSystem, method mkdirs:

/**
   * Convenience method, so that we don't open a new connection when using this
   * method from within another method. Otherwise every API invocation incurs
   * the overhead of opening/closing a TCP connection.
   */
private boolean mkdirs(FTPClient client, Path file, FsPermission permission) throws IOException {
    boolean created = true;
    Path workDir = new Path(client.printWorkingDirectory());
    Path absolute = makeAbsolute(workDir, file);
    String pathName = absolute.getName();
    if (!exists(client, absolute)) {
        Path parent = absolute.getParent();
        created = (parent == null || mkdirs(client, parent, FsPermission.getDirDefault()));
        if (created && parent != null) {
            // guard against parent == null (the root) to avoid an NPE here
            client.changeWorkingDirectory(parent.toUri().getPath());
            created = client.makeDirectory(pathName);
        }
    } else if (isFile(client, absolute)) {
        throw new ParentNotDirectoryException(String.format("Can't make directory for path %s since it is a file.", absolute));
    }
    return created;
}
Also used : Path(org.apache.hadoop.fs.Path) ParentNotDirectoryException(org.apache.hadoop.fs.ParentNotDirectoryException)
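The same recurse-then-create shape can be sketched against java.nio (illustrative only; RecursiveMkdirs is not part of Hadoop): ensure the parent exists first, refuse when the target already exists as a regular file, then create the leaf.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RecursiveMkdirs {
    // Mirrors FTPFileSystem.mkdirs: recurse into the parent, fail loudly if
    // the path exists as a regular file, then create the leaf directory.
    static boolean mkdirs(Path dir) throws IOException {
        if (Files.isDirectory(dir)) {
            return true;
        }
        if (Files.exists(dir)) {
            // same condition that makes the FTP version throw ParentNotDirectoryException
            throw new IOException(String.format(
                "Can't make directory for path %s since it is a file.", dir));
        }
        Path parent = dir.getParent();
        boolean created = (parent == null || mkdirs(parent));
        if (created) {
            Files.createDirectory(dir);
        }
        return created;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("mkdirs-demo");
        System.out.println(mkdirs(tmp.resolve("a").resolve("b").resolve("c")));
    }
}
```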

Example 9 with ParentNotDirectoryException

Use of org.apache.hadoop.fs.ParentNotDirectoryException in project hadoop by apache.

From the class FSDirRenameOp, method unprotectedRenameTo:

/**
   * Rename src to dst.
   * See {@link DistributedFileSystem#rename(Path, Path, Options.Rename...)}
   * for details related to rename semantics and exceptions.
   *
   * @param fsd             FSDirectory
   * @param srcIIP          resolved source path
   * @param dstIIP          resolved destination path
   * @param timestamp       modification time
   * @param collectedBlocks blocks to be removed
   * @param options         Rename options
   * @return whether a file/directory gets overwritten in the dst path
   */
static RenameResult unprotectedRenameTo(FSDirectory fsd, final INodesInPath srcIIP, final INodesInPath dstIIP, long timestamp, BlocksMapUpdateInfo collectedBlocks, Options.Rename... options) throws IOException {
    assert fsd.hasWriteLock();
    boolean overwrite = options != null && Arrays.asList(options).contains(Options.Rename.OVERWRITE);
    final String src = srcIIP.getPath();
    final String dst = dstIIP.getPath();
    final String error;
    final INode srcInode = srcIIP.getLastINode();
    validateRenameSource(fsd, srcIIP);
    // validate the destination
    if (dst.equals(src)) {
        throw new FileAlreadyExistsException("The source " + src + " and destination " + dst + " are the same");
    }
    validateDestination(src, dst, srcInode);
    if (dstIIP.length() == 1) {
        error = "rename destination cannot be the root";
        NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: " + error);
        throw new IOException(error);
    }
    BlockStoragePolicySuite bsps = fsd.getBlockStoragePolicySuite();
    fsd.ezManager.checkMoveValidity(srcIIP, dstIIP);
    final INode dstInode = dstIIP.getLastINode();
    List<INodeDirectory> snapshottableDirs = new ArrayList<>();
    if (dstInode != null) {
        // Destination exists
        validateOverwrite(src, dst, overwrite, srcInode, dstInode);
        FSDirSnapshotOp.checkSnapshot(fsd, dstIIP, snapshottableDirs);
    }
    INode dstParent = dstIIP.getINode(-2);
    if (dstParent == null) {
        error = "rename destination parent " + dst + " not found.";
        NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: " + error);
        throw new FileNotFoundException(error);
    }
    if (!dstParent.isDirectory()) {
        error = "rename destination parent " + dst + " is a file.";
        NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: " + error);
        throw new ParentNotDirectoryException(error);
    }
    // Ensure dst has quota to accommodate rename
    verifyFsLimitsForRename(fsd, srcIIP, dstIIP);
    verifyQuotaForRename(fsd, srcIIP, dstIIP);
    RenameOperation tx = new RenameOperation(fsd, srcIIP, dstIIP);
    boolean undoRemoveSrc = true;
    tx.removeSrc();
    boolean undoRemoveDst = false;
    long removedNum = 0;
    try {
        if (dstInode != null) {
            // dst exists, remove it
            removedNum = tx.removeDst();
            if (removedNum != -1) {
                undoRemoveDst = true;
            }
        }
        // add src as dst to complete rename
        INodesInPath renamedIIP = tx.addSourceToDestination();
        if (renamedIIP != null) {
            undoRemoveSrc = false;
            if (NameNode.stateChangeLog.isDebugEnabled()) {
                NameNode.stateChangeLog.debug("DIR* FSDirectory.unprotectedRenameTo: " + src + " is renamed to " + dst);
            }
            tx.updateMtimeAndLease(timestamp);
            // Collect the blocks and remove the lease for previous dst
            boolean filesDeleted = false;
            if (undoRemoveDst) {
                undoRemoveDst = false;
                if (removedNum > 0) {
                    filesDeleted = tx.cleanDst(bsps, collectedBlocks);
                }
            }
            if (snapshottableDirs.size() > 0) {
                // There are snapshottable directories (without snapshots) to be
                // deleted. Need to update the SnapshotManager.
                fsd.getFSNamesystem().removeSnapshottableDirs(snapshottableDirs);
            }
            tx.updateQuotasInSourceTree(bsps);
            return createRenameResult(fsd, renamedIIP, filesDeleted, collectedBlocks);
        }
    } finally {
        if (undoRemoveSrc) {
            tx.restoreSource();
        }
        if (undoRemoveDst) {
            // Rename failed - restore dst
            tx.restoreDst(bsps);
        }
    }
    NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: " + "failed to rename " + src + " to " + dst);
    throw new IOException("rename from " + src + " to " + dst + " failed.");
}
Also used : BlockStoragePolicySuite(org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite) FileAlreadyExistsException(org.apache.hadoop.fs.FileAlreadyExistsException) ChunkedArrayList(org.apache.hadoop.util.ChunkedArrayList) ArrayList(java.util.ArrayList) FileNotFoundException(java.io.FileNotFoundException) IOException(java.io.IOException) ParentNotDirectoryException(org.apache.hadoop.fs.ParentNotDirectoryException)
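The destination-parent checks at the heart of unprotectedRenameTo can be sketched with java.nio in place of HDFS inodes (a hedged, stdlib-only sketch; RenameParentCheck and validateDestination are invented names): a missing parent maps to FileNotFoundException, a file parent to ParentNotDirectoryException.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RenameParentCheck {
    // Returns the error message the rename would fail with, or null if the
    // destination's parent is a valid directory.
    static String validateDestination(Path dst) {
        Path parent = dst.getParent();
        if (parent == null || !Files.exists(parent)) {
            return "rename destination parent " + dst + " not found.";
        }
        if (!Files.isDirectory(parent)) {
            return "rename destination parent " + dst + " is a file.";
        }
        return null; // parent exists and is a directory: rename may proceed
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("rename-demo");
        Path file = Files.createFile(tmp.resolve("f"));
        System.out.println(validateDestination(file.resolve("child"))); // parent is a file
        System.out.println(validateDestination(tmp.resolve("ok")));     // parent is a dir
    }
}
```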

Example 10 with ParentNotDirectoryException

Use of org.apache.hadoop.fs.ParentNotDirectoryException in project hadoop by apache.

From the class TestDFSMkdirs, method testMkdir:

/**
   * Tests mkdir will not create directory when parent is missing.
   */
@Test
public void testMkdir() throws IOException {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
    DistributedFileSystem dfs = cluster.getFileSystem();
    try {
        // Create a dir in root dir, should succeed
        assertTrue(dfs.mkdir(new Path("/mkdir-" + Time.now()), FsPermission.getDefault()));
        // Create a dir when parent dir exists as a file, should fail
        IOException expectedException = null;
        String filePath = "/mkdir-file-" + Time.now();
        DFSTestUtil.writeFile(dfs, new Path(filePath), "hello world");
        try {
            dfs.mkdir(new Path(filePath + "/mkdir"), FsPermission.getDefault());
        } catch (IOException e) {
            expectedException = e;
        }
        assertTrue("mkdir() under a parent that exists as a file should throw"
            + " ParentNotDirectoryException", expectedException instanceof ParentNotDirectoryException);
        // Create a dir under a non-existent parent, should fail
        expectedException = null;
        try {
            dfs.mkdir(new Path("/non-exist/mkdir-" + Time.now()), FsPermission.getDefault());
        } catch (IOException e) {
            expectedException = e;
        }
        assertTrue("mkdir() under a non-existent parent should throw"
            + " FileNotFoundException", expectedException instanceof FileNotFoundException);
    } finally {
        dfs.close();
        cluster.shutdown();
    }
}
Also used : Path(org.apache.hadoop.fs.Path) Configuration(org.apache.hadoop.conf.Configuration) ParentNotDirectoryException(org.apache.hadoop.fs.ParentNotDirectoryException) FileNotFoundException(java.io.FileNotFoundException) IOException(java.io.IOException) Test(org.junit.Test)
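Both failure modes that testMkdir exercises have direct java.nio counterparts when using a non-recursive create, just like dfs.mkdir (a stdlib-only sketch; the class and method names are invented): a missing parent raises NoSuchFileException, and a file parent raises a different IOException subtype.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirFailureModes {
    // Non-recursive create, like dfs.mkdir: returns the exception class the
    // attempt failed with, or null if the directory was created.
    static Class<? extends IOException> mkdirFailure(Path dir) throws IOException {
        try {
            Files.createDirectory(dir);
            return null;
        } catch (IOException e) {
            return e.getClass();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("mkdir-demo");
        Files.createFile(tmp.resolve("file"));
        // parent exists as a regular file: fails with an IOException subtype
        System.out.println(mkdirFailure(tmp.resolve("file").resolve("sub")));
        // parent does not exist: fails with NoSuchFileException
        System.out.println(mkdirFailure(tmp.resolve("missing").resolve("sub")));
    }
}
```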

Aggregations

ParentNotDirectoryException (org.apache.hadoop.fs.ParentNotDirectoryException) 12
Path (org.apache.hadoop.fs.Path) 8
FileNotFoundException (java.io.FileNotFoundException) 6
IOException (java.io.IOException) 6
Test (org.junit.Test) 5
FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException) 4
AccessControlException (org.apache.hadoop.security.AccessControlException) 3
FileSystem (org.apache.hadoop.fs.FileSystem) 2
VisibleForTesting (com.google.common.annotations.VisibleForTesting) 1
Path (java.nio.file.Path) 1
ArrayList (java.util.ArrayList) 1
Configuration (org.apache.hadoop.conf.Configuration) 1
CreateFlag (org.apache.hadoop.fs.CreateFlag) 1
FileStatus (org.apache.hadoop.fs.FileStatus) 1
InvalidPathException (org.apache.hadoop.fs.InvalidPathException) 1
FsPermission (org.apache.hadoop.fs.permission.FsPermission) 1
SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException) 1
BlockStoragePolicySuite (org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite) 1
ChunkedArrayList (org.apache.hadoop.util.ChunkedArrayList) 1