Example 31 with FileAlreadyExistsException

Use of org.apache.hadoop.fs.FileAlreadyExistsException in project hadoop by apache.

From the class AbstractContractCreateTest, method testOverwriteEmptyDirectory:

@Test
public void testOverwriteEmptyDirectory() throws Throwable {
    describe("verify trying to create a file over an empty dir fails");
    Path path = path("testOverwriteEmptyDirectory");
    mkdirs(path);
    assertIsDirectory(path);
    byte[] data = dataset(256, 'a', 'z');
    try {
        writeDataset(getFileSystem(), path, data, data.length, 1024, true);
        assertIsDirectory(path);
        fail("write of file over empty dir succeeded");
    } catch (FileAlreadyExistsException expected) {
        //expected
        handleExpectedException(expected);
    } catch (FileNotFoundException e) {
        handleRelaxedException("overwriting a dir with a file ", "FileAlreadyExistsException", e);
    } catch (IOException e) {
        handleRelaxedException("overwriting a dir with a file ", "FileAlreadyExistsException", e);
    }
    assertIsDirectory(path);
}
Also used: Path (org.apache.hadoop.fs.Path), FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException), FileNotFoundException (java.io.FileNotFoundException), IOException (java.io.IOException), Test (org.junit.Test)
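The failure mode this contract test exercises can be reproduced without a Hadoop cluster: the JDK's own java.nio.file.FileAlreadyExistsException behaves the same way when you try to create a file at a path already occupied by a directory. A minimal sketch (class and method names here are illustrative, not from the Hadoop source):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OverwriteDirDemo {

    /** Returns true if creating a file over an existing directory fails as expected. */
    static boolean overwriteDirFails() throws IOException {
        // Create an empty directory, then try to create a file at the same path.
        Path dir = Files.createTempDirectory("testOverwriteEmptyDirectory");
        try {
            Files.createFile(dir);          // should not succeed
            return false;
        } catch (FileAlreadyExistsException expected) {
            // The directory must still be there afterwards, as the contract test asserts.
            return Files.isDirectory(dir);
        } finally {
            Files.deleteIfExists(dir);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(overwriteDirFails() ? "PASS" : "FAIL");
    }
}
```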

Example 32 with FileAlreadyExistsException

Use of org.apache.hadoop.fs.FileAlreadyExistsException in project hadoop by apache.

From the class HdfsAdmin, method provisionEZTrash:

private void provisionEZTrash(Path path) throws IOException {
    // make sure the path is an EZ
    EncryptionZone ez = dfs.getEZForPath(path);
    if (ez == null) {
        throw new IllegalArgumentException(path + " is not an encryption zone.");
    }
    String ezPath = ez.getPath();
    if (!path.toString().equals(ezPath)) {
        throw new IllegalArgumentException(path + " is not the root of an " + "encryption zone. Do you mean " + ez.getPath() + "?");
    }
    // check if the trash directory exists
    Path trashPath = new Path(ez.getPath(), FileSystem.TRASH_PREFIX);
    try {
        FileStatus trashFileStatus = dfs.getFileStatus(trashPath);
        String errMessage = "Will not provision new trash directory for " + "encryption zone " + ez.getPath() + ". Path already exists.";
        if (!trashFileStatus.isDirectory()) {
            errMessage += "\r\n" + "Warning: " + trashPath.toString() + " is not a directory";
        }
        if (!trashFileStatus.getPermission().equals(TRASH_PERMISSION)) {
            errMessage += "\r\n" + "Warning: the permission of " + trashPath.toString() + " is not " + TRASH_PERMISSION;
        }
        throw new FileAlreadyExistsException(errMessage);
    } catch (FileNotFoundException ignored) {
    // no trash path
    }
    // Update the permission bits
    dfs.mkdir(trashPath, TRASH_PERMISSION);
    dfs.setPermission(trashPath, TRASH_PERMISSION);
}
Also used: Path (org.apache.hadoop.fs.Path), EncryptionZone (org.apache.hadoop.hdfs.protocol.EncryptionZone), FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException), FileStatus (org.apache.hadoop.fs.FileStatus), FileNotFoundException (java.io.FileNotFoundException), HadoopIllegalArgumentException (org.apache.hadoop.HadoopIllegalArgumentException)
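provisionEZTrash refuses to proceed when the trash path already exists, raising FileAlreadyExistsException rather than silently reusing the path. The same check-then-create guard can be sketched with the JDK's file API (all names below are hypothetical, chosen to mirror the Hadoop method):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ProvisionGuard {

    /** Throws FileAlreadyExistsException if the target exists, else creates it. */
    static void provision(Path trashPath) throws IOException {
        if (Files.exists(trashPath)) {
            String errMessage = "Will not provision new trash directory at "
                    + trashPath + ". Path already exists.";
            if (!Files.isDirectory(trashPath)) {
                errMessage += " Warning: " + trashPath + " is not a directory";
            }
            throw new FileAlreadyExistsException(errMessage);
        }
        Files.createDirectory(trashPath);
    }

    public static void main(String[] args) throws IOException {
        Path trash = Files.createTempDirectory("ez").resolve(".Trash");
        provision(trash);                       // first call creates it
        try {
            provision(trash);                   // second call must fail
            System.out.println("FAIL");
        } catch (FileAlreadyExistsException e) {
            System.out.println("PASS");
        }
    }
}
```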

Example 33 with FileAlreadyExistsException

Use of org.apache.hadoop.fs.FileAlreadyExistsException in project hadoop by apache.

From the class FTPFileSystem, method create:

/**
   * A stream obtained via this call must be closed before using other APIs of
   * this class or else the invocation will block.
   */
@Override
public FSDataOutputStream create(Path file, FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, Progressable progress) throws IOException {
    final FTPClient client = connect();
    Path workDir = new Path(client.printWorkingDirectory());
    Path absolute = makeAbsolute(workDir, file);
    FileStatus status;
    try {
        status = getFileStatus(client, file);
    } catch (FileNotFoundException fnfe) {
        status = null;
    }
    if (status != null) {
        if (overwrite && !status.isDirectory()) {
            delete(client, file, false);
        } else {
            disconnect(client);
            throw new FileAlreadyExistsException("File already exists: " + file);
        }
    }
    Path parent = absolute.getParent();
    if (parent == null || !mkdirs(client, parent, FsPermission.getDirDefault())) {
        parent = (parent == null) ? new Path("/") : parent;
        disconnect(client);
        throw new IOException("create(): Mkdirs failed to create: " + parent);
    }
    client.allocate(bufferSize);
    // Change to parent directory on the server. Only then can we write to the
    // file on the server by opening up an OutputStream. As a side effect the
    // working directory on the server is changed to the parent directory of the
    // file. The FTP client connection is closed when close() is called on the
    // FSDataOutputStream.
    client.changeWorkingDirectory(parent.toUri().getPath());
    FSDataOutputStream fos = new FSDataOutputStream(client.storeFileStream(file.getName()), statistics) {

        @Override
        public void close() throws IOException {
            super.close();
            if (!client.isConnected()) {
                throw new FTPException("Client not connected");
            }
            boolean cmdCompleted = client.completePendingCommand();
            disconnect(client);
            if (!cmdCompleted) {
                throw new FTPException("Could not complete transfer, Reply Code - " + client.getReplyCode());
            }
        }
    };
    if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
        // The ftpClient is in an inconsistent state. Must close the stream
        // which in turn will logout and disconnect from FTP server
        fos.close();
        throw new IOException("Unable to create file: " + file + ", Aborting");
    }
    return fos;
}
Also used: Path (org.apache.hadoop.fs.Path), FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException), FileStatus (org.apache.hadoop.fs.FileStatus), FileNotFoundException (java.io.FileNotFoundException), IOException (java.io.IOException), FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream), FTPClient (org.apache.commons.net.ftp.FTPClient)
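The anonymous FSDataOutputStream subclass above defers the FTP "complete pending command" check to close(), which is why the javadoc warns that the stream must be closed before other APIs are used. That close-time-completion decorator can be sketched with a plain FilterOutputStream, no FTP server required (the class and supplier here are illustrative stand-ins, not Hadoop APIs):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.function.BooleanSupplier;

/** Sketch of the close-time completion check used by FTPFileSystem.create(). */
public class CompletingOutputStream extends FilterOutputStream {

    private final BooleanSupplier completePendingCommand;

    public CompletingOutputStream(OutputStream out, BooleanSupplier completePendingCommand) {
        super(out);
        this.completePendingCommand = completePendingCommand;
    }

    @Override
    public void close() throws IOException {
        super.close();
        // Mirrors client.completePendingCommand(): the transfer is only
        // finished once the server acknowledges it after the stream closes.
        if (!completePendingCommand.getAsBoolean()) {
            throw new IOException("Could not complete transfer");
        }
    }

    /** Helper: write data through the decorator and return what reached the sink. */
    static String writeAndClose(String data) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (OutputStream out = new CompletingOutputStream(sink, () -> true)) {
            out.write(data.getBytes());
        }
        return sink.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeAndClose("hello"));
    }
}
```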

Example 34 with FileAlreadyExistsException

Use of org.apache.hadoop.fs.FileAlreadyExistsException in project hadoop by apache.

From the class FTPFileSystem, method rename:

/**
   * Convenience method, so that we don't open a new connection when using this
   * method from within another method. Otherwise every API invocation incurs
   * the overhead of opening/closing a TCP connection.
   * 
   * @param client
   * @param src
   * @param dst
   * @return
   * @throws IOException
   */
@SuppressWarnings("deprecation")
private boolean rename(FTPClient client, Path src, Path dst) throws IOException {
    Path workDir = new Path(client.printWorkingDirectory());
    Path absoluteSrc = makeAbsolute(workDir, src);
    Path absoluteDst = makeAbsolute(workDir, dst);
    if (!exists(client, absoluteSrc)) {
        throw new FileNotFoundException("Source path " + src + " does not exist");
    }
    if (isDirectory(absoluteDst)) {
        // destination is a directory: rename goes underneath it with the
        // source name
        absoluteDst = new Path(absoluteDst, absoluteSrc.getName());
    }
    if (exists(client, absoluteDst)) {
        throw new FileAlreadyExistsException("Destination path " + dst + " already exists");
    }
    String parentSrc = absoluteSrc.getParent().toUri().toString();
    String parentDst = absoluteDst.getParent().toUri().toString();
    if (isParentOf(absoluteSrc, absoluteDst)) {
        throw new IOException("Cannot rename " + absoluteSrc + " under itself" + " : " + absoluteDst);
    }
    if (!parentSrc.equals(parentDst)) {
        throw new IOException("Cannot rename source: " + absoluteSrc + " to " + absoluteDst + " -" + E_SAME_DIRECTORY_ONLY);
    }
    String from = absoluteSrc.getName();
    String to = absoluteDst.getName();
    client.changeWorkingDirectory(parentSrc);
    boolean renamed = client.rename(from, to);
    return renamed;
}
Also used: Path (org.apache.hadoop.fs.Path), FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException), FileNotFoundException (java.io.FileNotFoundException), IOException (java.io.IOException)
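Two preconditions in the rename above are worth noting: the destination must not be nested under the source, and both paths must share the same parent directory (the FTP RNFR/RNTO sequence here only renames within a directory). The same guards can be sketched with java.nio.file.Path (class and method names are illustrative):

```java
import java.nio.file.Path;

public class RenameGuard {

    /** True when src and dst live in the same directory, mirroring the FTP rename precondition. */
    static boolean sameParent(Path src, Path dst) {
        Path parentSrc = src.toAbsolutePath().getParent();
        Path parentDst = dst.toAbsolutePath().getParent();
        return parentSrc != null && parentSrc.equals(parentDst);
    }

    /** True when ancestor is a strict path prefix of child (the "rename under itself" check). */
    static boolean isParentOf(Path ancestor, Path child) {
        Path a = ancestor.toAbsolutePath();
        Path c = child.toAbsolutePath();
        return c.startsWith(a) && !c.equals(a);
    }

    public static void main(String[] args) {
        System.out.println(sameParent(Path.of("/a/b"), Path.of("/a/c")));   // same dir: ok
        System.out.println(sameParent(Path.of("/a/b"), Path.of("/x/c")));   // cross-dir: rejected
        System.out.println(isParentOf(Path.of("/a"), Path.of("/a/b")));     // nested: rejected
    }
}
```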

Example 35 with FileAlreadyExistsException

Use of org.apache.hadoop.fs.FileAlreadyExistsException in project hadoop by apache.

From the class HistoryFileManager, method mkdir:

private void mkdir(FileContext fc, Path path, FsPermission fsp) throws IOException {
    if (!fc.util().exists(path)) {
        try {
            fc.mkdir(path, fsp, true);
            FileStatus fsStatus = fc.getFileStatus(path);
            LOG.info("Perms after creating " + fsStatus.getPermission().toShort() + ", Expected: " + fsp.toShort());
            if (fsStatus.getPermission().toShort() != fsp.toShort()) {
                LOG.info("Explicitly setting permissions to : " + fsp.toShort() + ", " + fsp);
                fc.setPermission(path, fsp);
            }
        } catch (FileAlreadyExistsException e) {
            LOG.info("Directory: [" + path + "] already exists.");
        }
    }
}
Also used: FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException), FileStatus (org.apache.hadoop.fs.FileStatus)
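HistoryFileManager treats FileAlreadyExistsException from mkdir as benign: if another process created the directory first, the operation still succeeded from the caller's point of view. That idempotent create pattern looks like this with the JDK (names here are hypothetical):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class IdempotentMkdir {

    /** Creates the directory, treating "already exists" as success. */
    static boolean mkdirIfMissing(Path path) throws IOException {
        try {
            Files.createDirectory(path);
            return true;                       // we created it
        } catch (FileAlreadyExistsException e) {
            // Someone else won the race; that is fine as long as it is a directory.
            return Files.isDirectory(path);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("history").resolve("done");
        System.out.println(mkdirIfMissing(dir));  // created
        System.out.println(mkdirIfMissing(dir));  // already existed, still success
    }
}
```

Catching the exception, rather than calling exists() first, avoids the time-of-check/time-of-use race that the Hadoop code guards against.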

Aggregations

FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException): 48 uses
Path (org.apache.hadoop.fs.Path): 32 uses
IOException (java.io.IOException): 22 uses
FileNotFoundException (java.io.FileNotFoundException): 17 uses
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream): 14 uses
FileStatus (org.apache.hadoop.fs.FileStatus): 14 uses
Test (org.junit.Test): 13 uses
FileSystem (org.apache.hadoop.fs.FileSystem): 7 uses
ParentNotDirectoryException (org.apache.hadoop.fs.ParentNotDirectoryException): 4 uses
RemoteException (org.apache.hadoop.ipc.RemoteException): 4 uses
File (java.io.File): 3 uses
ArrayList (java.util.ArrayList): 3 uses
Cleanup (lombok.Cleanup): 3 uses
lombok.val (lombok.val): 3 uses
FsPermission (org.apache.hadoop.fs.permission.FsPermission): 3 uses
AlreadyBeingCreatedException (org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): 3 uses
DataOutputStream (java.io.DataOutputStream): 2 uses
InterruptedIOException (java.io.InterruptedIOException): 2 uses
HashMap (java.util.HashMap): 2 uses
Configuration (org.apache.hadoop.conf.Configuration): 2 uses