
Example 21 with AccessControlException

Use of org.apache.hadoop.security.AccessControlException in project hadoop by apache.

The class FSNamesystem, method unsetStoragePolicy.

/**
   * Unset the storage policy set on a given file or directory.
   *
   * @param src file/directory path
   */
void unsetStoragePolicy(String src) throws IOException {
    final String operationName = "unsetStoragePolicy";
    HdfsFileStatus auditStat;
    checkOperation(OperationCategory.WRITE);
    writeLock();
    try {
        checkOperation(OperationCategory.WRITE);
        checkNameNodeSafeMode("Cannot unset storage policy for " + src);
        auditStat = FSDirAttrOp.unsetStoragePolicy(dir, blockManager, src);
    } catch (AccessControlException e) {
        logAuditEvent(false, operationName, src);
        throw e;
    } finally {
        writeUnlock(operationName);
    }
    getEditLog().logSync();
    logAuditEvent(true, operationName, src, null, auditStat);
}
Also used: HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus), AccessControlException (org.apache.hadoop.security.AccessControlException), SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException)
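The example above follows a pattern repeated across these FSNamesystem methods: take the write lock, re-check the operation, perform the mutation, audit-log a failure only when an AccessControlException escapes, and audit success after the lock is released. Below is a minimal, self-contained sketch of that shape; class and method names (AuditSketch, runOperation, AccessDeniedException) are illustrative stand-ins, not Hadoop APIs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class AuditSketch {
    // Stand-in for org.apache.hadoop.security.AccessControlException.
    public static class AccessDeniedException extends Exception {
        public AccessDeniedException(String msg) { super(msg); }
    }

    public final ReentrantLock writeLock = new ReentrantLock();
    public final List<String> auditLog = new ArrayList<>();

    // Mirrors the shape of unsetStoragePolicy/removeAclEntries/removeXAttr:
    // audit the failure in the catch block and rethrow; audit success only
    // after the lock has been released in finally.
    public void runOperation(String operationName, String src, boolean allowed)
            throws AccessDeniedException {
        writeLock.lock();
        try {
            if (!allowed) {
                throw new AccessDeniedException("Permission denied: " + src);
            }
            // ... the actual namespace mutation would happen here ...
        } catch (AccessDeniedException e) {
            auditLog.add("allowed=false op=" + operationName + " src=" + src);
            throw e;
        } finally {
            writeLock.unlock();
        }
        auditLog.add("allowed=true op=" + operationName + " src=" + src);
    }

    public static void main(String[] args) {
        AuditSketch fs = new AuditSketch();
        try {
            fs.runOperation("unsetStoragePolicy", "/user/alice/f", false);
        } catch (AccessDeniedException expected) {
            // denied operations are rethrown to the caller after being audited
        }
        // prints: allowed=false op=unsetStoragePolicy src=/user/alice/f
        System.out.println(fs.auditLog.get(0));
    }
}
```

Note that the success audit sits after the try/finally rather than inside it, so it runs outside the lock; this matches the real methods, where logAuditEvent(true, ...) follows getEditLog().logSync().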

Example 22 with AccessControlException

Use of org.apache.hadoop.security.AccessControlException in project hadoop by apache.

The class FSNamesystem, method removeAclEntries.

void removeAclEntries(final String src, List<AclEntry> aclSpec) throws IOException {
    final String operationName = "removeAclEntries";
    checkOperation(OperationCategory.WRITE);
    HdfsFileStatus auditStat = null;
    writeLock();
    try {
        checkOperation(OperationCategory.WRITE);
        checkNameNodeSafeMode("Cannot remove ACL entries on " + src);
        auditStat = FSDirAclOp.removeAclEntries(dir, src, aclSpec);
    } catch (AccessControlException e) {
        logAuditEvent(false, operationName, src);
        throw e;
    } finally {
        writeUnlock(operationName);
    }
    getEditLog().logSync();
    logAuditEvent(true, operationName, src, null, auditStat);
}
Also used: HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus), AccessControlException (org.apache.hadoop.security.AccessControlException), SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException)

Example 23 with AccessControlException

Use of org.apache.hadoop.security.AccessControlException in project hadoop by apache.

The class FSNamesystem, method unsetErasureCodingPolicy.

/**
   * Unset an erasure coding policy from the given path.
   * @param srcArg  The path of the target directory.
   * @throws AccessControlException  if the caller is not the superuser.
   * @throws UnresolvedLinkException if the path can't be resolved.
   * @throws SafeModeException       if the Namenode is in safe mode.
   */
void unsetErasureCodingPolicy(final String srcArg, final boolean logRetryCache) throws IOException, UnresolvedLinkException, SafeModeException, AccessControlException {
    final String operationName = "unsetErasureCodingPolicy";
    checkOperation(OperationCategory.WRITE);
    HdfsFileStatus resultingStat = null;
    final FSPermissionChecker pc = getPermissionChecker();
    boolean success = false;
    writeLock();
    try {
        checkOperation(OperationCategory.WRITE);
        checkNameNodeSafeMode("Cannot unset erasure coding policy on " + srcArg);
        resultingStat = FSDirErasureCodingOp.unsetErasureCodingPolicy(this, srcArg, pc, logRetryCache);
        success = true;
    } catch (AccessControlException ace) {
        logAuditEvent(success, operationName, srcArg, null, resultingStat);
        throw ace;
    } finally {
        writeUnlock(operationName);
        if (success) {
            getEditLog().logSync();
        }
    }
    logAuditEvent(success, operationName, srcArg, null, resultingStat);
}
Also used: HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus), AccessControlException (org.apache.hadoop.security.AccessControlException), SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException)
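unsetErasureCodingPolicy differs from the previous examples in one detail: a success flag gates the edit-log sync inside finally, so a denied or failed call never syncs the edit log. A minimal sketch of just that control flow, with SecurityException standing in for AccessControlException and all names illustrative rather than Hadoop APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class SuccessFlagSketch {
    public final List<String> events = new ArrayList<>();

    // Mirrors the unsetErasureCodingPolicy shape: set `success` only after
    // the mutation completes, sync the edit log in finally only on success,
    // and record the flag's value in the audit entry on both paths.
    public void run(String op, String src, boolean allowed) {
        boolean success = false;
        try {
            if (!allowed) {
                throw new SecurityException("Permission denied: " + src);
            }
            // ... the actual policy change would happen here ...
            success = true;
        } catch (SecurityException e) {
            events.add("audit success=false op=" + op);
            throw e;
        } finally {
            if (success) {
                events.add("logSync"); // edits are synced only on success
            }
        }
        events.add("audit success=true op=" + op);
    }
}
```

Because finally runs before the statement that follows the try block, the sync is recorded ahead of the success audit, matching the ordering in the real method.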

Example 24 with AccessControlException

Use of org.apache.hadoop.security.AccessControlException in project hadoop by apache.

The class FSNamesystem, method startFile.

/**
   * Create a new file entry in the namespace.
   * 
   * For a description of parameters and exceptions thrown see
   * {@link ClientProtocol#create}, except that it returns a valid file status
   * upon success
   */
HdfsFileStatus startFile(String src, PermissionStatus permissions, String holder, String clientMachine, EnumSet<CreateFlag> flag, boolean createParent, short replication, long blockSize, CryptoProtocolVersion[] supportedVersions, boolean logRetryCache) throws IOException {
    HdfsFileStatus status;
    try {
        status = startFileInt(src, permissions, holder, clientMachine, flag, createParent, replication, blockSize, supportedVersions, logRetryCache);
    } catch (AccessControlException e) {
        logAuditEvent(false, "create", src);
        throw e;
    }
    logAuditEvent(true, "create", src, null, status);
    return status;
}
Also used: HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus), AccessControlException (org.apache.hadoop.security.AccessControlException), SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException)
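startFile shows a third variation: the public wrapper owns the audit logging while a private helper (startFileInt) does the actual work, so every exit path is audited exactly once regardless of how the helper fails. A self-contained sketch of that split, with SecurityException standing in for AccessControlException and all names (CreateWrapperSketch, startFileInt) illustrative, not Hadoop APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class CreateWrapperSketch {
    public final List<String> audit = new ArrayList<>();

    // Stand-in for the real startFileInt, which takes the lock and mutates
    // the namespace. Here it just succeeds or throws.
    private String startFileInt(String src, boolean allowed) {
        if (!allowed) {
            throw new SecurityException("Permission denied: " + src);
        }
        return "status:" + src; // stand-in for the returned HdfsFileStatus
    }

    // The wrapper audits the denial and rethrows, or audits success with the
    // resulting status; no locking happens at this level.
    public String startFile(String src, boolean allowed) {
        String status;
        try {
            status = startFileInt(src, allowed);
        } catch (SecurityException e) {
            audit.add("allowed=false cmd=create src=" + src);
            throw e;
        }
        audit.add("allowed=true cmd=create src=" + src);
        return status;
    }
}
```

Keeping the audit calls out of the helper also means internal callers of the helper (retries, recovery paths) do not generate duplicate audit entries.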

Example 25 with AccessControlException

Use of org.apache.hadoop.security.AccessControlException in project hadoop by apache.

The class FSNamesystem, method removeXAttr.

void removeXAttr(String src, XAttr xAttr, boolean logRetryCache) throws IOException {
    final String operationName = "removeXAttr";
    HdfsFileStatus auditStat = null;
    writeLock();
    try {
        checkOperation(OperationCategory.WRITE);
        checkNameNodeSafeMode("Cannot remove XAttr entry on " + src);
        auditStat = FSDirXAttrOp.removeXAttr(dir, src, xAttr, logRetryCache);
    } catch (AccessControlException e) {
        logAuditEvent(false, operationName, src);
        throw e;
    } finally {
        writeUnlock(operationName);
    }
    getEditLog().logSync();
    logAuditEvent(true, operationName, src, null, auditStat);
}
Also used: HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus), AccessControlException (org.apache.hadoop.security.AccessControlException), SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException)

Aggregations

- AccessControlException (org.apache.hadoop.security.AccessControlException): 128
- Test (org.junit.Test): 59
- Path (org.apache.hadoop.fs.Path): 53
- IOException (java.io.IOException): 52
- SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException): 35
- FsPermission (org.apache.hadoop.fs.permission.FsPermission): 32
- UserGroupInformation (org.apache.hadoop.security.UserGroupInformation): 22
- HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus): 21
- FileSystem (org.apache.hadoop.fs.FileSystem): 19
- DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 14
- Configuration (org.apache.hadoop.conf.Configuration): 11
- FileNotFoundException (java.io.FileNotFoundException): 10
- CachePoolInfo (org.apache.hadoop.hdfs.protocol.CachePoolInfo): 8
- PrivilegedExceptionAction (java.security.PrivilegedExceptionAction): 7
- FileStatus (org.apache.hadoop.fs.FileStatus): 6
- CacheDirectiveInfo (org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo): 6
- Text (org.apache.hadoop.io.Text): 5
- InvalidToken (org.apache.hadoop.security.token.SecretManager.InvalidToken): 5
- YarnException (org.apache.hadoop.yarn.exceptions.YarnException): 5
- ArrayList (java.util.ArrayList): 4