Example 36 with FsPermission

Use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

The class AclStorage, method readINodeLogicalAcl.

/**
   * Reads the existing ACL of an inode.  This method always returns the full
   * logical ACL of the inode after reading relevant data from the inode's
   * {@link FsPermission} and {@link AclFeature}.  Note that every inode
   * logically has an ACL, even if no ACL has been set explicitly.  If the inode
   * does not have an extended ACL, then the result is a minimal ACL consisting of
   * exactly 3 entries that correspond to the owner, group and other permissions.
   * This method always reads the inode's current state and does not support
   * querying by snapshot ID.  This is because the method is intended to support
   * ACL modification APIs, which always apply a delta on top of current state.
   *
   * @param inode INode to read
   * @return List<AclEntry> containing all logical inode ACL entries
   */
public static List<AclEntry> readINodeLogicalAcl(INode inode) {
    FsPermission perm = inode.getFsPermission();
    AclFeature f = inode.getAclFeature();
    if (f == null) {
        return AclUtil.getMinimalAcl(perm);
    }
    final List<AclEntry> existingAcl;
    // Split ACL entries stored in the feature into access vs. default.
    List<AclEntry> featureEntries = getEntriesFromAclFeature(f);
    ScopedAclEntries scoped = new ScopedAclEntries(featureEntries);
    List<AclEntry> accessEntries = scoped.getAccessEntries();
    List<AclEntry> defaultEntries = scoped.getDefaultEntries();
    // Pre-allocate list size for the explicit entries stored in the feature
    // plus the 3 implicit entries (owner, group and other) from the permission
    // bits.
    existingAcl = Lists.newArrayListWithCapacity(featureEntries.size() + 3);
    if (!accessEntries.isEmpty()) {
        // Add owner entry implied from user permission bits.
        existingAcl.add(new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER)
                .setPermission(perm.getUserAction())
                .build());
        // Next add all named user and group entries taken from the feature.
        existingAcl.addAll(accessEntries);
        // Add mask entry implied from group permission bits.
        existingAcl.add(new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.MASK)
                .setPermission(perm.getGroupAction())
                .build());
        // Add other entry implied from other permission bits.
        existingAcl.add(new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.OTHER)
                .setPermission(perm.getOtherAction())
                .build());
    } else {
        // It's possible that there is a default ACL but no access ACL. In this
        // case, add the minimal access ACL implied by the permission bits.
        existingAcl.addAll(AclUtil.getMinimalAcl(perm));
    }
    // Add all default entries after the access entries.
    existingAcl.addAll(defaultEntries);
    // The above adds entries in the correct order, so no need to sort here.
    return existingAcl;
}
Also used : ScopedAclEntries(org.apache.hadoop.fs.permission.ScopedAclEntries) AclEntry(org.apache.hadoop.fs.permission.AclEntry) FsPermission(org.apache.hadoop.fs.permission.FsPermission)
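
To make the mapping from permission bits to the three implied entries concrete, the following standalone sketch builds the same minimal ACL that readINodeLogicalAcl returns for an inode without an AclFeature. It uses only the public AclEntry.Builder and FsPermission APIs seen above; the class name and main method are illustrative, not part of AclStorage.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsPermission;

public class MinimalAclExample {
    public static void main(String[] args) {
        // rwxr-x---: owner gets ALL, group gets READ_EXECUTE, other gets NONE.
        FsPermission perm = new FsPermission((short) 0750);
        List<AclEntry> minimal = new ArrayList<>(3);
        // Owner entry implied from the user permission bits.
        minimal.add(new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER)
                .setPermission(perm.getUserAction())
                .build());
        // Group entry implied from the group permission bits; a minimal ACL
        // has no mask entry, unlike the extended case above.
        minimal.add(new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.GROUP)
                .setPermission(perm.getGroupAction())
                .build());
        // Other entry implied from the other permission bits.
        minimal.add(new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.OTHER)
                .setPermission(perm.getOtherAction())
                .build());
        // Prints entries such as user::rwx, group::r-x, other::---
        minimal.forEach(System.out::println);
    }
}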

Example 37 with FsPermission

Use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

The class CachePool, method createFromInfoAndDefaults.

/**
   * Create a new cache pool based on a CachePoolInfo object and the defaults.
   * Information that was not supplied is filled in from the defaults.
   */
static CachePool createFromInfoAndDefaults(CachePoolInfo info) throws IOException {
    UserGroupInformation ugi = null;
    String ownerName = info.getOwnerName();
    if (ownerName == null) {
        ugi = NameNode.getRemoteUser();
        ownerName = ugi.getShortUserName();
    }
    String groupName = info.getGroupName();
    if (groupName == null) {
        if (ugi == null) {
            ugi = NameNode.getRemoteUser();
        }
        groupName = ugi.getPrimaryGroupName();
    }
    FsPermission mode = (info.getMode() == null) ? FsPermission.getCachePoolDefault() : info.getMode();
    long limit = info.getLimit() == null ? CachePoolInfo.DEFAULT_LIMIT : info.getLimit();
    short defaultReplication = info.getDefaultReplication() == null ? CachePoolInfo.DEFAULT_REPLICATION_NUM : info.getDefaultReplication();
    long maxRelativeExpiry = info.getMaxRelativeExpiryMs() == null ? CachePoolInfo.DEFAULT_MAX_RELATIVE_EXPIRY : info.getMaxRelativeExpiryMs();
    return new CachePool(info.getPoolName(), ownerName, groupName, mode, limit, defaultReplication, maxRelativeExpiry);
}
Also used : FsPermission(org.apache.hadoop.fs.permission.FsPermission) UserGroupInformation(org.apache.hadoop.security.UserGroupInformation)
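
In practice a client does not call createFromInfoAndDefaults directly; it submits a partially filled CachePoolInfo and the NameNode fills in the rest. The sketch below illustrates that calling pattern under stated assumptions: the NameNode URI and pool name are placeholders, and a running HDFS with caching enabled is required.

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

public class AddCachePoolExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Placeholder NameNode address for the sketch.
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                URI.create("hdfs://localhost:8020"), conf);
        // Only the pool name and mode are supplied. Owner, group, limit,
        // default replication and relative expiry are left unset, so the
        // NameNode derives them via createFromInfoAndDefaults.
        CachePoolInfo info = new CachePoolInfo("reporting-pool");
        info.setMode(new FsPermission((short) 0770));
        dfs.addCachePool(info);
    }
}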

Example 38 with FsPermission

Use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

The class FSDirStatAndListingOp, method getPermissionForFileStatus.

/**
   * Returns an inode's FsPermission for use in an outbound FileStatus.  If the
   * inode has an ACL or is for an encrypted file/dir, then this method will
   * return an FsPermissionExtension.
   *
   * @param node INode to check
   * @param isEncrypted boolean true if the file/dir is encrypted
   * @return FsPermission from inode, with ACL bit on if the inode has an ACL
   * and encrypted bit on if it represents an encrypted file/dir.
   */
private static FsPermission getPermissionForFileStatus(INodeAttributes node, boolean isEncrypted) {
    FsPermission perm = node.getFsPermission();
    boolean hasAcl = node.getAclFeature() != null;
    if (hasAcl || isEncrypted) {
        perm = new FsPermissionExtension(perm, hasAcl, isEncrypted);
    }
    return perm;
}
Also used : FsPermissionExtension(org.apache.hadoop.hdfs.protocol.FsPermissionExtension) FsPermission(org.apache.hadoop.fs.permission.FsPermission)
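
On the client side, the effect of FsPermissionExtension shows up through the extra bits carried by the permission in the returned FileStatus. A minimal sketch, assuming a configured file system and an existing path (the path below is a placeholder):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class AclBitCheckExample {
    public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path("/user/example/data"));
        FsPermission perm = status.getPermission();
        // A plain FsPermission reports false for both flags; an
        // FsPermissionExtension reports whether the inode carries an
        // extended ACL or lives in an encryption zone.
        System.out.println("ACL bit: " + perm.getAclBit());
        System.out.println("Encrypted bit: " + perm.getEncryptedBit());
    }
}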

Example 39 with FsPermission

Use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

The class TestViewFsWithAcls, method setUp.

@Before
public void setUp() throws Exception {
    fcTarget = fc;
    fcTarget2 = fc2;
    targetTestRoot = fileContextTestHelper.getAbsoluteTestRootPath(fc);
    targetTestRoot2 = fileContextTestHelper.getAbsoluteTestRootPath(fc2);
    fcTarget.delete(targetTestRoot, true);
    fcTarget2.delete(targetTestRoot2, true);
    fcTarget.mkdir(targetTestRoot, new FsPermission((short) 0750), true);
    fcTarget2.mkdir(targetTestRoot2, new FsPermission((short) 0750), true);
    fsViewConf = ViewFileSystemTestSetup.createConfig();
    setupMountPoints();
    fcView = FileContext.getFileContext(FsConstants.VIEWFS_URI, fsViewConf);
}
Also used : FsPermission(org.apache.hadoop.fs.permission.FsPermission) Before(org.junit.Before)

Example 40 with FsPermission

Use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

The class TestViewFsWithXAttrs, method setUp.

@Before
public void setUp() throws Exception {
    fcTarget = fc;
    fcTarget2 = fc2;
    targetTestRoot = fileContextTestHelper.getAbsoluteTestRootPath(fc);
    targetTestRoot2 = fileContextTestHelper.getAbsoluteTestRootPath(fc2);
    fcTarget.delete(targetTestRoot, true);
    fcTarget2.delete(targetTestRoot2, true);
    fcTarget.mkdir(targetTestRoot, new FsPermission((short) 0750), true);
    fcTarget2.mkdir(targetTestRoot2, new FsPermission((short) 0750), true);
    fsViewConf = ViewFileSystemTestSetup.createConfig();
    setupMountPoints();
    fcView = FileContext.getFileContext(FsConstants.VIEWFS_URI, fsViewConf);
}
Also used : FsPermission(org.apache.hadoop.fs.permission.FsPermission) Before(org.junit.Before)
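
A note on the (short) 0750 literal used in both setUp methods above: it is an octal value, equivalent to constructing the permission from FsAction constants. A small sketch of the equivalence (the class name is illustrative):

import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionLiteralExample {
    public static void main(String[] args) {
        // Octal 0750: rwx for the owner, r-x for the group, nothing for others.
        FsPermission fromShort = new FsPermission((short) 0750);
        FsPermission fromActions =
                new FsPermission(FsAction.ALL, FsAction.READ_EXECUTE, FsAction.NONE);
        System.out.println(fromShort);                      // rwxr-x---
        System.out.println(fromShort.equals(fromActions));  // true
    }
}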

Aggregations

FsPermission (org.apache.hadoop.fs.permission.FsPermission) 427
Path (org.apache.hadoop.fs.Path) 267
Test (org.junit.Test) 180
IOException (java.io.IOException) 120
FileSystem (org.apache.hadoop.fs.FileSystem) 93
Configuration (org.apache.hadoop.conf.Configuration) 89
FileStatus (org.apache.hadoop.fs.FileStatus) 87
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream) 52
AccessControlException (org.apache.hadoop.security.AccessControlException) 43
UserGroupInformation (org.apache.hadoop.security.UserGroupInformation) 36
FileNotFoundException (java.io.FileNotFoundException) 33
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster) 29
File (java.io.File) 26
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem) 26
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration) 26
AclEntry (org.apache.hadoop.fs.permission.AclEntry) 25
ArrayList (java.util.ArrayList) 22
HashMap (java.util.HashMap) 19
YarnConfiguration (org.apache.hadoop.yarn.conf.YarnConfiguration) 16
URI (java.net.URI) 15