
Example 1 with AccessControlEnforcer

Use of org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer in project hadoop by apache.

From class FSPermissionChecker, method checkPermission:

/**
   * Check whether the current user has permission to access the path.
   * Traverse is always checked.
   *
   * Parent path means the parent directory for the path.
   * Ancestor path means the last (the closest) existing ancestor directory
   * of the path.
   * Note that if the parent path exists,
   * then the parent path and the ancestor path are the same.
   *
   * For example, suppose the path is "/foo/bar/baz".
   * Whether baz is a file or a directory,
   * the parent path is "/foo/bar".
   * If bar exists, then the ancestor path is also "/foo/bar".
   * If bar does not exist and foo exists,
   * then the ancestor path is "/foo".
   * Further, if both foo and bar do not exist,
   * then the ancestor path is "/".
   *
   * @param doCheckOwner Require user to be the owner of the path?
   * @param ancestorAccess The access required by the ancestor of the path.
   * @param parentAccess The access required by the parent of the path.
   * @param access The access required by the path.
   * @param subAccess If path is a directory,
   * it is the access required of the path and all the sub-directories.
   * If path is not a directory, there is no effect.
   * @param ignoreEmptyDir Ignore permission checking for empty directory?
   * @throws AccessControlException
   * 
   * Guarded by {@link FSNamesystem#readLock()}
   * Caller of this method must hold that lock.
   */
void checkPermission(INodesInPath inodesInPath, boolean doCheckOwner,
        FsAction ancestorAccess, FsAction parentAccess, FsAction access,
        FsAction subAccess, boolean ignoreEmptyDir) throws AccessControlException {
    if (LOG.isDebugEnabled()) {
        LOG.debug("ACCESS CHECK: " + this + ", doCheckOwner=" + doCheckOwner + ", ancestorAccess=" + ancestorAccess + ", parentAccess=" + parentAccess + ", access=" + access + ", subAccess=" + subAccess + ", ignoreEmptyDir=" + ignoreEmptyDir);
    }
    // check if (parentAccess != null) && file exists, then check sb
    // If resolveLink, the check is performed on the link target.
    final int snapshotId = inodesInPath.getPathSnapshotId();
    final INode[] inodes = inodesInPath.getINodesArray();
    final INodeAttributes[] inodeAttrs = new INodeAttributes[inodes.length];
    final byte[][] components = inodesInPath.getPathComponents();
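    // Resolve the attributes of every existing inode on the path; a configured
    // INodeAttributeProvider may substitute its own ownership, permissions or ACLs here.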
    for (int i = 0; i < inodes.length && inodes[i] != null; i++) {
        inodeAttrs[i] = getINodeAttrs(components, i, inodes[i], snapshotId);
    }
    String path = inodesInPath.getPath();
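    // inodes.length - 2 is the parent's slot in the path array; the enforcer
    // walks this index back past null entries to the closest existing ancestor.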
    int ancestorIndex = inodes.length - 2;
    AccessControlEnforcer enforcer = getAccessControlEnforcer();
    enforcer.checkPermission(fsOwner, supergroup, callerUgi, inodeAttrs, inodes,
        components, snapshotId, path, ancestorIndex, doCheckOwner, ancestorAccess,
        parentAccess, access, subAccess, ignoreEmptyDir);
}
Also used: AccessControlEnforcer (org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer)
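
Here getAccessControlEnforcer() returns the NameNode's built-in checker unless a configured INodeAttributeProvider supplies an external enforcer. The following is a minimal, hypothetical sketch (the class name AuditingAttributeProvider and the logging are illustrative, not code from either project) of a provider that wraps the default enforcer to log each decision before delegating, using the same 15-argument contract seen above:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider;
import org.apache.hadoop.hdfs.server.namenode.INodeAttributes;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

public class AuditingAttributeProvider extends INodeAttributeProvider {

    private static final Log LOG = LogFactory.getLog(AuditingAttributeProvider.class);

    @Override
    public void start() {
        // nothing to initialize in this sketch
    }

    @Override
    public void stop() {
        // nothing to release
    }

    @Override
    public INodeAttributes getAttributes(String[] pathElements, INodeAttributes inode) {
        // leave ownership, permissions, ACLs and XAttrs untouched
        return inode;
    }

    @Override
    public AccessControlEnforcer getExternalAccessControlEnforcer(final AccessControlEnforcer defaultEnforcer) {
        // wrap the default enforcer rather than replacing it
        return new AccessControlEnforcer() {
            @Override
            public void checkPermission(String fsOwner, String supergroup, UserGroupInformation callerUgi,
                    INodeAttributes[] inodeAttrs, INode[] inodes, byte[][] components, int snapshotId,
                    String path, int ancestorIndex, boolean doCheckOwner, FsAction ancestorAccess,
                    FsAction parentAccess, FsAction access, FsAction subAccess, boolean ignoreEmptyDir)
                    throws AccessControlException {
                LOG.info("authz request: user=" + callerUgi.getShortUserName()
                    + ", path=" + path + ", access=" + access);
                // fall through to the NameNode's standard POSIX/ACL check
                defaultEnforcer.checkPermission(fsOwner, supergroup, callerUgi, inodeAttrs, inodes,
                    components, snapshotId, path, ancestorIndex, doCheckOwner, ancestorAccess,
                    parentAccess, access, subAccess, ignoreEmptyDir);
            }
        };
    }
}

Because the wrapper delegates unconditionally, authorization semantics are unchanged; a real provider such as Ranger consults its own policy engine first and typically falls back to the default enforcer only when no policy applies.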

Example 2 with AccessControlEnforcer

Use of org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer in project ranger by apache.

From class RangerHdfsAuthorizerTest, method setup:

@BeforeClass
public static void setup() {
    try {
        File file = File.createTempFile("hdfs-version-site", ".xml");
        file.deleteOnExit();
        try (final FileOutputStream outStream = new FileOutputStream(file);
            final OutputStreamWriter writer = new OutputStreamWriter(outStream, StandardCharsets.UTF_8)) {
            writer.write("<configuration>\n" + "        <property>\n" + "                <name>hdfs.version</name>\n" + "                <value>hdfs_version_3.0</value>\n" + "        </property>\n" + "</configuration>\n");
        }
        RangerConfiguration config = RangerConfiguration.getInstance();
        config.addResource(new org.apache.hadoop.fs.Path(file.toURI()));
    } catch (Exception exception) {
        Assert.fail("Cannot create hdfs-version-site file:[" + exception.getMessage() + "]");
    }
    authorizer = new RangerHdfsAuthorizer();
    authorizer.start();
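    // The mocked enforcer plays the role of the NameNode's built-in permission
    // checker; Ranger's enforcer wraps it and can fall back to it.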
    AccessControlEnforcer accessControlEnforcer = Mockito.mock(AccessControlEnforcer.class);
    rangerControlEnforcer = authorizer.getExternalAccessControlEnforcer(accessControlEnforcer);
}
Also used: RangerHdfsAuthorizer (org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer), FileOutputStream (java.io.FileOutputStream), OutputStreamWriter (java.io.OutputStreamWriter), AccessControlEnforcer (org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer), File (java.io.File), RangerConfiguration (org.apache.ranger.authorization.hadoop.config.RangerConfiguration), AccessControlException (org.apache.hadoop.security.AccessControlException), UnsupportedEncodingException (java.io.UnsupportedEncodingException), BeforeClass (org.junit.BeforeClass)
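
In the test above the authorizer is constructed and started by hand. On a real cluster the NameNode does this itself once the provider is named in its configuration. A minimal sketch, assuming the standard HDFS key dfs.namenode.inode.attributes.provider.class (the helper class and method name here are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class RangerProviderWiring {

    // Builds a NameNode configuration with the Ranger provider enabled;
    // equivalent to adding the property to hdfs-site.xml. The NameNode
    // instantiates the provider, calls start(), and obtains the enforcer via
    // getExternalAccessControlEnforcer(), as the test does manually.
    public static Configuration withRangerAuthorizer() {
        Configuration conf = new HdfsConfiguration();
        conf.set("dfs.namenode.inode.attributes.provider.class",
            "org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer");
        return conf;
    }
}

With this wiring in place, the enforcer obtained by FSPermissionChecker in Example 1 is the Ranger one, and the role played by the Mockito mock in this test is taken by the NameNode's built-in checker.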

Aggregations

AccessControlEnforcer (org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer): 2 uses
File (java.io.File): 1 use
FileOutputStream (java.io.FileOutputStream): 1 use
OutputStreamWriter (java.io.OutputStreamWriter): 1 use
UnsupportedEncodingException (java.io.UnsupportedEncodingException): 1 use
AccessControlException (org.apache.hadoop.security.AccessControlException): 1 use
RangerHdfsAuthorizer (org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer): 1 use
RangerConfiguration (org.apache.ranger.authorization.hadoop.config.RangerConfiguration): 1 use
BeforeClass (org.junit.BeforeClass): 1 use