Example 56 with FsPermission

use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

the class DFSClient method createSymlink.

/**
   * Creates a symbolic link.
   *
   * @see ClientProtocol#createSymlink(String, String, FsPermission, boolean)
   */
public void createSymlink(String target, String link, boolean createParent) throws IOException {
    checkOpen();
    try (TraceScope ignored = newPathTraceScope("createSymlink", target)) {
        final FsPermission dirPerm = applyUMask(null);
        namenode.createSymlink(target, link, dirPerm, createParent);
    } catch (RemoteException re) {
        throw re.unwrapRemoteException(AccessControlException.class, FileAlreadyExistsException.class, FileNotFoundException.class, ParentNotDirectoryException.class, NSQuotaExceededException.class, DSQuotaExceededException.class, QuotaByStorageTypeExceededException.class, UnresolvedPathException.class, SnapshotAccessControlException.class);
    }
}
Also used : QuotaByStorageTypeExceededException(org.apache.hadoop.hdfs.protocol.QuotaByStorageTypeExceededException) FileAlreadyExistsException(org.apache.hadoop.fs.FileAlreadyExistsException) ParentNotDirectoryException(org.apache.hadoop.fs.ParentNotDirectoryException) DSQuotaExceededException(org.apache.hadoop.hdfs.protocol.DSQuotaExceededException) TraceScope(org.apache.htrace.core.TraceScope) FileNotFoundException(java.io.FileNotFoundException) AccessControlException(org.apache.hadoop.security.AccessControlException) SnapshotAccessControlException(org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException) NSQuotaExceededException(org.apache.hadoop.hdfs.protocol.NSQuotaExceededException) FsPermission(org.apache.hadoop.fs.permission.FsPermission) RemoteException(org.apache.hadoop.ipc.RemoteException) UnresolvedPathException(org.apache.hadoop.hdfs.protocol.UnresolvedPathException)
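
For context, the call above is normally reached through the public FileContext API rather than by using DFSClient directly. The sketch below shows that entry point; the paths are placeholders, it assumes fs.defaultFS points at an HDFS instance, and symlink support is disabled by default in recent Hadoop releases, so the call may throw UnsupportedOperationException unless symlinks are enabled.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class CreateSymlinkSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumes fs.defaultFS points at HDFS, so this FileContext is backed by DFSClient.
        FileContext fc = FileContext.getFileContext(conf);
        Path target = new Path("/data/current");   // placeholder: existing path
        Path link = new Path("/data/latest");      // placeholder: link to create
        // On HDFS this ultimately funnels into DFSClient.createSymlink(target, link, ...).
        fc.createSymlink(target, link, /* createParent */ true);
    }
}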

Example 57 with FsPermission

use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

the class DFSClient method create.

/**
   * Same as {@link #create(String, FsPermission, EnumSet, boolean, short, long,
   * Progressable, int, ChecksumOpt)} with the addition of favoredNodes that is
   * a hint to where the namenode should place the file blocks.
   * The favored nodes hint is not persisted in HDFS, so it may be honored
   * only at creation time; HDFS may later move the blocks away from the
   * favored nodes during balancing or replication. A value of null means
   * no favored nodes for this create.
   */
public DFSOutputStream create(String src, FsPermission permission, EnumSet<CreateFlag> flag, boolean createParent, short replication, long blockSize, Progressable progress, int buffersize, ChecksumOpt checksumOpt, InetSocketAddress[] favoredNodes) throws IOException {
    checkOpen();
    final FsPermission masked = applyUMask(permission);
    LOG.debug("{}: masked={}", src, masked);
    final DFSOutputStream result = DFSOutputStream.newStreamForCreate(this, src, masked, flag, createParent, replication, blockSize, progress, dfsClientConf.createChecksum(checksumOpt), getFavoredNodesStr(favoredNodes));
    beginFileLease(result.getFileId(), result);
    return result;
}
Also used : FsPermission(org.apache.hadoop.fs.permission.FsPermission)
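
As a side note on the "masked" value logged above: an explicit permission is combined with the configured umask via FsPermission.applyUMask. The sketch below demonstrates that masking using only the public FsPermission API; the requested 0666 mode is an illustrative value, not something taken from DFSClient itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // fs.permissions.umask-mode (default 022) determines the umask below.
        FsPermission umask = FsPermission.getUMask(conf);
        // applyUMask clears the umask bits: 0666 masked by 022 becomes 0644 (rw-r--r--).
        FsPermission requested = new FsPermission((short) 0666);
        FsPermission masked = requested.applyUMask(umask);
        System.out.println("requested=" + requested + " umask=" + umask + " masked=" + masked);
    }
}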

Example 58 with FsPermission

use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

the class TestEncryptionZones method testListEncryptionZonesAsNonSuperUser.

/**
   * Test listing encryption zones as a non super user.
   */
@Test
public void testListEncryptionZonesAsNonSuperUser() throws Exception {
    final UserGroupInformation user = UserGroupInformation.createUserForTesting("user", new String[] { "mygroup" });
    final Path testRoot = new Path("/tmp/TestEncryptionZones");
    final Path superPath = new Path(testRoot, "superuseronly");
    final Path allPath = new Path(testRoot, "accessall");
    fsWrapper.mkdir(superPath, new FsPermission((short) 0700), true);
    dfsAdmin.createEncryptionZone(superPath, TEST_KEY, NO_TRASH);
    fsWrapper.mkdir(allPath, new FsPermission((short) 0707), true);
    dfsAdmin.createEncryptionZone(allPath, TEST_KEY, NO_TRASH);
    user.doAs(new PrivilegedExceptionAction<Object>() {

        @Override
        public Object run() throws Exception {
            final HdfsAdmin userAdmin = new HdfsAdmin(FileSystem.getDefaultUri(conf), conf);
            try {
                userAdmin.listEncryptionZones();
            } catch (AccessControlException e) {
                assertExceptionContains("Superuser privilege is required", e);
            }
            return null;
        }
    });
}
Also used : Path(org.apache.hadoop.fs.Path) HdfsAdmin(org.apache.hadoop.hdfs.client.HdfsAdmin) AccessControlException(org.apache.hadoop.security.AccessControlException) Matchers.anyObject(org.mockito.Matchers.anyObject) FsPermission(org.apache.hadoop.fs.permission.FsPermission) IOException(java.io.IOException) ExecutionException(java.util.concurrent.ExecutionException) UserGroupInformation(org.apache.hadoop.security.UserGroupInformation) Test(org.junit.Test)
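
As a quick illustration of the two octal modes used for superPath (0700) and allPath (0707) above, the standalone sketch below uses only FsPermission and FsAction to show which bits the "other" class of users receives; it is not part of the test itself.

import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionBitsSketch {
    public static void main(String[] args) {
        // 0700: only the owner may traverse the directory.
        FsPermission superOnly = new FsPermission((short) 0700);
        // 0707: "other" users also get rwx, so any user can reach entries inside.
        FsPermission accessAll = new FsPermission((short) 0707);
        System.out.println(superOnly + " other may execute? "
            + superOnly.getOtherAction().implies(FsAction.EXECUTE));   // false
        System.out.println(accessAll + " other may execute? "
            + accessAll.getOtherAction().implies(FsAction.EXECUTE));   // true
    }
}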

Example 59 with FsPermission

use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

the class TestDistributedFileSystem method testStatistics.

@Test
public void testStatistics() throws IOException {
    FileSystem.getStatistics(HdfsConstants.HDFS_URI_SCHEME, DistributedFileSystem.class).reset();
    @SuppressWarnings("unchecked") ThreadLocal<StatisticsData> data = (ThreadLocal<StatisticsData>) Whitebox.getInternalState(FileSystem.getStatistics(HdfsConstants.HDFS_URI_SCHEME, DistributedFileSystem.class), "threadData");
    data.set(null);
    int lsLimit = 2;
    final Configuration conf = getTestConfiguration();
    conf.setInt(DFSConfigKeys.DFS_LIST_LIMIT, lsLimit);
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
        cluster.waitActive();
        final FileSystem fs = cluster.getFileSystem();
        Path dir = new Path("/test");
        Path file = new Path(dir, "file");
        int readOps = 0;
        int writeOps = 0;
        int largeReadOps = 0;
        long opCount = getOpStatistics(OpType.MKDIRS);
        fs.mkdirs(dir);
        checkStatistics(fs, readOps, ++writeOps, largeReadOps);
        checkOpStatistics(OpType.MKDIRS, opCount + 1);
        opCount = getOpStatistics(OpType.CREATE);
        FSDataOutputStream out = fs.create(file, (short) 1);
        out.close();
        checkStatistics(fs, readOps, ++writeOps, largeReadOps);
        checkOpStatistics(OpType.CREATE, opCount + 1);
        opCount = getOpStatistics(OpType.GET_FILE_STATUS);
        FileStatus status = fs.getFileStatus(file);
        checkStatistics(fs, ++readOps, writeOps, largeReadOps);
        checkOpStatistics(OpType.GET_FILE_STATUS, opCount + 1);
        opCount = getOpStatistics(OpType.GET_FILE_BLOCK_LOCATIONS);
        fs.getFileBlockLocations(file, 0, 0);
        checkStatistics(fs, ++readOps, writeOps, largeReadOps);
        checkOpStatistics(OpType.GET_FILE_BLOCK_LOCATIONS, opCount + 1);
        fs.getFileBlockLocations(status, 0, 0);
        checkStatistics(fs, ++readOps, writeOps, largeReadOps);
        checkOpStatistics(OpType.GET_FILE_BLOCK_LOCATIONS, opCount + 2);
        opCount = getOpStatistics(OpType.OPEN);
        FSDataInputStream in = fs.open(file);
        in.close();
        checkStatistics(fs, ++readOps, writeOps, largeReadOps);
        checkOpStatistics(OpType.OPEN, opCount + 1);
        opCount = getOpStatistics(OpType.SET_REPLICATION);
        fs.setReplication(file, (short) 2);
        checkStatistics(fs, readOps, ++writeOps, largeReadOps);
        checkOpStatistics(OpType.SET_REPLICATION, opCount + 1);
        opCount = getOpStatistics(OpType.RENAME);
        Path file1 = new Path(dir, "file1");
        fs.rename(file, file1);
        checkStatistics(fs, readOps, ++writeOps, largeReadOps);
        checkOpStatistics(OpType.RENAME, opCount + 1);
        opCount = getOpStatistics(OpType.GET_CONTENT_SUMMARY);
        fs.getContentSummary(file1);
        checkStatistics(fs, ++readOps, writeOps, largeReadOps);
        checkOpStatistics(OpType.GET_CONTENT_SUMMARY, opCount + 1);
        // Iterative ls test
        long mkdirOp = getOpStatistics(OpType.MKDIRS);
        long listStatusOp = getOpStatistics(OpType.LIST_STATUS);
        for (int i = 0; i < 10; i++) {
            Path p = new Path(dir, Integer.toString(i));
            fs.mkdirs(p);
            mkdirOp++;
            FileStatus[] list = fs.listStatus(dir);
            if (list.length > lsLimit) {
                // if the directory is large, count readOps and largeReadOps by
                // the number of times listStatus iterates
                int iterations = (int) Math.ceil((double) list.length / lsLimit);
                largeReadOps += iterations;
                readOps += iterations;
                listStatusOp += iterations;
            } else {
                // Single iteration in listStatus - no large read operation done
                readOps++;
                listStatusOp++;
            }
            // writeOps incremented by 1 for mkdirs
            // readOps and largeReadOps incremented by 1 or more
            checkStatistics(fs, readOps, ++writeOps, largeReadOps);
            checkOpStatistics(OpType.MKDIRS, mkdirOp);
            checkOpStatistics(OpType.LIST_STATUS, listStatusOp);
        }
        opCount = getOpStatistics(OpType.GET_STATUS);
        fs.getStatus(file1);
        checkStatistics(fs, ++readOps, writeOps, largeReadOps);
        checkOpStatistics(OpType.GET_STATUS, opCount + 1);
        opCount = getOpStatistics(OpType.GET_FILE_CHECKSUM);
        fs.getFileChecksum(file1);
        checkStatistics(fs, ++readOps, writeOps, largeReadOps);
        checkOpStatistics(OpType.GET_FILE_CHECKSUM, opCount + 1);
        opCount = getOpStatistics(OpType.SET_PERMISSION);
        fs.setPermission(file1, new FsPermission((short) 0777));
        checkStatistics(fs, readOps, ++writeOps, largeReadOps);
        checkOpStatistics(OpType.SET_PERMISSION, opCount + 1);
        opCount = getOpStatistics(OpType.SET_TIMES);
        fs.setTimes(file1, 0L, 0L);
        checkStatistics(fs, readOps, ++writeOps, largeReadOps);
        checkOpStatistics(OpType.SET_TIMES, opCount + 1);
        opCount = getOpStatistics(OpType.SET_OWNER);
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        fs.setOwner(file1, ugi.getUserName(), ugi.getGroupNames()[0]);
        checkOpStatistics(OpType.SET_OWNER, opCount + 1);
        checkStatistics(fs, readOps, ++writeOps, largeReadOps);
        opCount = getOpStatistics(OpType.DELETE);
        fs.delete(dir, true);
        checkStatistics(fs, readOps, ++writeOps, largeReadOps);
        checkOpStatistics(OpType.DELETE, opCount + 1);
    } finally {
        if (cluster != null)
            cluster.shutdown();
    }
}
Also used : Path(org.apache.hadoop.fs.Path) FileStatus(org.apache.hadoop.fs.FileStatus) LocatedFileStatus(org.apache.hadoop.fs.LocatedFileStatus) Configuration(org.apache.hadoop.conf.Configuration) FileSystem(org.apache.hadoop.fs.FileSystem) FSDataInputStream(org.apache.hadoop.fs.FSDataInputStream) FSDataOutputStream(org.apache.hadoop.fs.FSDataOutputStream) FsPermission(org.apache.hadoop.fs.permission.FsPermission) StatisticsData(org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData) UserGroupInformation(org.apache.hadoop.security.UserGroupInformation) Test(org.junit.Test)
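
Outside the MiniDFSCluster harness, the same per-scheme counters can be read directly from FileSystem.Statistics; checkStatistics and checkOpStatistics above are test-only helpers, not public API. A minimal sketch follows, with a placeholder namenode URI.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class StatisticsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder URI; assumes a reachable HDFS namenode.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        fs.mkdirs(new Path("/tmp/stats-demo"));   // one write op
        fs.listStatus(new Path("/tmp"));          // one (possibly large) read op
        FileSystem.Statistics stats =
            FileSystem.getStatistics("hdfs", DistributedFileSystem.class);
        System.out.println("readOps=" + stats.getReadOps()
            + " writeOps=" + stats.getWriteOps()
            + " largeReadOps=" + stats.getLargeReadOps());
    }
}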

Example 60 with FsPermission

use of org.apache.hadoop.fs.permission.FsPermission in project hadoop by apache.

the class TestDistributedFileSystem method testCreateWithCustomChecksum.

@Test
public void testCreateWithCustomChecksum() throws Exception {
    Configuration conf = getTestConfiguration();
    MiniDFSCluster cluster = null;
    Path testBasePath = new Path("/test/csum");
    // create args 
    Path path1 = new Path(testBasePath, "file_with_crc1");
    Path path2 = new Path(testBasePath, "file_with_crc2");
    ChecksumOpt opt1 = new ChecksumOpt(DataChecksum.Type.CRC32C, 512);
    ChecksumOpt opt2 = new ChecksumOpt(DataChecksum.Type.CRC32, 512);
    // common args
    FsPermission perm = FsPermission.getDefault().applyUMask(FsPermission.getUMask(conf));
    EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.OVERWRITE, CreateFlag.CREATE);
    short repl = 1;
    try {
        cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        FileSystem dfs = cluster.getFileSystem();
        dfs.mkdirs(testBasePath);
        // create two files with different checksum types
        FSDataOutputStream out1 = dfs.create(path1, perm, flags, 4096, repl, 131072L, null, opt1);
        FSDataOutputStream out2 = dfs.create(path2, perm, flags, 4096, repl, 131072L, null, opt2);
        for (int i = 0; i < 1024; i++) {
            out1.write(i);
            out2.write(i);
        }
        out1.close();
        out2.close();
        // the two checksums must be different.
        MD5MD5CRC32FileChecksum sum1 = (MD5MD5CRC32FileChecksum) dfs.getFileChecksum(path1);
        MD5MD5CRC32FileChecksum sum2 = (MD5MD5CRC32FileChecksum) dfs.getFileChecksum(path2);
        assertFalse(sum1.equals(sum2));
        // check the individual params
        assertEquals(DataChecksum.Type.CRC32C, sum1.getCrcType());
        assertEquals(DataChecksum.Type.CRC32, sum2.getCrcType());
    } finally {
        if (cluster != null) {
            cluster.getFileSystem().delete(testBasePath, true);
            cluster.shutdown();
        }
    }
}
Also used : Path(org.apache.hadoop.fs.Path) CreateFlag(org.apache.hadoop.fs.CreateFlag) MD5MD5CRC32FileChecksum(org.apache.hadoop.fs.MD5MD5CRC32FileChecksum) Configuration(org.apache.hadoop.conf.Configuration) ChecksumOpt(org.apache.hadoop.fs.Options.ChecksumOpt) FileSystem(org.apache.hadoop.fs.FileSystem) FsPermission(org.apache.hadoop.fs.permission.FsPermission) FSDataOutputStream(org.apache.hadoop.fs.FSDataOutputStream) Test(org.junit.Test)
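
For reference, the same per-file checksum selection works against any FileSystem handle, not only a MiniDFSCluster. The sketch below assumes fs.defaultFS points at a writable HDFS instance; the path, replication, and block size are illustrative values.

import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options.ChecksumOpt;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.DataChecksum;

public class ChecksumCreateSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Same construction as in the test: default mode masked by the configured umask.
        FsPermission perm =
            FsPermission.getDefault().applyUMask(FsPermission.getUMask(conf));
        // Request CRC32C with 512 bytes per checksum for this one file.
        ChecksumOpt crc32c = new ChecksumOpt(DataChecksum.Type.CRC32C, 512);
        try (FSDataOutputStream out = fs.create(new Path("/tmp/crc32c-file"), perm,
                EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
                4096, (short) 1, 128L * 1024 * 1024, null, crc32c)) {
            out.writeUTF("written with a per-file CRC32C checksum");
        }
    }
}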

Aggregations

FsPermission (org.apache.hadoop.fs.permission.FsPermission): 427 usages
Path (org.apache.hadoop.fs.Path): 267 usages
Test (org.junit.Test): 180 usages
IOException (java.io.IOException): 120 usages
FileSystem (org.apache.hadoop.fs.FileSystem): 93 usages
Configuration (org.apache.hadoop.conf.Configuration): 89 usages
FileStatus (org.apache.hadoop.fs.FileStatus): 87 usages
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream): 52 usages
AccessControlException (org.apache.hadoop.security.AccessControlException): 43 usages
UserGroupInformation (org.apache.hadoop.security.UserGroupInformation): 36 usages
FileNotFoundException (java.io.FileNotFoundException): 33 usages
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 29 usages
File (java.io.File): 26 usages
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 26 usages
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 26 usages
AclEntry (org.apache.hadoop.fs.permission.AclEntry): 25 usages
ArrayList (java.util.ArrayList): 22 usages
HashMap (java.util.HashMap): 19 usages
YarnConfiguration (org.apache.hadoop.yarn.conf.YarnConfiguration): 16 usages
URI (java.net.URI): 15 usages