
Example 1 with InvalidPathException

use of org.apache.hadoop.fs.InvalidPathException in project hadoop by apache.

the class TestDFSMkdirs method testMkdirRpcNonCanonicalPath.

/**
   * Regression test for HDFS-3626. Creates a file using a non-canonical path
   * (i.e. with extra slashes between components) and makes sure that the NN
   * rejects it.
   */
@Test
public void testMkdirRpcNonCanonicalPath() throws IOException {
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(0).build();
    try {
        NamenodeProtocols nnrpc = cluster.getNameNodeRpc();
        for (String pathStr : NON_CANONICAL_PATHS) {
            try {
                nnrpc.mkdirs(pathStr, new FsPermission((short) 0755), true);
                fail("Did not fail when called with a non-canonicalized path: " + pathStr);
            } catch (InvalidPathException ipe) {
                // expected: the NameNode rejects the non-canonical path
            }
        }
    } finally {
        cluster.shutdown();
    }
}
Also used : NamenodeProtocols(org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols) FsPermission(org.apache.hadoop.fs.permission.FsPermission) InvalidPathException(org.apache.hadoop.fs.InvalidPathException) Test(org.junit.Test)
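The NON_CANONICAL_PATHS constant referenced by the test is not part of this snippet. A plausible definition, for illustration only (the actual values in TestDFSMkdirs may differ), is an array of absolute paths containing redundant slashes or relative components:

// Illustrative only; the real constant in TestDFSMkdirs may list different paths.
private static final String[] NON_CANONICAL_PATHS = new String[] {
    "//test1",            // empty leading component
    "/test2//bar",        // empty component in the middle
    "/test2/.",           // "." component
    "/test2/..",          // ".." component
    "/test2/./bar"        // "." component in the middle
};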

Example 2 with InvalidPathException

use of org.apache.hadoop.fs.InvalidPathException in project hadoop by apache.

the class FSDirectory method resolvePath.

/**
   * Resolves a given path into an INodesInPath.  All ancestor inodes that
   * exist are validated as traversable directories.  Symlinks in the ancestry
   * will generate an UnresolvedLinkException.  The returned IIP will be an
   * accessible path that also passed additional sanity checks based on how
   * the path will be used as specified by the DirOp.
   *   READ:   Expands reserved paths and performs permission checks
   *           during traversal.  Raw paths are only accessible by a superuser.
   *   WRITE:  In addition to READ checks, ensures the path is not a
   *           snapshot path.
   *   CREATE: In addition to WRITE checks, ensures path does not contain
   *           illegal character sequences.
   *
   * @param pc  A permission checker for traversal checks.  Pass null for
   *            no permission checks.
   * @param src The path to resolve.
   * @param dirOp The {@link DirOp} that controls additional checks.
   * @return if the path indicates an inode, return path after replacing up to
   *         <inodeid> with the corresponding path of the inode, else the path
   *         in {@code src} as is. If the path refers to a path in the "raw"
   *         directory, return the non-raw pathname.
   * @throws FileNotFoundException
   * @throws AccessControlException
   * @throws ParentNotDirectoryException
   * @throws UnresolvedLinkException
   */
@VisibleForTesting
public INodesInPath resolvePath(FSPermissionChecker pc, String src, DirOp dirOp) throws UnresolvedLinkException, FileNotFoundException, AccessControlException, ParentNotDirectoryException {
    boolean isCreate = (dirOp == DirOp.CREATE || dirOp == DirOp.CREATE_LINK);
    // prevent creation of new invalid paths
    if (isCreate && !DFSUtil.isValidName(src)) {
        throw new InvalidPathException("Invalid file name: " + src);
    }
    byte[][] components = INode.getPathComponents(src);
    boolean isRaw = isReservedRawName(components);
    if (isPermissionEnabled && pc != null && isRaw) {
        pc.checkSuperuserPrivilege();
    }
    components = resolveComponents(components, this);
    INodesInPath iip = INodesInPath.resolve(rootDir, components, isRaw);
    // traversal check; a ParentNotDirectoryException (PNDE) thrown for a
    // non-create op is converted to an AccessControlException below
    try {
        checkTraverse(pc, iip, dirOp);
    } catch (ParentNotDirectoryException pnde) {
        if (!isCreate) {
            throw new AccessControlException(pnde.getMessage());
        }
        throw pnde;
    }
    return iip;
}
Also used : ParentNotDirectoryException(org.apache.hadoop.fs.ParentNotDirectoryException) AccessControlException(org.apache.hadoop.security.AccessControlException) SnapshotAccessControlException(org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException) InvalidPathException(org.apache.hadoop.fs.InvalidPathException) VisibleForTesting(com.google.common.annotations.VisibleForTesting)
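To show how the DirOp contract above is consumed, here is a minimal caller sketch that mirrors the pattern used by createSnapshot in Example 3 below. It is not actual Hadoop code, and it assumes placement in the org.apache.hadoop.hdfs.server.namenode package so that the NameNode-internal types (FSDirectory, FSPermissionChecker, INodesInPath) resolve without further imports.

// Sketch only; assumes it lives in org.apache.hadoop.hdfs.server.namenode.
import java.io.IOException;
import org.apache.hadoop.hdfs.server.namenode.FSDirectory.DirOp;

class ResolveForWriteSketch {
    // Resolve a path for a mutating operation and enforce ownership,
    // following the same call sequence as FSDirSnapshotOp.createSnapshot.
    static INodesInPath resolveForWrite(FSDirectory fsd, String src)
            throws IOException {
        FSPermissionChecker pc = fsd.getPermissionChecker();
        // WRITE = READ traversal/permission checks plus "not a snapshot path"
        INodesInPath iip = fsd.resolvePath(pc, src, DirOp.WRITE);
        if (fsd.isPermissionEnabled()) {
            fsd.checkOwner(pc, iip);
        }
        return iip;
    }
}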

Example 3 with InvalidPathException

use of org.apache.hadoop.fs.InvalidPathException in project hadoop by apache.

the class FSDirSnapshotOp method createSnapshot.

/**
   * Create a snapshot
   * @param snapshotRoot The directory path where the snapshot is taken
   * @param snapshotName The name of the snapshot
   */
static String createSnapshot(FSDirectory fsd, SnapshotManager snapshotManager, String snapshotRoot, String snapshotName, boolean logRetryCache) throws IOException {
    FSPermissionChecker pc = fsd.getPermissionChecker();
    final INodesInPath iip = fsd.resolvePath(pc, snapshotRoot, DirOp.WRITE);
    if (fsd.isPermissionEnabled()) {
        fsd.checkOwner(pc, iip);
    }
    if (snapshotName == null || snapshotName.isEmpty()) {
        snapshotName = Snapshot.generateDefaultSnapshotName();
    } else if (!DFSUtil.isValidNameForComponent(snapshotName)) {
        throw new InvalidPathException("Invalid snapshot name: " + snapshotName);
    }
    String snapshotPath = null;
    verifySnapshotName(fsd, snapshotName, snapshotRoot);
    fsd.writeLock();
    try {
        snapshotPath = snapshotManager.createSnapshot(iip, snapshotRoot, snapshotName);
    } finally {
        fsd.writeUnlock();
    }
    fsd.getEditLog().logCreateSnapshot(snapshotRoot, snapshotName, logRetryCache);
    return snapshotPath;
}
Also used : InvalidPathException(org.apache.hadoop.fs.InvalidPathException)
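For the client-side view of the same validation, the sketch below takes a snapshot through the public FileSystem API. The path and snapshot name are hypothetical, and the target directory must already have been made snapshottable (for example with hdfs dfsadmin -allowSnapshot /data/warehouse).

// Client-side sketch with hypothetical paths and name; not a test from the Hadoop tree.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateSnapshotExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path snapshotRoot = new Path("/data/warehouse"); // hypothetical snapshottable dir
        // A valid component name succeeds; the NameNode generates a default
        // name when none is supplied.
        Path snap = fs.createSnapshot(snapshotRoot, "s-2024-01-01");
        System.out.println("created " + snap);
        // A name such as "bad/name" would be rejected server-side with
        // InvalidPathException ("Invalid snapshot name: ..."), per the check above.
    }
}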

Example 4 with InvalidPathException

use of org.apache.hadoop.fs.InvalidPathException in project hadoop by apache.

the class FSNamesystem method startFileInt.

private HdfsFileStatus startFileInt(String src, PermissionStatus permissions, String holder, String clientMachine, EnumSet<CreateFlag> flag, boolean createParent, short replication, long blockSize, CryptoProtocolVersion[] supportedVersions, boolean logRetryCache) throws IOException {
    if (NameNode.stateChangeLog.isDebugEnabled()) {
        StringBuilder builder = new StringBuilder();
        builder.append("DIR* NameSystem.startFile: src=").append(src).append(", holder=").append(holder).append(", clientMachine=").append(clientMachine).append(", createParent=").append(createParent).append(", replication=").append(replication).append(", createFlag=").append(flag).append(", blockSize=").append(blockSize).append(", supportedVersions=").append(Arrays.toString(supportedVersions));
        NameNode.stateChangeLog.debug(builder.toString());
    }
    if (!DFSUtil.isValidName(src)
            || FSDirectory.isExactReservedName(src)
            || (FSDirectory.isReservedName(src)
                && !FSDirectory.isReservedRawName(src)
                && !FSDirectory.isReservedInodesName(src))) {
        throw new InvalidPathException(src);
    }
    FSPermissionChecker pc = getPermissionChecker();
    INodesInPath iip = null;
    // skip syncing the edit log until we do something that might create edits
    boolean skipSync = true;
    HdfsFileStatus stat = null;
    BlocksMapUpdateInfo toRemoveBlocks = null;
    checkOperation(OperationCategory.WRITE);
    writeLock();
    try {
        checkOperation(OperationCategory.WRITE);
        checkNameNodeSafeMode("Cannot create file" + src);
        iip = FSDirWriteFileOp.resolvePathForStartFile(dir, pc, src, flag, createParent);
        if (!FSDirErasureCodingOp.hasErasureCodingPolicy(this, iip)) {
            blockManager.verifyReplication(src, replication, clientMachine);
        }
        if (blockSize < minBlockSize) {
            throw new IOException("Specified block size is less than configured" + " minimum value (" + DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY + "): " + blockSize + " < " + minBlockSize);
        }
        FileEncryptionInfo feInfo = null;
        if (provider != null) {
            EncryptionKeyInfo ezInfo = FSDirEncryptionZoneOp.getEncryptionKeyInfo(this, iip, supportedVersions);
            // if the path falls within an encryption zone, the lock was
            // released while generating the EDEK; re-resolve the path to
            // ensure the namesystem and/or EZ has not mutated
            if (ezInfo != null) {
                checkOperation(OperationCategory.WRITE);
                iip = FSDirWriteFileOp.resolvePathForStartFile(dir, pc, iip.getPath(), flag, createParent);
                feInfo = FSDirEncryptionZoneOp.getFileEncryptionInfo(dir, iip, ezInfo);
            }
        }
        // following might generate edits
        skipSync = false;
        toRemoveBlocks = new BlocksMapUpdateInfo();
        dir.writeLock();
        try {
            stat = FSDirWriteFileOp.startFile(this, iip, permissions, holder, clientMachine, flag, createParent, replication, blockSize, feInfo, toRemoveBlocks, logRetryCache);
        } catch (IOException e) {
            skipSync = e instanceof StandbyException;
            throw e;
        } finally {
            dir.writeUnlock();
        }
    } finally {
        writeUnlock("create");
        // Any edits logged above need to be sync'ed even when an exception was thrown.
        if (!skipSync) {
            getEditLog().logSync();
            if (toRemoveBlocks != null) {
                removeBlocks(toRemoveBlocks);
                toRemoveBlocks.clear();
            }
        }
    }
    return stat;
}
Also used : BlocksMapUpdateInfo(org.apache.hadoop.hdfs.server.namenode.INode.BlocksMapUpdateInfo) StandbyException(org.apache.hadoop.ipc.StandbyException) HdfsFileStatus(org.apache.hadoop.hdfs.protocol.HdfsFileStatus) IOException(java.io.IOException) EncryptionKeyInfo(org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.EncryptionKeyInfo) FileEncryptionInfo(org.apache.hadoop.fs.FileEncryptionInfo) InvalidPathException(org.apache.hadoop.fs.InvalidPathException)
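A client-side reproduction of the path guard at the top of startFileInt: creating a file at the exact reserved name /.reserved is rejected by the NameNode. The sketch below is not from the Hadoop tree, and depending on client-side exception unwrapping the failure may surface as a RemoteException rather than InvalidPathException, so it only asserts that the create fails.

// Hypothetical sketch; assumes an HDFS FileSystem at fs.defaultFS.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReservedPathCreateExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataOutputStream out = fs.create(new Path("/.reserved"))) {
            System.out.println("unexpected: create succeeded");
        } catch (IOException expected) {
            // startFileInt rejects the exact reserved name "/.reserved"
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}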

Example 5 with InvalidPathException

use of org.apache.hadoop.fs.InvalidPathException in project carbondata by apache.

the class MapredCarbonInputFormat method populateCarbonTable.

/**
 * Reads the table schema from the physical schema file and populates it into
 * the CARBON_TABLE entry of the given configuration.
 *
 * @param configuration job configuration that carries the input directories and
 *                      receives the serialized CarbonTable
 * @param paths the queried path, used to pick the matching input directory
 * @throws IOException
 * @throws InvalidConfigurationException
 */
private static void populateCarbonTable(Configuration configuration, String paths) throws IOException, InvalidConfigurationException {
    String dirs = configuration.get(INPUT_DIR, "");
    String[] inputPaths = StringUtils.split(dirs);
    String validInputPath = null;
    if (inputPaths.length == 0) {
        throw new InvalidPathException("No input paths specified in job");
    } else {
        if (paths != null) {
            for (String inputPath : inputPaths) {
                if (paths.startsWith(inputPath.replace("file:", ""))) {
                    validInputPath = inputPath;
                    break;
                }
            }
        }
    }
    AbsoluteTableIdentifier absoluteTableIdentifier = AbsoluteTableIdentifier.from(validInputPath, getDatabaseName(configuration), getTableName(configuration));
    // read the schema file to get the absoluteTableIdentifier having the correct table id
    // persisted in the schema
    CarbonTable carbonTable = SchemaReader.readCarbonTableFromStore(absoluteTableIdentifier);
    configuration.set(CARBON_TABLE, ObjectSerializationUtil.convertObjectToString(carbonTable));
    setTableInfo(configuration, carbonTable.getTableInfo());
}
Also used : CarbonTable(org.apache.carbondata.core.metadata.schema.table.CarbonTable) AbsoluteTableIdentifier(org.apache.carbondata.core.metadata.AbsoluteTableIdentifier) InvalidPathException(org.apache.hadoop.fs.InvalidPathException)
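The InvalidPathException here fires when the job carries no input paths at all. The sketch below shows a hypothetical driver-side setup; it assumes INPUT_DIR corresponds to the standard mapred input-directory property set by FileInputFormat, whereas in practice MapredCarbonInputFormat is usually configured by Hive rather than by hand.

// Hypothetical driver-side setup; store path is an assumption.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class CarbonInputPathSetup {
    public static void main(String[] args) {
        JobConf jobConf = new JobConf();
        // With no input path configured, populateCarbonTable would throw
        // InvalidPathException("No input paths specified in job").
        FileInputFormat.setInputPaths(jobConf, new Path("/carbon/store/db/tbl")); // hypothetical store path
        System.out.println("configured input paths: "
            + FileInputFormat.getInputPaths(jobConf).length);
    }
}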

Aggregations

InvalidPathException (org.apache.hadoop.fs.InvalidPathException) 8
FsPermission (org.apache.hadoop.fs.permission.FsPermission) 3
AbsoluteTableIdentifier (org.apache.carbondata.core.metadata.AbsoluteTableIdentifier) 2
CarbonTable (org.apache.carbondata.core.metadata.schema.table.CarbonTable) 2
NamenodeProtocols (org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols) 2
VisibleForTesting (com.google.common.annotations.VisibleForTesting) 1
IOException (java.io.IOException) 1
URI (java.net.URI) 1
Configuration (org.apache.hadoop.conf.Configuration) 1
CreateFlag (org.apache.hadoop.fs.CreateFlag) 1
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream) 1
FileEncryptionInfo (org.apache.hadoop.fs.FileEncryptionInfo) 1
FileSystem (org.apache.hadoop.fs.FileSystem) 1
ParentNotDirectoryException (org.apache.hadoop.fs.ParentNotDirectoryException) 1
Path (org.apache.hadoop.fs.Path) 1
HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus) 1
SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException) 1
EncryptionKeyInfo (org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.EncryptionKeyInfo) 1
BlocksMapUpdateInfo (org.apache.hadoop.hdfs.server.namenode.INode.BlocksMapUpdateInfo) 1
StandbyException (org.apache.hadoop.ipc.StandbyException) 1