Example 46 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project hadoop by apache.

From class PathData, method expandAsGlob:

/**
   * Expand the given path as a glob pattern.  Non-existent paths do not
   * throw an exception because creation commands like touch and mkdir need
   * to create them.  The "stat" field will be null if the path does not
   * exist.
   * @param pattern the pattern to expand as a glob
   * @param conf the hadoop configuration
   * @return list of {@link PathData} objects.  If the pattern is not a glob
   * and does not exist, the list will contain a single PathData with a null
   * stat.
   * @throws IOException if anything else goes wrong
   */
public static PathData[] expandAsGlob(String pattern, Configuration conf) throws IOException {
    Path globPath = new Path(pattern);
    FileSystem fs = globPath.getFileSystem(conf);
    FileStatus[] stats = fs.globStatus(globPath);
    PathData[] items = null;
    if (stats == null) {
        // remove any quoting in the glob pattern
        pattern = pattern.replaceAll("\\\\(.)", "$1");
        // not a glob & file not found, so add the path with a null stat
        items = new PathData[] { new PathData(fs, pattern, null) };
    } else {
        // figure out what type of glob path was given, will convert globbed
        // paths to match the type to preserve relativity
        PathType globType;
        URI globUri = globPath.toUri();
        if (globUri.getScheme() != null) {
            globType = PathType.HAS_SCHEME;
        } else if (!globUri.getPath().isEmpty() && new Path(globUri.getPath()).isAbsolute()) {
            globType = PathType.SCHEMELESS_ABSOLUTE;
        } else {
            globType = PathType.RELATIVE;
        }
        // convert stats to PathData
        items = new PathData[stats.length];
        int i = 0;
        for (FileStatus stat : stats) {
            URI matchUri = stat.getPath().toUri();
            String globMatch = null;
            switch (globType) {
                case HAS_SCHEME:
                    // use as-is, but remove authority if necessary
                    if (globUri.getAuthority() == null) {
                        matchUri = removeAuthority(matchUri);
                    }
                    globMatch = uriToString(matchUri, false);
                    break;
                case SCHEMELESS_ABSOLUTE:
                    // take just the uri's path
                    globMatch = matchUri.getPath();
                    break;
                case RELATIVE:
                    // make it relative to the current working dir
                    URI cwdUri = fs.getWorkingDirectory().toUri();
                    globMatch = relativize(cwdUri, matchUri, stat.isDirectory());
                    break;
            }
            items[i++] = new PathData(fs, globMatch, stat);
        }
    }
    Arrays.sort(items);
    return items;
}
Also used: org.apache.hadoop.fs.Path, org.apache.hadoop.fs.FileStatus, org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.LocalFileSystem, java.net.URI
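The quote-stripping step in the non-glob branch above (`pattern.replaceAll("\\\\(.)", "$1")`) removes backslash escaping so that a literally-quoted name like `file\*name` is stored unescaped in the PathData. A minimal JDK-only sketch of that behavior (the class and method names here are illustrative, not from Hadoop):

```java
public class GlobUnquote {
    // Mirrors the unquoting step in PathData.expandAsGlob: a backslash
    // followed by any character is replaced by that character alone.
    static String unquote(String pattern) {
        return pattern.replaceAll("\\\\(.)", "$1");
    }

    public static void main(String[] args) {
        // An escaped asterisk becomes a literal asterisk
        System.out.println(unquote("file\\*name")); // file*name
        // Unescaped text passes through unchanged
        System.out.println(unquote("plain.txt"));   // plain.txt
    }
}
```

Note that the replacement runs only when `globStatus` returns null, i.e. when the pattern matched nothing, so paths that did match keep whatever form the filesystem reported.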

Example 47 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project hadoop by apache.

From class ViewFileSystem, method listStatus:

@Override
public FileStatus[] listStatus(final Path f) throws AccessControlException, FileNotFoundException, IOException {
    InodeTree.ResolveResult<FileSystem> res = fsState.resolve(getUriPath(f), true);
    FileStatus[] statusLst = res.targetFileSystem.listStatus(res.remainingPath);
    if (!res.isInternalDir()) {
        // We need to change the name in the FileStatus as described in
        // {@link #getFileStatus }
        int i = 0;
        for (FileStatus status : statusLst) {
            statusLst[i++] = fixFileStatus(status, getChrootedPath(res, status, f));
        }
    }
    return statusLst;
}
Also used: org.apache.hadoop.fs.FileStatus, org.apache.hadoop.fs.LocatedFileStatus, org.apache.hadoop.fs.FileSystem

Example 48 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project hadoop by apache.

From class ViewFileSystem, method initialize:

/**
   * Called after a new FileSystem instance is constructed.
   * @param theUri a uri whose authority section names the host, port, etc. for
   *        this FileSystem
   * @param conf the configuration
   */
@Override
public void initialize(final URI theUri, final Configuration conf) throws IOException {
    super.initialize(theUri, conf);
    setConf(conf);
    config = conf;
    // Now build the client-side view (i.e. the client-side mount table) from the config.
    final String authority = theUri.getAuthority();
    try {
        myUri = new URI(FsConstants.VIEWFS_SCHEME, authority, "/", null, null);
        fsState = new InodeTree<FileSystem>(conf, authority) {

            @Override
            protected FileSystem getTargetFileSystem(final URI uri) throws URISyntaxException, IOException {
                return new ChRootedFileSystem(uri, config);
            }

            @Override
            protected FileSystem getTargetFileSystem(final INodeDir<FileSystem> dir) throws URISyntaxException {
                return new InternalDirOfViewFs(dir, creationTime, ugi, myUri);
            }

            @Override
            protected FileSystem getTargetFileSystem(URI[] mergeFsURIList) throws URISyntaxException, UnsupportedFileSystemException {
                throw new UnsupportedFileSystemException("mergefs not implemented");
            // return MergeFs.createMergeFs(mergeFsURIList, config);
            }
        };
        workingDir = this.getHomeDirectory();
    } catch (URISyntaxException e) {
        throw new IOException("URISyntax exception: " + theUri, e);
    }
}
Also used: org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.UnsupportedFileSystemException, java.net.URISyntaxException, java.io.IOException, java.net.URI
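The multi-argument `URI` constructor used to build `myUri` assembles the URI component-by-component (scheme, authority, path, query, fragment), which sidesteps manual string concatenation and escaping. A JDK-only sketch of the same constructor shape (the scheme string is inlined here in place of `FsConstants.VIEWFS_SCHEME`, and the authority name is made up):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class ViewFsUriDemo {
    public static void main(String[] args) throws URISyntaxException {
        // Same constructor shape as in ViewFileSystem.initialize:
        // scheme, authority, path, query, fragment
        URI myUri = new URI("viewfs", "clusterX", "/", null, null);
        System.out.println(myUri);                // viewfs://clusterX/
        System.out.println(myUri.getAuthority()); // clusterX
    }
}
```

Passing null for the query and fragment simply omits those components, which is why the resulting URI ends at the root path.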

Example 49 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project hadoop by apache.

From class TestRawlocalContractRename, method testRenameWithNonEmptySubDirPOSIX:

/**
   * Test the fallback rename code <code>handleEmptyDstDirectoryOnWindows()</code>
   * even on non-Windows platforms, where the normal <code>File.renameTo()</code>
   * is supposed to work well. This test was added for HADOOP-9805.
   * 
   * @see AbstractContractRenameTest#testRenameWithNonEmptySubDirPOSIX()
   */
@Test
public void testRenameWithNonEmptySubDirPOSIX() throws Throwable {
    final Path renameTestDir = path("testRenameWithNonEmptySubDir");
    final Path srcDir = new Path(renameTestDir, "src1");
    final Path srcSubDir = new Path(srcDir, "sub");
    final Path finalDir = new Path(renameTestDir, "dest");
    FileSystem fs = getFileSystem();
    ContractTestUtils.rm(fs, renameTestDir, true, false);
    fs.mkdirs(srcDir);
    fs.mkdirs(finalDir);
    ContractTestUtils.writeTextFile(fs, new Path(srcDir, "source.txt"), "this is the file in src dir", false);
    ContractTestUtils.writeTextFile(fs, new Path(srcSubDir, "subfile.txt"), "this is the file in src/sub dir", false);
    ContractTestUtils.assertPathExists(fs, "not created in src dir", new Path(srcDir, "source.txt"));
    ContractTestUtils.assertPathExists(fs, "not created in src/sub dir", new Path(srcSubDir, "subfile.txt"));
    RawLocalFileSystem rlfs = (RawLocalFileSystem) fs;
    rlfs.handleEmptyDstDirectoryOnWindows(srcDir, rlfs.pathToFile(srcDir), finalDir, rlfs.pathToFile(finalDir));
    // Accept only POSIX rename behavior in this test
    ContractTestUtils.assertPathExists(fs, "not renamed into dest dir", new Path(finalDir, "source.txt"));
    ContractTestUtils.assertPathExists(fs, "not renamed into dest/sub dir", new Path(finalDir, "sub/subfile.txt"));
    ContractTestUtils.assertPathDoesNotExist(fs, "not deleted", new Path(srcDir, "source.txt"));
}
Also used: org.apache.hadoop.fs.Path, org.apache.hadoop.fs.RawLocalFileSystem, org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.contract.AbstractContractRenameTest, org.junit.Test

Example 50 with FileSystem

Use of org.apache.hadoop.fs.FileSystem in project hadoop by apache.

From class AbstractContractRootDirectoryTest, method testMkDirDepth1:

@Test
public void testMkDirDepth1() throws Throwable {
    FileSystem fs = getFileSystem();
    Path dir = new Path("/testmkdirdepth1");
    assertPathDoesNotExist("directory already exists", dir);
    fs.mkdirs(dir);
    assertIsDirectory(dir);
    assertPathExists("directory was not created", dir);
    assertDeleted(dir, true);
}
Also used: org.apache.hadoop.fs.Path, org.apache.hadoop.fs.FileSystem, org.junit.Test
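The create/verify/delete cycle in the contract test above can be reproduced against the local filesystem with only `java.nio.file`, no Hadoop dependencies. This is an analogue for illustration, not the Hadoop contract test itself:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkDirDepth1Demo {
    public static void main(String[] args) throws IOException {
        // Create a fresh directory, assert it exists as a directory,
        // then delete it and assert it is gone -- the same cycle as
        // testMkDirDepth1, minus the Hadoop FileSystem abstraction.
        Path dir = Files.createTempDirectory("testmkdirdepth1");
        System.out.println(Files.isDirectory(dir)); // true
        Files.delete(dir);
        System.out.println(Files.exists(dir));      // false
    }
}
```

The Hadoop contract test adds one wrinkle this sketch omits: it works at a depth-1 path under the filesystem root, exercising root-directory semantics that differ across FileSystem implementations.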
