Example 31 with FileContext

Use of org.apache.hadoop.fs.FileContext in project apex-core by apache.

The class AsyncFSStorageAgentTest, method testDelete:

@Test
public void testDelete() throws IOException {
    testLoad();
    testMeta.storageAgent.delete(1, 1);
    Path appPath = new Path(testMeta.applicationPath);
    FileContext fileContext = FileContext.getFileContext();
    Assert.assertTrue("operator 2 window 1", fileContext.util().exists(new Path(appPath + "/" + 2 + "/" + 1)));
    Assert.assertFalse("operator 1 window 1", fileContext.util().exists(new Path(appPath + "/" + 1 + "/" + 1)));
}
Also used: Path (org.apache.hadoop.fs.Path), FileContext (org.apache.hadoop.fs.FileContext), Test (org.junit.Test)
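The test above depends on the checkpoint layout applicationPath/operatorId/windowId: delete(1, 1) must remove only operator 1's window 1 and leave operator 2's checkpoint untouched. A minimal sketch of the same contract, using java.nio.file instead of Hadoop's FileContext so it runs without a cluster (class and method names here are illustrative, not part of apex-core):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CheckpointDeleteSketch {

    // Create checkpoints for operators 1 and 2 at window 1, then delete
    // operator 1's window 1, as storageAgent.delete(1, 1) does above.
    public static boolean deleteLeavesOtherOperatorsIntact(Path appPath) throws IOException {
        Path op1win1 = appPath.resolve("1").resolve("1");
        Path op2win1 = appPath.resolve("2").resolve("1");
        Files.createDirectories(op1win1.getParent());
        Files.createDirectories(op2win1.getParent());
        Files.createFile(op1win1);
        Files.createFile(op2win1);

        Files.delete(op1win1); // delete(operatorId = 1, windowId = 1)

        // The same two checks as the test: operator 2's checkpoint survives,
        // operator 1's is gone.
        return Files.exists(op2win1) && !Files.exists(op1win1);
    }
}
```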

Example 32 with FileContext

Use of org.apache.hadoop.fs.FileContext in project apex-core by apache.

The class FSStorageAgentTest, method testDelete:

@Test
public void testDelete() throws IOException {
    testLoad();
    testMeta.storageAgent.delete(1, 1);
    Path appPath = new Path(testMeta.applicationPath);
    FileContext fileContext = FileContext.getFileContext();
    Assert.assertTrue("operator 2 window 1", fileContext.util().exists(new Path(appPath + "/" + 2 + "/" + 1)));
    Assert.assertFalse("operator 1 window 1", fileContext.util().exists(new Path(appPath + "/" + 1 + "/" + 1)));
}
Also used: Path (org.apache.hadoop.fs.Path), FileContext (org.apache.hadoop.fs.FileContext), Test (org.junit.Test)

Example 33 with FileContext

Use of org.apache.hadoop.fs.FileContext in project storm by apache.

The class HdfsBlobStoreFile, method commit:

@Override
public void commit() throws IOException {
    checkIsNotTmp();
    // FileContext supports atomic rename, whereas FileSystem doesn't
    FileContext fc = FileContext.getFileContext(hadoopConf);
    Path dest = new Path(path.getParent(), BLOBSTORE_DATA_FILE);
    if (mustBeNew) {
        fc.rename(path, dest);
    } else {
        fc.rename(path, dest, Options.Rename.OVERWRITE);
    }
    // Note, we could add support for setting the replication factor
}
Also used: Path (org.apache.hadoop.fs.Path), FileContext (org.apache.hadoop.fs.FileContext)
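The key point in this example is the comment: FileContext.rename is atomic and takes an explicit Options.Rename.OVERWRITE flag, while the older FileSystem.rename gives no such guarantee. The closest standard-library analogue is java.nio.file.Files.move, which throws FileAlreadyExistsException unless REPLACE_EXISTING is passed. A rough sketch of the same commit logic on a local filesystem (class and file names are illustrative; true atomicity would use StandardCopyOption.ATOMIC_MOVE, which ignores other options):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicCommitSketch {

    // Rename a finished tmp file to its final name "data", mirroring
    // commit() above: fail if the destination exists when mustBeNew,
    // otherwise overwrite it (Options.Rename.OVERWRITE in FileContext).
    public static Path commit(Path tmp, boolean mustBeNew) throws IOException {
        Path dest = tmp.resolveSibling("data");
        if (mustBeNew) {
            // Throws FileAlreadyExistsException if dest already exists.
            return Files.move(tmp, dest);
        }
        return Files.move(tmp, dest, StandardCopyOption.REPLACE_EXISTING);
    }
}
```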

Example 34 with FileContext

Use of org.apache.hadoop.fs.FileContext in project Gaffer by gchq.

The class WriteData, method renameFiles:

private void renameFiles(final int partitionId, final long taskAttemptId, final Set<String> groups, final Map<String, Path> groupToWriterPath) throws Exception {
    LOGGER.info("Renaming output files from {} to {}", "input-" + partitionId + "-" + taskAttemptId + ".parquet", "input-" + partitionId);
    final FileContext fileContext = FileContext.getFileContext(new Configuration());
    for (final String group : groups) {
        final Path src = groupToWriterPath.get(group);
        final String newName = "input-" + partitionId + ".parquet";
        final Path dst = new Path(groupToDirectory.get(group) + "/" + newName);
        try {
            fileContext.rename(src, dst, Options.Rename.NONE);
            LOGGER.debug("Renamed {} to {}", src, dst);
        } catch (final FileAlreadyExistsException e) {
            // Another task got there first
            LOGGER.debug("Not renaming {} to {} as the destination already exists", src, dst);
        }
    }
}
Also used: Path (org.apache.hadoop.fs.Path), FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException), Configuration (org.apache.hadoop.conf.Configuration), FileContext (org.apache.hadoop.fs.FileContext)
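This example uses rename with Options.Rename.NONE as a first-writer-wins lock: when speculative task attempts race to publish the same output name, exactly one rename succeeds and the losers catch FileAlreadyExistsException and back off. A minimal sketch of that pattern using java.nio.file, which throws its own java.nio.file.FileAlreadyExistsException in the same situation (names here are illustrative):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FirstWriterWinsSketch {

    // Returns true if this attempt won the rename, false if another
    // attempt got there first (the destination already existed).
    public static boolean tryPublish(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst); // no REPLACE_EXISTING: existing dst wins
            return true;
        } catch (FileAlreadyExistsException e) {
            // Another task got there first; leave its output in place.
            return false;
        }
    }
}
```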

Example 35 with FileContext

Use of org.apache.hadoop.fs.FileContext in project apex-malhar by apache.

The class IOUtilsTest, method testCopyPartialHelper:

private void testCopyPartialHelper(int dataSize, int offset, long size) throws IOException {
    FileUtils.deleteQuietly(new File("target/IOUtilsTest"));
    File file = new File("target/IOUtilsTest/testCopyPartial/input");
    createDataFile(file, dataSize);
    FileContext fileContext = FileContext.getFileContext();
    DataInputStream inputStream = fileContext.open(new Path(file.getAbsolutePath()));
    Path output = new Path("target/IOUtilsTest/testCopyPartial/output");
    DataOutputStream outputStream = fileContext.create(output, EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE), Options.CreateOpts.CreateParent.createParent());
    if (offset == 0) {
        IOUtils.copyPartial(inputStream, size, outputStream);
    } else {
        IOUtils.copyPartial(inputStream, offset, size, outputStream);
    }
    outputStream.close();
    Assert.assertTrue("output exists", fileContext.util().exists(output));
    Assert.assertEquals("output size", size, fileContext.getFileStatus(output).getLen());
// FileUtils.deleteQuietly(new File("target/IOUtilsTest"));
}
Also used: Path (org.apache.hadoop.fs.Path), DataOutputStream (java.io.DataOutputStream), DataInputStream (java.io.DataInputStream), File (java.io.File), FileContext (org.apache.hadoop.fs.FileContext)
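The operation under test here, IOUtils.copyPartial, skips `offset` bytes of the input and then copies exactly `size` bytes to the output; the assertions check that the output file exists and has length `size`. A self-contained sketch of that partial-copy logic over plain java.io streams (the class name and buffer size are illustrative, not apex-malhar's implementation):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyPartialSketch {

    // Skip `offset` bytes of `in`, then copy up to `size` bytes to `out`.
    // Returns the number of bytes actually copied.
    public static long copyPartial(InputStream in, long offset, long size,
                                   OutputStream out) throws IOException {
        long skipped = 0;
        while (skipped < offset) {
            long n = in.skip(offset - skipped);
            if (n <= 0) {
                throw new EOFException("could not skip to offset " + offset);
            }
            skipped += n;
        }
        byte[] buf = new byte[8192];
        long copied = 0;
        while (copied < size) {
            int toRead = (int) Math.min(buf.length, size - copied);
            int n = in.read(buf, 0, toRead);
            if (n < 0) {
                break; // source shorter than offset + size
            }
            out.write(buf, 0, n);
            copied += n;
        }
        return copied;
    }
}
```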

Aggregations

FileContext (org.apache.hadoop.fs.FileContext): 84
Path (org.apache.hadoop.fs.Path): 71
Test (org.junit.Test): 34
Configuration (org.apache.hadoop.conf.Configuration): 33
IOException (java.io.IOException): 29
File (java.io.File): 16
YarnConfiguration (org.apache.hadoop.yarn.conf.YarnConfiguration): 14
FileStatus (org.apache.hadoop.fs.FileStatus): 13
HashMap (java.util.HashMap): 12
FsPermission (org.apache.hadoop.fs.permission.FsPermission): 10
ArrayList (java.util.ArrayList): 9
FileSystem (org.apache.hadoop.fs.FileSystem): 8
LocalResource (org.apache.hadoop.yarn.api.records.LocalResource): 8
ExecutorService (java.util.concurrent.ExecutorService): 7
ContainerId (org.apache.hadoop.yarn.api.records.ContainerId): 7
URISyntaxException (java.net.URISyntaxException): 6
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap): 6
ExecutionException (java.util.concurrent.ExecutionException): 6
Future (java.util.concurrent.Future): 6
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream): 6