Example 6 with LocalFileSystem

Uses org.apache.hadoop.fs.LocalFileSystem in project hadoop by apache.

Class TestTraceUtils, method testTracingGlobber:

/**
   * Test tracing the globber.  This is a regression test for HDFS-9187.
   */
@Test
public void testTracingGlobber() throws Exception {
    // Bypass the normal FileSystem object creation path by just creating an
    // instance of a subclass.
    FileSystem fs = new LocalFileSystem();
    fs.initialize(new URI("file:///"), new Configuration());
    fs.globStatus(new Path("/"));
    fs.close();
}
Also used: Path (org.apache.hadoop.fs.Path), HTraceConfiguration (org.apache.htrace.core.HTraceConfiguration), Configuration (org.apache.hadoop.conf.Configuration), LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), FileSystem (org.apache.hadoop.fs.FileSystem), URI (java.net.URI), Test (org.junit.Test)

Example 7 with LocalFileSystem

Uses org.apache.hadoop.fs.LocalFileSystem in project hadoop by apache.

Class TestDiskChecker, method _mkdirs:

private void _mkdirs(boolean exists, FsPermission before, FsPermission after) throws Throwable {
    File localDir = make(stub(File.class).returning(exists).from.exists());
    when(localDir.mkdir()).thenReturn(true);
    // use default stubs
    Path dir = mock(Path.class);
    LocalFileSystem fs = make(stub(LocalFileSystem.class).returning(localDir).from.pathToFile(dir));
    FileStatus stat = make(stub(FileStatus.class).returning(after).from.getPermission());
    when(fs.getFileStatus(dir)).thenReturn(stat);
    try {
        DiskChecker.mkdirsWithExistsAndPermissionCheck(fs, dir, before);
        if (!exists)
            verify(fs).setPermission(dir, before);
        else {
            verify(fs).getFileStatus(dir);
            verify(stat).getPermission();
        }
    } catch (DiskErrorException e) {
        if (before != after)
            assertTrue(e.getMessage().startsWith("Incorrect permission"));
    }
}
Also used: Path (org.apache.hadoop.fs.Path), FileStatus (org.apache.hadoop.fs.FileStatus), LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), DiskErrorException (org.apache.hadoop.util.DiskChecker.DiskErrorException)

Example 8 with LocalFileSystem

Uses org.apache.hadoop.fs.LocalFileSystem in project hadoop by apache.

Class TestBloomMapFile, method setUp:

@Before
public void setUp() throws Exception {
    LocalFileSystem fs = FileSystem.getLocal(conf);
    if (fs.exists(TEST_ROOT) && !fs.delete(TEST_ROOT, true)) {
        fail("Can't clean up test root dir");
    }
    fs.mkdirs(TEST_ROOT);
}
Also used: LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), Before (org.junit.Before)

Example 9 with LocalFileSystem

Uses org.apache.hadoop.fs.LocalFileSystem in project crunch by cloudera.

Class CompositePathIterableTest, method testCreate_DirectoryPresentButNoFiles:

@Test
public void testCreate_DirectoryPresentButNoFiles() throws IOException {
    String inputFilePath = Files.createTempDir().getAbsolutePath();
    Configuration conf = new Configuration();
    LocalFileSystem local = FileSystem.getLocal(conf);
    Iterable<String> iterable = CompositePathIterable.create(local, new Path(inputFilePath), new TextFileReaderFactory<String>(Writables.strings(), conf));
    assertTrue(Lists.newArrayList(iterable).isEmpty());
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), Test (org.junit.Test)

Example 10 with LocalFileSystem

Uses org.apache.hadoop.fs.LocalFileSystem in project hadoop by apache.

Class StorageLocation, method makeBlockPoolDir:

/**
   * Create physical directory for block pools on the data node.
   *
   * @param blockPoolID
   *          the block pool id
   * @param conf
   *          Configuration instance to use.
   * @throws IOException on errors
   */
public void makeBlockPoolDir(String blockPoolID, Configuration conf) throws IOException {
    if (conf == null) {
        conf = new HdfsConfiguration();
    }
    LocalFileSystem localFS = FileSystem.getLocal(conf);
    FsPermission permission = new FsPermission(conf.get(DFSConfigKeys.DFS_DATANODE_DATA_DIR_PERMISSION_KEY, DFSConfigKeys.DFS_DATANODE_DATA_DIR_PERMISSION_DEFAULT));
    File data = new File(getBpURI(blockPoolID, Storage.STORAGE_DIR_CURRENT));
    try {
        DiskChecker.checkDir(localFS, new Path(data.toURI()), permission);
    } catch (IOException e) {
        DataStorage.LOG.warn("Invalid directory in: " + data.getCanonicalPath() + ": " + e.getMessage());
    }
}
Also used: Path (org.apache.hadoop.fs.Path), LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), FsPermission (org.apache.hadoop.fs.permission.FsPermission), IOException (java.io.IOException), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), File (java.io.File)

Aggregations

LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem): 121
Path (org.apache.hadoop.fs.Path): 77
Test (org.junit.Test): 64
Configuration (org.apache.hadoop.conf.Configuration): 57
FileSystem (org.apache.hadoop.fs.FileSystem): 35
IOException (java.io.IOException): 33
File (java.io.File): 23
NewTableConfiguration (org.apache.accumulo.core.client.admin.NewTableConfiguration): 23
SamplerConfiguration (org.apache.accumulo.core.client.sample.SamplerConfiguration): 23
SummarizerConfiguration (org.apache.accumulo.core.client.summary.SummarizerConfiguration): 23
DefaultConfiguration (org.apache.accumulo.core.conf.DefaultConfiguration): 23
Key (org.apache.accumulo.core.data.Key): 22
Value (org.apache.accumulo.core.data.Value): 22
ArrayList (java.util.ArrayList): 19
ExecutorService (java.util.concurrent.ExecutorService): 15
Future (java.util.concurrent.Future): 15
Scanner (org.apache.accumulo.core.client.Scanner): 14
DataSegment (org.apache.druid.timeline.DataSegment): 13
DataSegmentPusher (org.apache.druid.segment.loading.DataSegmentPusher): 8
HdfsDataSegmentPusher (org.apache.druid.storage.hdfs.HdfsDataSegmentPusher): 8
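
The examples above share one pattern: obtain a LocalFileSystem from a Configuration, prepare a working directory, and operate on it through the generic FileSystem API. The following is a minimal sketch of that pattern, not code from any of the projects listed; it assumes hadoop-common is on the classpath, and the class name LocalFsSketch and the temp-directory path are illustrative only.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class LocalFsSketch {

    /** Writes a file through LocalFileSystem and verifies it round-trips. */
    public static boolean roundTrip() throws Exception {
        LocalFileSystem fs = FileSystem.getLocal(new Configuration());
        Path root = new Path(System.getProperty("java.io.tmpdir"), "localfs-sketch");
        // Clean up any previous run before recreating the directory --
        // the same guard TestBloomMapFile.setUp uses above.
        if (fs.exists(root) && !fs.delete(root, true)) {
            return false;
        }
        fs.mkdirs(root);
        Path file = new Path(root, "data.txt");
        try (FSDataOutputStream out = fs.create(file)) {
            out.writeUTF("hello");
        }
        String read;
        try (FSDataInputStream in = fs.open(file)) {
            read = in.readUTF();
        }
        fs.delete(root, true);
        return "hello".equals(read);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("roundTrip=" + roundTrip());
    }
}
```

Because LocalFileSystem implements the same FileSystem contract as HDFS, tests like the ones above can exercise code paths such as globStatus or DiskChecker against the local disk without standing up a cluster.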