Example 1 with FsServerDefaults

Use of org.apache.hadoop.fs.FsServerDefaults in project hadoop (Apache), from the testGetServerDefaults method of the TestDistributedFileSystem class.

@Test(timeout = 60000)
public void testGetServerDefaults() throws IOException {
    Configuration conf = new HdfsConfiguration();
    // Spin up a single-node, in-process HDFS cluster for the test.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
        cluster.waitActive();
        DistributedFileSystem dfs = cluster.getFileSystem();
        // The NameNode should always report a non-null set of server defaults.
        FsServerDefaults fsServerDefaults = dfs.getServerDefaults();
        Assert.assertNotNull(fsServerDefaults);
    } finally {
        cluster.shutdown();
    }
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), FsServerDefaults (org.apache.hadoop.fs.FsServerDefaults), Test (org.junit.Test)

Example 2 with FsServerDefaults

Use of org.apache.hadoop.fs.FsServerDefaults in project hadoop (Apache), from the testServerDefaults method of the TestFileCreation class.

/**
 * Test that server default values can be retrieved on the client side.
 */
@Test
public void testServerDefaults() throws IOException {
    Configuration conf = new HdfsConfiguration();
    conf.setLong(DFS_BLOCK_SIZE_KEY, DFS_BLOCK_SIZE_DEFAULT);
    conf.setInt(DFS_BYTES_PER_CHECKSUM_KEY, DFS_BYTES_PER_CHECKSUM_DEFAULT);
    conf.setInt(DFS_CLIENT_WRITE_PACKET_SIZE_KEY, DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT);
    // Bump replication above the default so the test can verify that the
    // client sees the server-side (non-default) value.
    conf.setInt(DFS_REPLICATION_KEY, DFS_REPLICATION_DEFAULT + 1);
    conf.setInt(IO_FILE_BUFFER_SIZE_KEY, IO_FILE_BUFFER_SIZE_DEFAULT);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(DFSConfigKeys.DFS_REPLICATION_DEFAULT + 1).build();
    cluster.waitActive();
    FileSystem fs = cluster.getFileSystem();
    try {
        FsServerDefaults serverDefaults = fs.getServerDefaults(new Path("/"));
        assertEquals(DFS_BLOCK_SIZE_DEFAULT, serverDefaults.getBlockSize());
        assertEquals(DFS_BYTES_PER_CHECKSUM_DEFAULT, serverDefaults.getBytesPerChecksum());
        assertEquals(DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT, serverDefaults.getWritePacketSize());
        assertEquals(DFS_REPLICATION_DEFAULT + 1, serverDefaults.getReplication());
        assertEquals(IO_FILE_BUFFER_SIZE_DEFAULT, serverDefaults.getFileBufferSize());
    } finally {
        fs.close();
        cluster.shutdown();
    }
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), FsServerDefaults (org.apache.hadoop.fs.FsServerDefaults), FileSystem (org.apache.hadoop.fs.FileSystem), Test (org.junit.Test)

Example 3 with FsServerDefaults

Use of org.apache.hadoop.fs.FsServerDefaults in project lucene-solr (Apache), from the getOutputStream method of the HdfsFileWriter class.

private static final OutputStream getOutputStream(FileSystem fileSystem, Path path) throws IOException {
    Configuration conf = fileSystem.getConf();
    // Ask the filesystem for its server-side defaults for this path.
    FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);
    EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
    if (Boolean.getBoolean(HDFS_SYNC_BLOCK)) {
        flags.add(CreateFlag.SYNC_BLOCK);
    }
    // Create the file using the server-provided buffer size, replication
    // factor, and block size rather than hard-coded client-side values.
    return fileSystem.create(path,
            FsPermission.getDefault().applyUMask(FsPermission.getUMask(conf)),
            flags,
            fsDefaults.getFileBufferSize(),
            fsDefaults.getReplication(),
            fsDefaults.getBlockSize(),
            null);
}
Also used : CreateFlag(org.apache.hadoop.fs.CreateFlag) Configuration(org.apache.hadoop.conf.Configuration) FsServerDefaults(org.apache.hadoop.fs.FsServerDefaults)
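As a standalone illustration (not taken from any of the projects above), here is a minimal sketch of inspecting the server defaults from a plain client. It assumes hadoop-common is on the classpath and that fs.defaultFS in the loaded Configuration points at a reachable filesystem; with an empty configuration it resolves to the local filesystem, which also reports defaults.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

public class PrintServerDefaults {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Resolve the filesystem named by fs.defaultFS (assumed reachable).
        FileSystem fs = FileSystem.get(conf);
        try {
            // Server defaults are resolved per-path to support federated setups.
            FsServerDefaults d = fs.getServerDefaults(new Path("/"));
            System.out.println("blockSize=" + d.getBlockSize());
            System.out.println("bytesPerChecksum=" + d.getBytesPerChecksum());
            System.out.println("writePacketSize=" + d.getWritePacketSize());
            System.out.println("replication=" + d.getReplication());
            System.out.println("fileBufferSize=" + d.getFileBufferSize());
        } finally {
            fs.close();
        }
    }
}
```

Reading these values from the server, as the HdfsFileWriter example above does, keeps clients consistent with cluster-wide settings instead of duplicating them in client configuration.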

Aggregations

- Configuration (org.apache.hadoop.conf.Configuration): 3 uses
- FsServerDefaults (org.apache.hadoop.fs.FsServerDefaults): 3 uses
- Test (org.junit.Test): 2 uses
- CreateFlag (org.apache.hadoop.fs.CreateFlag): 1 use
- FileSystem (org.apache.hadoop.fs.FileSystem): 1 use
- Path (org.apache.hadoop.fs.Path): 1 use