Example 81 with HdfsConfiguration

Use of org.apache.hadoop.hdfs.HdfsConfiguration in project hadoop by apache, in class TestBlockManager, method testBlockReportQueueing.

@Test
public void testBlockReportQueueing() throws Exception {
    Configuration conf = new HdfsConfiguration();
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
        cluster.waitActive();
        final FSNamesystem fsn = cluster.getNamesystem();
        final BlockManager bm = fsn.getBlockManager();
        final ExecutorService executor = Executors.newCachedThreadPool();
        final CyclicBarrier startBarrier = new CyclicBarrier(2);
        final CountDownLatch endLatch = new CountDownLatch(3);
        final CountDownLatch doneLatch = new CountDownLatch(1);
        // create a task intended to block while processing, thus causing
        // the queue to back up.  simulates how a full block report (BR)
        // is processed.
        FutureTask<?> blockingOp = new FutureTask<Void>(new Callable<Void>() {

            @Override
            public Void call() throws IOException {
                bm.runBlockOp(new Callable<Void>() {

                    @Override
                    public Void call() throws InterruptedException, BrokenBarrierException {
                        // use a barrier to control the blocking.
                        startBarrier.await();
                        endLatch.countDown();
                        return null;
                    }
                });
                // signal that runBlockOp returned
                doneLatch.countDown();
                return null;
            }
        });
        // create an async task.  simulates how an incremental block
        // report (IBR) is processed.
        Callable<?> asyncOp = new Callable<Void>() {

            @Override
            public Void call() throws IOException {
                bm.enqueueBlockOp(new Runnable() {

                    @Override
                    public void run() {
                        // use the latch to signal if the op has run.
                        endLatch.countDown();
                    }
                });
                return null;
            }
        };
        // calling get forces its execution so we can test if it's blocked.
        Future<?> blockedFuture = executor.submit(blockingOp);
        boolean isBlocked = false;
        try {
            // wait 1s for the future to block.  it should run instantaneously.
            blockedFuture.get(1, TimeUnit.SECONDS);
        } catch (TimeoutException te) {
            isBlocked = true;
        }
        assertTrue(isBlocked);
        // should effectively return immediately since calls are queued.
        // however they should be backed up in the queue behind the blocking
        // operation.
        executor.submit(asyncOp).get(1, TimeUnit.SECONDS);
        executor.submit(asyncOp).get(1, TimeUnit.SECONDS);
        // check the async calls are queued, and first is still blocked.
        assertEquals(2, bm.getBlockOpQueueLength());
        assertFalse(blockedFuture.isDone());
        // unblock the queue, wait for last op to complete, check the blocked
        // call has returned
        startBarrier.await(1, TimeUnit.SECONDS);
        assertTrue(endLatch.await(1, TimeUnit.SECONDS));
        assertEquals(0, bm.getBlockOpQueueLength());
        assertTrue(doneLatch.await(1, TimeUnit.SECONDS));
    } finally {
        cluster.shutdown();
    }
}
Also used : MiniDFSCluster(org.apache.hadoop.hdfs.MiniDFSCluster) Configuration(org.apache.hadoop.conf.Configuration) HdfsConfiguration(org.apache.hadoop.hdfs.HdfsConfiguration) IOException(java.io.IOException) CountDownLatch(java.util.concurrent.CountDownLatch) Callable(java.util.concurrent.Callable) CyclicBarrier(java.util.concurrent.CyclicBarrier) FutureTask(java.util.concurrent.FutureTask) ExecutorService(java.util.concurrent.ExecutorService) FSNamesystem(org.apache.hadoop.hdfs.server.namenode.FSNamesystem) TimeoutException(java.util.concurrent.TimeoutException) Test(org.junit.Test)
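The pattern above, using a timed Future.get to prove an operation is stuck behind the queue, works with any executor. A minimal, self-contained sketch of the same technique, using only JDK types (the class name is illustrative, not from the Hadoop source):

import java.util.concurrent.Callable;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BlockedOpCheck {
    public static void main(String[] args) throws Exception {
        final ExecutorService executor = Executors.newSingleThreadExecutor();
        final CyclicBarrier startBarrier = new CyclicBarrier(2);
        // the submitted op parks on the barrier until main releases it.
        Future<?> blockedFuture = executor.submit(new Callable<Void>() {

            @Override
            public Void call() throws Exception {
                startBarrier.await();
                return null;
            }
        });
        boolean isBlocked = false;
        try {
            // a short timed get() proves the op has not completed yet.
            blockedFuture.get(1, TimeUnit.SECONDS);
        } catch (TimeoutException te) {
            isBlocked = true;
        }
        System.out.println("blocked as expected: " + isBlocked);
        // release the barrier, then verify the op now completes.
        startBarrier.await(1, TimeUnit.SECONDS);
        blockedFuture.get(1, TimeUnit.SECONDS);
        executor.shutdown();
    }
}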

Example 82 with HdfsConfiguration

Use of org.apache.hadoop.hdfs.HdfsConfiguration in project hadoop by apache, in class TestComputeInvalidateWork, method setup.

@Before
public void setup() throws Exception {
    conf = new HdfsConfiguration();
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(NUM_OF_DATANODES).build();
    cluster.waitActive();
    namesystem = cluster.getNamesystem();
    bm = namesystem.getBlockManager();
    nodes = bm.getDatanodeManager().getHeartbeatManager().getDatanodes();
    assertEquals(NUM_OF_DATANODES, nodes.length);
}
Also used : MiniDFSCluster(org.apache.hadoop.hdfs.MiniDFSCluster) HdfsConfiguration(org.apache.hadoop.hdfs.HdfsConfiguration) Before(org.junit.Before)
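The excerpt shows only the @Before half of the fixture. A minimal sketch of the matching @After teardown such a setup typically pairs with, assuming the same cluster field as above (org.junit.After is the only extra import):

@After
public void tearDown() {
    if (cluster != null) {
        cluster.shutdown();
        cluster = null;
    }
}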

Example 83 with HdfsConfiguration

Use of org.apache.hadoop.hdfs.HdfsConfiguration in project hadoop by apache, in class TestBlockManager, method testAsyncIBR.

// spam the block manager with IBRs to verify queuing is occurring.
@Test
public void testAsyncIBR() throws Exception {
    Logger.getRootLogger().setLevel(Level.WARN);
    // will create files with many small blocks.
    final int blkSize = 4 * 1024;
    final int fileSize = blkSize * 100;
    final byte[] buf = new byte[2 * blkSize];
    final int numWriters = 4;
    final int repl = 3;
    final CyclicBarrier barrier = new CyclicBarrier(numWriters);
    final CountDownLatch writeLatch = new CountDownLatch(numWriters);
    final AtomicBoolean failure = new AtomicBoolean();
    final Configuration conf = new HdfsConfiguration();
    // lower the namenode's minimum block size so the tiny blocks are legal.
    conf.setLong(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, blkSize);
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(8).build();
    try {
        cluster.waitActive();
        // create multiple writer threads to create a file with many blocks.
        // will test that concurrent writing causes IBR batching in the NN
        Thread[] writers = new Thread[numWriters];
        for (int i = 0; i < writers.length; i++) {
            final Path p = new Path("/writer" + i);
            writers[i] = new Thread(new Runnable() {

                @Override
                public void run() {
                    try {
                        FileSystem fs = cluster.getFileSystem();
                        FSDataOutputStream os = fs.create(p, true, buf.length, (short) repl, blkSize);
                        // align writers for maximum chance of IBR batching.
                        barrier.await();
                        int remaining = fileSize;
                        while (remaining > 0) {
                            os.write(buf);
                            remaining -= buf.length;
                        }
                        os.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                        failure.set(true);
                    }
                    // let main thread know we are done.
                    writeLatch.countDown();
                }
            });
            writers[i].start();
        }
        // when and how many IBRs are queued is indeterminate, so just watch
        // the metrics and verify something was queued at some point during
        // execution.
        boolean sawQueued = false;
        while (!writeLatch.await(10, TimeUnit.MILLISECONDS)) {
            assertFalse(failure.get());
            MetricsRecordBuilder rb = getMetrics("NameNodeActivity");
            long queued = MetricsAsserts.getIntGauge("BlockOpsQueued", rb);
            sawQueued |= (queued > 0);
        }
        assertFalse(failure.get());
        assertTrue(sawQueued);
        // verify that batching of the IBRs occurred.
        MetricsRecordBuilder rb = getMetrics("NameNodeActivity");
        long batched = MetricsAsserts.getLongCounter("BlockOpsBatched", rb);
        assertTrue(batched > 0);
    } finally {
        cluster.shutdown();
    }
}
Also used : Path(org.apache.hadoop.fs.Path) MiniDFSCluster(org.apache.hadoop.hdfs.MiniDFSCluster) Configuration(org.apache.hadoop.conf.Configuration) HdfsConfiguration(org.apache.hadoop.hdfs.HdfsConfiguration) MetricsRecordBuilder(org.apache.hadoop.metrics2.MetricsRecordBuilder) CountDownLatch(java.util.concurrent.CountDownLatch) TimeoutException(java.util.concurrent.TimeoutException) IOException(java.io.IOException) BrokenBarrierException(java.util.concurrent.BrokenBarrierException) RemoteException(org.apache.hadoop.ipc.RemoteException) CyclicBarrier(java.util.concurrent.CyclicBarrier) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) FileSystem(org.apache.hadoop.fs.FileSystem) DistributedFileSystem(org.apache.hadoop.hdfs.DistributedFileSystem) FSDataOutputStream(org.apache.hadoop.fs.FSDataOutputStream) Test(org.junit.Test)
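The poll-while-waiting loop in this test is a reusable trick: a short timed await doubles as both the sleep between samples and the exit test once the workers finish. A self-contained sketch of the same loop using only JDK types (the class name and the sampled counter are illustrative; the counter stands in for the NameNode's BlockOpsQueued gauge):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PollWhileWaiting {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        final AtomicInteger queued = new AtomicInteger();
        Thread worker = new Thread(new Runnable() {

            @Override
            public void run() {
                for (int i = 0; i < 100; i++) {
                    // stands in for work that increments the watched gauge.
                    queued.incrementAndGet();
                }
                done.countDown();
            }
        });
        worker.start();
        boolean sawQueued = false;
        // a short timed await() is both the sampling interval and the
        // exit test once the worker finishes.
        while (!done.await(10, TimeUnit.MILLISECONDS)) {
            sawQueued |= (queued.get() > 0);
        }
        // take one final sample in case the worker finished before the
        // loop body ever ran.
        sawQueued |= (queued.get() > 0);
        System.out.println("sawQueued = " + sawQueued);
    }
}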

Example 84 with HdfsConfiguration

Use of org.apache.hadoop.hdfs.HdfsConfiguration in project hadoop by apache, in class BaseReplicationPolicyTest, method setupCluster.

@Before
public void setupCluster() throws Exception {
    Configuration conf = new HdfsConfiguration();
    dataNodes = getDatanodeDescriptors(conf);
    FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
    conf.set(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, "0.0.0.0:0");
    File baseDir = PathUtils.getTestDir(TestReplicationPolicy.class);
    conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, new File(baseDir, "name").getPath());
    conf.set(DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_KEY, blockPlacementPolicy);
    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY, true);
    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY, true);
    DFSTestUtil.formatNameNode(conf);
    namenode = new NameNode(conf);
    nameNodeRpc = namenode.getRpcServer();
    final BlockManager bm = namenode.getNamesystem().getBlockManager();
    replicator = bm.getBlockPlacementPolicy();
    cluster = bm.getDatanodeManager().getNetworkTopology();
    dnManager = bm.getDatanodeManager();
    // construct network topology
    for (int i = 0; i < dataNodes.length; i++) {
        cluster.add(dataNodes[i]);
        bm.getDatanodeManager().getHeartbeatManager().addDatanode(dataNodes[i]);
        bm.getDatanodeManager().getHeartbeatManager().updateDnStat(dataNodes[i]);
    }
    updateHeartbeatWithUsage();
}
Also used : NameNode(org.apache.hadoop.hdfs.server.namenode.NameNode) Configuration(org.apache.hadoop.conf.Configuration) HdfsConfiguration(org.apache.hadoop.hdfs.HdfsConfiguration) File(java.io.File) Before(org.junit.Before)
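Note that this base class starts a bare NameNode rather than a MiniDFSCluster, because placement policy can be exercised against a synthetic NetworkTopology with no real datanodes. The getDatanodeDescriptors call is left to subclasses; a hypothetical sketch of how a concrete test might supply them, assuming the DFSTestUtil.getDatanodeDescriptor(ip, rack) helper from the same test utilities (IPs and rack names are illustrative):

@Override
DatanodeDescriptor[] getDatanodeDescriptors(Configuration conf) {
    // three nodes across two racks, enough to exercise rack awareness.
    return new DatanodeDescriptor[] {
        DFSTestUtil.getDatanodeDescriptor("1.1.1.1", "/rack1"),
        DFSTestUtil.getDatanodeDescriptor("2.2.2.2", "/rack1"),
        DFSTestUtil.getDatanodeDescriptor("3.3.3.3", "/rack2")
    };
}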

Example 85 with HdfsConfiguration

Use of org.apache.hadoop.hdfs.HdfsConfiguration in project hadoop by apache, in class TestBalancer, method testBalancerCliWithIncludeListWithPortsInAFile.

/**
   * Test a cluster with even distribution; then add three nodes to the
   * cluster and run the balancer with two of the nodes in the include
   * list.
   */
@Test(timeout = 100000)
public void testBalancerCliWithIncludeListWithPortsInAFile() throws Exception {
    final Configuration conf = new HdfsConfiguration();
    initConf(conf);
    doTest(conf, new long[] { CAPACITY, CAPACITY }, new String[] { RACK0, RACK1 }, CAPACITY, RACK2, new PortNumberBasedNodes(3, 0, 1), true, true);
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) HdfsConfiguration(org.apache.hadoop.hdfs.HdfsConfiguration) Test(org.junit.Test)
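All five examples share the same skeleton: construct an HdfsConfiguration (which layers hdfs-default.xml and hdfs-site.xml on top of the core resources a plain Configuration loads), hand it to a MiniDFSCluster.Builder, wait for the cluster, and shut it down in a finally block. A minimal self-contained sketch of that template (the class name, path, and node count are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniClusterTemplate {
    public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(3)
            .build();
        try {
            // block until the namenode and datanodes are fully up.
            cluster.waitActive();
            FileSystem fs = cluster.getFileSystem();
            fs.mkdirs(new Path("/example"));
        } finally {
            cluster.shutdown();
        }
    }
}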

Aggregations

HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 454
Configuration (org.apache.hadoop.conf.Configuration): 311
Test (org.junit.Test): 311
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 267
Path (org.apache.hadoop.fs.Path): 152
FileSystem (org.apache.hadoop.fs.FileSystem): 94
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 92
File (java.io.File): 72
IOException (java.io.IOException): 69
Before (org.junit.Before): 56
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock): 40
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream): 35
MetricsRecordBuilder (org.apache.hadoop.metrics2.MetricsRecordBuilder): 33
DataNode (org.apache.hadoop.hdfs.server.datanode.DataNode): 30
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 27
RandomAccessFile (java.io.RandomAccessFile): 22
ArrayList (java.util.ArrayList): 20
NameNodeFile (org.apache.hadoop.hdfs.server.namenode.NNStorage.NameNodeFile): 20
URI (java.net.URI): 19
FsPermission (org.apache.hadoop.fs.permission.FsPermission): 19