
Example 1 with NoMlockCacheManipulator

Use of org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator in project hadoop by apache.

From the class TestFsDatasetCache, method setUp.

@Before
public void setUp() throws Exception {
    conf = new HdfsConfiguration();
    conf.setLong(DFSConfigKeys.DFS_NAMENODE_PATH_BASED_CACHE_REFRESH_INTERVAL_MS, 100);
    conf.setLong(DFSConfigKeys.DFS_CACHEREPORT_INTERVAL_MSEC_KEY, 500);
    conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, BLOCK_SIZE);
    conf.setLong(DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_KEY, CACHE_CAPACITY);
    conf.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
    conf.setInt(DFSConfigKeys.DFS_DATANODE_FSDATASETCACHE_MAX_THREADS_PER_VOLUME_KEY, 10);
    // Save the current cache manipulator and install one that skips real mlock calls,
    // so the caching code paths can be exercised without locked-memory privileges.
    prevCacheManipulator = NativeIO.POSIX.getCacheManipulator();
    NativeIO.POSIX.setCacheManipulator(new NoMlockCacheManipulator());
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    cluster.waitActive();
    fs = cluster.getFileSystem();
    nn = cluster.getNameNode();
    fsImage = nn.getFSImage();
    dn = cluster.getDataNodes().get(0);
    fsd = dn.getFSDataset();
    spyNN = InternalDataNodeTestUtils.spyOnBposToNN(dn, nn);
}
Also used: NoMlockCacheManipulator (org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator), MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), Before (org.junit.Before)
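
The setUp above saves the active manipulator in prevCacheManipulator, and the comment in Example 5 notes that an @After method reinstalls it. That teardown is not shown on this page; the following is only a minimal sketch of what it could look like, assuming the field names from the setUp above and org.junit.After (the exact body in TestFsDatasetCache may differ):

@After
public void tearDown() throws Exception {
    // Shut down the mini cluster started in setUp.
    if (cluster != null) {
        cluster.shutdown();
        cluster = null;
    }
    // Put back the cache manipulator saved in setUp, so later tests see the
    // real mlock behavior again.
    NativeIO.POSIX.setCacheManipulator(prevCacheManipulator);
}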

Example 2 with NoMlockCacheManipulator

Use of org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator in project hadoop by apache.

From the class TestFsDatasetCache, method testUncachingBlocksBeforeCachingFinishes.

@Test(timeout = 600000)
public void testUncachingBlocksBeforeCachingFinishes() throws Exception {
    LOG.info("beginning testUncachingBlocksBeforeCachingFinishes");
    final int NUM_BLOCKS = 5;
    DFSTestUtil.verifyExpectedCacheUsage(0, 0, fsd);
    // Write a test file
    final Path testFile = new Path("/testCacheBlock");
    final long testFileLen = BLOCK_SIZE * NUM_BLOCKS;
    DFSTestUtil.createFile(fs, testFile, testFileLen, (short) 1, 0xABBAL);
    // Get the details of the written file
    HdfsBlockLocation[] locs = (HdfsBlockLocation[]) fs.getFileBlockLocations(testFile, 0, testFileLen);
    assertEquals("Unexpected number of blocks", NUM_BLOCKS, locs.length);
    final long[] blockSizes = getBlockSizes(locs);
    // Check initial state
    final long cacheCapacity = fsd.getCacheCapacity();
    long cacheUsed = fsd.getCacheUsed();
    long current = 0;
    assertEquals("Unexpected cache capacity", CACHE_CAPACITY, cacheCapacity);
    assertEquals("Unexpected amount of cache used", current, cacheUsed);
    NativeIO.POSIX.setCacheManipulator(new NoMlockCacheManipulator() {

        @Override
        public void mlock(String identifier, ByteBuffer mmap, long length) throws IOException {
            LOG.info("An mlock operation is starting on " + identifier);
            // Stall every mlock call so that the uncache commands issued below
            // arrive while caching is still in progress.
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                Assert.fail();
            }
        }
    });
    // Start caching each block in succession. The cache usage counts should
    // increase, even though caching doesn't complete on any of them.
    for (int i = 0; i < NUM_BLOCKS; i++) {
        setHeartbeatResponse(cacheBlock(locs[i]));
        current = DFSTestUtil.verifyExpectedCacheUsage(current + blockSizes[i], i + 1, fsd);
    }
    setHeartbeatResponse(new DatanodeCommand[] { getResponse(locs, DatanodeProtocol.DNA_UNCACHE) });
    // Wait until all caching jobs have finished cancelling.
    current = DFSTestUtil.verifyExpectedCacheUsage(0, 0, fsd);
    LOG.info("finishing testUncachingBlocksBeforeCachingFinishes");
}
Also used: Path (org.apache.hadoop.fs.Path), NoMlockCacheManipulator (org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator), HdfsBlockLocation (org.apache.hadoop.fs.HdfsBlockLocation), IOException (java.io.IOException), ByteBuffer (java.nio.ByteBuffer), Test (org.junit.Test)

Example 3 with NoMlockCacheManipulator

Use of org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator in project hadoop by apache.

From the class TestFsDatasetCacheRevocation, method setUp.

@Before
public void setUp() throws Exception {
    prevCacheManipulator = NativeIO.POSIX.getCacheManipulator();
    NativeIO.POSIX.setCacheManipulator(new NoMlockCacheManipulator());
    // Allow domain socket paths outside the normally validated locations and
    // create a temporary directory to hold the test's domain sockets.
    DomainSocket.disableBindPathValidation();
    sockDir = new TemporarySocketDirectory();
}
Also used: NoMlockCacheManipulator (org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator), TemporarySocketDirectory (org.apache.hadoop.net.unix.TemporarySocketDirectory), Before (org.junit.Before)

Example 4 with NoMlockCacheManipulator

Use of org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator in project hadoop by apache.

From the class TestCacheDirectives, method setup.

@Before
public void setup() throws Exception {
    conf = createCachingConf();
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(NUM_DATANODES).build();
    cluster.waitActive();
    dfs = cluster.getFileSystem();
    proto = cluster.getNameNodeRpc();
    namenode = cluster.getNameNode();
    prevCacheManipulator = NativeIO.POSIX.getCacheManipulator();
    NativeIO.POSIX.setCacheManipulator(new NoMlockCacheManipulator());
    BlockReaderTestUtil.enableHdfsCachingTracing();
}
Also used: NoMlockCacheManipulator (org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator), MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), Before (org.junit.Before)

Example 5 with NoMlockCacheManipulator

Use of org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator in project hadoop by apache.

From the class TestFsDatasetCache, method testCacheAndUncacheBlockWithRetries.

/**
 * Run testCacheAndUncacheBlock with some failures injected into the mlock
 * call. This tests the ability of the NameNode to resend commands.
 */
@Test(timeout = 600000)
public void testCacheAndUncacheBlockWithRetries() throws Exception {
    // We don't have to save the previous cacheManipulator
    // because it will be reinstalled by the @After function.
    NativeIO.POSIX.setCacheManipulator(new NoMlockCacheManipulator() {

        private final Set<String> seenIdentifiers = new HashSet<String>();

        @Override
        public void mlock(String identifier, ByteBuffer mmap, long length) throws IOException {
            if (seenIdentifiers.contains(identifier)) {
                // mlock succeeds the second time.
                LOG.info("mlocking " + identifier);
                return;
            }
            seenIdentifiers.add(identifier);
            throw new IOException("injecting IOException during mlock of " + identifier);
        }
    });
    testCacheAndUncacheBlock();
}
Also used: NoMlockCacheManipulator (org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator), IOException (java.io.IOException), ByteBuffer (java.nio.ByteBuffer), HashSet (java.util.HashSet), Test (org.junit.Test)

Aggregations

NoMlockCacheManipulator (org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator): 5 uses
Before (org.junit.Before): 3 uses
IOException (java.io.IOException): 2 uses
ByteBuffer (java.nio.ByteBuffer): 2 uses
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 2 uses
Test (org.junit.Test): 2 uses
HashSet (java.util.HashSet): 1 use
HdfsBlockLocation (org.apache.hadoop.fs.HdfsBlockLocation): 1 use
Path (org.apache.hadoop.fs.Path): 1 use
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 1 use
TemporarySocketDirectory (org.apache.hadoop.net.unix.TemporarySocketDirectory): 1 use
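
All five examples share the same pattern: remember the JVM-wide cache manipulator, install a NoMlockCacheManipulator so the HDFS caching code paths can run without real mlock privileges, and put the original manipulator back afterwards. Below is a minimal, self-contained sketch of that pattern; the class name and method bodies are illustrative rather than taken from any of the tests above.

import org.apache.hadoop.io.nativeio.NativeIO;
import org.apache.hadoop.io.nativeio.NativeIO.POSIX.CacheManipulator;
import org.apache.hadoop.io.nativeio.NativeIO.POSIX.NoMlockCacheManipulator;
import org.junit.After;
import org.junit.Before;

public class ExampleCachingTest {

    private CacheManipulator prevCacheManipulator;

    @Before
    public void setUp() {
        // Remember the real manipulator so it can be restored after the test.
        prevCacheManipulator = NativeIO.POSIX.getCacheManipulator();
        // NoMlockCacheManipulator skips the actual mlock call, so caching tests
        // can run without locked-memory privileges; tests may also subclass it
        // anonymously to inject delays or failures, as in Examples 2 and 5.
        NativeIO.POSIX.setCacheManipulator(new NoMlockCacheManipulator());
    }

    @After
    public void tearDown() {
        NativeIO.POSIX.setCacheManipulator(prevCacheManipulator);
    }
}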