
Example 6 with INodesInPath

Use of org.apache.hadoop.hdfs.server.namenode.INodesInPath in project hadoop by apache.

From the class TestSnapshotManager, method testSnapshotLimits.

/**
   * Test that the global limit on snapshots is honored.
   */
@Test(timeout = 10000)
public void testSnapshotLimits() throws Exception {
    // Set up mock objects for SnapshotManager.createSnapshot.
    INodeDirectory ids = mock(INodeDirectory.class);
    FSDirectory fsdir = mock(FSDirectory.class);
    INodesInPath iip = mock(INodesInPath.class);
    SnapshotManager sm = spy(new SnapshotManager(fsdir));
    doReturn(ids).when(sm).getSnapshottableRoot((INodesInPath) anyObject());
    // testMaxSnapshotLimit is a field of the test class; mocking
    // getMaxSnapshotID caps the snapshot ID space at that value.
    doReturn(testMaxSnapshotLimit).when(sm).getMaxSnapshotID();
    // Creating snapshots up to the limit should succeed.
    for (int i = 0; i < testMaxSnapshotLimit; ++i) {
        sm.createSnapshot(iip, "dummy", String.valueOf(i));
    }
    // One more snapshot should exhaust the ID space and fail.
    try {
        sm.createSnapshot(iip, "dummy", "shouldFailSnapshot");
        Assert.fail("Expected SnapshotException not thrown");
    } catch (SnapshotException se) {
        Assert.assertTrue(StringUtils.toLowerCase(se.getMessage()).contains("rollover"));
    }
    // Delete a snapshot to free up a slot.
    sm.deleteSnapshot(iip, "", mock(INode.ReclaimContext.class));
    // Creation still fails: snapshot IDs are monotonically increasing and
    // are never reused, so deleting does not shrink the consumed ID space.
    try {
        sm.createSnapshot(iip, "dummy", "shouldFailSnapshot2");
        Assert.fail("Expected SnapshotException not thrown");
    } catch (SnapshotException se) {
        Assert.assertTrue(StringUtils.toLowerCase(se.getMessage()).contains("rollover"));
    }
}
Also used: INodeDirectory (org.apache.hadoop.hdfs.server.namenode.INodeDirectory), INodesInPath (org.apache.hadoop.hdfs.server.namenode.INodesInPath), FSDirectory (org.apache.hadoop.hdfs.server.namenode.FSDirectory), SnapshotException (org.apache.hadoop.hdfs.protocol.SnapshotException), Test (org.junit.Test)
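
The "rollover" failure the test asserts on comes from the global snapshot ID counter. Below is a minimal sketch of how a monotonically increasing ID counter enforces such a limit; it is not the actual SnapshotManager code, and the names SnapshotIdAllocator, snapshotCounter, and maxSnapshotID are illustrative assumptions.

import org.apache.hadoop.hdfs.protocol.SnapshotException;

// Minimal sketch of a global snapshot-ID limit; NOT the actual
// SnapshotManager implementation. Class and field names are assumptions.
class SnapshotIdAllocator {

    private int snapshotCounter = 0;
    private final int maxSnapshotID;

    SnapshotIdAllocator(int maxSnapshotID) {
        this.maxSnapshotID = maxSnapshotID;
    }

    // IDs only ever increase; deleting a snapshot does not decrement the
    // counter, which is why the test above still fails after a delete.
    int allocateSnapshotId() throws SnapshotException {
        if (snapshotCounter >= maxSnapshotID) {
            // The message contains "rollover", matching the assertion in
            // testSnapshotLimits.
            throw new SnapshotException(
                "Snapshot ID space exhausted: ID rollover is not supported.");
        }
        return ++snapshotCounter;
    }
}

Because the counter never decreases, freeing a snapshot slot (as the test does with deleteSnapshot) does not make new IDs available, so the second createSnapshot is expected to fail as well.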

Example 7 with INodesInPath

Use of org.apache.hadoop.hdfs.server.namenode.INodesInPath in project hadoop by apache.

From the class TestSnapshotReplication, method checkSnapshotFileReplication.

/**
   * Check the replication for both the current file and all its prior snapshots.
   * 
   * @param currentFile
   *          the Path of the current file
   * @param snapshotRepMap
   *          a map from each snapshot path of the current file to the
   *          replication number expected to be recorded in the
   *          corresponding snapshot INode
   * @param expectedBlockRep
   *          the expected block-level replication number
   * @throws Exception
   */
private void checkSnapshotFileReplication(Path currentFile, Map<Path, Short> snapshotRepMap, short expectedBlockRep) throws Exception {
    // First check the replication recorded on each block of the
    // currentFile INode.
    final INodeFile inodeOfCurrentFile = getINodeFile(currentFile);
    for (BlockInfo b : inodeOfCurrentFile.getBlocks()) {
        assertEquals(expectedBlockRep, b.getReplication());
    }
    // Then check replication for every snapshot
    for (Path ss : snapshotRepMap.keySet()) {
        final INodesInPath iip = fsdir.getINodesInPath(ss.toString(), DirOp.READ);
        final INodeFile ssInode = iip.getLastINode().asFile();
        // Block-level replication is shared with the current file,
        // so it always equals expectedBlockRep.
        for (BlockInfo b : ssInode.getBlocks()) {
            assertEquals(expectedBlockRep, b.getReplication());
        }
        // Also check the number derived from INodeFile#getFileReplication
        assertEquals(snapshotRepMap.get(ss).shortValue(), ssInode.getFileReplication(iip.getPathSnapshotId()));
    }
}
Also used: Path (org.apache.hadoop.fs.Path), INodesInPath (org.apache.hadoop.hdfs.server.namenode.INodesInPath), BlockInfo (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo), INodeFile (org.apache.hadoop.hdfs.server.namenode.INodeFile)
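
A hypothetical driver for checkSnapshotFileReplication, assuming the test class's usual fixture: a MiniDFSCluster-backed DistributedFileSystem field hdfs and a snapshottable directory sub (both names are assumptions here, not taken from the snippet, as is the method name). It snapshots the file, then raises its replication, recording the replication each snapshot should report.

@Test
public void testReplicationAfterSnapshots() throws Exception {
    // Map each snapshot copy of the file to the replication it was
    // snapshotted with.
    Map<Path, Short> snapshotRepMap = new HashMap<>();
    Path file = new Path(sub, "file");
    DFSTestUtil.createFile(hdfs, file, 1024L, (short) 3, 0L);
    hdfs.allowSnapshot(sub);
    for (short rep = 3; rep < 6; rep++) {
        // Snapshot first, then bump replication: the snapshot INode should
        // keep reporting the replication in effect when it was taken.
        Path snapshotRoot = hdfs.createSnapshot(sub, "s" + rep);
        snapshotRepMap.put(new Path(snapshotRoot, "file"), rep);
        hdfs.setReplication(file, (short) (rep + 1));
    }
    // Blocks are shared between the current file and its snapshots, so
    // every block reports the current (highest) replication, 6.
    checkSnapshotFileReplication(file, snapshotRepMap, (short) 6);
}

Snapshotting before each setReplication call is what makes the per-snapshot values diverge from the block-level value: the snapshot INode records the file replication at snapshot time, while the shared blocks reflect the latest replication.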

Aggregations

INodesInPath (org.apache.hadoop.hdfs.server.namenode.INodesInPath): 7 uses
INodeDirectory (org.apache.hadoop.hdfs.server.namenode.INodeDirectory): 5 uses
FSDirectory (org.apache.hadoop.hdfs.server.namenode.FSDirectory): 4 uses
Test (org.junit.Test): 4 uses
Path (org.apache.hadoop.fs.Path): 3 uses
NSQuotaExceededException (org.apache.hadoop.hdfs.protocol.NSQuotaExceededException): 2 uses
SnapshotException (org.apache.hadoop.hdfs.protocol.SnapshotException): 2 uses
INode (org.apache.hadoop.hdfs.server.namenode.INode): 2 uses
INodeFile (org.apache.hadoop.hdfs.server.namenode.INodeFile): 2 uses
QuotaCounts (org.apache.hadoop.hdfs.server.namenode.QuotaCounts): 2 uses
DirectoryDiff (org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.DirectoryDiff): 2 uses
IOException (java.io.IOException): 1 use
BlockInfo (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo): 1 use