
Example 1 with FakeTimer

Use of org.apache.hadoop.util.FakeTimer in the Apache Hadoop project.

From the class TestFSNamesystemLock, method testDetailedHoldMetrics.

@Test
public void testDetailedHoldMetrics() throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_LOCK_DETAILED_METRICS_KEY, true);
    FakeTimer timer = new FakeTimer();
    MetricsRegistry registry = new MetricsRegistry("Test");
    MutableRatesWithAggregation rates = registry.newRatesWithAggregation("Test");
    FSNamesystemLock fsLock = new FSNamesystemLock(conf, rates, timer);
    // Two "foo" read-lock holds of 1 ms and 2 ms: expect 2 ops with a 1.5 ms average.
    fsLock.readLock();
    timer.advance(1);
    fsLock.readUnlock("foo");
    fsLock.readLock();
    timer.advance(2);
    fsLock.readUnlock("foo");
    // Re-entrant "bar" read lock held 2 ms in total: only the outermost unlock
    // is recorded, so expect 1 op with a 2 ms average.
    fsLock.readLock();
    timer.advance(1);
    fsLock.readLock();
    timer.advance(1);
    fsLock.readUnlock("bar");
    fsLock.readUnlock("bar");
    // One "baz" write-lock hold of 1 ms.
    fsLock.writeLock();
    timer.advance(1);
    fsLock.writeUnlock("baz");
    MetricsRecordBuilder rb = MetricsAsserts.mockMetricsRecordBuilder();
    rates.snapshot(rb, true);
    assertGauge("FSNReadLockFooAvgTime", 1.5, rb);
    assertCounter("FSNReadLockFooNumOps", 2L, rb);
    assertGauge("FSNReadLockBarAvgTime", 2.0, rb);
    assertCounter("FSNReadLockBarNumOps", 1L, rb);
    assertGauge("FSNWriteLockBazAvgTime", 1.0, rb);
    assertCounter("FSNWriteLockBazNumOps", 1L, rb);
}
Also used: MutableRatesWithAggregation (org.apache.hadoop.metrics2.lib.MutableRatesWithAggregation), MetricsRegistry (org.apache.hadoop.metrics2.lib.MetricsRegistry), Configuration (org.apache.hadoop.conf.Configuration), FakeTimer (org.apache.hadoop.util.FakeTimer), MetricsRecordBuilder (org.apache.hadoop.metrics2.MetricsRecordBuilder), Test (org.junit.Test)
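
All of these tests follow the same pattern: the class under test reads time through an injected timer rather than calling the system clock directly, so a test can move time forward deterministically with advance() instead of sleeping. The sketch below illustrates that pattern, assuming only the advance() call visible in the examples plus a monotonicNow()-style read method; it is not the actual org.apache.hadoop.util.FakeTimer source.

public class FakeTimerSketch {

    // Arbitrary non-zero starting point; advanced manually by tests.
    private long nowMillis = 1_000_000L;

    // What production code reads instead of System.currentTimeMillis().
    public long monotonicNow() {
        return nowMillis;
    }

    // Move the fake clock forward; time only changes when a test says so.
    public void advance(long millis) {
        nowMillis += millis;
    }
}

FSNamesystemLock measures a hold as the difference between two timer readings, so timer.advance(1) between readLock() and readUnlock("foo") makes that hold appear to last exactly 1 ms, which is why the asserted averages above (1.5 ms for "foo", 2 ms for "bar", 1 ms for "baz") come out exact.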

Example 2 with FakeTimer

Use of org.apache.hadoop.util.FakeTimer in the Apache Hadoop project.

From the class TestFSNamesystemLock, method testFSWriteLockLongHoldingReport.

/**
   * Test that when the FSNamesystem write lock is held for a long
   * time, the logger reports it.
   */
@Test(timeout = 45000)
public void testFSWriteLockLongHoldingReport() throws Exception {
    final long writeLockReportingThreshold = 100L;
    final long writeLockSuppressWarningInterval = 10000L;
    Configuration conf = new Configuration();
    conf.setLong(DFSConfigKeys.DFS_NAMENODE_WRITE_LOCK_REPORTING_THRESHOLD_MS_KEY, writeLockReportingThreshold);
    conf.setTimeDuration(DFSConfigKeys.DFS_LOCK_SUPPRESS_WARNING_INTERVAL_KEY, writeLockSuppressWarningInterval, TimeUnit.MILLISECONDS);
    final FakeTimer timer = new FakeTimer();
    final FSNamesystemLock fsnLock = new FSNamesystemLock(conf, null, timer);
    timer.advance(writeLockSuppressWarningInterval);
    LogCapturer logs = LogCapturer.captureLogs(FSNamesystem.LOG);
    GenericTestUtils.setLogLevel(FSNamesystem.LOG, Level.INFO);
    // Don't report if the write lock is held for a short time
    fsnLock.writeLock();
    fsnLock.writeUnlock();
    assertFalse(logs.getOutput().contains(GenericTestUtils.getMethodName()));
    // Report if the write lock is held for a long time
    fsnLock.writeLock();
    timer.advance(writeLockReportingThreshold + 10);
    logs.clearOutput();
    fsnLock.writeUnlock();
    assertTrue(logs.getOutput().contains(GenericTestUtils.getMethodName()));
    // Track but do not report if the write lock is held (interruptibly) for
    // a long time but time since last report does not exceed the suppress
    // warning interval
    fsnLock.writeLockInterruptibly();
    timer.advance(writeLockReportingThreshold + 10);
    logs.clearOutput();
    fsnLock.writeUnlock();
    assertFalse(logs.getOutput().contains(GenericTestUtils.getMethodName()));
    // Track but do not report if it's held for a long time when re-entering
    // write lock but time since last report does not exceed the suppress
    // warning interval
    fsnLock.writeLock();
    timer.advance(writeLockReportingThreshold / 2 + 1);
    fsnLock.writeLockInterruptibly();
    timer.advance(writeLockReportingThreshold / 2 + 1);
    fsnLock.writeLock();
    timer.advance(writeLockReportingThreshold / 2);
    logs.clearOutput();
    fsnLock.writeUnlock();
    assertFalse(logs.getOutput().contains(GenericTestUtils.getMethodName()));
    logs.clearOutput();
    fsnLock.writeUnlock();
    assertFalse(logs.getOutput().contains(GenericTestUtils.getMethodName()));
    logs.clearOutput();
    fsnLock.writeUnlock();
    assertFalse(logs.getOutput().contains(GenericTestUtils.getMethodName()));
    // Report if it's held for a long time and time since last report exceeds
    // the suppress warning interval
    timer.advance(writeLockSuppressWarningInterval);
    fsnLock.writeLock();
    timer.advance(writeLockReportingThreshold + 100);
    logs.clearOutput();
    fsnLock.writeUnlock();
    assertTrue(logs.getOutput().contains(GenericTestUtils.getMethodName()));
    assertTrue(logs.getOutput().contains("Number of suppressed write-lock reports: 2"));
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), LogCapturer (org.apache.hadoop.test.GenericTestUtils.LogCapturer), FakeTimer (org.apache.hadoop.util.FakeTimer), Test (org.junit.Test)
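
The assertions above boil down to two conditions checked at the outermost writeUnlock(): was the lock held longer than the reporting threshold, and has the suppress-warning interval elapsed since the last report? Below is a minimal sketch of that decision with illustrative names, not the real FSNamesystemLock internals (the real code also logs the holder's call stack, which is what lets the test search the log for its own method name).

class WriteLockReportSketch {
    private final long reportingThresholdMs;
    private final long suppressWarningIntervalMs;
    private long lastReportTimeMs;
    private long suppressedReports;

    WriteLockReportSketch(long reportingThresholdMs, long suppressWarningIntervalMs) {
        this.reportingThresholdMs = reportingThresholdMs;
        this.suppressWarningIntervalMs = suppressWarningIntervalMs;
    }

    // Called from the outermost writeUnlock() with the measured hold time.
    void onWriteUnlock(long heldForMs, long nowMs) {
        if (heldForMs < reportingThresholdMs) {
            return; // short holds are never reported
        }
        if (nowMs - lastReportTimeMs < suppressWarningIntervalMs) {
            suppressedReports++; // long hold, but too soon since the last report
            return;
        }
        System.out.println("Write lock held for " + heldForMs + " ms. "
            + "Number of suppressed write-lock reports: " + suppressedReports);
        lastReportTimeMs = nowMs;
        suppressedReports = 0;
    }
}

With the values in the test (threshold 100 ms, suppress interval 10 s), the interruptible hold and the re-entrant hold are both long enough to report but fall inside the suppress interval, so they only increment the counter; the final long hold, taken after advancing the full interval, produces the report that the test checks for with "Number of suppressed write-lock reports: 2".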

Example 3 with FakeTimer

Use of org.apache.hadoop.util.FakeTimer in the Apache Hadoop project.

From the class TestGroupsCaching, method testOnlyOneRequestWhenExpiredEntryExists.

@Test
public void testOnlyOneRequestWhenExpiredEntryExists() throws Exception {
    conf.setLong(CommonConfigurationKeys.HADOOP_SECURITY_GROUPS_CACHE_SECS, 1);
    FakeTimer timer = new FakeTimer();
    final Groups groups = new Groups(conf, timer);
    groups.cacheGroupsAdd(Arrays.asList(myGroups));
    groups.refresh();
    FakeGroupMapping.clearBlackList();
    FakeGroupMapping.setGetGroupsDelayMs(100);
    // We make an initial request to populate the cache
    groups.getGroups("me");
    int startingRequestCount = FakeGroupMapping.getRequestCount();
    // Then expire that entry
    timer.advance(400 * 1000);
    Thread.sleep(100);
    ArrayList<Thread> threads = new ArrayList<Thread>();
    for (int i = 0; i < 10; i++) {
        threads.add(new Thread() {

            public void run() {
                try {
                    assertEquals(2, groups.getGroups("me").size());
                } catch (IOException e) {
                    fail("Should not happen");
                }
            }
        });
    }
    // We start a bunch of threads who all see the cached value
    for (Thread t : threads) {
        t.start();
    }
    for (Thread t : threads) {
        t.join();
    }
    // Only one extra request is made
    assertEquals(startingRequestCount + 1, FakeGroupMapping.getRequestCount());
}
Also used: Groups (org.apache.hadoop.security.Groups), ArrayList (java.util.ArrayList), IOException (java.io.IOException), FakeTimer (org.apache.hadoop.util.FakeTimer), Test (org.junit.Test)
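
What makes the "only one extra request" assertion hold is that the cache deduplicates concurrent loads of the same key, and the fake timer controls when an entry counts as expired. The sketch below shows that combination using a Guava LoadingCache whose Ticker is driven by fake time; it is an illustration of the mechanism, and the names and wiring are assumptions rather than the actual Groups implementation.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;

import com.google.common.base.Ticker;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class FakeTickerCacheSketch {

    // Fake clock in nanoseconds, advanced by the test instead of sleeping.
    private long fakeNanos = 0;

    private final Ticker fakeTicker = new Ticker() {
        @Override
        public long read() {
            return fakeNanos;
        }
    };

    // Counts how many times the (pretend) group-mapping backend is hit.
    private int backendRequests = 0;

    private final LoadingCache<String, List<String>> cache = CacheBuilder.newBuilder()
        .ticker(fakeTicker)
        .expireAfterWrite(1, TimeUnit.SECONDS)
        .build(new CacheLoader<String, List<String>>() {
            @Override
            public List<String> load(String user) {
                backendRequests++;
                return Arrays.asList("grp1", "grp2"); // stand-in for the real lookup
            }
        });

    public void advanceMillis(long millis) {
        fakeNanos += TimeUnit.MILLISECONDS.toNanos(millis);
    }

    public List<String> getGroups(String user) throws Exception {
        return cache.get(user);
    }
}

Guava lets at most one thread load a given key at a time while other readers wait for (or, with refresh semantics, keep serving) the existing value, which is the behaviour the ten threads in the test observe: one extra backend request, and every thread still sees two groups.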

Example 4 with FakeTimer

Use of org.apache.hadoop.util.FakeTimer in the Apache Hadoop project.

From the class TestGroupsCaching, method testCacheEntriesExpire.

@Test
public void testCacheEntriesExpire() throws Exception {
    conf.setLong(CommonConfigurationKeys.HADOOP_SECURITY_GROUPS_CACHE_SECS, 1);
    FakeTimer timer = new FakeTimer();
    final Groups groups = new Groups(conf, timer);
    groups.cacheGroupsAdd(Arrays.asList(myGroups));
    groups.refresh();
    FakeGroupMapping.clearBlackList();
    // We make an entry
    groups.getGroups("me");
    int startingRequestCount = FakeGroupMapping.getRequestCount();
    timer.advance(20 * 1000);
    // Cache entry has expired so it results in a new fetch
    groups.getGroups("me");
    assertEquals(startingRequestCount + 1, FakeGroupMapping.getRequestCount());
}
Also used: Groups (org.apache.hadoop.security.Groups), FakeTimer (org.apache.hadoop.util.FakeTimer), Test (org.junit.Test)

Example 5 with FakeTimer

Use of org.apache.hadoop.util.FakeTimer in the Apache Hadoop project.

From the class TestGroupsCaching, method testExceptionOnBackgroundRefreshHandled.

@Test
public void testExceptionOnBackgroundRefreshHandled() throws Exception {
    conf.setLong(CommonConfigurationKeys.HADOOP_SECURITY_GROUPS_CACHE_SECS, 1);
    conf.setBoolean(CommonConfigurationKeys.HADOOP_SECURITY_GROUPS_CACHE_BACKGROUND_RELOAD, true);
    FakeTimer timer = new FakeTimer();
    final Groups groups = new Groups(conf, timer);
    groups.cacheGroupsAdd(Arrays.asList(myGroups));
    groups.refresh();
    FakeGroupMapping.clearBlackList();
    // We make an initial request to populate the cache
    groups.getGroups("me");
    // add another group
    groups.cacheGroupsAdd(Arrays.asList("grp3"));
    int startingRequestCount = FakeGroupMapping.getRequestCount();
    // Arrange for an exception to occur only on the
    // second call
    FakeGroupMapping.setThrowException(true);
    // Then expire that entry
    timer.advance(4 * 1000);
    // Now get the cache entry - it should return immediately
    // with the old value and the cache will not have completed
    // a request to getGroups yet.
    assertEquals(groups.getGroups("me").size(), 2);
    assertEquals(startingRequestCount, FakeGroupMapping.getRequestCount());
    // Now sleep for a short time and re-check the request count. It should have
    // increased, but the exception means the cache will not have updated
    Thread.sleep(50);
    FakeGroupMapping.setThrowException(false);
    assertEquals(startingRequestCount + 1, FakeGroupMapping.getRequestCount());
    assertEquals(groups.getGroups("me").size(), 2);
    // Now sleep another short time - the 3rd call to getGroups above
    // will have kicked off another refresh that updates the cache
    Thread.sleep(50);
    assertEquals(startingRequestCount + 2, FakeGroupMapping.getRequestCount());
    assertEquals(groups.getGroups("me").size(), 3);
}
Also used: Groups (org.apache.hadoop.security.Groups), FakeTimer (org.apache.hadoop.util.FakeTimer), Test (org.junit.Test)
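
This test relies on the cache serving stale values while a refresh runs in the background, and on a failed refresh leaving the old value in place. Below is a hedged sketch of that mechanism using Guava's refreshAfterWrite together with an asynchronously reloading CacheLoader; the names and the single-thread executor are illustrative assumptions, not the actual Groups wiring.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class BackgroundRefreshSketch {

    private final ExecutorService refreshPool = Executors.newSingleThreadExecutor();

    private final CacheLoader<String, List<String>> loader =
        new CacheLoader<String, List<String>>() {
            @Override
            public List<String> load(String user) throws Exception {
                return fetchGroups(user); // may throw, e.g. when the backend is down
            }
        };

    private final LoadingCache<String, List<String>> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(1, TimeUnit.SECONDS)
        // Reloads run on refreshPool instead of blocking the reading thread.
        .build(CacheLoader.asyncReloading(loader, refreshPool));

    // Stand-in for the real group-mapping call.
    private List<String> fetchGroups(String user) throws Exception {
        return Arrays.asList("grp1", "grp2");
    }

    public List<String> getGroups(String user) throws Exception {
        return cache.get(user);
    }
}

Reading a stale key returns the cached value immediately and schedules a reload on the executor; if fetchGroups throws, the previous value is kept. That matches the test, which still sees 2 groups after the failed refresh and 3 only once a later background reload succeeds.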

Aggregations

FakeTimer (org.apache.hadoop.util.FakeTimer): 33 usages
Test (org.junit.Test): 30 usages
Groups (org.apache.hadoop.security.Groups): 10 usages
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 9 usages
Configuration (org.apache.hadoop.conf.Configuration): 8 usages
StorageLocation (org.apache.hadoop.hdfs.server.datanode.StorageLocation): 6 usages
ListenableFuture (com.google.common.util.concurrent.ListenableFuture): 5 usages
IOException (java.io.IOException): 5 usages
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 3 usages
AtomicLong (java.util.concurrent.atomic.AtomicLong): 3 usages
TimeoutException (java.util.concurrent.TimeoutException): 2 usages
LogCapturer (org.apache.hadoop.test.GenericTestUtils.LogCapturer): 2 usages
Before (org.junit.Before): 2 usages
Optional (com.google.common.base.Optional): 1 usage
File (java.io.File): 1 usage
FileOutputStream (java.io.FileOutputStream): 1 usage
OutputStreamWriter (java.io.OutputStreamWriter): 1 usage
Writer (java.io.Writer): 1 usage
ArrayList (java.util.ArrayList): 1 usage
CountDownLatch (java.util.concurrent.CountDownLatch): 1 usage