Search in sources:

Example 1 with TopAuditLogger

Use of org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger in project hadoop by apache.

From the class TestAuditLogger, method testDisableTopAuditLogger:

/**
   * Tests that TopAuditLogger can be disabled
   */
@Test
public void testDisableTopAuditLogger() throws IOException {
    Configuration conf = new HdfsConfiguration();
    conf.setBoolean(NNTOP_ENABLED_KEY, false);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
        cluster.waitClusterUp();
        List<AuditLogger> auditLoggers = cluster.getNameNode().getNamesystem().getAuditLoggers();
        for (AuditLogger auditLogger : auditLoggers) {
            assertFalse("top audit logger is still hooked in after it is disabled", auditLogger instanceof TopAuditLogger);
        }
    } finally {
        cluster.shutdown();
    }
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), TopAuditLogger (org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger), Test (org.junit.Test)
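
A complementary check is worth having alongside Example 1: when nntop is left enabled, a TopAuditLogger should be among the installed audit loggers. The sketch below is not taken from the Hadoop sources shown here; it mirrors the test setup of Example 1 (same static imports of NNTOP_ENABLED_KEY and the JUnit assertions), and the test name testTopAuditLoggerEnabled is hypothetical.

@Test
public void testTopAuditLoggerEnabled() throws IOException {
    Configuration conf = new HdfsConfiguration();
    // explicitly enable nntop (this is also the default in recent releases)
    conf.setBoolean(NNTOP_ENABLED_KEY, true);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
        cluster.waitClusterUp();
        boolean topLoggerInstalled = false;
        for (AuditLogger auditLogger : cluster.getNameNode().getNamesystem().getAuditLoggers()) {
            topLoggerInstalled |= auditLogger instanceof TopAuditLogger;
        }
        assertTrue("top audit logger should be hooked in when nntop is enabled",
            topLoggerInstalled);
    } finally {
        cluster.shutdown();
    }
}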

Example 2 with TopAuditLogger

Use of org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger in project hadoop by apache.

From the class FSNamesystem, method initAuditLoggers:

private List<AuditLogger> initAuditLoggers(Configuration conf) {
    // Initialize the custom access loggers if configured.
    Collection<String> alClasses = conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
    List<AuditLogger> auditLoggers = Lists.newArrayList();
    if (alClasses != null && !alClasses.isEmpty()) {
        for (String className : alClasses) {
            try {
                AuditLogger logger;
                if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
                    logger = new DefaultAuditLogger();
                } else {
                    logger = (AuditLogger) Class.forName(className).newInstance();
                }
                logger.initialize(conf);
                auditLoggers.add(logger);
            } catch (RuntimeException re) {
                throw re;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
    // Make sure there is at least one logger installed.
    if (auditLoggers.isEmpty()) {
        auditLoggers.add(new DefaultAuditLogger());
    }
    // Add audit logger to calculate top users
    if (topConf.isEnabled) {
        topMetrics = new TopMetrics(conf, topConf.nntopReportingPeriodsMs);
        if (DefaultMetricsSystem.instance().getSource(TOPMETRICS_METRICS_SOURCE_NAME) == null) {
            DefaultMetricsSystem.instance().register(TOPMETRICS_METRICS_SOURCE_NAME, "Top N operations by user", topMetrics);
        }
        auditLoggers.add(new TopAuditLogger(topMetrics));
    }
    return Collections.unmodifiableList(auditLoggers);
}
Also used: TopAuditLogger (org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger), TopMetrics (org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics), AlreadyBeingCreatedException (org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException), InvalidPathException (org.apache.hadoop.fs.InvalidPathException), StandbyException (org.apache.hadoop.ipc.StandbyException), IOException (java.io.IOException), RecoveryInProgressException (org.apache.hadoop.hdfs.protocol.RecoveryInProgressException), SnapshotException (org.apache.hadoop.hdfs.protocol.SnapshotException), HadoopIllegalArgumentException (org.apache.hadoop.HadoopIllegalArgumentException), NotCompliantMBeanException (javax.management.NotCompliantMBeanException), RetriableException (org.apache.hadoop.ipc.RetriableException), UnknownCryptoProtocolVersionException (org.apache.hadoop.hdfs.UnknownCryptoProtocolVersionException), UnresolvedLinkException (org.apache.hadoop.fs.UnresolvedLinkException), FileNotFoundException (java.io.FileNotFoundException), ServiceFailedException (org.apache.hadoop.ha.ServiceFailedException), RollingUpgradeException (org.apache.hadoop.hdfs.protocol.RollingUpgradeException), AccessControlException (org.apache.hadoop.security.AccessControlException), SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException)
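
As the loop over alClasses in Example 2 shows, every class named in dfs.namenode.audit.loggers (DFS_NAMENODE_AUDIT_LOGGERS_KEY) is loaded via Class.forName(className).newInstance(), so a custom logger must be public with a no-argument constructor, and it receives the NameNode configuration through initialize(conf). The StdoutAuditLogger below is a hypothetical sketch of such a plug-in, not part of Hadoop; the callback signature follows the org.apache.hadoop.hdfs.server.namenode.AuditLogger interface.

import java.net.InetAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hdfs.server.namenode.AuditLogger;

// Hypothetical example of a pluggable audit logger; it only prints to stdout.
public class StdoutAuditLogger implements AuditLogger {

    @Override
    public void initialize(Configuration conf) {
        // nothing to configure in this sketch
    }

    @Override
    public void logAuditEvent(boolean succeeded, String userName, InetAddress addr,
            String cmd, String src, String dst, FileStatus status) {
        System.out.println("audit: user=" + userName + " cmd=" + cmd
            + " src=" + src + " allowed=" + succeeded);
    }
}

To have initAuditLoggers() pick such a class up, list its fully qualified name in dfs.namenode.audit.loggers in hdfs-site.xml (the package org.example used here is made up). The special value "default" selects the built-in DefaultAuditLogger, as the DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME branch in the code above shows.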

Aggregations

TopAuditLogger (org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger): 2
FileNotFoundException (java.io.FileNotFoundException): 1
IOException (java.io.IOException): 1
NotCompliantMBeanException (javax.management.NotCompliantMBeanException): 1
HadoopIllegalArgumentException (org.apache.hadoop.HadoopIllegalArgumentException): 1
Configuration (org.apache.hadoop.conf.Configuration): 1
InvalidPathException (org.apache.hadoop.fs.InvalidPathException): 1
UnresolvedLinkException (org.apache.hadoop.fs.UnresolvedLinkException): 1
ServiceFailedException (org.apache.hadoop.ha.ServiceFailedException): 1
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 1
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 1
UnknownCryptoProtocolVersionException (org.apache.hadoop.hdfs.UnknownCryptoProtocolVersionException): 1
AlreadyBeingCreatedException (org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): 1
RecoveryInProgressException (org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): 1
RollingUpgradeException (org.apache.hadoop.hdfs.protocol.RollingUpgradeException): 1
SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException): 1
SnapshotException (org.apache.hadoop.hdfs.protocol.SnapshotException): 1
TopMetrics (org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics): 1
RetriableException (org.apache.hadoop.ipc.RetriableException): 1
StandbyException (org.apache.hadoop.ipc.StandbyException): 1