Search in sources:

Example 1 with ProxyAndInfo

Use of org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo in project hadoop by apache.

From class NameNodeProxies, method createNonHAProxy:

/**
   * Creates an explicitly non-HA-enabled proxy object. Most of the time you
   * don't want to use this, and should instead use {@link NameNodeProxies#createProxy}.
   *
   * @param conf the configuration object
   * @param nnAddr address of the remote NN to connect to
   * @param xface the IPC interface which should be created
   * @param ugi the user who is making the calls on the proxy object
   * @param withRetries certain interfaces have a non-standard retry policy
   * @param fallbackToSimpleAuth set by this method to indicate whether a
   *   secure client fell back to simple auth
   * @return an object containing both the proxy and the associated
   *         delegation token service it corresponds to
   * @throws IOException if the proxy cannot be created
   */
@SuppressWarnings("unchecked")
public static <T> ProxyAndInfo<T> createNonHAProxy(
        Configuration conf, InetSocketAddress nnAddr, Class<T> xface,
        UserGroupInformation ugi, boolean withRetries,
        AtomicBoolean fallbackToSimpleAuth) throws IOException {
    Text dtService = SecurityUtil.buildTokenService(nnAddr);
    T proxy;
    if (xface == ClientProtocol.class) {
        proxy = (T) NameNodeProxiesClient.createNonHAProxyWithClientProtocol(nnAddr, conf, ugi, withRetries, fallbackToSimpleAuth);
    } else if (xface == JournalProtocol.class) {
        proxy = (T) createNNProxyWithJournalProtocol(nnAddr, conf, ugi);
    } else if (xface == NamenodeProtocol.class) {
        proxy = (T) createNNProxyWithNamenodeProtocol(nnAddr, conf, ugi, withRetries);
    } else if (xface == GetUserMappingsProtocol.class) {
        proxy = (T) createNNProxyWithGetUserMappingsProtocol(nnAddr, conf, ugi);
    } else if (xface == RefreshUserMappingsProtocol.class) {
        proxy = (T) createNNProxyWithRefreshUserMappingsProtocol(nnAddr, conf, ugi);
    } else if (xface == RefreshAuthorizationPolicyProtocol.class) {
        proxy = (T) createNNProxyWithRefreshAuthorizationPolicyProtocol(nnAddr, conf, ugi);
    } else if (xface == RefreshCallQueueProtocol.class) {
        proxy = (T) createNNProxyWithRefreshCallQueueProtocol(nnAddr, conf, ugi);
    } else {
        String message = "Unsupported protocol found when creating the proxy " + "connection to NameNode: " + ((xface != null) ? xface.getClass().getName() : "null");
        LOG.error(message);
        throw new IllegalStateException(message);
    }
    return new ProxyAndInfo<T>(proxy, dtService, nnAddr);
}
Also used: ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo), GetUserMappingsProtocol (org.apache.hadoop.tools.GetUserMappingsProtocol), Text (org.apache.hadoop.io.Text), RefreshAuthorizationPolicyProtocol (org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol), JournalProtocol (org.apache.hadoop.hdfs.server.protocol.JournalProtocol)
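
A minimal caller-side sketch, not from the Hadoop source, of how createNonHAProxy might be invoked to obtain a NamenodeProtocol proxy. The NameNode address nn01.example.com:8020 and the class name CreateNonHAProxyExample are hypothetical; everything else follows the signature shown above.

import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.NameNodeProxies;
import org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.UserGroupInformation;

public class CreateNonHAProxyExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // Hypothetical NameNode RPC address; substitute your own.
        InetSocketAddress nnAddr = NetUtils.createSocketAddr("nn01.example.com:8020");
        // createNonHAProxy sets this flag if a secure client falls back to simple auth.
        AtomicBoolean fallbackToSimpleAuth = new AtomicBoolean(false);
        ProxyAndInfo<NamenodeProtocol> proxyAndInfo = NameNodeProxies.createNonHAProxy(
            conf, nnAddr, NamenodeProtocol.class,
            UserGroupInformation.getCurrentUser(), true /* withRetries */,
            fallbackToSimpleAuth);
        NamenodeProtocol proxy = proxyAndInfo.getProxy();
        System.out.println("Created proxy to " + proxyAndInfo.getAddress());
    }
}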

Example 2 with ProxyAndInfo

Use of org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo in project hadoop by apache.

From class DFSAdmin, method saveNamespace:

/**
   * Command to ask the namenode to save the namespace.
   * Usage: hdfs dfsadmin -saveNamespace [-beforeShutdown]
   * @see ClientProtocol#saveNamespace(long, long)
   */
public int saveNamespace(String[] argv) throws IOException {
    final DistributedFileSystem dfs = getDFS();
    final Configuration dfsConf = dfs.getConf();
    long timeWindow = 0;
    long txGap = 0;
    if (argv.length > 1 && "-beforeShutdown".equals(argv[1])) {
        final long checkpointPeriod = dfsConf.getTimeDuration(
            DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
            DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT,
            TimeUnit.SECONDS);
        final long checkpointTxnCount = dfsConf.getLong(
            DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_TXNS_KEY,
            DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT);
        final int toleratePeriodNum = dfsConf.getInt(
            DFSConfigKeys.DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDOWN_KEY,
            DFSConfigKeys.DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDOWN_DEFAULT);
        timeWindow = checkpointPeriod * toleratePeriodNum;
        txGap = checkpointTxnCount * toleratePeriodNum;
        System.out.println("Do checkpoint if necessary before stopping "
            + "namenode. The time window is " + timeWindow + " seconds, and the "
            + "transaction gap is " + txGap);
    }
    URI dfsUri = dfs.getUri();
    boolean isHaEnabled = HAUtilClient.isLogicalUri(dfsConf, dfsUri);
    if (isHaEnabled) {
        String nsId = dfsUri.getHost();
        List<ProxyAndInfo<ClientProtocol>> proxies = HAUtil
            .getProxiesForAllNameNodesInNameservice(dfsConf, nsId, ClientProtocol.class);
        for (ProxyAndInfo<ClientProtocol> proxy : proxies) {
            boolean saved = proxy.getProxy().saveNamespace(timeWindow, txGap);
            if (saved) {
                System.out.println("Save namespace successful for " + proxy.getAddress());
            } else {
                System.out.println("No extra checkpoint has been made for " + proxy.getAddress());
            }
        }
    } else {
        boolean saved = dfs.saveNamespace(timeWindow, txGap);
        if (saved) {
            System.out.println("Save namespace successful");
        } else {
            System.out.println("No extra checkpoint has been made");
        }
    }
    return 0;
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), ClientProtocol (org.apache.hadoop.hdfs.protocol.ClientProtocol), URI (java.net.URI)
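
The non-HA branch can be reproduced as a standalone sketch. Assumptions: fs.defaultFS points at a non-HA HDFS cluster (so the FileSystem cast succeeds), and the class name SaveNamespaceExample is hypothetical. Note that the unconditional form, saveNamespace(0, 0), typically requires the NameNode to be in safe mode.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class SaveNamespaceExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
        // timeWindow = 3600 s, txGap = 1000000: checkpoint only if the last one
        // is over an hour old or more than a million transactions behind.
        boolean saved = dfs.saveNamespace(3600, 1000000);
        System.out.println(saved ? "Save namespace successful"
                                 : "No extra checkpoint has been made");
    }
}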

Example 3 with ProxyAndInfo

Use of org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo in project hadoop by apache.

From class DFSAdmin, method refreshNodes:

/**
   * Command to ask the namenode to reread the hosts and excluded hosts file.
   * Usage: hdfs dfsadmin -refreshNodes
   * @throws IOException
   */
public int refreshNodes() throws IOException {
    DistributedFileSystem dfs = getDFS();
    Configuration dfsConf = dfs.getConf();
    URI dfsUri = dfs.getUri();
    boolean isHaEnabled = HAUtilClient.isLogicalUri(dfsConf, dfsUri);
    if (isHaEnabled) {
        String nsId = dfsUri.getHost();
        List<ProxyAndInfo<ClientProtocol>> proxies = HAUtil
            .getProxiesForAllNameNodesInNameservice(dfsConf, nsId, ClientProtocol.class);
        for (ProxyAndInfo<ClientProtocol> proxy : proxies) {
            proxy.getProxy().refreshNodes();
            System.out.println("Refresh nodes successful for " + proxy.getAddress());
        }
    } else {
        dfs.refreshNodes();
        System.out.println("Refresh nodes successful");
    }
    return 0;
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), ClientProtocol (org.apache.hadoop.hdfs.protocol.ClientProtocol), URI (java.net.URI)
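
The HA loop above stops at the first NameNode that throws, leaving the rest unrefreshed. A hedged variation, not in the Hadoop source, that continues past failures and reports them per node:

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

class RefreshAllNameNodes {
    /** Returns 0 if every NameNode refreshed successfully, -1 otherwise. */
    static int refreshAll(List<ProxyAndInfo<ClientProtocol>> proxies) {
        int failures = 0;
        for (ProxyAndInfo<ClientProtocol> proxy : proxies) {
            try {
                proxy.getProxy().refreshNodes();
                System.out.println("Refresh nodes successful for " + proxy.getAddress());
            } catch (IOException e) {
                failures++;
                System.err.println("Refresh nodes failed for " + proxy.getAddress() + ": " + e);
            }
        }
        return failures == 0 ? 0 : -1;
    }
}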

Example 4 with ProxyAndInfo

Use of org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo in project hadoop by apache.

From class DFSAdmin, method refreshUserToGroupsMappings:

/**
   * Refresh the user-to-groups mappings on the {@link NameNode}.
   * @return exit code 0 on success, non-zero on failure
   * @throws IOException
   */
public int refreshUserToGroupsMappings() throws IOException {
    // Get the current configuration
    Configuration conf = getConf();
    // For security authorization, the server principal for this call
    // should be the NameNode's.
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
    DistributedFileSystem dfs = getDFS();
    URI dfsUri = dfs.getUri();
    boolean isHaEnabled = HAUtilClient.isLogicalUri(conf, dfsUri);
    if (isHaEnabled) {
        // Run refreshUserToGroupsMappings for all NNs if HA is enabled
        String nsId = dfsUri.getHost();
        List<ProxyAndInfo<RefreshUserMappingsProtocol>> proxies = HAUtil
            .getProxiesForAllNameNodesInNameservice(conf, nsId,
                RefreshUserMappingsProtocol.class);
        for (ProxyAndInfo<RefreshUserMappingsProtocol> proxy : proxies) {
            proxy.getProxy().refreshUserToGroupsMappings();
            System.out.println("Refresh user to groups mapping successful for " + proxy.getAddress());
        }
    } else {
        // Create the client
        RefreshUserMappingsProtocol refreshProtocol = NameNodeProxies
            .createProxy(conf, FileSystem.getDefaultUri(conf),
                RefreshUserMappingsProtocol.class).getProxy();
        // Refresh the user-to-groups mappings
        refreshProtocol.refreshUserToGroupsMappings();
        System.out.println("Refresh user to groups mapping successful");
    }
    return 0;
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo), RefreshUserMappingsProtocol (org.apache.hadoop.security.RefreshUserMappingsProtocol), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), URI (java.net.URI)
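
A minimal sketch of the HAUtilClient.isLogicalUri gate used by all of these commands: the URI's host part counts as a logical (HA) URI when it names a configured nameservice rather than a single NameNode. The nameservice ID mycluster and the class name LogicalUriExample are hypothetical, and a real HA deployment also configures dfs.ha.namenodes.mycluster and the per-NameNode RPC addresses.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HAUtilClient;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class LogicalUriExample {
    public static void main(String[] args) {
        Configuration conf = new HdfsConfiguration();
        conf.set("dfs.nameservices", "mycluster"); // hypothetical HA nameservice
        // Host matches a nameservice ID, so this is a logical URI.
        System.out.println(HAUtilClient.isLogicalUri(conf, URI.create("hdfs://mycluster"))); // expected: true
        // A concrete NameNode host:port is not a logical URI.
        System.out.println(HAUtilClient.isLogicalUri(conf, URI.create("hdfs://nn01.example.com:8020"))); // expected: false
    }
}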

Example 5 with ProxyAndInfo

Use of org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo in project hadoop by apache.

From class DFSAdmin, method refreshServiceAcl:

/**
   * Refresh the authorization policy on the {@link NameNode}.
   * @return exit code 0 on success, non-zero on failure
   * @throws IOException
   */
public int refreshServiceAcl() throws IOException {
    // Get the current configuration
    Configuration conf = getConf();
    // For security authorization, the server principal for this call
    // should be the NameNode's.
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
    DistributedFileSystem dfs = getDFS();
    URI dfsUri = dfs.getUri();
    boolean isHaEnabled = HAUtilClient.isLogicalUri(conf, dfsUri);
    if (isHaEnabled) {
        // Run refreshServiceAcl for all NNs if HA is enabled
        String nsId = dfsUri.getHost();
        List<ProxyAndInfo<RefreshAuthorizationPolicyProtocol>> proxies = HAUtil
            .getProxiesForAllNameNodesInNameservice(conf, nsId,
                RefreshAuthorizationPolicyProtocol.class);
        for (ProxyAndInfo<RefreshAuthorizationPolicyProtocol> proxy : proxies) {
            proxy.getProxy().refreshServiceAcl();
            System.out.println("Refresh service acl successful for " + proxy.getAddress());
        }
    } else {
        // Create the client
        RefreshAuthorizationPolicyProtocol refreshProtocol = NameNodeProxies
            .createProxy(conf, FileSystem.getDefaultUri(conf),
                RefreshAuthorizationPolicyProtocol.class).getProxy();
        // Refresh the authorization policy in-effect
        refreshProtocol.refreshServiceAcl();
        System.out.println("Refresh service acl successful");
    }
    return 0;
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo), RefreshAuthorizationPolicyProtocol (org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), URI (java.net.URI)
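
For readability, the chained call in the else-branch can be unpacked into the ProxyAndInfo it returns. This sketch keeps the same behavior; the accessor getDelegationTokenService() is assumed from the createNonHAProxy javadoc in Example 1, and the class name RefreshServiceAclExample is hypothetical.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.NameNodeProxies;
import org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo;
import org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol;

class RefreshServiceAclExample {
    static void refresh(Configuration conf) throws IOException {
        // Unpacked form of the chained createProxy(...).getProxy() call above.
        ProxyAndInfo<RefreshAuthorizationPolicyProtocol> proxyAndInfo =
            NameNodeProxies.createProxy(conf, FileSystem.getDefaultUri(conf),
                RefreshAuthorizationPolicyProtocol.class);
        // The delegation token service associated with this proxy (assumed accessor).
        System.out.println("Token service: " + proxyAndInfo.getDelegationTokenService());
        proxyAndInfo.getProxy().refreshServiceAcl();
    }
}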

Aggregations

ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo): 13 usages
URI (java.net.URI): 11 usages
Configuration (org.apache.hadoop.conf.Configuration): 11 usages
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 11 usages
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 11 usages
ClientProtocol (org.apache.hadoop.hdfs.protocol.ClientProtocol): 7 usages
RefreshUserMappingsProtocol (org.apache.hadoop.security.RefreshUserMappingsProtocol): 2 usages
RefreshAuthorizationPolicyProtocol (org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol): 2 usages
IOException (java.io.IOException): 1 usage
InetSocketAddress (java.net.InetSocketAddress): 1 usage
ArrayList (java.util.ArrayList): 1 usage
FileSystem (org.apache.hadoop.fs.FileSystem): 1 usage
HdfsConstants (org.apache.hadoop.hdfs.protocol.HdfsConstants): 1 usage
SafeModeAction (org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction): 1 usage
JournalProtocol (org.apache.hadoop.hdfs.server.protocol.JournalProtocol): 1 usage
Text (org.apache.hadoop.io.Text): 1 usage
RefreshCallQueueProtocol (org.apache.hadoop.ipc.RefreshCallQueueProtocol): 1 usage
GetUserMappingsProtocol (org.apache.hadoop.tools.GetUserMappingsProtocol): 1 usage