
Example 11 with ProxyAndInfo

Use of org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo in project hadoop by apache.

From the class DFSAdmin, method restoreFailedStorage:

/**
   * Command to enable/disable/check restoring of failed storage replicas in the namenode.
   * Usage: hdfs dfsadmin -restoreFailedStorage true|false|check
   * @exception IOException 
   * @see org.apache.hadoop.hdfs.protocol.ClientProtocol#restoreFailedStorage(String arg)
   */
public int restoreFailedStorage(String arg) throws IOException {
    int exitCode = -1;
    if (!arg.equals("check") && !arg.equals("true") && !arg.equals("false")) {
        System.err.println("restoreFailedStorage valid args are true|false|check");
        return exitCode;
    }
    DistributedFileSystem dfs = getDFS();
    Configuration dfsConf = dfs.getConf();
    URI dfsUri = dfs.getUri();
    boolean isHaEnabled = HAUtilClient.isLogicalUri(dfsConf, dfsUri);
    if (isHaEnabled) {
        String nsId = dfsUri.getHost();
        List<ProxyAndInfo<ClientProtocol>> proxies = HAUtil.getProxiesForAllNameNodesInNameservice(dfsConf, nsId, ClientProtocol.class);
        for (ProxyAndInfo<ClientProtocol> proxy : proxies) {
            Boolean res = proxy.getProxy().restoreFailedStorage(arg);
            System.out.println("restoreFailedStorage is set to " + res + " for " + proxy.getAddress());
        }
    } else {
        Boolean res = dfs.restoreFailedStorage(arg);
        System.out.println("restoreFailedStorage is set to " + res);
    }
    exitCode = 0;
    return exitCode;
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), ClientProtocol (org.apache.hadoop.hdfs.protocol.ClientProtocol), URI (java.net.URI)
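The method above rejects any argument other than true, false, or check before touching the NameNode, returning the conventional -1 exit code on bad input. A minimal, self-contained sketch of that validation pattern (the RestoreArgs class and validate method are illustrative names, not Hadoop API):

```java
import java.util.Set;

public class RestoreArgs {
    // The only arguments restoreFailedStorage accepts.
    private static final Set<String> VALID = Set.of("true", "false", "check");

    /** Returns 0 if the argument is acceptable, -1 otherwise (DFSAdmin convention). */
    public static int validate(String arg) {
        if (!VALID.contains(arg)) {
            // Same error text the real command prints to stderr.
            System.err.println("restoreFailedStorage valid args are true|false|check");
            return -1;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(validate("check")); // 0
        System.out.println(validate("maybe")); // -1
    }
}
```

Validating up front keeps the RPC path free of bad input, so the per-NameNode loop in the HA branch never has to handle a malformed argument.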

Example 12 with ProxyAndInfo

Use of org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo in project hadoop by apache.

From the class DFSAdmin, method refreshCallQueue:

public int refreshCallQueue() throws IOException {
    // Get the current configuration
    Configuration conf = getConf();
    // For security authorization, the server principal for this
    // call should be the NameNode's.
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY, conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
    DistributedFileSystem dfs = getDFS();
    URI dfsUri = dfs.getUri();
    boolean isHaEnabled = HAUtilClient.isLogicalUri(conf, dfsUri);
    if (isHaEnabled) {
        // Run refreshCallQueue for all NNs if HA is enabled
        String nsId = dfsUri.getHost();
        List<ProxyAndInfo<RefreshCallQueueProtocol>> proxies = HAUtil.getProxiesForAllNameNodesInNameservice(conf, nsId, RefreshCallQueueProtocol.class);
        for (ProxyAndInfo<RefreshCallQueueProtocol> proxy : proxies) {
            proxy.getProxy().refreshCallQueue();
            System.out.println("Refresh call queue successful for " + proxy.getAddress());
        }
    } else {
        // Create the client
        RefreshCallQueueProtocol refreshProtocol = NameNodeProxies.createProxy(conf, FileSystem.getDefaultUri(conf), RefreshCallQueueProtocol.class).getProxy();
        // Refresh the call queue
        refreshProtocol.refreshCallQueue();
        System.out.println("Refresh call queue successful");
    }
    return 0;
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), URI (java.net.URI), RefreshCallQueueProtocol (org.apache.hadoop.ipc.RefreshCallQueueProtocol)
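The HA branch above fans the same refresh call out to every NameNode in the nameservice and reports success per address. The shape of that loop can be sketched without any Hadoop dependencies; the Refreshable interface and Endpoint record below are hypothetical stand-ins for RefreshCallQueueProtocol and ProxyAndInfo:

```java
import java.util.ArrayList;
import java.util.List;

public class HaFanOut {
    // Stand-in for a refresh-style protocol (e.g. RefreshCallQueueProtocol).
    interface Refreshable { void refresh(); }

    // Stand-in for ProxyAndInfo: a proxy plus the address it points at.
    record Endpoint(String address, Refreshable proxy) {}

    /** Calls refresh() on every endpoint; returns the addresses refreshed, in order. */
    static List<String> refreshAll(List<Endpoint> endpoints) {
        List<String> done = new ArrayList<>();
        for (Endpoint e : endpoints) {
            e.proxy().refresh();   // one remote refresh per NameNode
            done.add(e.address()); // report success per address
        }
        return done;
    }

    public static void main(String[] args) {
        List<Endpoint> eps = List.of(
            new Endpoint("nn1:8020", () -> {}),
            new Endpoint("nn2:8020", () -> {}));
        System.out.println(refreshAll(eps)); // [nn1:8020, nn2:8020]
    }
}
```

Note that, as in the real DFSAdmin code, the loop stops at the first failing endpoint: an exception from one proxy propagates out rather than being swallowed, so partial success is visible from which addresses were already printed.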

Example 13 with ProxyAndInfo

Use of org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo in project hadoop by apache.

From the class DFSAdmin, method refreshSuperUserGroupsConfiguration:

/**
   * Refresh the superuser groups configuration on the {@link NameNode}.
   * @return exitcode 0 on success, non-zero on failure
   * @throws IOException
   */
public int refreshSuperUserGroupsConfiguration() throws IOException {
    // Get the current configuration
    Configuration conf = getConf();
    // For security authorization, the server principal for this
    // call should be the NameNode's.
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY, conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
    DistributedFileSystem dfs = getDFS();
    URI dfsUri = dfs.getUri();
    boolean isHaEnabled = HAUtilClient.isLogicalUri(conf, dfsUri);
    if (isHaEnabled) {
        // Run refreshSuperUserGroupsConfiguration for all NNs if HA is enabled
        String nsId = dfsUri.getHost();
        List<ProxyAndInfo<RefreshUserMappingsProtocol>> proxies = HAUtil.getProxiesForAllNameNodesInNameservice(conf, nsId, RefreshUserMappingsProtocol.class);
        for (ProxyAndInfo<RefreshUserMappingsProtocol> proxy : proxies) {
            proxy.getProxy().refreshSuperUserGroupsConfiguration();
            System.out.println("Refresh super user groups configuration " + "successful for " + proxy.getAddress());
        }
    } else {
        // Create the client
        RefreshUserMappingsProtocol refreshProtocol = NameNodeProxies.createProxy(conf, FileSystem.getDefaultUri(conf), RefreshUserMappingsProtocol.class).getProxy();
        // Refresh the user-to-groups mappings
        refreshProtocol.refreshSuperUserGroupsConfiguration();
        System.out.println("Refresh super user groups configuration successful");
    }
    return 0;
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo), RefreshUserMappingsProtocol (org.apache.hadoop.security.RefreshUserMappingsProtocol), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), URI (java.net.URI)
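All three examples pick the HA or non-HA path with HAUtilClient.isLogicalUri, which asks whether the filesystem URI's host names a configured nameservice rather than a concrete host:port. The decision can be approximated as below; modeling the configured nameservice IDs as a plain set is an illustrative simplification, not the real Hadoop lookup:

```java
import java.net.URI;
import java.util.Set;

public class LogicalUriCheck {
    /** True if the URI's host matches a configured nameservice ID (the HA case). */
    static boolean isLogicalUri(Set<String> nameservices, URI uri) {
        String host = uri.getHost();
        return host != null && nameservices.contains(host);
    }

    public static void main(String[] args) {
        // "mycluster" stands in for a nameservice configured via dfs.nameservices.
        Set<String> ns = Set.of("mycluster");
        System.out.println(isLogicalUri(ns, URI.create("hdfs://mycluster")));        // true
        System.out.println(isLogicalUri(ns, URI.create("hdfs://nn1.example:8020"))); // false
    }
}
```

When the check is true, the host is used as the nameservice ID (dfsUri.getHost() in the examples) to look up every NameNode in that service; otherwise a single proxy against the default URI suffices.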

Aggregations

ProxyAndInfo (org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo): 13 uses
URI (java.net.URI): 11 uses
Configuration (org.apache.hadoop.conf.Configuration): 11 uses
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 11 uses
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 11 uses
ClientProtocol (org.apache.hadoop.hdfs.protocol.ClientProtocol): 7 uses
RefreshUserMappingsProtocol (org.apache.hadoop.security.RefreshUserMappingsProtocol): 2 uses
RefreshAuthorizationPolicyProtocol (org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol): 2 uses
IOException (java.io.IOException): 1 use
InetSocketAddress (java.net.InetSocketAddress): 1 use
ArrayList (java.util.ArrayList): 1 use
FileSystem (org.apache.hadoop.fs.FileSystem): 1 use
HdfsConstants (org.apache.hadoop.hdfs.protocol.HdfsConstants): 1 use
SafeModeAction (org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction): 1 use
JournalProtocol (org.apache.hadoop.hdfs.server.protocol.JournalProtocol): 1 use
Text (org.apache.hadoop.io.Text): 1 use
RefreshCallQueueProtocol (org.apache.hadoop.ipc.RefreshCallQueueProtocol): 1 use
GetUserMappingsProtocol (org.apache.hadoop.tools.GetUserMappingsProtocol): 1 use