Example 51 with DFSClient

use of org.apache.hadoop.hdfs.DFSClient in project hbase by apache.

the class HFileSystem method addLocationsOrderInterceptor.

/**
   * Add an interceptor on the calls to the namenode#getBlockLocations from the DFSClient
   * linked to this FileSystem. See HBASE-6435 for the background.
   * <p/>
   * There should be no reason, except testing, to create a specific ReorderBlocks.
   *
   * @return true if the interceptor was added, false otherwise.
   */
static boolean addLocationsOrderInterceptor(Configuration conf, final ReorderBlocks lrb) {
    if (!conf.getBoolean("hbase.filesystem.reorder.blocks", true)) {
        // activated by default
        LOG.debug("addLocationsOrderInterceptor configured to false");
        return false;
    }
    FileSystem fs;
    try {
        fs = FileSystem.get(conf);
    } catch (IOException e) {
        LOG.warn("Can't get the file system from the conf.", e);
        return false;
    }
    if (!(fs instanceof DistributedFileSystem)) {
        LOG.debug("The file system is not a DistributedFileSystem. " + "Skipping on block location reordering");
        return false;
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    DFSClient dfsc = dfs.getClient();
    if (dfsc == null) {
        LOG.warn("The DistributedFileSystem does not contain a DFSClient. Can't add the location " + "block reordering interceptor. Continuing, but this is unexpected.");
        return false;
    }
    try {
        // DFSClient#namenode is private and final; reflection is used below to
        // clear the FINAL modifier so the reordering proxy can be swapped in.
        Field nf = DFSClient.class.getDeclaredField("namenode");
        nf.setAccessible(true);
        Field modifiersField = Field.class.getDeclaredField("modifiers");
        modifiersField.setAccessible(true);
        modifiersField.setInt(nf, nf.getModifiers() & ~Modifier.FINAL);
        ClientProtocol namenode = (ClientProtocol) nf.get(dfsc);
        if (namenode == null) {
            LOG.warn("The DFSClient is not linked to a namenode. Can't add the location block" + " reordering interceptor. Continuing, but this is unexpected.");
            return false;
        }
        ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);
        nf.set(dfsc, cp1);
        LOG.info("Added intercepting call to namenode#getBlockLocations so can do block reordering" + " using class " + lrb.getClass().getName());
    } catch (NoSuchFieldException e) {
        LOG.warn("Can't modify the DFSClient#namenode field to add the location reorder.", e);
        return false;
    } catch (IllegalAccessException e) {
        LOG.warn("Can't modify the DFSClient#namenode field to add the location reorder.", e);
        return false;
    }
    return true;
}
Also used: DFSClient (org.apache.hadoop.hdfs.DFSClient), Field (java.lang.reflect.Field), FileSystem (org.apache.hadoop.fs.FileSystem), FilterFileSystem (org.apache.hadoop.fs.FilterFileSystem), LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), IOException (java.io.IOException), ClientProtocol (org.apache.hadoop.hdfs.protocol.ClientProtocol)
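
The createReorderingProxy helper called above is not shown in this snippet. Below is a hedged sketch of what such a helper could look like, assuming HBase's ReorderBlocks interface exposes reorderBlocks(Configuration, LocatedBlocks, String): the original ClientProtocol is wrapped in a java.lang.reflect.Proxy that lets the callback rewrite the replica order whenever getBlockLocations returns. Note that the Field#modifiers trick in the method above is rejected by Java 12 and later, so this pattern only works on older JDKs.

// Sketch only, not the committed HBase code. Assumes imports of
// java.lang.reflect.{Proxy, InvocationHandler, Method, InvocationTargetException}
// and org.apache.hadoop.hdfs.protocol.LocatedBlocks.
private static ClientProtocol createReorderingProxy(final ClientProtocol cp,
        final ReorderBlocks lrb, final Configuration conf) {
    return (ClientProtocol) Proxy.newProxyInstance(ClientProtocol.class.getClassLoader(),
        new Class<?>[] { ClientProtocol.class }, new InvocationHandler() {

            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                try {
                    Object res = method.invoke(cp, args);
                    if (res instanceof LocatedBlocks && args != null && args.length == 3
                            && "getBlockLocations".equals(method.getName())
                            && args[0] instanceof String) {
                        // args[0] is the source path; let the plugged-in policy
                        // rewrite the replica order before the DFSClient sees it.
                        lrb.reorderBlocks(conf, (LocatedBlocks) res, (String) args[0]);
                    }
                    return res;
                } catch (InvocationTargetException ite) {
                    // Unwrap so callers see the original remote exception.
                    throw ite.getCause() != null ? ite.getCause() : ite;
                }
            }
        });
}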

Example 52 with DFSClient

use of org.apache.hadoop.hdfs.DFSClient in project hadoop by apache.

the class TestLeaseRenewer method createMockClient.

private DFSClient createMockClient() {
    final DfsClientConf mockConf = Mockito.mock(DfsClientConf.class);
    Mockito.doReturn((int) FAST_GRACE_PERIOD).when(mockConf).getHdfsTimeout();
    DFSClient mock = Mockito.mock(DFSClient.class);
    Mockito.doReturn(true).when(mock).isClientRunning();
    Mockito.doReturn(mockConf).when(mock).getConf();
    Mockito.doReturn("myclient").when(mock).getClientName();
    return mock;
}
Also used: DFSClient (org.apache.hadoop.hdfs.DFSClient)
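
A short usage sketch for this mock, assuming the FAKE_AUTHORITY, FAKE_UGI_A, and FAST_GRACE_PERIOD fixtures that TestLeaseRenewer defines elsewhere in the class:

// Hedged usage sketch; the fixture names are assumed from TestLeaseRenewer.
DFSClient mockClient = createMockClient();
LeaseRenewer renewer = LeaseRenewer.getInstance(FAKE_AUTHORITY, FAKE_UGI_A, mockClient);
// Shorten the grace period so the renewer daemon shuts down promptly in tests.
renewer.setGraceSleepPeriod(FAST_GRACE_PERIOD);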

Example 53 with DFSClient

use of org.apache.hadoop.hdfs.DFSClient in project hadoop by apache.

the class TestLeaseRenewer method testManyDfsClientsWhereSomeNotOpen.

/**
   * Regression test for HDFS-2810. In this bug, the LeaseRenewer has handles
   * to several DFSClients with the same name, the first of which has no files
   * open. Previously, this was causing the lease to not get renewed.
   */
@Test
public void testManyDfsClientsWhereSomeNotOpen() throws Exception {
    // First DFSClient has no files open so doesn't renew leases.
    final DFSClient mockClient1 = createMockClient();
    Mockito.doReturn(false).when(mockClient1).renewLease();
    assertSame(renewer, LeaseRenewer.getInstance(FAKE_AUTHORITY, FAKE_UGI_A, mockClient1));
    // Set up a file so that we start renewing our lease.
    DFSOutputStream mockStream1 = Mockito.mock(DFSOutputStream.class);
    long fileId = 456L;
    renewer.put(fileId, mockStream1, mockClient1);
    // Second DFSClient does renew lease
    final DFSClient mockClient2 = createMockClient();
    Mockito.doReturn(true).when(mockClient2).renewLease();
    assertSame(renewer, LeaseRenewer.getInstance(FAKE_AUTHORITY, FAKE_UGI_A, mockClient2));
    // Set up a file so that we start renewing our lease.
    DFSOutputStream mockStream2 = Mockito.mock(DFSOutputStream.class);
    renewer.put(fileId, mockStream2, mockClient2);
    // Wait for lease to get renewed
    GenericTestUtils.waitFor(new Supplier<Boolean>() {

        @Override
        public Boolean get() {
            try {
                Mockito.verify(mockClient1, Mockito.atLeastOnce()).renewLease();
                Mockito.verify(mockClient2, Mockito.atLeastOnce()).renewLease();
                return true;
            } catch (AssertionError err) {
                LeaseRenewer.LOG.warn("Not yet satisfied", err);
                return false;
            } catch (IOException e) {
                // should not throw!
                throw new RuntimeException(e);
            }
        }
    }, 100, 10000);
    renewer.closeFile(fileId, mockClient1);
    renewer.closeFile(fileId, mockClient2);
}
Also used: DFSClient (org.apache.hadoop.hdfs.DFSClient), IOException (java.io.IOException), DFSOutputStream (org.apache.hadoop.hdfs.DFSOutputStream), Test (org.junit.Test)
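
GenericTestUtils.waitFor polls the check every 100 ms for up to 10 s, which avoids a flaky fixed sleep before the verification. On Java 8+ the anonymous Supplier can be collapsed into a lambda; a sketch of the equivalent form, not the committed test code:

GenericTestUtils.waitFor(() -> {
    try {
        Mockito.verify(mockClient1, Mockito.atLeastOnce()).renewLease();
        Mockito.verify(mockClient2, Mockito.atLeastOnce()).renewLease();
        return true;
    } catch (AssertionError err) {
        // Verification not yet satisfied; keep polling.
        return false;
    } catch (IOException e) {
        // renewLease() declares IOException, but the mock should not throw.
        throw new RuntimeException(e);
    }
}, 100, 10000);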

Example 54 with DFSClient

use of org.apache.hadoop.hdfs.DFSClient in project hadoop by apache.

the class LeaseRenewer method closeClient.

/** Close the given client. */
public synchronized void closeClient(final DFSClient dfsc) {
    dfsclients.remove(dfsc);
    if (dfsclients.isEmpty()) {
        if (!isRunning() || isRenewerExpired()) {
            Factory.INSTANCE.remove(LeaseRenewer.this);
            return;
        }
        if (emptyTime == Long.MAX_VALUE) {
            // discover the first time that the client list is empty.
            emptyTime = Time.monotonicNow();
        }
    }
    // update renewal time
    if (renewal == dfsc.getConf().getHdfsTimeout() / 2) {
        long min = HdfsConstants.LEASE_SOFTLIMIT_PERIOD;
        for (DFSClient c : dfsclients) {
            final int timeout = c.getConf().getHdfsTimeout();
            if (timeout > 0 && timeout < min) {
                min = timeout;
            }
        }
        renewal = min / 2;
    }
}
Also used: DFSClient (org.apache.hadoop.hdfs.DFSClient)
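
The recomputation at the end keeps the renewal interval at half the smallest positive hdfsTimeout among the remaining clients, falling back to half the lease soft limit when no client configures a timeout. A standalone sketch of that logic with hypothetical timeout values:

// Hedged sketch of the renewal recomputation with made-up client timeouts.
long min = HdfsConstants.LEASE_SOFTLIMIT_PERIOD; // lease soft limit, 60 s
int[] hdfsTimeouts = { 30_000, 0, 45_000 };      // hypothetical per-client timeouts (ms)
for (int timeout : hdfsTimeouts) {
    if (timeout > 0 && timeout < min) {
        min = timeout; // zero or negative timeouts are ignored
    }
}
long renewal = min / 2; // 15_000 ms for these sample values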

Example 55 with DFSClient

use of org.apache.hadoop.hdfs.DFSClient in project hadoop by apache.

the class RpcProgramNfs3 method access.

@VisibleForTesting
ACCESS3Response access(XDR xdr, SecurityHandler securityHandler, SocketAddress remoteAddress) {
    ACCESS3Response response = new ACCESS3Response(Nfs3Status.NFS3_OK);
    if (!checkAccessPrivilege(remoteAddress, AccessPrivilege.READ_ONLY)) {
        response.setStatus(Nfs3Status.NFS3ERR_ACCES);
        return response;
    }
    DFSClient dfsClient = clientCache.getDfsClient(securityHandler.getUser());
    if (dfsClient == null) {
        response.setStatus(Nfs3Status.NFS3ERR_SERVERFAULT);
        return response;
    }
    ACCESS3Request request;
    try {
        request = ACCESS3Request.deserialize(xdr);
    } catch (IOException e) {
        LOG.error("Invalid ACCESS request");
        return new ACCESS3Response(Nfs3Status.NFS3ERR_INVAL);
    }
    FileHandle handle = request.getHandle();
    Nfs3FileAttributes attrs;
    if (LOG.isDebugEnabled()) {
        LOG.debug("NFS ACCESS fileId: " + handle.getFileId() + " client: " + remoteAddress);
    }
    try {
        attrs = writeManager.getFileAttr(dfsClient, handle, iug);
        if (attrs == null) {
            LOG.error("Can't get path for fileId: " + handle.getFileId());
            return new ACCESS3Response(Nfs3Status.NFS3ERR_STALE);
        }
        if (iug.getUserName(securityHandler.getUid(), "unknown").equals(superuser)) {
            int access = Nfs3Constant.ACCESS3_LOOKUP | Nfs3Constant.ACCESS3_DELETE | Nfs3Constant.ACCESS3_EXECUTE | Nfs3Constant.ACCESS3_EXTEND | Nfs3Constant.ACCESS3_MODIFY | Nfs3Constant.ACCESS3_READ;
            return new ACCESS3Response(Nfs3Status.NFS3_OK, attrs, access);
        }
        int access = Nfs3Utils.getAccessRightsForUserGroup(securityHandler.getUid(), securityHandler.getGid(), securityHandler.getAuxGids(), attrs);
        return new ACCESS3Response(Nfs3Status.NFS3_OK, attrs, access);
    } catch (RemoteException r) {
        LOG.warn("Exception ", r);
        IOException io = r.unwrapRemoteException();
        /*
         * AuthorizationException can be thrown if the user can't be proxied.
         */
        if (io instanceof AuthorizationException) {
            return new ACCESS3Response(Nfs3Status.NFS3ERR_ACCES);
        } else {
            return new ACCESS3Response(Nfs3Status.NFS3ERR_IO);
        }
    } catch (IOException e) {
        LOG.warn("Exception ", e);
        int status = mapErrorStatus(e);
        return new ACCESS3Response(status);
    }
}
Also used: DFSClient (org.apache.hadoop.hdfs.DFSClient), ACCESS3Request (org.apache.hadoop.nfs.nfs3.request.ACCESS3Request), AuthorizationException (org.apache.hadoop.security.authorize.AuthorizationException), FileHandle (org.apache.hadoop.nfs.nfs3.FileHandle), ACCESS3Response (org.apache.hadoop.nfs.nfs3.response.ACCESS3Response), Nfs3FileAttributes (org.apache.hadoop.nfs.nfs3.Nfs3FileAttributes), IOException (java.io.IOException), RemoteException (org.apache.hadoop.ipc.RemoteException), VisibleForTesting (com.google.common.annotations.VisibleForTesting)
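
The superuser branch grants every NFSv3 access bit at once. The bit values come from RFC 1813, so the combined mask can be sanity-checked in isolation; a sketch assuming Nfs3Constant uses the RFC values:

// Hedged sketch: the six ACCESS3 bits per RFC 1813, OR'ed as in the superuser branch.
int access = Nfs3Constant.ACCESS3_READ      // 0x0001
           | Nfs3Constant.ACCESS3_LOOKUP    // 0x0002
           | Nfs3Constant.ACCESS3_MODIFY    // 0x0004
           | Nfs3Constant.ACCESS3_EXTEND    // 0x0008
           | Nfs3Constant.ACCESS3_DELETE    // 0x0010
           | Nfs3Constant.ACCESS3_EXECUTE;  // 0x0020
assert access == 0x3f; // all rights for the superuser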

Aggregations

Classes co-occurring with DFSClient across all examples, with usage counts:

DFSClient (org.apache.hadoop.hdfs.DFSClient): 97
Test (org.junit.Test): 53
IOException (java.io.IOException): 35
Nfs3FileAttributes (org.apache.hadoop.nfs.nfs3.Nfs3FileAttributes): 27
FileHandle (org.apache.hadoop.nfs.nfs3.FileHandle): 26
VisibleForTesting (com.google.common.annotations.VisibleForTesting): 18
Path (org.apache.hadoop.fs.Path): 18
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 17
InetSocketAddress (java.net.InetSocketAddress): 13
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 13
Configuration (org.apache.hadoop.conf.Configuration): 12
NfsConfiguration (org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration): 12
FileSystem (org.apache.hadoop.fs.FileSystem): 11
HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus): 11
HdfsDataOutputStream (org.apache.hadoop.hdfs.client.HdfsDataOutputStream): 9
WccData (org.apache.hadoop.nfs.nfs3.response.WccData): 9
ShellBasedIdMapping (org.apache.hadoop.security.ShellBasedIdMapping): 8
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock): 7
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 7
ArrayList (java.util.ArrayList): 6