
Example 1 with Peer

Use of org.apache.hadoop.hdfs.net.Peer in project hadoop by Apache.

From the class TestDistributedFileSystem, method testDFSClientPeerReadTimeout.

@Test(timeout = 10000)
public void testDFSClientPeerReadTimeout() throws IOException {
    final int timeout = 1000;
    final Configuration conf = new HdfsConfiguration();
    conf.setInt(HdfsClientConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY, timeout);
    // only need cluster to create a dfs client to get a peer
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
        cluster.waitActive();
        DistributedFileSystem dfs = cluster.getFileSystem();
        // use a dummy socket to ensure the read times out
        ServerSocket socket = new ServerSocket(0);
        Peer peer = dfs.getClient().newConnectedPeer((InetSocketAddress) socket.getLocalSocketAddress(), null, null);
        long start = Time.now();
        try {
            peer.getInputStream().read();
            Assert.fail("read should time out");
        } catch (SocketTimeoutException ste) {
            long delta = Time.now() - start;
            if (delta < timeout * 0.9) {
                throw new IOException("read timed out too soon in " + delta + " ms.", ste);
            }
            if (delta > timeout * 1.1) {
                throw new IOException("read timed out too late in " + delta + " ms.", ste);
            }
        }
    } finally {
        cluster.shutdown();
    }
}
Also used: SocketTimeoutException (java.net.SocketTimeoutException), Configuration (org.apache.hadoop.conf.Configuration), Peer (org.apache.hadoop.hdfs.net.Peer), ServerSocket (java.net.ServerSocket), IOException (java.io.IOException), Test (org.junit.Test)
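The trick this test relies on, connecting to a local ServerSocket that nothing ever accepts from so that reads block until the socket timeout fires, works with a plain java.net.Socket too. A minimal sketch outside of HDFS (the class and method names here are illustrative, not part of the Hadoop API):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutSketch {

    // Connects to a local ServerSocket that never responds and returns the
    // elapsed milliseconds until read() throws SocketTimeoutException,
    // or -1 if the read unexpectedly returned.
    static long timedRead(int timeoutMs) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket()) {
            // The TCP handshake completes via the OS listen backlog, so
            // connect() succeeds even though nothing ever calls accept().
            client.connect(server.getLocalSocketAddress(), timeoutMs);
            client.setSoTimeout(timeoutMs);
            long start = System.currentTimeMillis();
            try {
                client.getInputStream().read();
                return -1;
            } catch (SocketTimeoutException expected) {
                return System.currentTimeMillis() - start;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        long delta = timedRead(1000);
        System.out.println("read timed out after ~" + delta + " ms");
    }
}
```

The 0.9/1.1 tolerance window in the Hadoop test exists because SO_TIMEOUT is not exact; any sketch like this should assert a range, not an equality.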

Example 2 with Peer

Use of org.apache.hadoop.hdfs.net.Peer in project hadoop by Apache.

From the class TestDistributedFileSystem, method testDFSClientPeerWriteTimeout.

@Test(timeout = 10000)
public void testDFSClientPeerWriteTimeout() throws IOException {
    final int timeout = 1000;
    final Configuration conf = new HdfsConfiguration();
    conf.setInt(HdfsClientConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY, timeout);
    // only need cluster to create a dfs client to get a peer
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
        cluster.waitActive();
        DistributedFileSystem dfs = cluster.getFileSystem();
        // Write 10 MB to a dummy socket to ensure the write times out
        ServerSocket socket = new ServerSocket(0);
        Peer peer = dfs.getClient().newConnectedPeer((InetSocketAddress) socket.getLocalSocketAddress(), null, null);
        long start = Time.now();
        try {
            byte[] buf = new byte[10 * 1024 * 1024];
            peer.getOutputStream().write(buf);
            long delta = Time.now() - start;
            Assert.fail("write finished in " + delta + " ms but should have timed out");
        } catch (SocketTimeoutException ste) {
            long delta = Time.now() - start;
            if (delta < timeout * 0.9) {
                throw new IOException("write timed out too soon in " + delta + " ms.", ste);
            }
            if (delta > timeout * 1.2) {
                throw new IOException("write timed out too late in " + delta + " ms.", ste);
            }
        }
    } finally {
        cluster.shutdown();
    }
}
Also used: SocketTimeoutException (java.net.SocketTimeoutException), Configuration (org.apache.hadoop.conf.Configuration), Peer (org.apache.hadoop.hdfs.net.Peer), ServerSocket (java.net.ServerSocket), IOException (java.io.IOException), Test (org.junit.Test)

Example 3 with Peer

Use of org.apache.hadoop.hdfs.net.Peer in project hadoop by Apache.

From the class BlockReaderFactory, method getRemoteBlockReaderFromTcp.

/**
   * Get a BlockReaderRemote that communicates over a TCP socket.
   *
   * @return The new BlockReader.  We will not return null, but instead throw
   *         an exception if this fails.
   *
   * @throws InvalidToken
   *             If the block token was invalid.
   *         InvalidEncryptionKeyException
   *             If the encryption key was invalid.
   *         Other IOException
   *             If there was another problem.
   */
private BlockReader getRemoteBlockReaderFromTcp() throws IOException {
    LOG.trace("{}: trying to create a remote block reader from a TCP socket", this);
    BlockReader blockReader = null;
    while (true) {
        BlockReaderPeer curPeer = null;
        Peer peer = null;
        try {
            curPeer = nextTcpPeer();
            if (curPeer.fromCache)
                remainingCacheTries--;
            peer = curPeer.peer;
            blockReader = getRemoteBlockReader(peer);
            return blockReader;
        } catch (IOException ioe) {
            if (isSecurityException(ioe)) {
                LOG.trace("{}: got security exception while constructing a remote " + "block reader from {}", this, peer, ioe);
                throw ioe;
            }
            if ((curPeer != null) && curPeer.fromCache) {
                // Handle an I/O error we got when using a cached peer.  These are
                // considered less serious, because the underlying socket may be
                // stale.
                LOG.debug("Closed potentially stale remote peer {}", peer, ioe);
            } else {
                // Handle an I/O error we got when using a newly created peer.
                LOG.warn("I/O error constructing remote block reader.", ioe);
                throw ioe;
            }
        } finally {
            if (blockReader == null) {
                IOUtilsClient.cleanup(LOG, peer);
            }
        }
    }
}
Also used: BlockReader (org.apache.hadoop.hdfs.BlockReader), DomainPeer (org.apache.hadoop.hdfs.net.DomainPeer), Peer (org.apache.hadoop.hdfs.net.Peer), IOException (java.io.IOException)
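The error policy in this loop, tolerate and retry I/O failures on cached connections because the socket may be stale, but propagate failures on a freshly created connection, can be expressed independently of HDFS. A minimal sketch under that assumption (the generic types and the `withRetry` helper are hypothetical, not Hadoop API):

```java
import java.io.IOException;
import java.util.Deque;
import java.util.function.Supplier;

public class RetryPolicySketch {

    interface Action<C, R> {
        R apply(C conn) throws IOException;
    }

    // Drains cached connections first, tolerating their failures; a failure
    // on the freshly created connection is rethrown, mirroring the policy in
    // BlockReaderFactory.getRemoteBlockReaderFromTcp().
    static <C, R> R withRetry(Deque<C> cache, Supplier<C> factory,
                              Action<C, R> action) throws IOException {
        while (true) {
            C conn = cache.poll();
            boolean fromCache = conn != null;
            if (!fromCache) {
                conn = factory.get();
            }
            try {
                return action.apply(conn);
            } catch (IOException ioe) {
                if (!fromCache) {
                    throw ioe; // a fresh connection failing is serious
                }
                // cached connection was likely stale: drop it and retry
                // (the real code also closes the peer in a finally block)
            }
        }
    }
}
```

Note the real method additionally decrements remainingCacheTries and cleans up the peer whenever no reader was produced; the sketch keeps only the retry decision.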

Example 4 with Peer

Use of org.apache.hadoop.hdfs.net.Peer in project hadoop by Apache.

From the class BlockReaderFactory, method nextTcpPeer.

/**
   * Get the next TCP-based peer-- either from the cache or by creating it.
   *
   * @return the next Peer, or null if we could not construct one.
   *
   * @throws IOException  If there was an error while constructing the peer
   *                      (such as an InvalidEncryptionKeyException)
   */
private BlockReaderPeer nextTcpPeer() throws IOException {
    if (remainingCacheTries > 0) {
        Peer peer = clientContext.getPeerCache().get(datanode, false);
        if (peer != null) {
            LOG.trace("nextTcpPeer: reusing existing peer {}", peer);
            return new BlockReaderPeer(peer, true);
        }
    }
    try {
        Peer peer = remotePeerFactory.newConnectedPeer(inetSocketAddress, token, datanode);
        LOG.trace("nextTcpPeer: created newConnectedPeer {}", peer);
        return new BlockReaderPeer(peer, false);
    } catch (IOException e) {
        LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to {}", datanode);
        throw e;
    }
}
Also used: DomainPeer (org.apache.hadoop.hdfs.net.DomainPeer), Peer (org.apache.hadoop.hdfs.net.Peer), IOException (java.io.IOException)

Example 5 with Peer

Use of org.apache.hadoop.hdfs.net.Peer in project hadoop by Apache.

From the class PeerCache, method getInternal.

private synchronized Peer getInternal(DatanodeID dnId, boolean isDomain) {
    List<Value> sockStreamList = multimap.get(new Key(dnId, isDomain));
    if (sockStreamList == null) {
        return null;
    }
    Iterator<Value> iter = sockStreamList.iterator();
    while (iter.hasNext()) {
        Value candidate = iter.next();
        iter.remove();
        long ageMs = Time.monotonicNow() - candidate.getTime();
        Peer peer = candidate.getPeer();
        if (ageMs >= expiryPeriod) {
            try {
                peer.close();
            } catch (IOException e) {
                LOG.warn("got IOException closing stale peer " + peer + ", which is " + ageMs + " ms old");
            }
        } else if (!peer.isClosed()) {
            return peer;
        }
    }
    return null;
}
Also used: Peer (org.apache.hadoop.hdfs.net.Peer), IOException (java.io.IOException)
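The pop-until-fresh expiry scan in getInternal, remove each candidate, discard it if it has aged past the expiry period, and return the first still-usable one, can be sketched generically. This is an illustrative reimplementation, not the Hadoop class; the clock is injected (a hypothetical choice, the real PeerCache calls Time.monotonicNow()) so expiry can be tested deterministically:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.LongSupplier;

public class ExpiringPoolSketch<V> {

    private static final class Entry<V> {
        final V value;
        final long insertedAt;
        Entry(V value, long insertedAt) {
            this.value = value;
            this.insertedAt = insertedAt;
        }
    }

    private final Deque<Entry<V>> entries = new ArrayDeque<>();
    private final long expiryMs;
    private final LongSupplier clock; // injected for deterministic tests

    public ExpiringPoolSketch(long expiryMs, LongSupplier clock) {
        this.expiryMs = expiryMs;
        this.clock = clock;
    }

    public void put(V value) {
        entries.add(new Entry<>(value, clock.getAsLong()));
    }

    // Mirrors PeerCache.getInternal: each candidate is removed from the
    // pool; expired ones are discarded, the first fresh one is returned.
    public V get() {
        Entry<V> e;
        while ((e = entries.poll()) != null) {
            long ageMs = clock.getAsLong() - e.insertedAt;
            if (ageMs < expiryMs) {
                return e.value;
            }
            // expired: drop it (the real code would close() the stale Peer here)
        }
        return null;
    }
}
```

Note that entries are removed unconditionally, whether returned or discarded, which is why the real cache re-inserts a Peer only when the client explicitly returns it via put().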

Aggregations

Peer (org.apache.hadoop.hdfs.net.Peer): 22
IOException (java.io.IOException): 9
DatanodeID (org.apache.hadoop.hdfs.protocol.DatanodeID): 8
Test (org.junit.Test): 8
Socket (java.net.Socket): 7
InetSocketAddress (java.net.InetSocketAddress): 5
SocketTimeoutException (java.net.SocketTimeoutException): 4
Configuration (org.apache.hadoop.conf.Configuration): 4
BlockReader (org.apache.hadoop.hdfs.BlockReader): 4
RemotePeerFactory (org.apache.hadoop.hdfs.RemotePeerFactory): 4
ExtendedBlock (org.apache.hadoop.hdfs.protocol.ExtendedBlock): 4
ServerSocket (java.net.ServerSocket): 3
BlockReaderFactory (org.apache.hadoop.hdfs.client.impl.BlockReaderFactory): 3
DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo): 3
BlockTokenIdentifier (org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier): 3
Token (org.apache.hadoop.security.token.Token): 3
DfsClientConf (org.apache.hadoop.hdfs.client.impl.DfsClientConf): 2
BasicInetPeer (org.apache.hadoop.hdfs.net.BasicInetPeer): 2
DomainPeer (org.apache.hadoop.hdfs.net.DomainPeer): 2
NioInetPeer (org.apache.hadoop.hdfs.net.NioInetPeer): 2