Example 1 with RetriableException

Use of org.apache.hadoop.ipc.RetriableException in project hadoop by apache.

From the class TestWebHDFS, method testRaceWhileNNStartup.

/**
   * Make sure a RetriableException is thrown when rpcServer is null in
   * NamenodeWebHdfsMethods.
   */
@Test
public void testRaceWhileNNStartup() throws Exception {
    MiniDFSCluster cluster = null;
    final Configuration conf = WebHdfsTestUtil.createConf();
    try {
        cluster = new MiniDFSCluster.Builder(conf).numDataNodes(0).build();
        cluster.waitActive();
        final NameNode namenode = cluster.getNameNode();
        final NamenodeProtocols rpcServer = namenode.getRpcServer();
        Whitebox.setInternalState(namenode, "rpcServer", null);
        final Path foo = new Path("/foo");
        final FileSystem webHdfs = WebHdfsTestUtil.getWebHdfsFileSystem(conf, WebHdfsConstants.WEBHDFS_SCHEME);
        try {
            webHdfs.mkdirs(foo);
            fail("Expected RetriableException");
        } catch (RetriableException e) {
            GenericTestUtils.assertExceptionContains("Namenode is in startup mode", e);
        }
        Whitebox.setInternalState(namenode, "rpcServer", rpcServer);
    } finally {
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
Also used: Path (org.apache.hadoop.fs.Path), NameNode (org.apache.hadoop.hdfs.server.namenode.NameNode), NamenodeProtocols (org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols), MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster), Configuration (org.apache.hadoop.conf.Configuration), HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration), FileSystem (org.apache.hadoop.fs.FileSystem), DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem), RetriableException (org.apache.hadoop.ipc.RetriableException), Test (org.junit.Test), HttpServerFunctionalTest (org.apache.hadoop.http.HttpServerFunctionalTest)
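
For context, a caller that handles this RetriableException itself could retry the operation until the NameNode finishes starting up. The following is a minimal sketch, not part of the Hadoop test; it assumes a WebHDFS FileSystem that surfaces RetriableException directly (as the test above shows), and the retry count and sleep interval are arbitrary.

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.ipc.RetriableException;

public class MkdirsWithRetry {

    /** Retry mkdirs while the NameNode reports it is still starting up. */
    static boolean mkdirsWithRetry(FileSystem fs, Path dir) throws Exception {
        for (int attempt = 0; attempt < 5; attempt++) {
            try {
                return fs.mkdirs(dir);
            } catch (RetriableException e) {
                // "Namenode is in startup mode": back off and try again.
                Thread.sleep(1000L);
            }
        }
        // Final attempt; let any exception propagate to the caller.
        return fs.mkdirs(dir);
    }
}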

Example 2 with RetriableException

Use of org.apache.hadoop.ipc.RetriableException in project hadoop by apache.

From the class FSNamesystem, method getBlockLocations.

/**
   * Get block locations within the specified range.
   * @see ClientProtocol#getBlockLocations(String, long, long)
   */
LocatedBlocks getBlockLocations(String clientMachine, String srcArg, long offset, long length) throws IOException {
    final String operationName = "open";
    checkOperation(OperationCategory.READ);
    GetBlockLocationsResult res = null;
    FSPermissionChecker pc = getPermissionChecker();
    readLock();
    try {
        checkOperation(OperationCategory.READ);
        res = FSDirStatAndListingOp.getBlockLocations(dir, pc, srcArg, offset, length, true);
        if (isInSafeMode()) {
            for (LocatedBlock b : res.blocks.getLocatedBlocks()) {
                // if safemode & no block locations yet then throw safemodeException
                if ((b.getLocations() == null) || (b.getLocations().length == 0)) {
                    SafeModeException se = newSafemodeException("Zero blocklocations for " + srcArg);
                    if (haEnabled && haContext != null && haContext.getState().getServiceState() == HAServiceState.ACTIVE) {
                        throw new RetriableException(se);
                    } else {
                        throw se;
                    }
                }
            }
        }
    } catch (AccessControlException e) {
        logAuditEvent(false, operationName, srcArg);
        throw e;
    } finally {
        readUnlock(operationName);
    }
    logAuditEvent(true, operationName, srcArg);
    if (!isInSafeMode() && res.updateAccessTime()) {
        String src = srcArg;
        writeLock();
        final long now = now();
        try {
            checkOperation(OperationCategory.WRITE);
            /**
         * Resolve the path again and update the atime only when the file
         * exists.
         *
         * XXX: Races can still occur even after resolving the path again.
         * For example:
         *
         * <ul>
         *   <li>Get the block location for "/a/b"</li>
         *   <li>Rename "/a/b" to "/c/b"</li>
         *   <li>The second resolution still points to "/a/b", which is
         *   wrong.</li>
         * </ul>
         *
         * The behavior is incorrect but consistent with the one before
         * HDFS-7463. A better fix is to change the edit log of SetTime to
         * use inode id instead of a path.
         */
            final INodesInPath iip = dir.resolvePath(pc, srcArg, DirOp.READ);
            src = iip.getPath();
            INode inode = iip.getLastINode();
            boolean updateAccessTime = inode != null && now > inode.getAccessTime() + dir.getAccessTimePrecision();
            if (!isInSafeMode() && updateAccessTime) {
                boolean changed = FSDirAttrOp.setTimes(dir, iip, -1, now, false);
                if (changed) {
                    getEditLog().logTimes(src, -1, now);
                }
            }
        } catch (Throwable e) {
            LOG.warn("Failed to update the access time of " + src, e);
        } finally {
            writeUnlock(operationName);
        }
    }
    LocatedBlocks blocks = res.blocks;
    sortLocatedBlocks(clientMachine, blocks);
    return blocks;
}
Also used: LocatedBlocks (org.apache.hadoop.hdfs.protocol.LocatedBlocks), LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock), AccessControlException (org.apache.hadoop.security.AccessControlException), SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException), RetriableException (org.apache.hadoop.ipc.RetriableException)
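
The HA branch above wraps the SafeModeException in a RetriableException so that clients going through a retry proxy retry instead of failing outright. A minimal sketch of telling the two cases apart by inspecting the exception follows; the helper class and method names are illustrative, not Hadoop client code, and it assumes the cause chain is available (it is not preserved across the RPC boundary without unwrapping).

import java.io.IOException;
import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
import org.apache.hadoop.ipc.RetriableException;

public class SafeModeRetryCheck {

    /**
     * A RetriableException wrapping a SafeModeException means the active
     * NameNode simply has no block locations yet, so the call can be retried;
     * a bare SafeModeException should be treated as a failure.
     */
    static boolean shouldRetry(IOException e) {
        return e instanceof RetriableException
            && e.getCause() instanceof SafeModeException;
    }
}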

Example 3 with RetriableException

Use of org.apache.hadoop.ipc.RetriableException in project hadoop by apache.

From the class TestDefaultRetryPolicy, method testWithRetriable.

/**
   * Verify that the default retry policy correctly retries
   * RetriableException when defaultRetryPolicyEnabled is enabled.
   *
   * @throws IOException
   */
@Test
public void testWithRetriable() throws Exception {
    Configuration conf = new Configuration();
    RetryPolicy policy = RetryUtils.getDefaultRetryPolicy(conf,
            "Test.No.Such.Key",
            true, // defaultRetryPolicyEnabled = true
            "Test.No.Such.Key",
            "10000,6",
            null);
    RetryPolicy.RetryAction action = policy.shouldRetry(new RetriableException("Dummy exception"), 0, 0, true);
    assertThat(action.action, is(RetryPolicy.RetryAction.RetryDecision.RETRY));
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), RetriableException (org.apache.hadoop.ipc.RetriableException), Test (org.junit.Test)
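
In ordinary client code the policy returned by RetryUtils.getDefaultRetryPolicy is not queried directly as in the test; it is typically handed to RetryProxy.create so that retriable failures from the underlying RPC proxy are retried transparently. A rough sketch under that assumption, in which the protocol interface and configuration keys are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryProxy;
import org.apache.hadoop.io.retry.RetryUtils;

public class RetryProxyExample {

    /** Placeholder RPC interface standing in for a real Hadoop protocol. */
    interface MyProtocol {
        String echo(String msg) throws java.io.IOException;
    }

    static MyProtocol wrapWithRetries(MyProtocol rawProxy, Configuration conf) {
        RetryPolicy policy = RetryUtils.getDefaultRetryPolicy(
                conf,
                "my.client.retry.policy.enabled",   // placeholder config keys
                true,                               // retries enabled by default
                "my.client.retry.policy.spec",
                "10000,6",                          // spec: wait 10000 ms, up to 6 retries
                null);
        return (MyProtocol) RetryProxy.create(MyProtocol.class, rawProxy, policy);
    }
}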

Example 4 with RetriableException

Use of org.apache.hadoop.ipc.RetriableException in project hadoop by apache.

From the class ShortCircuitCache, method fetch.

/**
   * Fetch an existing ReplicaInfo object.
   *
   * @param key       The key that we're using.
   * @param waitable  The waitable object to wait on.
   * @return          The existing ReplicaInfo object, or null if there is
   *                  none.
   *
   * @throws RetriableException   If the caller needs to retry.
   */
private ShortCircuitReplicaInfo fetch(ExtendedBlockId key, Waitable<ShortCircuitReplicaInfo> waitable) throws RetriableException {
    // Another thread is already in the process of loading this
    // ShortCircuitReplica.  So we simply wait for it to complete.
    ShortCircuitReplicaInfo info;
    try {
        LOG.trace("{}: found waitable for {}", this, key);
        info = waitable.await();
    } catch (InterruptedException e) {
        LOG.info(this + ": interrupted while waiting for " + key);
        Thread.currentThread().interrupt();
        throw new RetriableException("interrupted");
    }
    if (info.getInvalidTokenException() != null) {
        LOG.info(this + ": could not get " + key + " due to InvalidToken " + "exception.", info.getInvalidTokenException());
        return info;
    }
    ShortCircuitReplica replica = info.getReplica();
    if (replica == null) {
        LOG.warn(this + ": failed to get " + key);
        return info;
    }
    if (replica.purged) {
        // Ignore replicas that have already been purged from the cache.
        throw new RetriableException("Ignoring purged replica " + replica + ".  Retrying.");
    }
    // Check if the replica is stale before using it.
    // If it is, purge it and retry.
    if (replica.isStale()) {
        LOG.info(this + ": got stale replica " + replica + ".  Removing " + "this replica from the replicaInfoMap and retrying.");
        // Remove the cache's reference to the replica.  This may or may not
        // trigger a close.
        purge(replica);
        throw new RetriableException("ignoring stale replica " + replica);
    }
    ref(replica);
    return info;
}
Also used: RetriableException (org.apache.hadoop.ipc.RetriableException)
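
Here the RetriableException signals a transient condition (purged or stale replica, or an interrupted wait) to the caller inside ShortCircuitCache, which simply tries the lookup again. The generic caller pattern looks roughly like the sketch below; the Fetcher interface and the retry bound are illustrative, not the actual ShortCircuitCache code.

import org.apache.hadoop.ipc.RetriableException;

public class FetchRetryLoop {

    /** Hypothetical stand-in for an operation that may ask to be retried. */
    interface Fetcher<T> {
        T fetchOnce() throws RetriableException;
    }

    static <T> T fetchWithRetry(Fetcher<T> fetcher, int maxAttempts)
            throws RetriableException {
        for (int attempt = 1; attempt < maxAttempts; attempt++) {
            try {
                return fetcher.fetchOnce();
            } catch (RetriableException e) {
                // Transient condition: loop and try again.
            }
        }
        // Last attempt; propagate the RetriableException if it still fails.
        return fetcher.fetchOnce();
    }
}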

Example 5 with RetriableException

Use of org.apache.hadoop.ipc.RetriableException in project hadoop by apache.

From the class FSDirAppendOp, method appendFile.

/**
   * Append to an existing file.
   * <p>
   *
   * The method returns the last block of the file if this is a partial block,
   * which can still be used for writing more data. The client uses the
   * returned block locations to form the data pipeline for this block.<br>
   * The {@link LocatedBlock} will be null if the last block is full.
   * The client then allocates a new block with the next call using
   * {@link org.apache.hadoop.hdfs.protocol.ClientProtocol#addBlock}.
   * <p>
   *
   * For description of parameters and exceptions thrown see
   * {@link org.apache.hadoop.hdfs.protocol.ClientProtocol#append}
   *
   * @param fsn namespace
   * @param srcArg path name
   * @param pc permission checker to check fs permission
   * @param holder client name
   * @param clientMachine client machine info
   * @param newBlock if the data is appended to a new block
   * @param logRetryCache whether to record RPC ids in editlog for retry cache
   *                      rebuilding
   * @return the last block with status
   */
static LastBlockWithStatus appendFile(final FSNamesystem fsn, final String srcArg, final FSPermissionChecker pc, final String holder, final String clientMachine, final boolean newBlock, final boolean logRetryCache) throws IOException {
    assert fsn.hasWriteLock();
    final LocatedBlock lb;
    final FSDirectory fsd = fsn.getFSDirectory();
    final INodesInPath iip;
    fsd.writeLock();
    try {
        iip = fsd.resolvePath(pc, srcArg, DirOp.WRITE);
        // Verify that the destination does not exist as a directory already
        final INode inode = iip.getLastINode();
        final String path = iip.getPath();
        if (inode != null && inode.isDirectory()) {
            throw new FileAlreadyExistsException("Cannot append to directory " + path + "; already exists as a directory.");
        }
        if (fsd.isPermissionEnabled()) {
            fsd.checkPathAccess(pc, iip, FsAction.WRITE);
        }
        if (inode == null) {
            throw new FileNotFoundException("Failed to append to non-existent file " + path + " for client " + clientMachine);
        }
        final INodeFile file = INodeFile.valueOf(inode, path, true);
        // not support appending file with striped blocks
        if (file.isStriped()) {
            throw new UnsupportedOperationException("Cannot append to files with striped block " + path);
        }
        BlockManager blockManager = fsd.getBlockManager();
        final BlockStoragePolicy lpPolicy = blockManager.getStoragePolicy("LAZY_PERSIST");
        if (lpPolicy != null && lpPolicy.getId() == file.getStoragePolicyID()) {
            throw new UnsupportedOperationException("Cannot append to lazy persist file " + path);
        }
        // Opening an existing file for append - may need to recover lease.
        fsn.recoverLeaseInternal(RecoverLeaseOp.APPEND_FILE, iip, path, holder, clientMachine, false);
        final BlockInfo lastBlock = file.getLastBlock();
        // Check that the block has at least minimum replication.
        if (lastBlock != null) {
            if (lastBlock.getBlockUCState() == BlockUCState.COMMITTED) {
                throw new RetriableException(new NotReplicatedYetException("append: lastBlock=" + lastBlock + " of src=" + path + " is COMMITTED but not yet COMPLETE."));
            } else if (lastBlock.isComplete() && !blockManager.isSufficientlyReplicated(lastBlock)) {
                throw new IOException("append: lastBlock=" + lastBlock + " of src=" + path + " is not sufficiently replicated yet.");
            }
        }
        lb = prepareFileForAppend(fsn, iip, holder, clientMachine, newBlock, true, logRetryCache);
    } catch (IOException ie) {
        NameNode.stateChangeLog.warn("DIR* NameSystem.append: " + ie.getMessage());
        throw ie;
    } finally {
        fsd.writeUnlock();
    }
    HdfsFileStatus stat = FSDirStatAndListingOp.getFileInfo(fsd, iip);
    if (lb != null) {
        NameNode.stateChangeLog.debug("DIR* NameSystem.appendFile: file {} for {} at {} block {} block" + " size {}", srcArg, holder, clientMachine, lb.getBlock(), lb.getBlock().getNumBytes());
    }
    return new LastBlockWithStatus(lb, stat);
}
Also used: FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException), LastBlockWithStatus (org.apache.hadoop.hdfs.protocol.LastBlockWithStatus), FileNotFoundException (java.io.FileNotFoundException), LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock), IOException (java.io.IOException), BlockManager (org.apache.hadoop.hdfs.server.blockmanagement.BlockManager), BlockInfo (org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo), HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus), BlockStoragePolicy (org.apache.hadoop.hdfs.protocol.BlockStoragePolicy), RetriableException (org.apache.hadoop.ipc.RetriableException)
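
As the javadoc above notes, the returned LastBlockWithStatus tells the writer whether it can keep filling a partial last block or must ask for a new one. A minimal sketch of that branch, with placeholder class, method, and logging choices that are not DFSClient internals:

import org.apache.hadoop.hdfs.protocol.LastBlockWithStatus;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class AppendResultHandling {

    static void handleAppendResult(LastBlockWithStatus result) {
        LocatedBlock lastBlock = result.getLastBlock();
        if (lastBlock != null) {
            // Partial last block: the write pipeline is rebuilt from its
            // locations and writing resumes where the block currently ends.
            long resumeOffset = lastBlock.getBlockSize();
            System.out.println("Resuming partial block at offset " + resumeOffset);
        } else {
            // Last block was full: the next write allocates a new block
            // through ClientProtocol#addBlock.
            System.out.println("Last block is full; a new block will be allocated");
        }
    }
}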

Aggregations

RetriableException (org.apache.hadoop.ipc.RetriableException): 8
Configuration (org.apache.hadoop.conf.Configuration): 4
Test (org.junit.Test): 4
IOException (java.io.IOException): 3
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 2
ByteArrayInputStream (java.io.ByteArrayInputStream): 1
DataInputStream (java.io.DataInputStream): 1
FileNotFoundException (java.io.FileNotFoundException): 1
InterruptedIOException (java.io.InterruptedIOException): 1
LinkedList (java.util.LinkedList): 1
FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException): 1
FileSystem (org.apache.hadoop.fs.FileSystem): 1
Path (org.apache.hadoop.fs.Path): 1
DistributedFileSystem (org.apache.hadoop.hdfs.DistributedFileSystem): 1
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration): 1
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster): 1
DfsClientConf (org.apache.hadoop.hdfs.client.impl.DfsClientConf): 1
BlockStoragePolicy (org.apache.hadoop.hdfs.protocol.BlockStoragePolicy): 1
ClientDatanodeProtocol (org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol): 1
DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo): 1