Example 6 with LastBlockWithStatus

Use of org.apache.hadoop.hdfs.protocol.LastBlockWithStatus in project hadoop by apache.

From class FSNamesystem, method appendFile.

/**
   * Append to an existing file in the namespace.
   */
LastBlockWithStatus appendFile(String srcArg, String holder, String clientMachine, EnumSet<CreateFlag> flag, boolean logRetryCache) throws IOException {
    final String operationName = "append";
    boolean newBlock = flag.contains(CreateFlag.NEW_BLOCK);
    if (newBlock) {
        requireEffectiveLayoutVersionForFeature(Feature.APPEND_NEW_BLOCK);
    }
    NameNode.stateChangeLog.debug("DIR* NameSystem.appendFile: src={}, holder={}, clientMachine={}", srcArg, holder, clientMachine);
    try {
        boolean skipSync = false;
        LastBlockWithStatus lbs = null;
        final FSPermissionChecker pc = getPermissionChecker();
        checkOperation(OperationCategory.WRITE);
        writeLock();
        try {
            checkOperation(OperationCategory.WRITE);
            checkNameNodeSafeMode("Cannot append to file " + srcArg);
            lbs = FSDirAppendOp.appendFile(this, srcArg, pc, holder, clientMachine, newBlock, logRetryCache);
        } catch (StandbyException se) {
            // The standby rejected the write, so there are no edits to sync.
            skipSync = true;
            throw se;
        } finally {
            writeUnlock(operationName);
            // Transactions may have been logged while recovering the lease; they
            // need to be sync'ed even when an exception was thrown.
            if (!skipSync) {
                getEditLog().logSync();
            }
        }
        logAuditEvent(true, operationName, srcArg);
        return lbs;
    } catch (AccessControlException e) {
        logAuditEvent(false, operationName, srcArg);
        throw e;
    }
}
Also used: StandbyException (org.apache.hadoop.ipc.StandbyException), LastBlockWithStatus (org.apache.hadoop.hdfs.protocol.LastBlockWithStatus), AccessControlException (org.apache.hadoop.security.AccessControlException), SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException)
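
For context, here is a minimal client-side sketch of how an application reaches this appendFile path. It is not part of the Hadoop sources shown above; the NameNode URI hdfs://namenode:8020 and the file path are hypothetical. DistributedFileSystem.append with CreateFlag.NEW_BLOCK issues the append RPC whose server side is handled by the method above.

import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class AppendClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode URI and file path.
        Path file = new Path("hdfs://namenode:8020/tmp/append-demo.txt");
        try (DistributedFileSystem dfs =
                (DistributedFileSystem) file.getFileSystem(conf)) {
            // NEW_BLOCK asks the NameNode to start the appended data on a
            // fresh block, which FSNamesystem.appendFile gates on the
            // APPEND_NEW_BLOCK layout feature above.
            try (FSDataOutputStream out = dfs.append(file,
                    EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK), 4096, null)) {
                out.writeBytes("appended data\n");
            }
        }
    }
}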

Example 7 with LastBlockWithStatus

Use of org.apache.hadoop.hdfs.protocol.LastBlockWithStatus in project hadoop by apache.

From class NameNodeRpcServer, method append.

// ClientProtocol
@Override
public LastBlockWithStatus append(String src, String clientName, EnumSetWritable<CreateFlag> flag) throws IOException {
    checkNNStartup();
    String clientMachine = getClientMachine();
    if (stateChangeLog.isDebugEnabled()) {
        stateChangeLog.debug("*DIR* NameNode.append: file " + src + " for " + clientName + " at " + clientMachine);
    }
    namesystem.checkOperation(OperationCategory.WRITE);
    CacheEntryWithPayload cacheEntry = RetryCache.waitForCompletion(retryCache, null);
    if (cacheEntry != null && cacheEntry.isSuccess()) {
        // Retried call: replay the payload recorded by the first attempt.
        return (LastBlockWithStatus) cacheEntry.getPayload();
    }
    LastBlockWithStatus info = null;
    boolean success = false;
    try {
        info = namesystem.appendFile(src, clientName, clientMachine, flag.get(), cacheEntry != null);
        success = true;
    } finally {
        // Record the outcome so a retry of this call replays it rather than
        // appending again.
        RetryCache.setState(cacheEntry, success, info);
    }
    metrics.incrFilesAppended();
    return info;
}
Also used: LastBlockWithStatus (org.apache.hadoop.hdfs.protocol.LastBlockWithStatus), CacheEntryWithPayload (org.apache.hadoop.ipc.RetryCache.CacheEntryWithPayload)
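
The waitForCompletion/setState pair above is what makes a retried append idempotent from the client's point of view. As a rough sketch of that at-most-once pattern (SimpleRetryCache is a toy class invented here for illustration, not the Hadoop RetryCache), the core logic is:

import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

/** Toy at-most-once cache keyed by call id; illustration only. */
public class SimpleRetryCache<K, V> {

    private static final class Entry<V> {
        boolean done;
        boolean success;
        V payload;
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();

    /** Runs op once per key; a retry waits for and replays the first outcome. */
    public V execute(K callKey, Callable<V> op) throws Exception {
        Entry<V> fresh = new Entry<>();
        Entry<V> existing = entries.putIfAbsent(callKey, fresh);
        if (existing != null) {
            // Retried call: like RetryCache.waitForCompletion, wait for the
            // in-flight attempt and replay its recorded payload.
            synchronized (existing) {
                while (!existing.done) {
                    existing.wait();
                }
                if (existing.success) {
                    return existing.payload;
                }
                throw new IllegalStateException("first attempt failed");
            }
        }
        V result = null;
        boolean success = false;
        try {
            result = op.call();
            success = true;
            return result;
        } finally {
            // Like RetryCache.setState: record the outcome, even on failure,
            // so retries do not re-execute the operation.
            synchronized (fresh) {
                fresh.payload = result;
                fresh.success = success;
                fresh.done = true;
                fresh.notifyAll();
            }
        }
    }
}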

Example 8 with LastBlockWithStatus

Use of org.apache.hadoop.hdfs.protocol.LastBlockWithStatus in project hadoop by apache.

From class TestNamenodeRetryCache, method testAppend.

/**
   * Test that retried append calls replay the cached result.
   */
@Test
public void testAppend() throws Exception {
    String src = "/testNamenodeRetryCache/testAppend/src";
    resetCall();
    // Create a file with partial block
    DFSTestUtil.createFile(filesystem, new Path(src), 128, (short) 1, 0L);
    // Retried append requests succeed
    newCall();
    LastBlockWithStatus b = nnRpc.append(src, "holder", new EnumSetWritable<>(EnumSet.of(CreateFlag.APPEND)));
    Assert.assertEquals(b, nnRpc.append(src, "holder", new EnumSetWritable<>(EnumSet.of(CreateFlag.APPEND))));
    Assert.assertEquals(b, nnRpc.append(src, "holder", new EnumSetWritable<>(EnumSet.of(CreateFlag.APPEND))));
    // A new (non-retried) call fails: the file is still open for append from
    // the first call, so the NameNode rejects a second append on the lease check.
    newCall();
    try {
        nnRpc.append(src, "holder", new EnumSetWritable<>(EnumSet.of(CreateFlag.APPEND)));
        Assert.fail("testAppend - expected exception is not thrown");
    } catch (Exception e) {
        // Expected: the lease on src is already held.
    }
}
Also used: Path (org.apache.hadoop.fs.Path), EnumSetWritable (org.apache.hadoop.io.EnumSetWritable), LastBlockWithStatus (org.apache.hadoop.hdfs.protocol.LastBlockWithStatus), UnresolvedLinkException (org.apache.hadoop.fs.UnresolvedLinkException), StandbyException (org.apache.hadoop.ipc.StandbyException), IOException (java.io.IOException), AccessControlException (org.apache.hadoop.security.AccessControlException), Test (org.junit.Test)
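
The replay behavior the test asserts can also be seen in isolation with the toy SimpleRetryCache from the sketch after Example 7 (again a hypothetical illustration, not Hadoop code): calls sharing a call id get the first result back, while a new call id executes the operation again, mirroring what newCall() does above.

import java.util.concurrent.atomic.AtomicInteger;

import org.junit.Assert;
import org.junit.Test;

public class SimpleRetryCacheTest {

    @Test
    public void retriesReplayTheFirstResult() throws Exception {
        SimpleRetryCache<Integer, String> cache = new SimpleRetryCache<>();
        AtomicInteger executions = new AtomicInteger();

        // Same call id: the operation runs once and both calls see one result.
        String first = cache.execute(42, () -> "run-" + executions.incrementAndGet());
        String retry = cache.execute(42, () -> "run-" + executions.incrementAndGet());
        Assert.assertEquals(first, retry);
        Assert.assertEquals(1, executions.get());

        // New call id: the operation executes again.
        String fresh = cache.execute(43, () -> "run-" + executions.incrementAndGet());
        Assert.assertEquals(2, executions.get());
        Assert.assertNotEquals(first, fresh);
    }
}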

Aggregations

LastBlockWithStatus (org.apache.hadoop.hdfs.protocol.LastBlockWithStatus): 8 usages
IOException (java.io.IOException): 4 usages
HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus): 4 usages
LocatedBlock (org.apache.hadoop.hdfs.protocol.LocatedBlock): 3 usages
ServiceException (com.google.protobuf.ServiceException): 2 usages
FileNotFoundException (java.io.FileNotFoundException): 2 usages
CreateFlag (org.apache.hadoop.fs.CreateFlag): 2 usages
SnapshotAccessControlException (org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException): 2 usages
AppendResponseProto (org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.AppendResponseProto): 2 usages
StandbyException (org.apache.hadoop.ipc.StandbyException): 2 usages
AccessControlException (org.apache.hadoop.security.AccessControlException): 2 usages
List (java.util.List): 1 usage
FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException): 1 usage
Path (org.apache.hadoop.fs.Path): 1 usage
UnresolvedLinkException (org.apache.hadoop.fs.UnresolvedLinkException): 1 usage
BlockStoragePolicy (org.apache.hadoop.hdfs.protocol.BlockStoragePolicy): 1 usage
CacheDirectiveInfo (org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo): 1 usage
DSQuotaExceededException (org.apache.hadoop.hdfs.protocol.DSQuotaExceededException): 1 usage
ErasureCodingPolicy (org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy): 1 usage
QuotaByStorageTypeExceededException (org.apache.hadoop.hdfs.protocol.QuotaByStorageTypeExceededException): 1 usage