
Example 6 with WccAttr

Use of org.apache.hadoop.nfs.nfs3.response.WccAttr in the Apache Hadoop project.

From the class RpcProgramNfs3, method setattr.

@VisibleForTesting
SETATTR3Response setattr(XDR xdr, SecurityHandler securityHandler, SocketAddress remoteAddress) {
    SETATTR3Response response = new SETATTR3Response(Nfs3Status.NFS3_OK);
    DFSClient dfsClient = clientCache.getDfsClient(securityHandler.getUser());
    if (dfsClient == null) {
        response.setStatus(Nfs3Status.NFS3ERR_SERVERFAULT);
        return response;
    }
    SETATTR3Request request;
    try {
        request = SETATTR3Request.deserialize(xdr);
    } catch (IOException e) {
        LOG.error("Invalid SETATTR request");
        response.setStatus(Nfs3Status.NFS3ERR_INVAL);
        return response;
    }
    FileHandle handle = request.getHandle();
    if (LOG.isDebugEnabled()) {
        LOG.debug("NFS SETATTR fileId: " + handle.getFileId() + " client: " + remoteAddress);
    }
    if (request.getAttr().getUpdateFields().contains(SetAttrField.SIZE)) {
        LOG.error("Setting file size is not supported when setattr, fileId: " + handle.getFileId());
        response.setStatus(Nfs3Status.NFS3ERR_INVAL);
        return response;
    }
    String fileIdPath = Nfs3Utils.getFileIdPath(handle);
    Nfs3FileAttributes preOpAttr = null;
    try {
        preOpAttr = Nfs3Utils.getFileAttr(dfsClient, fileIdPath, iug);
        if (preOpAttr == null) {
            LOG.info("Can't get path for fileId: " + handle.getFileId());
            response.setStatus(Nfs3Status.NFS3ERR_STALE);
            return response;
        }
        WccAttr preOpWcc = Nfs3Utils.getWccAttr(preOpAttr);
        if (request.isCheck()) {
            if (!preOpAttr.getCtime().equals(request.getCtime())) {
                WccData wccData = new WccData(preOpWcc, preOpAttr);
                return new SETATTR3Response(Nfs3Status.NFS3ERR_NOT_SYNC, wccData);
            }
        }
        // check the write access privilege
        if (!checkAccessPrivilege(remoteAddress, AccessPrivilege.READ_WRITE)) {
            return new SETATTR3Response(Nfs3Status.NFS3ERR_ACCES, new WccData(preOpWcc, preOpAttr));
        }
        setattrInternal(dfsClient, fileIdPath, request.getAttr(), true);
        Nfs3FileAttributes postOpAttr = Nfs3Utils.getFileAttr(dfsClient, fileIdPath, iug);
        WccData wccData = new WccData(preOpWcc, postOpAttr);
        return new SETATTR3Response(Nfs3Status.NFS3_OK, wccData);
    } catch (IOException e) {
        LOG.warn("Exception ", e);
        WccData wccData = null;
        try {
            wccData = Nfs3Utils.createWccData(Nfs3Utils.getWccAttr(preOpAttr), dfsClient, fileIdPath, iug);
        } catch (IOException e1) {
            LOG.info("Can't get postOpAttr for fileIdPath: " + fileIdPath, e1);
        }
        int status = mapErrorStatus(e);
        return new SETATTR3Response(status, wccData);
    }
}
Also used: DFSClient (org.apache.hadoop.hdfs.DFSClient), WccData (org.apache.hadoop.nfs.nfs3.response.WccData), FileHandle (org.apache.hadoop.nfs.nfs3.FileHandle), Nfs3FileAttributes (org.apache.hadoop.nfs.nfs3.Nfs3FileAttributes), WccAttr (org.apache.hadoop.nfs.nfs3.response.WccAttr), SETATTR3Response (org.apache.hadoop.nfs.nfs3.response.SETATTR3Response), SETATTR3Request (org.apache.hadoop.nfs.nfs3.request.SETATTR3Request), IOException (java.io.IOException), VisibleForTesting (com.google.common.annotations.VisibleForTesting)
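
The pattern in this handler is NFSv3 weak cache consistency (WCC): a lightweight pre-operation snapshot (WccAttr) is captured before the mutation, then paired with the full post-operation attributes in a WccData so the client can detect whether anything beyond the requested change happened. The sketch below illustrates that pairing only; it is a simplified illustration, not the Hadoop implementation, and preOpSnapshot is an assumption about what Nfs3Utils.getWccAttr(Nfs3FileAttributes) keeps from the full attributes.

// Simplified sketch of the pre-op/post-op pairing used in the SETATTR handler.
import org.apache.hadoop.nfs.nfs3.Nfs3FileAttributes;
import org.apache.hadoop.nfs.nfs3.response.WccAttr;
import org.apache.hadoop.nfs.nfs3.response.WccData;

public class WccPairingSketch {

    // Assumed to mirror Nfs3Utils.getWccAttr(Nfs3FileAttributes): only size,
    // mtime and ctime are kept for the pre-op state.
    static WccAttr preOpSnapshot(Nfs3FileAttributes attr) {
        return new WccAttr(attr.getSize(), attr.getMtime(), attr.getCtime());
    }

    static WccData pair(Nfs3FileAttributes preOp, Nfs3FileAttributes postOp) {
        // WccData carries the before/after view that every mutating NFSv3
        // reply (SETATTR, WRITE, ...) returns to the client.
        return new WccData(preOpSnapshot(preOp), postOp);
    }
}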

Example 7 with WccAttr

Use of org.apache.hadoop.nfs.nfs3.response.WccAttr in the Apache Hadoop project.

From the class OpenFileCtx, method receivedNewWriteInternal.

private void receivedNewWriteInternal(DFSClient dfsClient, WRITE3Request request, Channel channel, int xid, AsyncDataService asyncDataService, IdMappingServiceProvider iug) {
    WriteStableHow stableHow = request.getStableHow();
    WccAttr preOpAttr = latestAttr.getWccAttr();
    int count = request.getCount();
    WriteCtx writeCtx = addWritesToCache(request, channel, xid);
    if (writeCtx == null) {
        // offset < nextOffset
        processOverWrite(dfsClient, request, channel, xid, iug);
    } else {
        // The write is added to pendingWrites.
        // Check and start writing back if necessary
        boolean startWriting = checkAndStartWrite(asyncDataService, writeCtx);
        if (!startWriting) {
            // offset > nextOffset. check if we need to dump data
            waitForDump();
            // for unstable non-sequential write
            if (stableHow != WriteStableHow.UNSTABLE) {
                LOG.info("Have to change stable write to unstable write: " + request.getStableHow());
                stableHow = WriteStableHow.UNSTABLE;
            }
            if (LOG.isDebugEnabled()) {
                LOG.debug("UNSTABLE write request, send response for offset: " + writeCtx.getOffset());
            }
            WccData fileWcc = new WccData(preOpAttr, latestAttr);
            WRITE3Response response = new WRITE3Response(Nfs3Status.NFS3_OK, fileWcc, count, stableHow, Nfs3Constant.WRITE_COMMIT_VERF);
            RpcProgramNfs3.metrics.addWrite(Nfs3Utils.getElapsedTime(writeCtx.startTime));
            Nfs3Utils.writeChannel(channel, response.serialize(new XDR(), xid, new VerifierNone()), xid);
            writeCtx.setReplied(true);
        }
    }
}
Also used: WccData (org.apache.hadoop.nfs.nfs3.response.WccData), WriteStableHow (org.apache.hadoop.nfs.nfs3.Nfs3Constant.WriteStableHow), XDR (org.apache.hadoop.oncrpc.XDR), VerifierNone (org.apache.hadoop.oncrpc.security.VerifierNone), WccAttr (org.apache.hadoop.nfs.nfs3.response.WccAttr), WRITE3Response (org.apache.hadoop.nfs.nfs3.response.WRITE3Response)
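
The branching above hinges on how the request offset compares with the next expected offset of the open file: writes behind the cursor are treated as overwrites, writes at the cursor can be streamed in order, and writes ahead of it are cached and acknowledged as UNSTABLE. The helper below is a hedged sketch of that classification only; it is not part of OpenFileCtx and the names are illustrative.

// Hedged sketch of the offset comparison the comments in
// receivedNewWriteInternal refer to. Not part of OpenFileCtx.
public class WriteOffsetSketch {

    enum WriteKind { OVERWRITE, SEQUENTIAL, NON_SEQUENTIAL }

    static WriteKind classify(long offset, long nextOffset) {
        if (offset < nextOffset) {
            return WriteKind.OVERWRITE;      // handled by the processOverWrite(...) path
        } else if (offset == nextOffset) {
            return WriteKind.SEQUENTIAL;     // can be written back in order
        } else {
            return WriteKind.NON_SEQUENTIAL; // cached; replied as UNSTABLE above
        }
    }
}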

Example 8 with WccAttr

Use of org.apache.hadoop.nfs.nfs3.response.WccAttr in the Apache Hadoop project.

From the class Nfs3Utils, method getWccAttr.

public static WccAttr getWccAttr(DFSClient client, String fileIdPath) throws IOException {
    HdfsFileStatus fstat = getFileStatus(client, fileIdPath);
    if (fstat == null) {
        return null;
    }
    long size = fstat.isDir() ? getDirSize(fstat.getChildrenNum()) : fstat.getLen();
    return new WccAttr(size, new NfsTime(fstat.getModificationTime()), new NfsTime(fstat.getModificationTime()));
}
Also used: HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus), WccAttr (org.apache.hadoop.nfs.nfs3.response.WccAttr), NfsTime (org.apache.hadoop.nfs.NfsTime)
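
Note that HdfsFileStatus carries no separate change time, which is presumably why the modification time is reused for both the mtime and ctime fields of the returned WccAttr. The snippet below is a hedged usage sketch of this overload: it resolves the pre-op attributes for a file handle, with a null result meaning the handle is stale (mirroring the SETATTR example above). The wrapper class and method name are illustrative, not Hadoop code.

// Hedged usage sketch of Nfs3Utils.getWccAttr(DFSClient, String).
import java.io.IOException;
import org.apache.hadoop.hdfs.DFSClient;
// Nfs3Utils lives in the hadoop-hdfs-nfs module (package assumed here):
import org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils;
import org.apache.hadoop.nfs.nfs3.FileHandle;
import org.apache.hadoop.nfs.nfs3.response.WccAttr;

public class PreOpWccSketch {

    // Returns the pre-op WccAttr for a handle, or null when the file id path
    // no longer resolves (the caller would then answer NFS3ERR_STALE).
    static WccAttr lookupPreOp(DFSClient dfsClient, FileHandle handle) throws IOException {
        String fileIdPath = Nfs3Utils.getFileIdPath(handle);
        return Nfs3Utils.getWccAttr(dfsClient, fileIdPath);
    }
}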

Aggregations

WccAttr (org.apache.hadoop.nfs.nfs3.response.WccAttr): 7 uses
WccData (org.apache.hadoop.nfs.nfs3.response.WccData): 6 uses
IOException (java.io.IOException): 5 uses
FileHandle (org.apache.hadoop.nfs.nfs3.FileHandle): 4 uses
Nfs3FileAttributes (org.apache.hadoop.nfs.nfs3.Nfs3FileAttributes): 4 uses
WRITE3Response (org.apache.hadoop.nfs.nfs3.response.WRITE3Response): 4 uses
VisibleForTesting (com.google.common.annotations.VisibleForTesting): 3 uses
DFSClient (org.apache.hadoop.hdfs.DFSClient): 3 uses
WriteStableHow (org.apache.hadoop.nfs.nfs3.Nfs3Constant.WriteStableHow): 3 uses
XDR (org.apache.hadoop.oncrpc.XDR): 3 uses
VerifierNone (org.apache.hadoop.oncrpc.security.VerifierNone): 3 uses
HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus): 2 uses
File (java.io.File): 1 use
RandomAccessFile (java.io.RandomAccessFile): 1 use
NfsTime (org.apache.hadoop.nfs.NfsTime): 1 use
SETATTR3Request (org.apache.hadoop.nfs.nfs3.request.SETATTR3Request): 1 use
SYMLINK3Request (org.apache.hadoop.nfs.nfs3.request.SYMLINK3Request): 1 use
WRITE3Request (org.apache.hadoop.nfs.nfs3.request.WRITE3Request): 1 use
SETATTR3Response (org.apache.hadoop.nfs.nfs3.response.SETATTR3Response): 1 use
SYMLINK3Response (org.apache.hadoop.nfs.nfs3.response.SYMLINK3Response): 1 use