
Example 1 with CallerContext

Use of org.apache.hadoop.ipc.CallerContext in the Apache Hadoop project.

From the class ApplicationStateDataPBImpl, the method getCallerContext:

@Override
public CallerContext getCallerContext() {
    ApplicationStateDataProtoOrBuilder p = viaProto ? proto : builder;
    RpcHeaderProtos.RPCCallerContextProto pbContext = p.getCallerContext();
    if (pbContext != null) {
        CallerContext context = new CallerContext.Builder(pbContext.getContext()).setSignature(pbContext.getSignature().toByteArray()).build();
        return context;
    }
    return null;
}
Also used : ApplicationStateDataProtoOrBuilder(org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos.ApplicationStateDataProtoOrBuilder) CallerContext(org.apache.hadoop.ipc.CallerContext) RpcHeaderProtos(org.apache.hadoop.ipc.protobuf.RpcHeaderProtos)
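
The snippet above converts the persisted protobuf caller context back into a CallerContext. The sketch below is illustrative only (not from the Hadoop sources) and assumes nothing beyond the Builder API shown in these examples; it demonstrates the round trip the conversion relies on: the context string and the optional signature bytes are preserved by CallerContext.Builder.

import org.apache.hadoop.ipc.CallerContext;

public class CallerContextRoundTrip {
    public static void main(String[] args) throws Exception {
        // Build a context the same way ApplicationStateDataPBImpl does when recovering state.
        byte[] signature = "sig".getBytes(CallerContext.SIGNATURE_ENCODING);
        CallerContext ctx = new CallerContext.Builder("recovered-app")
                .setSignature(signature)
                .build();
        // getContext() returns the string given to the Builder; getSignature()
        // returns the signature bytes (or null if no signature was set).
        System.out.println(ctx.getContext());          // recovered-app
        System.out.println(ctx.getSignature().length); // 3
    }
}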

Example 2 with CallerContext

Use of org.apache.hadoop.ipc.CallerContext in the Apache Hadoop project.

From the class ProtoUtil, the method makeRpcRequestHeader:

public static RpcRequestHeaderProto makeRpcRequestHeader(RPC.RpcKind rpcKind, RpcRequestHeaderProto.OperationProto operation, int callId, int retryCount, byte[] uuid) {
    RpcRequestHeaderProto.Builder result = RpcRequestHeaderProto.newBuilder();
    result.setRpcKind(convert(rpcKind)).setRpcOp(operation).setCallId(callId).setRetryCount(retryCount).setClientId(ByteString.copyFrom(uuid));
    // Add tracing info if we are currently tracing.
    Span span = Tracer.getCurrentSpan();
    if (span != null) {
        result.setTraceInfo(RPCTraceInfoProto.newBuilder().setTraceId(span.getSpanId().getHigh()).setParentId(span.getSpanId().getLow()).build());
    }
    // Add caller context if it is not null
    CallerContext callerContext = CallerContext.getCurrent();
    if (callerContext != null && callerContext.isContextValid()) {
        RPCCallerContextProto.Builder contextBuilder = RPCCallerContextProto.newBuilder().setContext(callerContext.getContext());
        if (callerContext.getSignature() != null) {
            contextBuilder.setSignature(ByteString.copyFrom(callerContext.getSignature()));
        }
        result.setCallerContext(contextBuilder);
    }
    return result.build();
}
Also used : CallerContext(org.apache.hadoop.ipc.CallerContext) Span(org.apache.htrace.core.Span)
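
makeRpcRequestHeader only attaches a caller context when one has been set on the calling thread and isContextValid() returns true. Below is a minimal client-side sketch, assuming only the Builder and setCurrent APIs seen in these examples; the context string "myApp_query_42" and the class name are purely illustrative.

import org.apache.hadoop.ipc.CallerContext;

public class SetCallerContextBeforeRpc {
    public static void main(String[] args) throws Exception {
        // Install a thread-local caller context; subsequent RPC request headers
        // built by ProtoUtil.makeRpcRequestHeader() will carry it to the server.
        CallerContext ctx = new CallerContext.Builder("myApp_query_42")
                .setSignature("v1".getBytes(CallerContext.SIGNATURE_ENCODING))
                .build();
        CallerContext.setCurrent(ctx);
        // ... perform FileSystem or YARN client calls on this thread ...
    }
}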

Example 3 with CallerContext

Use of org.apache.hadoop.ipc.CallerContext in the Apache Hadoop project.

From the class ToolRunner, the method run:

/**
   * Runs the given <code>Tool</code> by {@link Tool#run(String[])}, after 
   * parsing with the given generic arguments. Uses the given 
   * <code>Configuration</code>, or builds one if null.
   * 
   * Sets the <code>Tool</code>'s configuration with the possibly modified 
   * version of the <code>conf</code>.  
   * 
   * @param conf <code>Configuration</code> for the <code>Tool</code>.
   * @param tool <code>Tool</code> to run.
   * @param args command-line arguments to the tool.
   * @return exit code of the {@link Tool#run(String[])} method.
   */
public static int run(Configuration conf, Tool tool, String[] args) throws Exception {
    if (CallerContext.getCurrent() == null) {
        CallerContext ctx = new CallerContext.Builder("CLI").build();
        CallerContext.setCurrent(ctx);
    }
    if (conf == null) {
        conf = new Configuration();
    }
    GenericOptionsParser parser = new GenericOptionsParser(conf, args);
    //set the configuration back, so that Tool can configure itself
    tool.setConf(conf);
    //get the args w/o generic hadoop args
    String[] toolArgs = parser.getRemainingArgs();
    return tool.run(toolArgs);
}
Also used : CallerContext(org.apache.hadoop.ipc.CallerContext) Configuration(org.apache.hadoop.conf.Configuration)
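
Because run() installs a default "CLI" context only when none is already present, tools launched through ToolRunner are tagged in downstream audit logs without any extra code. An illustrative Tool follows; the class name MyTool is hypothetical, while Tool, Configured, and ToolRunner are the standard Hadoop classes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.ipc.CallerContext;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyTool extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // ToolRunner.run() has already set a "CLI" caller context for this thread
        // unless the launcher installed its own beforehand.
        System.out.println("caller context: " + CallerContext.getCurrent());
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyTool(), args));
    }
}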

Example 4 with CallerContext

Use of org.apache.hadoop.ipc.CallerContext in the Apache Hadoop project.

From the class TestAuditLogger, the method testAuditLoggerWithCallContext:

/**
   * Verify that the audit logger is aware of the call context
   */
@Test
public void testAuditLoggerWithCallContext() throws IOException {
    Configuration conf = new HdfsConfiguration();
    conf.setBoolean(HADOOP_CALLER_CONTEXT_ENABLED_KEY, true);
    conf.setInt(HADOOP_CALLER_CONTEXT_MAX_SIZE_KEY, 128);
    conf.setInt(HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_KEY, 40);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    LogCapturer auditlog = LogCapturer.captureLogs(FSNamesystem.auditLog);
    try {
        cluster.waitClusterUp();
        final FileSystem fs = cluster.getFileSystem();
        final long time = System.currentTimeMillis();
        final Path p = new Path("/");
        assertNull(CallerContext.getCurrent());
        // context-only
        CallerContext context = new CallerContext.Builder("setTimes").build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        fs.setTimes(p, time, time);
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=setTimes%n")));
        auditlog.clearOutput();
        // context with signature
        context = new CallerContext.Builder("setTimes").setSignature("L".getBytes(CallerContext.SIGNATURE_ENCODING)).build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        fs.setTimes(p, time, time);
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=setTimes:L%n")));
        auditlog.clearOutput();
        // long context is truncated
        final String longContext = StringUtils.repeat("foo", 100);
        context = new CallerContext.Builder(longContext).setSignature("L".getBytes(CallerContext.SIGNATURE_ENCODING)).build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        fs.setTimes(p, time, time);
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=%s:L%n", longContext.substring(0, 128))));
        auditlog.clearOutput();
        // empty context is ignored
        context = new CallerContext.Builder("").setSignature("L".getBytes(CallerContext.SIGNATURE_ENCODING)).build();
        CallerContext.setCurrent(context);
        LOG.info("Set empty caller context");
        fs.setTimes(p, time, time);
        assertFalse(auditlog.getOutput().contains("callerContext="));
        auditlog.clearOutput();
        // caller context is inherited in child thread
        context = new CallerContext.Builder("setTimes").setSignature("L".getBytes(CallerContext.SIGNATURE_ENCODING)).build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        Thread child = new Thread(new Runnable() {

            @Override
            public void run() {
                try {
                    fs.setTimes(p, time, time);
                } catch (IOException e) {
                    fail("Unexpected exception found." + e);
                }
            }
        });
        child.start();
        try {
            child.join();
        } catch (InterruptedException ignored) {
        // Ignore
        }
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=setTimes:L%n")));
        auditlog.clearOutput();
        // caller context is overridden in child thread
        final CallerContext childContext = new CallerContext.Builder("setPermission").setSignature("L".getBytes(CallerContext.SIGNATURE_ENCODING)).build();
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        child = new Thread(new Runnable() {

            @Override
            public void run() {
                try {
                    CallerContext.setCurrent(childContext);
                    fs.setPermission(p, new FsPermission((short) 777));
                } catch (IOException e) {
                    fail("Unexpected exception found." + e);
                }
            }
        });
        child.start();
        try {
            child.join();
        } catch (InterruptedException ignored) {
        // Ignore
        }
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=setPermission:L%n")));
        auditlog.clearOutput();
        // reuse the current context's signature
        context = new CallerContext.Builder("mkdirs").setSignature(CallerContext.getCurrent().getSignature()).build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        fs.mkdirs(new Path("/reuse-context-signature"));
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=mkdirs:L%n")));
        auditlog.clearOutput();
        // too long signature is ignored
        context = new CallerContext.Builder("setTimes").setSignature(new byte[41]).build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        fs.setTimes(p, time, time);
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=setTimes%n")));
        auditlog.clearOutput();
        // null signature is ignored
        context = new CallerContext.Builder("setTimes").setSignature(null).build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        fs.setTimes(p, time, time);
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=setTimes%n")));
        auditlog.clearOutput();
        // empty signature is ignored
        context = new CallerContext.Builder("mkdirs").setSignature("".getBytes(CallerContext.SIGNATURE_ENCODING)).build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        fs.mkdirs(new Path("/empty-signature"));
        assertTrue(auditlog.getOutput().endsWith(String.format("callerContext=mkdirs%n")));
        auditlog.clearOutput();
        // invalid context is not passed to the rpc
        context = new CallerContext.Builder(null).build();
        CallerContext.setCurrent(context);
        LOG.info("Set current caller context as {}", CallerContext.getCurrent());
        fs.mkdirs(new Path("/empty-signature"));
        assertFalse(auditlog.getOutput().contains("callerContext="));
        auditlog.clearOutput();
    } finally {
        cluster.shutdown();
    }
}
Also used : Path(org.apache.hadoop.fs.Path) MiniDFSCluster(org.apache.hadoop.hdfs.MiniDFSCluster) CallerContext(org.apache.hadoop.ipc.CallerContext) Configuration(org.apache.hadoop.conf.Configuration) HdfsConfiguration(org.apache.hadoop.hdfs.HdfsConfiguration) IOException(java.io.IOException) FileSystem(org.apache.hadoop.fs.FileSystem) LogCapturer(org.apache.hadoop.test.GenericTestUtils.LogCapturer) FsPermission(org.apache.hadoop.fs.permission.FsPermission) Test(org.junit.Test)
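
The test enables the feature through the HADOOP_CALLER_CONTEXT_* keys. The sketch below shows the equivalent programmatic configuration; the literal property names are an assumption about what those constants resolve to and should be checked against core-default.xml in your Hadoop version.

import org.apache.hadoop.conf.Configuration;

public class CallerContextConf {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Property names assumed to correspond to the constants used in the test above.
        conf.setBoolean("hadoop.caller.context.enabled", true);       // include callerContext= in the audit log
        conf.setInt("hadoop.caller.context.max.size", 128);           // longer context strings are truncated
        conf.setInt("hadoop.caller.context.signature.max.size", 40);  // oversized signatures are ignored
    }
}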

Example 5 with CallerContext

Use of org.apache.hadoop.ipc.CallerContext in the Apache Hadoop project.

From the class ClientRMService, the method submitApplication:

@Override
public SubmitApplicationResponse submitApplication(SubmitApplicationRequest request) throws YarnException, IOException {
    ApplicationSubmissionContext submissionContext = request.getApplicationSubmissionContext();
    ApplicationId applicationId = submissionContext.getApplicationId();
    CallerContext callerContext = CallerContext.getCurrent();
    // ApplicationSubmissionContext needs to be validated for safety - only
    // those fields that are independent of the RM's configuration will be
    // checked here, those that are dependent on RM configuration are validated
    // in RMAppManager.
    String user = null;
    try {
        // Safety
        user = UserGroupInformation.getCurrentUser().getShortUserName();
    } catch (IOException ie) {
        LOG.warn("Unable to get the current user.", ie);
        RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST, ie.getMessage(), "ClientRMService", "Exception in submitting application", applicationId, callerContext);
        throw RPCUtil.getRemoteException(ie);
    }
    if (YarnConfiguration.timelineServiceV2Enabled(getConfig())) {
        // Sanity check for flow run
        String value = null;
        try {
            for (String tag : submissionContext.getApplicationTags()) {
                if (tag.startsWith(TimelineUtils.FLOW_RUN_ID_TAG_PREFIX + ":") || tag.startsWith(TimelineUtils.FLOW_RUN_ID_TAG_PREFIX.toLowerCase() + ":")) {
                    value = tag.substring(TimelineUtils.FLOW_RUN_ID_TAG_PREFIX.length() + 1);
                    Long.valueOf(value);
                }
            }
        } catch (NumberFormatException e) {
            LOG.warn("Invalid to flow run: " + value + ". Flow run should be a long integer", e);
            RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST, e.getMessage(), "ClientRMService", "Exception in submitting application", applicationId);
            throw RPCUtil.getRemoteException(e);
        }
    }
    // Check whether the app has already been put into rmContext.
    // If it is, simply return the response
    if (rmContext.getRMApps().get(applicationId) != null) {
        LOG.info("This is an earlier submitted application: " + applicationId);
        return SubmitApplicationResponse.newInstance();
    }
    ByteBuffer tokenConf = submissionContext.getAMContainerSpec().getTokensConf();
    if (tokenConf != null) {
        int maxSize = getConfig().getInt(YarnConfiguration.RM_DELEGATION_TOKEN_MAX_CONF_SIZE, YarnConfiguration.DEFAULT_RM_DELEGATION_TOKEN_MAX_CONF_SIZE_BYTES);
        LOG.info("Using app provided configurations for delegation token renewal," + " total size = " + tokenConf.capacity());
        if (tokenConf.capacity() > maxSize) {
            throw new YarnException("Exceed " + YarnConfiguration.RM_DELEGATION_TOKEN_MAX_CONF_SIZE + " = " + maxSize + " bytes, current conf size = " + tokenConf.capacity() + " bytes.");
        }
    }
    if (submissionContext.getQueue() == null) {
        submissionContext.setQueue(YarnConfiguration.DEFAULT_QUEUE_NAME);
    }
    if (submissionContext.getApplicationName() == null) {
        submissionContext.setApplicationName(YarnConfiguration.DEFAULT_APPLICATION_NAME);
    }
    if (submissionContext.getApplicationType() == null) {
        submissionContext.setApplicationType(YarnConfiguration.DEFAULT_APPLICATION_TYPE);
    } else {
        if (submissionContext.getApplicationType().length() > YarnConfiguration.APPLICATION_TYPE_LENGTH) {
            submissionContext.setApplicationType(submissionContext.getApplicationType().substring(0, YarnConfiguration.APPLICATION_TYPE_LENGTH));
        }
    }
    ReservationId reservationId = request.getApplicationSubmissionContext().getReservationID();
    checkReservationACLs(submissionContext.getQueue(), AuditConstants.SUBMIT_RESERVATION_REQUEST, reservationId);
    try {
        // call RMAppManager to submit application directly
        rmAppManager.submitApplication(submissionContext, System.currentTimeMillis(), user);
        LOG.info("Application with id " + applicationId.getId() + " submitted by user " + user);
        RMAuditLogger.logSuccess(user, AuditConstants.SUBMIT_APP_REQUEST, "ClientRMService", applicationId, callerContext);
    } catch (YarnException e) {
        LOG.info("Exception in submitting " + applicationId, e);
        RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST, e.getMessage(), "ClientRMService", "Exception in submitting application", applicationId, callerContext);
        throw e;
    }
    SubmitApplicationResponse response = recordFactory.newRecordInstance(SubmitApplicationResponse.class);
    return response;
}
Also used : CallerContext(org.apache.hadoop.ipc.CallerContext) ReservationId(org.apache.hadoop.yarn.api.records.ReservationId) ApplicationSubmissionContext(org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext) IOException(java.io.IOException) ApplicationId(org.apache.hadoop.yarn.api.records.ApplicationId) SubmitApplicationResponse(org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationResponse) ByteBuffer(java.nio.ByteBuffer) YarnException(org.apache.hadoop.yarn.exceptions.YarnException)
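
On the server side, the caller context reaches submitApplication() via the RPC request header (see Example 2) and is attached to the RM audit records through RMAuditLogger. Below is a client-side sketch of tagging a submission; the context string "workflow-1234" and the class name are illustrative, and the actual application submission is elided.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.CallerContext;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class SubmitWithCallerContext {
    public static void main(String[] args) throws Exception {
        // Set the caller context before talking to the ResourceManager so that
        // ClientRMService can hand it to RMAuditLogger.logSuccess/logFailure.
        CallerContext.setCurrent(new CallerContext.Builder("workflow-1234").build());

        YarnClient client = YarnClient.createYarnClient();
        client.init(new Configuration());
        client.start();
        // ... build an ApplicationSubmissionContext and call client.submitApplication(...) ...
        client.stop();
    }
}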

Aggregations

CallerContext (org.apache.hadoop.ipc.CallerContext) 6
IOException (java.io.IOException) 3
Configuration (org.apache.hadoop.conf.Configuration) 2
ApplicationId (org.apache.hadoop.yarn.api.records.ApplicationId) 2
InetAddress (java.net.InetAddress) 1
ByteBuffer (java.nio.ByteBuffer) 1
AccessControlException (java.security.AccessControlException) 1
FileSystem (org.apache.hadoop.fs.FileSystem) 1
Path (org.apache.hadoop.fs.Path) 1
FsPermission (org.apache.hadoop.fs.permission.FsPermission) 1
HdfsConfiguration (org.apache.hadoop.hdfs.HdfsConfiguration) 1
MiniDFSCluster (org.apache.hadoop.hdfs.MiniDFSCluster) 1
RpcHeaderProtos (org.apache.hadoop.ipc.protobuf.RpcHeaderProtos) 1
UserGroupInformation (org.apache.hadoop.security.UserGroupInformation) 1
LogCapturer (org.apache.hadoop.test.GenericTestUtils.LogCapturer) 1
SubmitApplicationResponse (org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationResponse) 1
ApplicationSubmissionContext (org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext) 1
ReservationId (org.apache.hadoop.yarn.api.records.ReservationId) 1
ApplicationNotFoundException (org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException) 1
YarnException (org.apache.hadoop.yarn.exceptions.YarnException) 1