
Example 1 with KillJobRequest

Use of org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillJobRequest in project hadoop by apache.

From the class TestMRClientService, method testViewAclOnlyCannotModify:

@Test
public void testViewAclOnlyCannotModify() throws Exception {
    final MRAppWithClientService app = new MRAppWithClientService(1, 0, false);
    final Configuration conf = new Configuration();
    conf.setBoolean(MRConfig.MR_ACLS_ENABLED, true);
    conf.set(MRJobConfig.JOB_ACL_VIEW_JOB, "viewonlyuser");
    Job job = app.submit(conf);
    app.waitForState(job, JobState.RUNNING);
    Assert.assertEquals("Num tasks not correct", 1, job.getTasks().size());
    Iterator<Task> it = job.getTasks().values().iterator();
    Task task = it.next();
    app.waitForState(task, TaskState.RUNNING);
    TaskAttempt attempt = task.getAttempts().values().iterator().next();
    app.waitForState(attempt, TaskAttemptState.RUNNING);
    UserGroupInformation viewOnlyUser = UserGroupInformation.createUserForTesting("viewonlyuser", new String[] {});
    Assert.assertTrue("viewonlyuser cannot view job", job.checkAccess(viewOnlyUser, JobACL.VIEW_JOB));
    Assert.assertFalse("viewonlyuser can modify job", job.checkAccess(viewOnlyUser, JobACL.MODIFY_JOB));
    MRClientProtocol client = viewOnlyUser.doAs(new PrivilegedExceptionAction<MRClientProtocol>() {

        @Override
        public MRClientProtocol run() throws Exception {
            YarnRPC rpc = YarnRPC.create(conf);
            return (MRClientProtocol) rpc.getProxy(MRClientProtocol.class, app.clientService.getBindAddress(), conf);
        }
    });
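    // Every modify operation below should be rejected with an AccessControlException,
    // since the view-only user holds VIEW_JOB but not MODIFY_JOB.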
    KillJobRequest killJobRequest = recordFactory.newRecordInstance(KillJobRequest.class);
    killJobRequest.setJobId(app.getJobId());
    try {
        client.killJob(killJobRequest);
        fail("viewonlyuser killed job");
    } catch (AccessControlException e) {
    // pass
    }
    KillTaskRequest killTaskRequest = recordFactory.newRecordInstance(KillTaskRequest.class);
    killTaskRequest.setTaskId(task.getID());
    try {
        client.killTask(killTaskRequest);
        fail("viewonlyuser killed task");
    } catch (AccessControlException e) {
    // pass
    }
    KillTaskAttemptRequest killTaskAttemptRequest = recordFactory.newRecordInstance(KillTaskAttemptRequest.class);
    killTaskAttemptRequest.setTaskAttemptId(attempt.getID());
    try {
        client.killTaskAttempt(killTaskAttemptRequest);
        fail("viewonlyuser killed task attempt");
    } catch (AccessControlException e) {
    // pass
    }
    FailTaskAttemptRequest failTaskAttemptRequest = recordFactory.newRecordInstance(FailTaskAttemptRequest.class);
    failTaskAttemptRequest.setTaskAttemptId(attempt.getID());
    try {
        client.failTaskAttempt(failTaskAttemptRequest);
        fail("viewonlyuser killed task attempt");
    } catch (AccessControlException e) {
    // pass
    }
}
Also used : Task(org.apache.hadoop.mapreduce.v2.app.job.Task) Configuration(org.apache.hadoop.conf.Configuration) FailTaskAttemptRequest(org.apache.hadoop.mapreduce.v2.api.protocolrecords.FailTaskAttemptRequest) AccessControlException(org.apache.hadoop.security.AccessControlException) YarnRPC(org.apache.hadoop.yarn.ipc.YarnRPC) IOException(java.io.IOException) MRClientProtocol(org.apache.hadoop.mapreduce.v2.api.MRClientProtocol) KillTaskAttemptRequest(org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillTaskAttemptRequest) KillJobRequest(org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillJobRequest) KillTaskRequest(org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillTaskRequest) TaskAttempt(org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt) Job(org.apache.hadoop.mapreduce.v2.app.job.Job) UserGroupInformation(org.apache.hadoop.security.UserGroupInformation) Test(org.junit.Test)
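The test above builds each protocol record through a RecordFactory and sends it over an MRClientProtocol proxy obtained from YarnRPC. Distilled outside the test harness, the same request/proxy pattern looks roughly like the sketch below; the KillJobClientSketch class and its killJob helper are illustrative names for this example, not part of Hadoop.

import java.io.IOException;
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.v2.api.MRClientProtocol;
import org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillJobRequest;
import org.apache.hadoop.mapreduce.v2.api.records.JobId;
import org.apache.hadoop.yarn.factories.RecordFactory;
import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
import org.apache.hadoop.yarn.ipc.YarnRPC;

public class KillJobClientSketch {

    private static final RecordFactory recordFactory =
        RecordFactoryProvider.getRecordFactory(null);

    /**
     * Connect to an MR application master and ask it to kill a job.
     * The proxy/request pattern mirrors the test above; the class and
     * method names here are hypothetical.
     */
    public static void killJob(Configuration conf, InetSocketAddress amAddress,
            JobId jobId) throws IOException {
        // Obtain an RPC proxy for the AM's MRClientProtocol endpoint.
        YarnRPC rpc = YarnRPC.create(conf);
        MRClientProtocol client =
            (MRClientProtocol) rpc.getProxy(MRClientProtocol.class, amAddress, conf);

        // Build the protocol record through the record factory and send it.
        KillJobRequest request = recordFactory.newRecordInstance(KillJobRequest.class);
        request.setJobId(jobId);
        client.killJob(request);
    }
}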

Example 2 with KillJobRequest

Use of org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillJobRequest in project hadoop by apache.

From the class ClientServiceDelegate, method killJob:

public boolean killJob(JobID oldJobID) throws IOException {
    org.apache.hadoop.mapreduce.v2.api.records.JobId jobId = TypeConverter.toYarn(oldJobID);
    KillJobRequest killRequest = recordFactory.newRecordInstance(KillJobRequest.class);
    killRequest.setJobId(jobId);
    invoke("killJob", KillJobRequest.class, killRequest);
    return true;
}
Also used : KillJobRequest(org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillJobRequest)
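In the killJob method above, invoke("killJob", KillJobRequest.class, killRequest) goes through ClientServiceDelegate's private helper, which looks up the named method on the current MRClientProtocol proxy and retries against the application master or the job history server when the call fails. A minimal sketch of that reflective-dispatch idea follows; the ReflectiveKillSketch class and its simplified invoke are assumptions for illustration and omit the real helper's reconnection and retry handling.

import java.lang.reflect.Method;

import org.apache.hadoop.mapreduce.v2.api.MRClientProtocol;
import org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillJobRequest;
import org.apache.hadoop.mapreduce.v2.api.records.JobId;
import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;

public class ReflectiveKillSketch {

    private final MRClientProtocol realProxy;

    public ReflectiveKillSketch(MRClientProtocol realProxy) {
        this.realProxy = realProxy;
    }

    /**
     * Simplified stand-in for ClientServiceDelegate's reflective dispatch:
     * look up the protocol method by name and argument type, then call it
     * on the current proxy. The real helper also handles reconnection and retries.
     */
    private Object invoke(String methodName, Class<?> argClass, Object arg)
            throws Exception {
        Method method = MRClientProtocol.class.getMethod(methodName, argClass);
        return method.invoke(realProxy, arg);
    }

    public boolean killJob(JobId jobId) throws Exception {
        // Build the kill request and dispatch it by method name, as in the delegate.
        KillJobRequest request = RecordFactoryProvider.getRecordFactory(null)
            .newRecordInstance(KillJobRequest.class);
        request.setJobId(jobId);
        invoke("killJob", KillJobRequest.class, request);
        return true;
    }
}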

Aggregations

KillJobRequest (org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillJobRequest): 2 usages
IOException (java.io.IOException): 1 usage
Configuration (org.apache.hadoop.conf.Configuration): 1 usage
MRClientProtocol (org.apache.hadoop.mapreduce.v2.api.MRClientProtocol): 1 usage
FailTaskAttemptRequest (org.apache.hadoop.mapreduce.v2.api.protocolrecords.FailTaskAttemptRequest): 1 usage
KillTaskAttemptRequest (org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillTaskAttemptRequest): 1 usage
KillTaskRequest (org.apache.hadoop.mapreduce.v2.api.protocolrecords.KillTaskRequest): 1 usage
Job (org.apache.hadoop.mapreduce.v2.app.job.Job): 1 usage
Task (org.apache.hadoop.mapreduce.v2.app.job.Task): 1 usage
TaskAttempt (org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt): 1 usage
AccessControlException (org.apache.hadoop.security.AccessControlException): 1 usage
UserGroupInformation (org.apache.hadoop.security.UserGroupInformation): 1 usage
YarnRPC (org.apache.hadoop.yarn.ipc.YarnRPC): 1 usage
Test (org.junit.Test): 1 usage