
Example 11 with Cluster

use of org.apache.hadoop.mapreduce.Cluster in project hadoop by apache.

the class JobClient method init.

/**
   * Connect to the default cluster
   * @param conf the job configuration.
   * @throws IOException
   */
public void init(JobConf conf) throws IOException {
    setConf(conf);
    cluster = new Cluster(conf);
    clientUgi = UserGroupInformation.getCurrentUser();
    maxRetry = conf.getInt(MRJobConfig.MR_CLIENT_JOB_MAX_RETRIES, MRJobConfig.DEFAULT_MR_CLIENT_JOB_MAX_RETRIES);
    retryInterval = conf.getLong(MRJobConfig.MR_CLIENT_JOB_RETRY_INTERVAL, MRJobConfig.DEFAULT_MR_CLIENT_JOB_RETRY_INTERVAL);
}
Also used : Cluster(org.apache.hadoop.mapreduce.Cluster)
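The init method above reads two client-retry settings from the job configuration, falling back to defaults when the keys are unset. A minimal, self-contained sketch of that get-with-default pattern (the class and key names here are illustrative stand-ins, not Hadoop's actual Configuration implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the Configuration.getInt/getLong pattern:
// return the parsed value when the key is set, else the supplied default.
public class ConfDefaults {
    private final Map<String, String> props = new HashMap<>();

    public void set(String key, String value) {
        props.put(key, value);
    }

    public int getInt(String key, int defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : Integer.parseInt(v.trim());
    }

    public long getLong(String key, long defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : Long.parseLong(v.trim());
    }

    public static void main(String[] args) {
        ConfDefaults conf = new ConfDefaults();
        // Unset key falls back to the default, as in JobClient.init.
        System.out.println(conf.getInt("mapreduce.client.job.maxretries", 3));
        conf.set("mapreduce.client.job.maxretries", "5");
        System.out.println(conf.getInt("mapreduce.client.job.maxretries", 3));
    }
}
```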

Example 12 with Cluster

use of org.apache.hadoop.mapreduce.Cluster in project hadoop by apache.

the class TestCLI method testListAttemptIdsWithValidInput.

@Test
public void testListAttemptIdsWithValidInput() throws Exception {
    JobID jobId = JobID.forName(jobIdStr);
    Cluster mockCluster = mock(Cluster.class);
    Job job = mock(Job.class);
    CLI cli = spy(new CLI(new Configuration()));
    doReturn(mockCluster).when(cli).createCluster();
    when(job.getTaskReports(TaskType.MAP)).thenReturn(getTaskReports(jobId, TaskType.MAP));
    when(job.getTaskReports(TaskType.REDUCE)).thenReturn(getTaskReports(jobId, TaskType.REDUCE));
    when(mockCluster.getJob(jobId)).thenReturn(job);
    int retCode_MAP = cli.run(new String[] { "-list-attempt-ids", jobIdStr, "MAP", "running" });
    // testing case insensitive behavior
    int retCode_map = cli.run(new String[] { "-list-attempt-ids", jobIdStr, "map", "running" });
    int retCode_REDUCE = cli.run(new String[] { "-list-attempt-ids", jobIdStr, "REDUCE", "running" });
    int retCode_completed = cli.run(new String[] { "-list-attempt-ids", jobIdStr, "REDUCE", "completed" });
    assertEquals("MAP is a valid input, exit code should be 0", 0, retCode_MAP);
    assertEquals("map is a valid input, exit code should be 0", 0, retCode_map);
    assertEquals("REDUCE is a valid input, exit code should be 0", 0, retCode_REDUCE);
    assertEquals("REDUCE and completed are valid inputs to -list-attempt-ids, exit code should be 0", 0, retCode_completed);
    verify(job, times(2)).getTaskReports(TaskType.MAP);
    verify(job, times(2)).getTaskReports(TaskType.REDUCE);
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) Cluster(org.apache.hadoop.mapreduce.Cluster) Job(org.apache.hadoop.mapreduce.Job) JobID(org.apache.hadoop.mapreduce.JobID) Test(org.junit.Test)
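The MAP/map assertions both expect exit code 0 because the CLI resolves the task-type argument case-insensitively. A hedged, self-contained sketch of that normalization (the enum here is a small illustrative subset, not the full org.apache.hadoop.mapreduce.TaskType):

```java
import java.util.Locale;

public class TaskTypeParse {
    // Illustrative subset of TaskType for this sketch.
    enum TaskType { MAP, REDUCE }

    // Uppercase the user's input before the enum lookup so that
    // "map" and "MAP" both resolve; return null for unknown types.
    static TaskType parse(String s) {
        try {
            return TaskType.valueOf(s.toUpperCase(Locale.ENGLISH));
        } catch (IllegalArgumentException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("map"));    // MAP
        System.out.println(parse("REDUCE")); // REDUCE
        System.out.println(parse("bogus"));  // null
    }
}
```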

Example 13 with Cluster

use of org.apache.hadoop.mapreduce.Cluster in project hadoop by apache.

the class TestCLI method testJobKIll.

@Test
public void testJobKIll() throws Exception {
    Cluster mockCluster = mock(Cluster.class);
    CLI cli = spy(new CLI(new Configuration()));
    doReturn(mockCluster).when(cli).createCluster();
    String jobId1 = "job_1234654654_001";
    String jobId2 = "job_1234654654_002";
    String jobId3 = "job_1234654654_003";
    String jobId4 = "job_1234654654_004";
    Job mockJob1 = mockJob(mockCluster, jobId1, State.RUNNING);
    Job mockJob2 = mockJob(mockCluster, jobId2, State.KILLED);
    Job mockJob3 = mockJob(mockCluster, jobId3, State.FAILED);
    Job mockJob4 = mockJob(mockCluster, jobId4, State.PREP);
    int exitCode1 = cli.run(new String[] { "-kill", jobId1 });
    assertEquals(0, exitCode1);
    verify(mockJob1, times(1)).killJob();
    int exitCode2 = cli.run(new String[] { "-kill", jobId2 });
    assertEquals(-1, exitCode2);
    verify(mockJob2, times(0)).killJob();
    int exitCode3 = cli.run(new String[] { "-kill", jobId3 });
    assertEquals(-1, exitCode3);
    verify(mockJob3, times(0)).killJob();
    int exitCode4 = cli.run(new String[] { "-kill", jobId4 });
    assertEquals(0, exitCode4);
    verify(mockJob4, times(1)).killJob();
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) Cluster(org.apache.hadoop.mapreduce.Cluster) Job(org.apache.hadoop.mapreduce.Job) Test(org.junit.Test)
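The exit codes in this test follow from the job's state: killJob() is invoked (and 0 returned) only for jobs that have not already finished. A minimal sketch of that decision, assuming, as the assertions suggest, that RUNNING and PREP are killable while KILLED and FAILED are terminal:

```java
public class KillDecision {
    // Illustrative subset of JobStatus.State.
    enum State { RUNNING, PREP, KILLED, FAILED, SUCCEEDED }

    // Return the CLI-style exit code: 0 when a kill is issued,
    // -1 when the job is already in a terminal state.
    static int kill(State state) {
        switch (state) {
            case RUNNING:
            case PREP:
                // killJob() would be invoked here.
                return 0;
            default:
                return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(kill(State.RUNNING)); // 0
        System.out.println(kill(State.KILLED));  // -1
    }
}
```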

Example 14 with Cluster

use of org.apache.hadoop.mapreduce.Cluster in project hadoop by apache.

the class TestCLI method testLogs.

@Test
public void testLogs() throws Exception {
    Cluster mockCluster = mock(Cluster.class);
    CLI cli = spy(new CLI(new Configuration()));
    doReturn(mockCluster).when(cli).createCluster();
    String jobId1 = "job_1234654654_001";
    String jobId2 = "job_1234654656_002";
    Job mockJob1 = mockJob(mockCluster, jobId1, State.SUCCEEDED);
    // Check the exit code for a non-existing job
    int exitCode = cli.run(new String[] { "-logs", jobId2 });
    assertEquals(-1, exitCode);
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) Cluster(org.apache.hadoop.mapreduce.Cluster) Job(org.apache.hadoop.mapreduce.Job) Test(org.junit.Test)

Example 15 with Cluster

use of org.apache.hadoop.mapreduce.Cluster in project hadoop by apache.

the class TestCLI method testGetJobWithoutRetry.

@Test
public void testGetJobWithoutRetry() throws Exception {
    Configuration conf = new Configuration();
    conf.setInt(MRJobConfig.MR_CLIENT_JOB_MAX_RETRIES, 0);
    final Cluster mockCluster = mock(Cluster.class);
    when(mockCluster.getJob(any(JobID.class))).thenReturn(null);
    CLI cli = new CLI(conf);
    cli.cluster = mockCluster;
    Job job = cli.getJob(JobID.forName("job_1234654654_001"));
    Assert.assertTrue("job is not null", job == null);
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) Cluster(org.apache.hadoop.mapreduce.Cluster) Job(org.apache.hadoop.mapreduce.Job) JobID(org.apache.hadoop.mapreduce.JobID) Test(org.junit.Test)
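With MR_CLIENT_JOB_MAX_RETRIES set to 0, the client performs a single lookup and returns whatever it gets, null included, which is what the assertion checks. A self-contained sketch of such a retry loop (simplified and illustrative; the real client also sleeps for the configured retry interval between attempts):

```java
import java.util.function.Supplier;

public class JobLookupRetry {
    // Try the lookup once, then retry up to maxRetries more times
    // while the result is still null. With maxRetries == 0 a null
    // result is returned immediately, without any retry.
    static <T> T getWithRetry(Supplier<T> lookup, int maxRetries) {
        T result = lookup.get();
        for (int attempt = 0; result == null && attempt < maxRetries; attempt++) {
            // A real client would sleep for the retry interval here.
            result = lookup.get();
        }
        return result;
    }

    public static void main(String[] args) {
        // Stub lookup that never finds the job, like the mocked Cluster above.
        Supplier<String> missingJob = () -> null;
        System.out.println(getWithRetry(missingJob, 0)); // null
        System.out.println(getWithRetry(() -> "job_1234654654_001", 3));
    }
}
```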

Aggregations

Cluster (org.apache.hadoop.mapreduce.Cluster) 22
Test (org.junit.Test) 17
Configuration (org.apache.hadoop.conf.Configuration) 12
Job (org.apache.hadoop.mapreduce.Job) 11
Path (org.apache.hadoop.fs.Path) 5
IOException (java.io.IOException) 4
JobID (org.apache.hadoop.mapreduce.JobID) 4
TaskReport (org.apache.hadoop.mapreduce.TaskReport) 4
ArrayList (java.util.ArrayList) 2
ByteArrayOutputStream (java.io.ByteArrayOutputStream) 1
PrintWriter (java.io.PrintWriter) 1
Random (java.util.Random) 1
FileStatus (org.apache.hadoop.fs.FileStatus) 1
FileSystem (org.apache.hadoop.fs.FileSystem) 1
HarFileSystem (org.apache.hadoop.fs.HarFileSystem) 1
FsPermission (org.apache.hadoop.fs.permission.FsPermission) 1
BackupCopyJob (org.apache.hadoop.hbase.backup.BackupCopyJob) 1
SequenceFile (org.apache.hadoop.io.SequenceFile) 1
JobStatus (org.apache.hadoop.mapreduce.JobStatus) 1
TaskAttemptID (org.apache.hadoop.mapreduce.TaskAttemptID) 1