Example 26 with MRApp

Use of org.apache.hadoop.mapreduce.v2.app.MRApp in project hadoop by apache.

The class TestKill, method testKillTaskWaitKillJobBeforeTA_DONE:

@Test
public void testKillTaskWaitKillJobBeforeTA_DONE() throws Exception {
    CountDownLatch latch = new CountDownLatch(1);
    final Dispatcher dispatcher = new MyAsyncDispatch(latch, JobEventType.JOB_KILL);
    MRApp app = new MRApp(1, 1, false, this.getClass().getName(), true) {

        @Override
        public Dispatcher createDispatcher() {
            return dispatcher;
        }
    };
    Job job = app.submit(new Configuration());
    JobId jobId = app.getJobId();
    app.waitForState(job, JobState.RUNNING);
    Assert.assertEquals("Num tasks not correct", 2, job.getTasks().size());
    Iterator<Task> it = job.getTasks().values().iterator();
    Task mapTask = it.next();
    Task reduceTask = it.next();
    app.waitForState(mapTask, TaskState.RUNNING);
    app.waitForState(reduceTask, TaskState.RUNNING);
    TaskAttempt mapAttempt = mapTask.getAttempts().values().iterator().next();
    app.waitForState(mapAttempt, TaskAttemptState.RUNNING);
    TaskAttempt reduceAttempt = reduceTask.getAttempts().values().iterator().next();
    app.waitForState(reduceAttempt, TaskAttemptState.RUNNING);
    // The order in the dispatch event queue, from first to last
    // JobEventType.JOB_KILL
    // TA_DONE
    // TaskEventType.T_KILL ( from JobEventType.JOB_KILL handling )
    // TaskAttemptEventType.TA_CONTAINER_COMPLETED ( from TA_DONE handling )
    // TaskAttemptEventType.TA_KILL ( from TaskEventType.T_KILL handling )
    // TaskEventType.T_ATTEMPT_SUCCEEDED ( from TA_CONTAINER_COMPLETED handling )
    // TaskEventType.T_ATTEMPT_KILLED ( from TA_KILL handling )
    // Now kill the job
    app.getContext().getEventHandler().handle(new JobEvent(jobId, JobEventType.JOB_KILL));
    // Finish map
    app.getContext().getEventHandler().handle(new TaskAttemptEvent(mapAttempt.getID(), TaskAttemptEventType.TA_DONE));
    //unblock the dispatcher; the queued events now drain in the order above
    latch.countDown();
    app.waitForInternalState((JobImpl) job, JobStateInternal.KILLED);
}
Also used: Task (org.apache.hadoop.mapreduce.v2.app.job.Task), Configuration (org.apache.hadoop.conf.Configuration), JobEvent (org.apache.hadoop.mapreduce.v2.app.job.event.JobEvent), TaskAttemptEvent (org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent), TaskAttempt (org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt), CountDownLatch (java.util.concurrent.CountDownLatch), Dispatcher (org.apache.hadoop.yarn.event.Dispatcher), AsyncDispatcher (org.apache.hadoop.yarn.event.AsyncDispatcher), Job (org.apache.hadoop.mapreduce.v2.app.job.Job), JobId (org.apache.hadoop.mapreduce.v2.api.records.JobId), Test (org.junit.Test)
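
MyAsyncDispatch is defined elsewhere in TestKill and is not reproduced on this page. A minimal sketch of such a latch-gated dispatcher, assuming the override point is AsyncDispatcher's protected dispatch(Event) hook (field and parameter names here are illustrative, not the actual TestKill code):

class MyAsyncDispatch extends AsyncDispatcher {

    private final CountDownLatch latch;

    private final JobEventType blockedType;

    MyAsyncDispatch(CountDownLatch latch, JobEventType blockedType) {
        this.latch = latch;
        this.blockedType = blockedType;
    }

    @Override
    protected void dispatch(Event event) {
        // Hold the single dispatch thread until the test calls latch.countDown();
        // everything queued behind this event then drains in submission order,
        // which is what the ordering comment in the test above relies on.
        if (event.getType() == blockedType) {
            try {
                latch.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        super.dispatch(event);
    }
}

(Event and AsyncDispatcher come from org.apache.hadoop.yarn.event, as listed under "Also used" above.)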

Example 27 with MRApp

Use of org.apache.hadoop.mapreduce.v2.app.MRApp in project hadoop by apache.

The class TestMRApp, method testJobSuccess:

@SuppressWarnings("resource")
@Test
public void testJobSuccess() throws Exception {
    MRApp app = new MRApp(2, 2, true, this.getClass().getName(), true, false);
    JobImpl job = (JobImpl) app.submit(new Configuration());
    app.waitForInternalState(job, JobStateInternal.SUCCEEDED);
    // the AM has not unregistered yet, so the external job state is still RUNNING
    Assert.assertEquals(JobState.RUNNING, job.getState());
    // simulate a successful AM unregistration
    app.successfullyUnregistered.set(true);
    app.waitForState(job, JobState.SUCCEEDED);
}
Also used: JobImpl (org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl), Configuration (org.apache.hadoop.conf.Configuration), Test (org.junit.Test)
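
Note the two waits: waitForInternalState tracks JobImpl's internal state machine, while waitForState polls the externally visible JobState, which only flips to SUCCEEDED once the AM has unregistered. Both helpers are provided by MRApp; a simplified polling loop in the same spirit (the interval and retry count are assumptions, not MRApp's actual values):

private static void waitForState(Job job, JobState expected) throws Exception {
    // Events are delivered asynchronously, so the observable state can lag
    // the event that triggers it; poll until it catches up or we give up.
    int tries = 0;
    while (job.getState() != expected && tries++ < 40) {
        Thread.sleep(500);
    }
    Assert.assertEquals("Job state is not correct", expected, job.getState());
}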

Example 28 with MRApp

Use of org.apache.hadoop.mapreduce.v2.app.MRApp in project hadoop by apache.

The class TestMRApp, method testCommitPending:

@Test
public void testCommitPending() throws Exception {
    MRApp app = new MRApp(1, 0, false, this.getClass().getName(), true);
    Job job = app.submit(new Configuration());
    app.waitForState(job, JobState.RUNNING);
    Assert.assertEquals("Num tasks not correct", 1, job.getTasks().size());
    Iterator<Task> it = job.getTasks().values().iterator();
    Task task = it.next();
    app.waitForState(task, TaskState.RUNNING);
    TaskAttempt attempt = task.getAttempts().values().iterator().next();
    app.waitForState(attempt, TaskAttemptState.RUNNING);
    //send the commit pending signal to the task
    app.getContext().getEventHandler().handle(new TaskAttemptEvent(attempt.getID(), TaskAttemptEventType.TA_COMMIT_PENDING));
    //wait for first attempt to commit pending
    app.waitForState(attempt, TaskAttemptState.COMMIT_PENDING);
    //re-send the commit pending signal to the task
    app.getContext().getEventHandler().handle(new TaskAttemptEvent(attempt.getID(), TaskAttemptEventType.TA_COMMIT_PENDING));
    //the task attempt should still be in COMMIT_PENDING
    app.waitForState(attempt, TaskAttemptState.COMMIT_PENDING);
    //send the done signal to the task
    app.getContext().getEventHandler().handle(new TaskAttemptEvent(task.getAttempts().values().iterator().next().getID(), TaskAttemptEventType.TA_DONE));
    app.waitForState(job, JobState.SUCCEEDED);
}
Also used: Task (org.apache.hadoop.mapreduce.v2.app.job.Task), Configuration (org.apache.hadoop.conf.Configuration), TaskAttemptEvent (org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent), TaskAttempt (org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt), Job (org.apache.hadoop.mapreduce.v2.app.job.Job), Test (org.junit.Test)
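
The same event-sending idiom recurs throughout these examples; a small helper a test class could factor out (hypothetical, not part of MRApp):

// Hypothetical helper: send a lifecycle signal to a task attempt
// through the app's central event handler.
private static void signalAttempt(MRApp app, TaskAttempt attempt, TaskAttemptEventType type) {
    app.getContext().getEventHandler().handle(new TaskAttemptEvent(attempt.getID(), type));
}

With it, the sequence above reduces to signalAttempt(app, attempt, TaskAttemptEventType.TA_COMMIT_PENDING) followed, after the waits, by signalAttempt(app, attempt, TaskAttemptEventType.TA_DONE).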

Example 29 with MRApp

Use of org.apache.hadoop.mapreduce.v2.app.MRApp in project hadoop by apache.

The class TestMRApp, method testZeroMapReduces:

@Test
public void testZeroMapReduces() throws Exception {
    MRApp app = new MRApp(0, 0, true, this.getClass().getName(), true);
    Job job = app.submit(new Configuration());
    app.waitForState(job, JobState.SUCCEEDED);
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), Job (org.apache.hadoop.mapreduce.v2.app.job.Job), Test (org.junit.Test)
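
The five-argument constructor used in most of these examples reads, per MRApp's signature, as (maps, reduces, autoComplete, testName, cleanOnStart). Annotated for this test (the comments are my reading of the parameters, not MRApp source):

MRApp app = new MRApp(
    0,     // maps: number of map tasks
    0,     // reduces: number of reduce tasks
    true,  // autoComplete: attempts are completed automatically once launched
    this.getClass().getName(),  // testName: used for the test's staging directory
    true); // cleanOnStart: remove any leftover staging data first

With zero maps and zero reduces there is nothing to schedule, so the job goes straight to SUCCEEDED.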

Example 30 with MRApp

Use of org.apache.hadoop.mapreduce.v2.app.MRApp in project hadoop by apache.

The class TestMRApp, method testCompletedMapsForReduceSlowstart:

@Test
public void testCompletedMapsForReduceSlowstart() throws Exception {
    MRApp app = new MRApp(2, 1, false, this.getClass().getName(), true);
    Configuration conf = new Configuration();
    //after half of the maps complete, the reduce will start
    conf.setFloat(MRJobConfig.COMPLETED_MAPS_FOR_REDUCE_SLOWSTART, 0.5f);
    //uberization forces full slowstart (1.0), so disable that
    conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, false);
    Job job = app.submit(conf);
    app.waitForState(job, JobState.RUNNING);
    //three tasks in total: two maps and one reduce
    Assert.assertEquals("Num tasks not correct", 3, job.getTasks().size());
    Iterator<Task> it = job.getTasks().values().iterator();
    Task mapTask1 = it.next();
    Task mapTask2 = it.next();
    Task reduceTask = it.next();
    // all maps must be running
    app.waitForState(mapTask1, TaskState.RUNNING);
    app.waitForState(mapTask2, TaskState.RUNNING);
    TaskAttempt task1Attempt = mapTask1.getAttempts().values().iterator().next();
    TaskAttempt task2Attempt = mapTask2.getAttempts().values().iterator().next();
    //before sending the TA_DONE event, make sure the attempts have
    //reached RUNNING state
    app.waitForState(task1Attempt, TaskAttemptState.RUNNING);
    app.waitForState(task2Attempt, TaskAttemptState.RUNNING);
    // reduces must be in NEW state
    Assert.assertEquals("Reduce Task state not correct", TaskState.NEW, reduceTask.getReport().getTaskState());
    //send the done signal to the 1st map task
    app.getContext().getEventHandler().handle(new TaskAttemptEvent(mapTask1.getAttempts().values().iterator().next().getID(), TaskAttemptEventType.TA_DONE));
    //wait for first map task to complete
    app.waitForState(mapTask1, TaskState.SUCCEEDED);
    //Once the first map completes, it will schedule the reduces
    //now reduce must be running
    app.waitForState(reduceTask, TaskState.RUNNING);
    //send the done signal to 2nd map and the reduce to complete the job
    app.getContext().getEventHandler().handle(new TaskAttemptEvent(mapTask2.getAttempts().values().iterator().next().getID(), TaskAttemptEventType.TA_DONE));
    app.getContext().getEventHandler().handle(new TaskAttemptEvent(reduceTask.getAttempts().values().iterator().next().getID(), TaskAttemptEventType.TA_DONE));
    app.waitForState(job, JobState.SUCCEEDED);
}
Also used: Task (org.apache.hadoop.mapreduce.v2.app.job.Task), Configuration (org.apache.hadoop.conf.Configuration), TaskAttemptEvent (org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent), TaskAttempt (org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt), Job (org.apache.hadoop.mapreduce.v2.app.job.Job)
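
For reference, the two MRJobConfig constants used above resolve to these plain property keys (as I read MRJobConfig; prefer the constants in code):

Configuration conf = new Configuration();
// MRJobConfig.COMPLETED_MAPS_FOR_REDUCE_SLOWSTART
conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 0.5f);
// MRJobConfig.JOB_UBERTASK_ENABLE
conf.setBoolean("mapreduce.job.ubertask.enable", false);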

Aggregations

Test (org.junit.Test): 61
Configuration (org.apache.hadoop.conf.Configuration): 60
Job (org.apache.hadoop.mapreduce.v2.app.job.Job): 57
Task (org.apache.hadoop.mapreduce.v2.app.job.Task): 44
TaskAttempt (org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt): 37
TaskAttemptEvent (org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent): 26
MRApp (org.apache.hadoop.mapreduce.v2.app.MRApp): 23
TaskId (org.apache.hadoop.mapreduce.v2.api.records.TaskId): 15
JobId (org.apache.hadoop.mapreduce.v2.api.records.JobId): 14
TaskAttemptId (org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId): 14
JobEvent (org.apache.hadoop.mapreduce.v2.app.job.event.JobEvent): 8
YarnConfiguration (org.apache.hadoop.yarn.conf.YarnConfiguration): 8
IOException (java.io.IOException): 6
HistoryFileInfo (org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.HistoryFileInfo): 6
AppContext (org.apache.hadoop.mapreduce.v2.app.AppContext): 5
CountDownLatch (java.util.concurrent.CountDownLatch): 4
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream): 4
FileContext (org.apache.hadoop.fs.FileContext): 4
Path (org.apache.hadoop.fs.Path): 4
TaskAttemptCompletionEvent (org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptCompletionEvent): 4