
Example 1 with JobUnsuccessfulCompletionEvent

Use of org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent in project hadoop by apache.

Class JobImpl, method unsuccessfulFinish.

private void unsuccessfulFinish(JobStateInternal finalState) {
    // Record a finish time if one was not already set, and mark cleanup as complete.
    if (finishTime == 0)
        setFinishTime();
    cleanupProgress = 1.0f;
    // Emit a JobUnsuccessfulCompletionEvent to the job history, carrying the
    // final internal state and any accumulated diagnostics.
    JobUnsuccessfulCompletionEvent unsuccessfulJobEvent = new JobUnsuccessfulCompletionEvent(oldJobId, finishTime, succeededMapTaskCount, succeededReduceTaskCount, finalState.toString(), diagnostics);
    eventHandler.handle(new JobHistoryEvent(jobId, unsuccessfulJobEvent));
    // Complete the job's internal state transition.
    finished(finalState);
}
Also used: JobUnsuccessfulCompletionEvent (org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent), JobHistoryEvent (org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent)
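
The constructor used here takes the job id, the finish time, the succeeded map and reduce counts, the final state as a string, and a list of diagnostics. As a minimal standalone sketch (not part of the Hadoop sources; the ids, counts, and diagnostic text below are made up for illustration), the same kind of event can be built and its status read back with the only accessor exercised in these examples, getStatus():

import java.util.Collections;

import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent;

// Illustrative sketch only: mirrors the constructor call made in JobImpl#unsuccessfulFinish.
public class JobUnsuccessfulCompletionEventSketch {
    public static void main(String[] args) {
        JobID jobId = new JobID("1", 1);          // sample job id, as in Example 3
        long finishTime = System.currentTimeMillis();
        int succeededMaps = 2;                    // assumed counts for illustration
        int succeededReduces = 0;
        JobUnsuccessfulCompletionEvent event = new JobUnsuccessfulCompletionEvent(
                jobId, finishTime, succeededMaps, succeededReduces, "FAILED",
                Collections.singletonList("example diagnostic"));
        // getStatus() returns the status string passed to the constructor.
        System.out.println(event.getStatus());    // expected: FAILED
    }
}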

Example 2 with JobUnsuccessfulCompletionEvent

Use of org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent in project hadoop by apache.

Class MRAppMasterTestLaunchTime, method verifyFailedStatus.

private void verifyFailedStatus(MRAppMasterTest appMaster, String expectedJobState) {
    ArgumentCaptor<JobHistoryEvent> captor = ArgumentCaptor.forClass(JobHistoryEvent.class);
    // handle two events: AMStartedEvent and JobUnsuccessfulCompletionEvent
    verify(appMaster.spyHistoryService, times(2)).handleEvent(captor.capture());
    HistoryEvent event = captor.getValue().getHistoryEvent();
    assertTrue(event instanceof JobUnsuccessfulCompletionEvent);
    assertEquals(expectedJobState, ((JobUnsuccessfulCompletionEvent) event).getStatus());
}
Also used: JobHistoryEvent (org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent), JobUnsuccessfulCompletionEvent (org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent), HistoryEvent (org.apache.hadoop.mapreduce.jobhistory.HistoryEvent)
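
The assertion above relies on ArgumentCaptor.getValue(), which returns only the last captured argument; that is why capturing two events still yields the JobUnsuccessfulCompletionEvent rather than the AMStartedEvent. A minimal, self-contained sketch of the same Mockito pattern (using a made-up EventSink interface standing in for the real history service) shows getValue() next to getAllValues(), which exposes every captured call in order:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.List;

import org.mockito.ArgumentCaptor;

// Illustrative sketch only: a hypothetical listener interface, not Hadoop's history service.
public class CaptorPatternSketch {
    interface EventSink {
        void handleEvent(String event);
    }

    public static void main(String[] args) {
        EventSink sink = mock(EventSink.class);
        sink.handleEvent("AMStarted");
        sink.handleEvent("JobUnsuccessfulCompletion");

        ArgumentCaptor<String> captor = ArgumentCaptor.forClass(String.class);
        verify(sink, times(2)).handleEvent(captor.capture());

        // getValue() returns the last captured argument (as in Example 2);
        // getAllValues() returns every captured argument in call order.
        List<String> all = captor.getAllValues();
        System.out.println(all);                  // [AMStarted, JobUnsuccessfulCompletion]
        System.out.println(captor.getValue());    // JobUnsuccessfulCompletion
    }
}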

Example 3 with JobUnsuccessfulCompletionEvent

Use of org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent in project hadoop by apache.

Class TestJobHistoryParsing, method testMultipleFailedTasks.

@Test
public void testMultipleFailedTasks() throws Exception {
    JobHistoryParser parser = new JobHistoryParser(Mockito.mock(FSDataInputStream.class));
    EventReader reader = Mockito.mock(EventReader.class);
    // Hack: count calls so the mocked EventReader serves the scripted event sequence below, one event per getNextEvent() invocation.
    final AtomicInteger numEventsRead = new AtomicInteger(0);
    final org.apache.hadoop.mapreduce.TaskType taskType = org.apache.hadoop.mapreduce.TaskType.MAP;
    final TaskID[] tids = new TaskID[2];
    final JobID jid = new JobID("1", 1);
    tids[0] = new TaskID(jid, taskType, 0);
    tids[1] = new TaskID(jid, taskType, 1);
    Mockito.when(reader.getNextEvent()).thenAnswer(new Answer<HistoryEvent>() {

        public HistoryEvent answer(InvocationOnMock invocation) throws IOException {
            // send two task start and two task fail events for tasks 0 and 1
            int eventId = numEventsRead.getAndIncrement();
            TaskID tid = tids[eventId & 0x1];
            if (eventId < 2) {
                return new TaskStartedEvent(tid, 0, taskType, "");
            }
            if (eventId < 4) {
                TaskFailedEvent tfe = new TaskFailedEvent(tid, 0, taskType, "failed", "FAILED", null, new Counters());
                // Round-trip the event through its Avro datum representation.
                tfe.setDatum(tfe.getDatum());
                return tfe;
            }
            if (eventId < 5) {
                JobUnsuccessfulCompletionEvent juce = new JobUnsuccessfulCompletionEvent(jid, 100L, 2, 0, "JOB_FAILED", Collections.singletonList("Task failed: " + tids[0].toString()));
                return juce;
            }
            return null;
        }
    });
    JobInfo info = parser.parse(reader);
    assertTrue("Task 0 not implicated", info.getErrorInfo().contains(tids[0].toString()));
}
Also used: EventReader (org.apache.hadoop.mapreduce.jobhistory.EventReader), TaskID (org.apache.hadoop.mapreduce.TaskID), JobUnsuccessfulCompletionEvent (org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent), IOException (java.io.IOException), HistoryEvent (org.apache.hadoop.mapreduce.jobhistory.HistoryEvent), TaskStartedEvent (org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent), JobHistoryParser (org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser), AtomicInteger (java.util.concurrent.atomic.AtomicInteger), JobInfo (org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo), InvocationOnMock (org.mockito.invocation.InvocationOnMock), TaskFailedEvent (org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent), FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream), Counters (org.apache.hadoop.mapreduce.Counters), JobID (org.apache.hadoop.mapreduce.JobID), Test (org.junit.Test)
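
The Answer plus AtomicInteger above is the "hack" the comment refers to: it lets a single stub script a sequence of return values. For a fixed sequence like this one, Mockito's consecutive stubbing (thenReturn with several values, ending in null so the parser stops) is an alternative. The sketch below is illustrative, not taken from the Hadoop tests, and reuses only the event constructors already shown in the test:

import java.util.Collections;

import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.TaskID;
import org.apache.hadoop.mapreduce.TaskType;
import org.apache.hadoop.mapreduce.jobhistory.EventReader;
import org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent;
import org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent;
import org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent;
import org.mockito.Mockito;

// Illustrative alternative to the Answer-based stub: consecutive stubbing returns
// each value on successive calls to getNextEvent(), then null to end parsing.
public class ConsecutiveStubSketch {
    static EventReader stubReader() throws Exception {
        JobID jid = new JobID("1", 1);
        TaskID t0 = new TaskID(jid, TaskType.MAP, 0);
        TaskID t1 = new TaskID(jid, TaskType.MAP, 1);

        EventReader reader = Mockito.mock(EventReader.class);
        Mockito.when(reader.getNextEvent()).thenReturn(
                new TaskStartedEvent(t0, 0, TaskType.MAP, ""),
                new TaskStartedEvent(t1, 0, TaskType.MAP, ""),
                new TaskFailedEvent(t0, 0, TaskType.MAP, "failed", "FAILED", null, new Counters()),
                new TaskFailedEvent(t1, 0, TaskType.MAP, "failed", "FAILED", null, new Counters()),
                new JobUnsuccessfulCompletionEvent(jid, 100L, 2, 0, "JOB_FAILED",
                        Collections.singletonList("Task failed: " + t0)),
                null);
        return reader;
    }
}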

Aggregations

JobUnsuccessfulCompletionEvent (org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent): 3 uses
HistoryEvent (org.apache.hadoop.mapreduce.jobhistory.HistoryEvent): 2 uses
JobHistoryEvent (org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent): 2 uses
IOException (java.io.IOException): 1 use
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 1 use
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream): 1 use
Counters (org.apache.hadoop.mapreduce.Counters): 1 use
JobID (org.apache.hadoop.mapreduce.JobID): 1 use
TaskID (org.apache.hadoop.mapreduce.TaskID): 1 use
EventReader (org.apache.hadoop.mapreduce.jobhistory.EventReader): 1 use
JobHistoryParser (org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser): 1 use
JobInfo (org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo): 1 use
TaskFailedEvent (org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent): 1 use
TaskStartedEvent (org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent): 1 use
Test (org.junit.Test): 1 use
InvocationOnMock (org.mockito.invocation.InvocationOnMock): 1 use