
Example 1 with JobState

Use of org.apache.hadoop.mapreduce.v2.api.records.JobState in project hadoop by apache.

From class JobImpl, method getReport:

@Override
public JobReport getReport() {
    readLock.lock();
    try {
        JobState state = getState();
        // jobFile can be null if the job is not yet inited.
        String jobFile = remoteJobConfFile == null ? "" : remoteJobConfFile.toString();
        StringBuilder diagsb = new StringBuilder();
        for (String s : getDiagnostics()) {
            diagsb.append(s).append("\n");
        }
        if (getInternalState() == JobStateInternal.NEW) {
            return MRBuilderUtils.newJobReport(jobId, jobName, reporterUserName,
                    state, appSubmitTime, startTime, finishTime, setupProgress,
                    0.0f, 0.0f, cleanupProgress, jobFile, amInfos, isUber,
                    diagsb.toString());
        }
        computeProgress();
        JobReport report = MRBuilderUtils.newJobReport(jobId, jobName,
                reporterUserName, state, appSubmitTime, startTime, finishTime,
                setupProgress, this.mapProgress, this.reduceProgress,
                cleanupProgress, jobFile, amInfos, isUber, diagsb.toString(),
                jobPriority);
        return report;
    } finally {
        readLock.unlock();
    }
}
Also used : JobState(org.apache.hadoop.mapreduce.v2.api.records.JobState) JobReport(org.apache.hadoop.mapreduce.v2.api.records.JobReport)
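
Callers typically consume the returned JobReport through its getters, much as the test helper in Example 2 below does. A minimal sketch, assuming a Job handle named job:

JobReport report = job.getReport();
// Print a one-line progress summary built from the report's getters.
System.out.println("state=" + report.getJobState()
        + " map=" + report.getMapProgress()
        + " reduce=" + report.getReduceProgress());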

Example 2 with JobState

Use of org.apache.hadoop.mapreduce.v2.api.records.JobState in project hadoop by apache.

From class MRApp, method waitForState:

public void waitForState(Job job, JobState finalState) throws Exception {
    int timeoutSecs = 0;
    JobReport report = job.getReport();
    while (!finalState.equals(report.getJobState()) && timeoutSecs++ < 20) {
        System.out.println("Job State is : " + report.getJobState() + " Waiting for state : " + finalState + "   map progress : " + report.getMapProgress() + "   reduce progress : " + report.getReduceProgress());
        report = job.getReport();
        Thread.sleep(500);
    }
    System.out.println("Job State is : " + report.getJobState());
    Assert.assertEquals("Job state is not correct (timedout)", finalState, job.getState());
}
Also used : JobReport(org.apache.hadoop.mapreduce.v2.api.records.JobReport)
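
The loop polls every 500 ms for at most 20 iterations, so the effective timeout is roughly 10 seconds. In a test, waitForState is usually called right after submitting a job through the same MRApp instance. A minimal sketch, assuming MRApp's (maps, reduces, autoComplete, testName, cleanOnStart) test constructor and its submit(Configuration) method:

// Hypothetical test snippet: run a 2-map, 1-reduce job and wait for it to succeed.
MRApp app = new MRApp(2, 1, true, "testWaitForState", true);
Job job = app.submit(new Configuration());
app.waitForState(job, JobState.SUCCEEDED);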

Example 3 with JobState

Use of org.apache.hadoop.mapreduce.v2.api.records.JobState in project hadoop by apache.

From class TestTypeConverter, method testFromYarnJobReport:

@Test
public void testFromYarnJobReport() throws Exception {
    int jobStartTime = 612354;
    int jobFinishTime = 612355;
    JobState state = JobState.RUNNING;
    JobId jobId = Records.newRecord(JobId.class);
    JobReport jobReport = Records.newRecord(JobReport.class);
    ApplicationId applicationId = ApplicationId.newInstance(0, 0);
    jobId.setAppId(applicationId);
    jobId.setId(0);
    jobReport.setJobId(jobId);
    jobReport.setJobState(state);
    jobReport.setStartTime(jobStartTime);
    jobReport.setFinishTime(jobFinishTime);
    jobReport.setUser("TestTypeConverter-user");
    jobReport.setJobPriority(Priority.newInstance(0));
    JobStatus jobStatus = TypeConverter.fromYarn(jobReport, "dummy-jobfile");
    Assert.assertEquals(jobStartTime, jobStatus.getStartTime());
    Assert.assertEquals(jobFinishTime, jobStatus.getFinishTime());
    Assert.assertEquals(state.toString(), jobStatus.getState().toString());
    Assert.assertEquals(JobPriority.DEFAULT, jobStatus.getPriority());
}
Also used : JobState(org.apache.hadoop.mapreduce.v2.api.records.JobState) ApplicationId(org.apache.hadoop.yarn.api.records.ApplicationId) JobId(org.apache.hadoop.mapreduce.v2.api.records.JobId) JobReport(org.apache.hadoop.mapreduce.v2.api.records.JobReport) Test(org.junit.Test)
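
A note on the last assertion: the report is built with Priority.newInstance(0), and the converted JobStatus reports JobPriority.DEFAULT, so in this conversion a YARN priority of 0 corresponds to the default MapReduce job priority.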

Example 4 with JobState

Use of org.apache.hadoop.mapreduce.v2.api.records.JobState in project hadoop by apache.

From class MRBuilderUtils, method newJobReport:

public static JobReport newJobReport(JobId jobId, String jobName, String userName,
        JobState state, long submitTime, long startTime, long finishTime,
        float setupProgress, float mapProgress, float reduceProgress,
        float cleanupProgress, String jobFile, List<AMInfo> amInfos,
        boolean isUber, String diagnostics, Priority priority) {
    JobReport report = Records.newRecord(JobReport.class);
    report.setJobId(jobId);
    report.setJobName(jobName);
    report.setUser(userName);
    report.setJobState(state);
    report.setSubmitTime(submitTime);
    report.setStartTime(startTime);
    report.setFinishTime(finishTime);
    report.setSetupProgress(setupProgress);
    report.setCleanupProgress(cleanupProgress);
    report.setMapProgress(mapProgress);
    report.setReduceProgress(reduceProgress);
    report.setJobFile(jobFile);
    report.setAMInfos(amInfos);
    report.setIsUber(isUber);
    report.setDiagnostics(diagnostics);
    report.setJobPriority(priority);
    return report;
}
Also used : JobReport(org.apache.hadoop.mapreduce.v2.api.records.JobReport)
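
Example 1 above calls this builder from JobImpl. The sketch below ties it to Example 3 by converting the resulting record into a client-side JobStatus; the argument values are illustrative only, and jobId, jobFile, amInfos, submitTime and startTime are assumed to exist in scope:

// Illustrative only: build a JobReport and convert it with TypeConverter.fromYarn.
JobReport report = MRBuilderUtils.newJobReport(jobId, "wordcount", "alice",
        JobState.RUNNING, submitTime, startTime, 0L,
        1.0f, 0.5f, 0.0f, 0.0f, jobFile, amInfos, false, "",
        Priority.newInstance(0));
JobStatus status = TypeConverter.fromYarn(report, jobFile);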

Example 5 with JobState

Use of org.apache.hadoop.mapreduce.v2.api.records.JobState in project hadoop by apache.

From class TestHsWebServicesJobsQuery, method testJobsQueryStateNone:

@Test
public void testJobsQueryStateNone() throws JSONException, Exception {
    WebResource r = resource();
    ArrayList<JobState> JOB_STATES = new ArrayList<JobState>(Arrays.asList(JobState.values()));
    // find a state that isn't in use
    Map<JobId, Job> jobsMap = appContext.getAllJobs();
    for (Map.Entry<JobId, Job> entry : jobsMap.entrySet()) {
        JOB_STATES.remove(entry.getValue().getState());
    }
    assertTrue("No unused job states", JOB_STATES.size() > 0);
    JobState notInUse = JOB_STATES.get(0);
    ClientResponse response = r.path("ws").path("v1").path("history")
            .path("mapreduce").path("jobs")
            .queryParam("state", notInUse.toString())
            .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
    assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
            response.getType().toString());
    JSONObject json = response.getEntity(JSONObject.class);
    assertEquals("incorrect number of elements", 1, json.length());
    assertEquals("jobs is not empty", new JSONObject().toString(), json.get("jobs").toString());
}
Also used : ClientResponse(com.sun.jersey.api.client.ClientResponse) JSONObject(org.codehaus.jettison.json.JSONObject) ArrayList(java.util.ArrayList) WebResource(com.sun.jersey.api.client.WebResource) JobState(org.apache.hadoop.mapreduce.v2.api.records.JobState) Job(org.apache.hadoop.mapreduce.v2.app.job.Job) Map(java.util.Map) JobId(org.apache.hadoop.mapreduce.v2.api.records.JobId) Test(org.junit.Test)
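
The last two assertions pin down the expected response for an unused state filter: a single top-level element whose value is an empty object, i.e. a body of the form {"jobs":{}}.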

Aggregations

JobReport (org.apache.hadoop.mapreduce.v2.api.records.JobReport): 7
JobState (org.apache.hadoop.mapreduce.v2.api.records.JobState): 5
JobId (org.apache.hadoop.mapreduce.v2.api.records.JobId): 4
Job (org.apache.hadoop.mapreduce.v2.app.job.Job): 4
Test (org.junit.Test): 3
Configuration (org.apache.hadoop.conf.Configuration): 2
ClientResponse (com.sun.jersey.api.client.ClientResponse): 1
WebResource (com.sun.jersey.api.client.WebResource): 1
IOException (java.io.IOException): 1
ArrayList (java.util.ArrayList): 1
HashMap (java.util.HashMap): 1
LinkedList (java.util.LinkedList): 1
Map (java.util.Map): 1
GET (javax.ws.rs.GET): 1
Path (javax.ws.rs.Path): 1
Produces (javax.ws.rs.Produces): 1
FileContext (org.apache.hadoop.fs.FileContext): 1
Path (org.apache.hadoop.fs.Path): 1
JobACLsManager (org.apache.hadoop.mapred.JobACLsManager): 1
TaskCompletionEvent (org.apache.hadoop.mapred.TaskCompletionEvent): 1