
Example 1 with EventWriter

Use of org.apache.hadoop.mapreduce.jobhistory.EventWriter in project hadoop by apache.

From the class MRAppMasterTestLaunchTime, method testMRAppMasterJobLaunchTime:

@Test
public void testMRAppMasterJobLaunchTime() throws IOException, InterruptedException {
    String applicationAttemptIdStr = "appattempt_1317529182569_0004_000002";
    String containerIdStr = "container_1317529182569_0004_000002_1";
    String userName = "TestAppMasterUser";
    JobConf conf = new JobConf();
    // stagingDir is a field of the enclosing test class (not shown in this excerpt).
    conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir);
    conf.setInt(MRJobConfig.NUM_REDUCES, 0);
    // Select the JSON jhist format so EventWriter emits JSON rather than binary Avro.
    conf.set(JHAdminConfig.MR_HS_JHIST_FORMAT, "json");
    ApplicationAttemptId applicationAttemptId = ApplicationAttemptId.fromString(applicationAttemptIdStr);
    JobId jobId = TypeConverter.toYarn(TypeConverter.fromYarn(applicationAttemptId.getApplicationId()));
    File dir = new File(MRApps.getStagingAreaDir(conf, userName).toString(), jobId.toString());
    dir.mkdirs();
    // Pre-create the staging history file of the previous attempt (attemptId - 1),
    // which the application master looks for when a later attempt starts.
    File historyFile = new File(JobHistoryUtils.getStagingJobHistoryFile(new Path(dir.toURI().toString()), jobId, (applicationAttemptId.getAttemptId() - 1)).toUri().getRawPath());
    historyFile.createNewFile();
    // Write a structurally valid (header-only) JSON event stream into that file.
    FSDataOutputStream out = new FSDataOutputStream(new FileOutputStream(historyFile), null);
    EventWriter writer = new EventWriter(out, EventWriter.WriteMode.JSON);
    writer.close();
    // Write empty split metadata so job initialization can proceed.
    FileSystem fs = FileSystem.get(conf);
    JobSplitWriter.createSplitFiles(new Path(dir.getAbsolutePath()), conf, fs, new org.apache.hadoop.mapred.InputSplit[0]);
    ContainerId containerId = ContainerId.fromString(containerIdStr);
    MRAppMasterTestLaunchTime appMaster = new MRAppMasterTestLaunchTime(applicationAttemptId, containerId, "host", -1, -1, System.currentTimeMillis());
    // Start and stop the AM, then verify the recorded job launch time is sane.
    MRAppMaster.initAndStartAppMaster(appMaster, conf, userName);
    appMaster.stop();
    assertTrue("Job launch time should not be negative.", appMaster.jobLaunchTime.get() >= 0);
}
Also used: Path (org.apache.hadoop.fs.Path), ApplicationAttemptId (org.apache.hadoop.yarn.api.records.ApplicationAttemptId), EventWriter (org.apache.hadoop.mapreduce.jobhistory.EventWriter), ContainerId (org.apache.hadoop.yarn.api.records.ContainerId), FileOutputStream (java.io.FileOutputStream), FileSystem (org.apache.hadoop.fs.FileSystem), FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream), JobConf (org.apache.hadoop.mapred.JobConf), File (java.io.File), JobId (org.apache.hadoop.mapreduce.v2.api.records.JobId), Test (org.junit.Test)
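
To try EventWriter in isolation, below is a minimal sketch that mirrors the writer setup from the test above. The local path /tmp/sketch.jhist and the class name EventWriterSketch are assumptions made for illustration; only the stream wrapping and the EventWriter calls come from the example itself.

import java.io.File;
import java.io.FileOutputStream;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.mapreduce.jobhistory.EventWriter;

public class EventWriterSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical local file standing in for the staging history file.
        File historyFile = new File("/tmp/sketch.jhist");
        historyFile.createNewFile();
        // Wrap a plain FileOutputStream, exactly as the test does; no FileSystem needed.
        FSDataOutputStream out = new FSDataOutputStream(new FileOutputStream(historyFile), null);
        // The constructor writes the jhist version header and event schema up front;
        // WriteMode.JSON selects the JSON encoding, WriteMode.BINARY the Avro binary one.
        EventWriter writer = new EventWriter(out, EventWriter.WriteMode.JSON);
        // A real history writer appends events before closing; this sketch, like the
        // test above, closes immediately and so produces a header-only file.
        writer.close();
    }
}

Wrapping the raw FileOutputStream keeps the sketch independent of HDFS; the test relies on the same trick to write the staging file to local disk.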

Aggregations

File (java.io.File): 1
FileOutputStream (java.io.FileOutputStream): 1
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream): 1
FileSystem (org.apache.hadoop.fs.FileSystem): 1
Path (org.apache.hadoop.fs.Path): 1
JobConf (org.apache.hadoop.mapred.JobConf): 1
EventWriter (org.apache.hadoop.mapreduce.jobhistory.EventWriter): 1
JobId (org.apache.hadoop.mapreduce.v2.api.records.JobId): 1
ApplicationAttemptId (org.apache.hadoop.yarn.api.records.ApplicationAttemptId): 1
ContainerId (org.apache.hadoop.yarn.api.records.ContainerId): 1
Test (org.junit.Test): 1