Example 11 with MapReduceCounterData

Use of com.linkedin.drelephant.mapreduce.data.MapReduceCounterData in the dr-elephant project by LinkedIn.

From the class MapperTimeHeuristicTest, method analyzeJob.

private Severity analyzeJob(int numTasks, long runtime) throws IOException {
    MapReduceCounterData jobCounter = new MapReduceCounterData();
    MapReduceTaskData[] mappers = new MapReduceTaskData[numTasks + 1];
    MapReduceCounterData taskCounter = new MapReduceCounterData();
    taskCounter.set(MapReduceCounterData.CounterName.HDFS_BYTES_READ, DUMMY_INPUT_SIZE / 4);
    taskCounter.set(MapReduceCounterData.CounterName.S3_BYTES_READ, DUMMY_INPUT_SIZE / 4);
    taskCounter.set(MapReduceCounterData.CounterName.S3A_BYTES_READ, DUMMY_INPUT_SIZE / 4);
    taskCounter.set(MapReduceCounterData.CounterName.S3N_BYTES_READ, DUMMY_INPUT_SIZE / 4);
    int i = 0;
    for (; i < numTasks; i++) {
        mappers[i] = new MapReduceTaskData("task-id-" + i, "task-attempt-id-" + i);
        mappers[i].setTimeAndCounter(new long[] { runtime, 0, 0, 0, 0 }, taskCounter);
    }
    // Non-sampled task, which does not contain time and counter data
    mappers[i] = new MapReduceTaskData("task-id-" + i, "task-attempt-id-" + i);
    MapReduceApplicationData data = new MapReduceApplicationData().setCounters(jobCounter).setMapperData(mappers);
    HeuristicResult result = _heuristic.apply(data);
    return result.getSeverity();
}
Also used: MapReduceApplicationData(com.linkedin.drelephant.mapreduce.data.MapReduceApplicationData) MapReduceCounterData(com.linkedin.drelephant.mapreduce.data.MapReduceCounterData) MapReduceTaskData(com.linkedin.drelephant.mapreduce.data.MapReduceTaskData) HeuristicResult(com.linkedin.drelephant.analysis.HeuristicResult)
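
In the test, this helper is swept over task counts and runtimes and the returned severity is asserted against the heuristic's expected level. A minimal usage sketch, with hypothetical inputs and an assumed expected severity; the real cutoffs come from MapperTimeHeuristic's configured thresholds:

@Test
public void testManyShortRunningMappers() throws IOException {
    // 1000 mappers that each run for only one minute: the kind of
    // per-task overhead pattern this heuristic exists to flag.
    // The expected CRITICAL level is an assumption for illustration.
    assertEquals(Severity.CRITICAL, analyzeJob(1000, 60 * 1000L));
}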

Example 12 with MapReduceCounterData

Use of com.linkedin.drelephant.mapreduce.data.MapReduceCounterData in the dr-elephant project by LinkedIn.

From the class ReducerMemoryHeuristicTest, method analyzeJob.

private Severity analyzeJob(long taskAvgMemMB, long containerMemMB) throws IOException {
    MapReduceCounterData jobCounter = new MapReduceCounterData();
    MapReduceTaskData[] reducers = new MapReduceTaskData[NUMTASKS + 1];
    MapReduceCounterData counter = new MapReduceCounterData();
    counter.set(MapReduceCounterData.CounterName.PHYSICAL_MEMORY_BYTES, taskAvgMemMB * FileUtils.ONE_MB);
    Properties p = new Properties();
    p.setProperty(ReducerMemoryHeuristic.REDUCER_MEMORY_CONF, Long.toString(containerMemMB));
    int i = 0;
    for (; i < NUMTASKS; i++) {
        reducers[i] = new MapReduceTaskData("task-id-" + i, "task-attempt-id-" + i);
        reducers[i].setTimeAndCounter(new long[5], counter);
    }
    // Non-sampled task, which does not contain time and counter data
    reducers[i] = new MapReduceTaskData("task-id-" + i, "task-attempt-id-" + i);
    MapReduceApplicationData data = new MapReduceApplicationData().setCounters(jobCounter).setReducerData(reducers);
    data.setJobConf(p);
    HeuristicResult result = _heuristic.apply(data);
    return result.getSeverity();
}
Also used: MapReduceApplicationData(com.linkedin.drelephant.mapreduce.data.MapReduceApplicationData) MapReduceCounterData(com.linkedin.drelephant.mapreduce.data.MapReduceCounterData) MapReduceTaskData(com.linkedin.drelephant.mapreduce.data.MapReduceTaskData) Properties(java.util.Properties) HeuristicResult(com.linkedin.drelephant.analysis.HeuristicResult)
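
The severity here is driven by the ratio of average physical memory used (PHYSICAL_MEMORY_BYTES) to the container size requested via REDUCER_MEMORY_CONF. A minimal usage sketch with illustrative numbers; the expected severities are assumptions, since the actual cutoffs depend on the heuristic's configured thresholds:

@Test
public void testContainerMemoryUtilization() throws IOException {
    // 600 MB used out of an 8 GB container: heavy over-allocation (assumed CRITICAL).
    assertEquals(Severity.CRITICAL, analyzeJob(600, 8192));
    // 7 GB used out of an 8 GB container: well utilized (assumed NONE).
    assertEquals(Severity.NONE, analyzeJob(7168, 8192));
}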

Example 13 with MapReduceCounterData

Use of com.linkedin.drelephant.mapreduce.data.MapReduceCounterData in the dr-elephant project by LinkedIn.

From the class ReducerSkewHeuristicTest, method analyzeJob.

private Severity analyzeJob(int numSmallTasks, int numLargeTasks, long smallInputSize, long largeInputSize) throws IOException {
    MapReduceCounterData jobCounter = new MapReduceCounterData();
    MapReduceTaskData[] reducers = new MapReduceTaskData[numSmallTasks + numLargeTasks + 1];
    MapReduceCounterData smallCounter = new MapReduceCounterData();
    smallCounter.set(MapReduceCounterData.CounterName.REDUCE_SHUFFLE_BYTES, smallInputSize);
    MapReduceCounterData largeCounter = new MapReduceCounterData();
    largeCounter.set(MapReduceCounterData.CounterName.REDUCE_SHUFFLE_BYTES, largeInputSize);
    int i = 0;
    for (; i < numSmallTasks; i++) {
        reducers[i] = new MapReduceTaskData("task-id-" + i, "task-attempt-id-" + i);
        reducers[i].setTimeAndCounter(new long[5], smallCounter);
    }
    for (; i < numSmallTasks + numLargeTasks; i++) {
        reducers[i] = new MapReduceTaskData("task-id-" + i, "task-attempt-id-" + i);
        reducers[i].setTimeAndCounter(new long[5], largeCounter);
    }
    // Non-sampled task, which does not contain time and counter data
    reducers[i] = new MapReduceTaskData("task-id-" + i, "task-attempt-id-" + i);
    MapReduceApplicationData data = new MapReduceApplicationData().setCounters(jobCounter).setReducerData(reducers);
    HeuristicResult result = _heuristic.apply(data);
    return result.getSeverity();
}
Also used: MapReduceApplicationData(com.linkedin.drelephant.mapreduce.data.MapReduceApplicationData) MapReduceCounterData(com.linkedin.drelephant.mapreduce.data.MapReduceCounterData) MapReduceTaskData(com.linkedin.drelephant.mapreduce.data.MapReduceTaskData) HeuristicResult(com.linkedin.drelephant.analysis.HeuristicResult)
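
A caller contrasts a balanced run against a two-mode one: equal-sized groups of reducers whose REDUCE_SHUFFLE_BYTES differ by orders of magnitude should trip the skew detector. A minimal usage sketch with illustrative sizes; the expected outcomes are assumptions:

@Test
public void testShuffleByteSkew() throws IOException {
    long oneMB = 1024 * 1024L;
    // 500 reducers pulling 1 MB each vs. 500 pulling 100 MB each:
    // a skewed distribution this heuristic is designed to flag.
    assertTrue(analyzeJob(500, 500, oneMB, 100 * oneMB) != Severity.NONE);
    // All reducers pulling the same amount: balanced (assumed NONE).
    assertEquals(Severity.NONE, analyzeJob(500, 500, 10 * oneMB, 10 * oneMB));
}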

Example 14 with MapReduceCounterData

Use of com.linkedin.drelephant.mapreduce.data.MapReduceCounterData in the dr-elephant project by LinkedIn.

From the class AnalyticJobTest, method testGetAnalysis.

@Test
public void testGetAnalysis() throws Exception {
    try {
        // Setup analytic job
        final AnalyticJob analyticJob = new AnalyticJob()
                .setAppId(TEST_JOB_ID1)
                .setAppType(new ApplicationType(TEST_APP_TYPE))
                .setFinishTime(1462178403)
                .setStartTime(1462178412)
                .setName(TEST_JOB_NAME)
                .setQueueName(TEST_DEFAULT_QUEUE_NAME)
                .setUser(TEST_USERNAME)
                .setTrackingUrl(TEST_TRACKING_URL);
        // Setup job counter data
        String filePath = FILENAME_JOBCOUNTER;
        MapReduceCounterData jobCounter = new MapReduceCounterData();
        setCounterData(jobCounter, filePath);
        // Setup mapper data
        long[][] mapperTasksTime = { { 2563, 0, 0, 0, 0 }, { 2562, 0, 0, 0, 0 }, { 2567, 0, 0, 0, 0 } };
        MapReduceTaskData[] mappers = new MapReduceTaskData[3];
        for (int i = 1; i <= mappers.length; i++) {
            MapReduceCounterData taskCounter = new MapReduceCounterData();
            setCounterData(taskCounter, FILENAME_MAPPERTASK.replaceFirst("\\$", Integer.toString(i)));
            mappers[i - 1] = new MapReduceTaskData("task-id-" + (i - 1), "task-attempt-id-" + (i - 1));
            mappers[i - 1].setTimeAndCounter(mapperTasksTime[i - 1], taskCounter);
        }
        // Setup reducer data
        long[][] reducerTasksTime = { { 1870, 1665, 14, 0, 0 } };
        MapReduceTaskData[] reducers = new MapReduceTaskData[1];
        for (int i = 1; i <= reducers.length; i++) {
            MapReduceCounterData taskCounter = new MapReduceCounterData();
            setCounterData(taskCounter, FILENAME_REDUCERTASK.replaceFirst("\\$", Integer.toString(i)));
            reducers[i - 1] = new MapReduceTaskData("task-id-" + (i - 1), "task-attempt-id-" + (i - 1));
            reducers[i - 1].setTimeAndCounter(reducerTasksTime[i - 1], taskCounter);
        }
        // Setup job configuration data
        filePath = FILENAME_JOBCONF;
        Properties jobConf = TestUtil.loadProperties(filePath);
        // Setup application data
        final MapReduceApplicationData data = new MapReduceApplicationData()
                .setCounters(jobCounter)
                .setMapperData(mappers)
                .setReducerData(reducers)
                .setJobConf(jobConf)
                .setSucceeded(true)
                .setDiagnosticInfo("")
                .setUsername(TEST_USERNAME)
                .setUrl("")
                .setJobName(TEST_JOB_NAME)
                .setStartTime(1462178412)
                .setFinishTime(1462178403)
                .setRetry(false)
                .setAppId(TEST_JOB_ID1);
        // Setup heuristics
        final List<Heuristic> heuristics = loadHeuristics();
        // Setup job type
        final JobType jobType = new JobType(TEST_JOB_TYPE, TEST_JOBCONF_NAME, TEST_JOBCONF_PATTERN);
        // Set expectations in JMockit
        new Expectations() {

            {
                fetcher.fetchData(analyticJob);
                result = data;
                elephantContext.getHeuristicsForApplicationType(analyticJob.getAppType());
                result = heuristics;
                elephantContext.matchJobType(data);
                result = jobType;
            }
        };
        // Call the method under test
        AppResult result = analyticJob.getAnalysis();
        // Make assertions on result
        assertTrue("Result is null", result != null);
        assertTrue("Score did not match", result.score == TEST_SCORE);
        assertTrue("Severity did not match", result.severity.toString().equals(TEST_SEVERITY));
        assertTrue("APP ID did not match", result.id.equals(TEST_JOB_ID1));
        assertTrue("Scheduler did not match", result.scheduler.equals(TEST_SCHEDULER));
    } catch (Exception e) {
        e.printStackTrace();
        fail("Test failed with exception");
    }
}
Also used: Expectations(mockit.Expectations) MapReduceCounterData(com.linkedin.drelephant.mapreduce.data.MapReduceCounterData) Properties(java.util.Properties) AppResult(models.AppResult) IOException(java.io.IOException) MapReduceApplicationData(com.linkedin.drelephant.mapreduce.data.MapReduceApplicationData) MapReduceTaskData(com.linkedin.drelephant.mapreduce.data.MapReduceTaskData) MapperSkewHeuristic(com.linkedin.drelephant.mapreduce.heuristics.MapperSkewHeuristic) Test(org.junit.Test)
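
The setCounterData helper used above is not shown in this excerpt; it loads counter fixtures from a file into a MapReduceCounterData. A minimal sketch of what such a helper could look like, assuming a simple group|name=value fixture format and a group-based MapReduceCounterData.set(String, String, long) setter; both the file format and the setter signature are assumptions for illustration:

// Hypothetical helper: parses "group|counterName=value" lines from a fixture file.
private void setCounterData(MapReduceCounterData counter, String filePath) throws IOException {
    for (String line : java.nio.file.Files.readAllLines(java.nio.file.Paths.get(filePath))) {
        String[] kv = line.split("=", 2);
        String[] groupAndName = kv[0].split("\\|", 2);
        counter.set(groupAndName[0], groupAndName[1], Long.parseLong(kv[1].trim()));
    }
}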

Example 15 with MapReduceCounterData

Use of com.linkedin.drelephant.mapreduce.data.MapReduceCounterData in the dr-elephant project by LinkedIn.

From the class MapReduceFSFetcherHadoop2, method fetchData.

@Override
public MapReduceApplicationData fetchData(AnalyticJob job) throws IOException {
    DataFiles files = getHistoryFiles(job);
    String confFile = files.getJobConfPath();
    String histFile = files.getJobHistPath();
    String appId = job.getAppId();
    String jobId = Utils.getJobIdFromApplicationId(appId);
    MapReduceApplicationData jobData = new MapReduceApplicationData();
    jobData.setAppId(appId).setJobId(jobId);
    // Fetch job config
    Configuration jobConf = new Configuration(false);
    jobConf.addResource(_fs.open(new Path(confFile)), confFile);
    Properties jobConfProperties = new Properties();
    for (Map.Entry<String, String> entry : jobConf) {
        jobConfProperties.put(entry.getKey(), entry.getValue());
    }
    jobData.setJobConf(jobConfProperties);
    // Check if job history file is too large and should be throttled
    if (_fs.getFileStatus(new Path(histFile)).getLen() > _maxLogSizeInMB * FileUtils.ONE_MB) {
        String errMsg = "The history log of MapReduce application " + appId + " exceeds the limit of " + _maxLogSizeInMB + " MB; the parsing process is throttled.";
        logger.warn(errMsg);
        jobData.setDiagnosticInfo(errMsg);
        // set succeeded to false to avoid heuristic analysis
        jobData.setSucceeded(false);
        return jobData;
    }
    // Analyze job history file
    JobHistoryParser parser = new JobHistoryParser(_fs, histFile);
    JobHistoryParser.JobInfo jobInfo = parser.parse();
    IOException parseException = parser.getParseException();
    if (parseException != null) {
        throw new RuntimeException("Could not parse history file " + histFile, parseException);
    }
    jobData.setSubmitTime(jobInfo.getSubmitTime());
    jobData.setStartTime(jobInfo.getLaunchTime());
    jobData.setFinishTime(jobInfo.getFinishTime());
    String state = jobInfo.getJobStatus();
    if (state.equals("SUCCEEDED")) {
        jobData.setSucceeded(true);
    } else if (state.equals("FAILED")) {
        jobData.setSucceeded(false);
        jobData.setDiagnosticInfo(jobInfo.getErrorInfo());
    } else {
        throw new RuntimeException("Job neither succeeded nor failed; cannot process it.");
    }
    // Fetch job counter
    MapReduceCounterData jobCounter = getCounterData(jobInfo.getTotalCounters());
    // Fetch task data
    Map<TaskID, JobHistoryParser.TaskInfo> allTasks = jobInfo.getAllTasks();
    List<JobHistoryParser.TaskInfo> mapperInfoList = new ArrayList<JobHistoryParser.TaskInfo>();
    List<JobHistoryParser.TaskInfo> reducerInfoList = new ArrayList<JobHistoryParser.TaskInfo>();
    for (JobHistoryParser.TaskInfo taskInfo : allTasks.values()) {
        if (taskInfo.getTaskType() == TaskType.MAP) {
            mapperInfoList.add(taskInfo);
        } else {
            reducerInfoList.add(taskInfo);
        }
    }
    if (jobInfo.getTotalMaps() > MAX_SAMPLE_SIZE) {
        logger.debug(jobId + " total mappers: " + mapperInfoList.size());
    }
    if (jobInfo.getTotalReduces() > MAX_SAMPLE_SIZE) {
        logger.debug(jobId + " total reducers: " + reducerInfoList.size());
    }
    MapReduceTaskData[] mapperList = getTaskData(jobId, mapperInfoList);
    MapReduceTaskData[] reducerList = getTaskData(jobId, reducerInfoList);
    jobData.setCounters(jobCounter).setMapperData(mapperList).setReducerData(reducerList);
    return jobData;
}
Also used: Path(org.apache.hadoop.fs.Path) MapReduceCounterData(com.linkedin.drelephant.mapreduce.data.MapReduceCounterData) TaskID(org.apache.hadoop.mapreduce.TaskID) Configuration(org.apache.hadoop.conf.Configuration) ArrayList(java.util.ArrayList) IOException(java.io.IOException) Properties(java.util.Properties) MapReduceApplicationData(com.linkedin.drelephant.mapreduce.data.MapReduceApplicationData) JobHistoryParser(org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser) MapReduceTaskData(com.linkedin.drelephant.mapreduce.data.MapReduceTaskData) Map(java.util.Map)
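
The getCounterData helper invoked above converts Hadoop's Counters into the dr-elephant MapReduceCounterData holder. A minimal sketch of that conversion, assuming MapReduceCounterData exposes a group-based set(String, String, long); the actual helper in MapReduceFSFetcherHadoop2 may differ in detail:

private MapReduceCounterData getCounterData(Counters counters) {
    MapReduceCounterData holder = new MapReduceCounterData();
    if (counters != null) {
        // org.apache.hadoop.mapreduce.Counters iterates over its groups,
        // and each CounterGroup iterates over its individual counters.
        for (CounterGroup group : counters) {
            for (Counter counter : group) {
                holder.set(group.getName(), counter.getName(), counter.getValue());
            }
        }
    }
    return holder;
}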

Aggregations

MapReduceCounterData (com.linkedin.drelephant.mapreduce.data.MapReduceCounterData) 17
MapReduceTaskData (com.linkedin.drelephant.mapreduce.data.MapReduceTaskData) 17
MapReduceApplicationData (com.linkedin.drelephant.mapreduce.data.MapReduceApplicationData) 15
HeuristicResult (com.linkedin.drelephant.analysis.HeuristicResult) 12
Properties (java.util.Properties) 6
IOException (java.io.IOException) 3
ArrayList (java.util.ArrayList) 3
JobHistoryParser (org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser) 2
Test (org.junit.Test) 2
MapperSkewHeuristic (com.linkedin.drelephant.mapreduce.heuristics.MapperSkewHeuristic) 1
MalformedURLException (java.net.MalformedURLException) 1
URL (java.net.URL) 1
List (java.util.List) 1
Map (java.util.Map) 1
Expectations (mockit.Expectations) 1
AppResult (models.AppResult) 1
Configuration (org.apache.hadoop.conf.Configuration) 1
Path (org.apache.hadoop.fs.Path) 1
TaskAttemptID (org.apache.hadoop.mapreduce.TaskAttemptID) 1
TaskID (org.apache.hadoop.mapreduce.TaskID) 1