Example 31 with JobContextImpl

Use of org.apache.hadoop.mapreduce.task.JobContextImpl in the Apache Jena project.

From the class AbstractNodeTupleInputFormatTests, method testSplitInputs.

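/**
 * Runs a split inputs test
 *
 * @param config
 *            Configuration
 * @param inputs
 *            Inputs
 * @param expectedSplits
 *            Number of splits expected
 * @param expectedTuples
 *            Number of tuples expected
 * @throws IOException
 * @throws InterruptedException
 */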
protected final void testSplitInputs(Configuration config, File[] inputs, int expectedSplits, int expectedTuples) throws IOException, InterruptedException {
    // Set up fake job
    InputFormat<LongWritable, T> inputFormat = this.getInputFormat();
    Job job = Job.getInstance(config);
    job.setInputFormatClass(inputFormat.getClass());
    for (File input : inputs) {
        this.addInputPath(input, job.getConfiguration(), job);
    }
    JobContext context = new JobContextImpl(job.getConfiguration(), job.getJobID());
    Assert.assertEquals(inputs.length, FileInputFormat.getInputPaths(context).length);
    // Check splits
    List<InputSplit> splits = inputFormat.getSplits(context);
    Assert.assertEquals(expectedSplits, splits.size());
    // Check tuples
    int count = 0;
    for (InputSplit split : splits) {
        // Validate split
        Assert.assertTrue(this.isValidSplit(split, config));
        // Read split
        TaskAttemptContext taskContext = new TaskAttemptContextImpl(job.getConfiguration(), new TaskAttemptID());
        RecordReader<LongWritable, T> reader = inputFormat.createRecordReader(split, taskContext);
        reader.initialize(split, taskContext);
        count += this.countTuples(reader);
    }
    Assert.assertEquals(expectedTuples, count);
}
Also used: JobContextImpl (org.apache.hadoop.mapreduce.task.JobContextImpl), TaskAttemptID (org.apache.hadoop.mapreduce.TaskAttemptID), TaskAttemptContext (org.apache.hadoop.mapreduce.TaskAttemptContext), TaskAttemptContextImpl (org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl), LongWritable (org.apache.hadoop.io.LongWritable), JobContext (org.apache.hadoop.mapreduce.JobContext), Job (org.apache.hadoop.mapreduce.Job), File (java.io.File), InputSplit (org.apache.hadoop.mapreduce.InputSplit)
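For reference, here is a minimal, self-contained sketch of the same pattern outside the Jena test harness. It swaps the abstract tuple input format for Hadoop's stock TextInputFormat and counts plain text records; the class name JobContextImplSketch, the temporary file, and its contents are illustrative assumptions, not taken from the Jena code:

import java.io.File;
import java.io.PrintWriter;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.TaskAttemptID;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.task.JobContextImpl;
import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;

public class JobContextImplSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative input: a small local file (assumption, not from the Jena tests)
        File input = File.createTempFile("records", ".txt");
        try (PrintWriter writer = new PrintWriter(input)) {
            writer.println("alpha");
            writer.println("beta");
        }

        // The Job is only a convenient way to build a Configuration with the input path set
        Job job = Job.getInstance(new Configuration());
        FileInputFormat.addInputPath(job, new Path(input.toURI()));

        // JobContextImpl pairs the configuration with a job ID, which is all
        // that getSplits(...) needs - no running cluster is involved
        JobContext context = new JobContextImpl(job.getConfiguration(), job.getJobID());
        TextInputFormat inputFormat = new TextInputFormat();
        List<InputSplit> splits = inputFormat.getSplits(context);

        // Read each split through a RecordReader created under a
        // TaskAttemptContextImpl, mirroring what a map task would do
        long count = 0;
        for (InputSplit split : splits) {
            TaskAttemptContext taskContext =
                    new TaskAttemptContextImpl(job.getConfiguration(), new TaskAttemptID());
            RecordReader<LongWritable, Text> reader = inputFormat.createRecordReader(split, taskContext);
            reader.initialize(split, taskContext);
            while (reader.nextKeyValue()) {
                count++;
            }
            reader.close();
        }
        System.out.println(splits.size() + " splits, " + count + " records");
    }
}

The point of JobContextImpl here is that getSplits(...) and createRecordReader(...) only need a Configuration plus job and task attempt IDs, so an InputFormat can be exercised entirely in a unit test without a running MapReduce job.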

Example 32 with JobContextImpl

Use of org.apache.hadoop.mapreduce.task.JobContextImpl in the Apache Jena project.

From the class AbstractNodeTupleInputFormatTests, method testMultipleInputs.

/**
 * Runs a multiple input test
 *
 * @param inputs
 *            Inputs
 * @param expectedSplits
 *            Number of splits expected
 * @param expectedTuples
 *            Number of tuples expected
 * @throws IOException
 * @throws InterruptedException
 */
protected final void testMultipleInputs(File[] inputs, int expectedSplits, int expectedTuples) throws IOException, InterruptedException {
    // Prepare configuration and inputs
    Configuration config = this.prepareConfiguration();
    // Set up fake job
    InputFormat<LongWritable, T> inputFormat = this.getInputFormat();
    Job job = Job.getInstance(config);
    job.setInputFormatClass(inputFormat.getClass());
    for (File input : inputs) {
        this.addInputPath(input, job.getConfiguration(), job);
    }
    JobContext context = new JobContextImpl(job.getConfiguration(), job.getJobID());
    Assert.assertEquals(inputs.length, FileInputFormat.getInputPaths(context).length);
    NLineInputFormat.setNumLinesPerSplit(job, expectedTuples);
    // Check splits
    List<InputSplit> splits = inputFormat.getSplits(context);
    Assert.assertEquals(expectedSplits, splits.size());
    // Check tuples
    int count = 0;
    for (InputSplit split : splits) {
        TaskAttemptContext taskContext = new TaskAttemptContextImpl(job.getConfiguration(), new TaskAttemptID());
        RecordReader<LongWritable, T> reader = inputFormat.createRecordReader(split, taskContext);
        reader.initialize(split, taskContext);
        count += this.countTuples(reader);
    }
    Assert.assertEquals(expectedTuples, count);
}
Also used: JobContextImpl (org.apache.hadoop.mapreduce.task.JobContextImpl), Configuration (org.apache.hadoop.conf.Configuration), TaskAttemptID (org.apache.hadoop.mapreduce.TaskAttemptID), TaskAttemptContext (org.apache.hadoop.mapreduce.TaskAttemptContext), TaskAttemptContextImpl (org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl), LongWritable (org.apache.hadoop.io.LongWritable), JobContext (org.apache.hadoop.mapreduce.JobContext), Job (org.apache.hadoop.mapreduce.Job), File (java.io.File), InputSplit (org.apache.hadoop.mapreduce.InputSplit)
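The step that distinguishes this test from Example 31 is NLineInputFormat.setNumLinesPerSplit, which stores the per-split line count in the job configuration so that the number of splits becomes deterministic. A standalone sketch of that behaviour, assuming a temporary 10-line local file; the class name NLineSplitSketch and the counts are illustrative:

import java.io.File;
import java.io.PrintWriter;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
import org.apache.hadoop.mapreduce.task.JobContextImpl;

public class NLineSplitSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative input: 10 lines in a temporary local file
        File input = File.createTempFile("nline", ".txt");
        try (PrintWriter writer = new PrintWriter(input)) {
            for (int i = 0; i < 10; i++) {
                writer.println("line " + i);
            }
        }

        Job job = Job.getInstance(new Configuration());
        FileInputFormat.addInputPath(job, new Path(input.toURI()));
        // Fix the number of lines per split; with 10 lines and 4 per split,
        // NLineInputFormat should produce 3 splits (4 + 4 + 2)
        NLineInputFormat.setNumLinesPerSplit(job, 4);

        JobContext context = new JobContextImpl(job.getConfiguration(), job.getJobID());
        List<InputSplit> splits = new NLineInputFormat().getSplits(context);
        System.out.println("Splits: " + splits.size());
    }
}

This is the arithmetic the expectedSplits assertion above encodes: once the lines-per-split value is pinned in the configuration, the split count follows directly from the input size.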

Aggregations

JobContextImpl (org.apache.hadoop.mapreduce.task.JobContextImpl): 32
Configuration (org.apache.hadoop.conf.Configuration): 29
Job (org.apache.hadoop.mapreduce.Job): 22
JobContext (org.apache.hadoop.mapreduce.JobContext): 22
TaskAttemptContext (org.apache.hadoop.mapreduce.TaskAttemptContext): 21
TaskAttemptContextImpl (org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl): 21
Path (org.apache.hadoop.fs.Path): 17
File (java.io.File): 16
IOException (java.io.IOException): 11
RecordWriter (org.apache.hadoop.mapreduce.RecordWriter): 10
MapFile (org.apache.hadoop.io.MapFile): 9
InputSplit (org.apache.hadoop.mapreduce.InputSplit): 8
TaskAttemptID (org.apache.hadoop.mapreduce.TaskAttemptID): 8
LongWritable (org.apache.hadoop.io.LongWritable): 7
FileSystem (org.apache.hadoop.fs.FileSystem): 6
Test (org.junit.Test): 6
DistCpOptions (org.apache.hadoop.tools.DistCpOptions): 5
FileAttribute (java.nio.file.attribute.FileAttribute): 4
ArrayList (java.util.ArrayList): 4
HashSet (java.util.HashSet): 4