
Example 6 with Cluster

Use of org.apache.hadoop.mapreduce.Cluster in the project hadoop by apache.

From the class TestCLI, method testGetJobWithRetry:

@Test
public void testGetJobWithRetry() throws Exception {
    Configuration conf = new Configuration();
    conf.setInt(MRJobConfig.MR_CLIENT_JOB_MAX_RETRIES, 1);
    final Cluster mockCluster = mock(Cluster.class);
    final Job mockJob = Job.getInstance(conf);
    when(mockCluster.getJob(any(JobID.class))).thenReturn(null).thenReturn(mockJob);
    CLI cli = new CLI(conf);
    cli.cluster = mockCluster;
    Job job = cli.getJob(JobID.forName("job_1234654654_001"));
    Assert.assertNotNull("job is null", job);
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), Cluster (org.apache.hadoop.mapreduce.Cluster), Job (org.apache.hadoop.mapreduce.Job), Test (org.junit.Test)
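The test above relies on `CLI.getJob` retrying when `Cluster.getJob` first returns null (one extra attempt, per `MR_CLIENT_JOB_MAX_RETRIES = 1`). The retry loop it exercises can be sketched generically, without Hadoop on the classpath; `RetryLookup` and `getWithRetry` are illustrative names, not Hadoop API:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class RetryLookup {

    // Retry a lookup that may transiently return null, allowing up to
    // maxRetries extra attempts (mirroring MR_CLIENT_JOB_MAX_RETRIES).
    static <T> T getWithRetry(Supplier<T> lookup, int maxRetries) {
        T result = lookup.get();
        for (int attempt = 0; result == null && attempt < maxRetries; attempt++) {
            result = lookup.get();  // one extra attempt per configured retry
        }
        return result;
    }

    public static void main(String[] args) {
        // Mimic the mock: the first call returns null, the second a real value.
        AtomicInteger calls = new AtomicInteger();
        Supplier<String> flaky = () -> calls.incrementAndGet() == 1 ? null : "job_1234654654_001";
        System.out.println(getWithRetry(flaky, 1));
    }
}
```

With zero retries the first null would be returned to the caller, which is exactly what the mocked `thenReturn(null).thenReturn(mockJob)` chain verifies.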

Example 7 with Cluster

Use of org.apache.hadoop.mapreduce.Cluster in the project hadoop by apache.

From the class JobClient, method submitJobInternal:

@InterfaceAudience.Private
public RunningJob submitJobInternal(final JobConf conf) throws FileNotFoundException, IOException {
    try {
        conf.setBooleanIfUnset("mapred.mapper.new-api", false);
        conf.setBooleanIfUnset("mapred.reducer.new-api", false);
        Job job = clientUgi.doAs(new PrivilegedExceptionAction<Job>() {

            @Override
            public Job run() throws IOException, ClassNotFoundException, InterruptedException {
                Job job = Job.getInstance(conf);
                job.submit();
                return job;
            }
        });
        Cluster prev = cluster;
        // update our Cluster instance with the one created by Job for submission
        // (we can't pass our Cluster instance to Job, since Job wraps the config
        // instance, and the two configs would then diverge)
        cluster = job.getCluster();
        // close the previous Cluster instance to clean up its resources
        if (prev != null) {
            prev.close();
        }
        return new NetworkedJob(job);
    } catch (InterruptedException ie) {
        throw new IOException("interrupted", ie);
    }
}
Also used: Cluster (org.apache.hadoop.mapreduce.Cluster), IOException (java.io.IOException), Job (org.apache.hadoop.mapreduce.Job)
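Note the ordering in submitJobInternal: the Cluster created by the Job is installed first, and only then is the previous instance closed, so the field never points at a closed connection. That replace-then-close pattern can be sketched without Hadoop; `ClusterHolder` and `TrackedResource` are illustrative stand-ins, not Hadoop classes:

```java
public class ClusterHolder {

    // A stand-in for a closable connection such as Cluster.
    static class TrackedResource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    private TrackedResource current;

    // Install the replacement first, then close the old instance,
    // mirroring the order used in submitJobInternal.
    TrackedResource swap(TrackedResource replacement) {
        TrackedResource prev = current;
        current = replacement;
        if (prev != null) {
            prev.close();
        }
        return current;
    }
}
```

Closing `prev` before the assignment would open a window where `current` is a closed resource; doing it after keeps the holder usable at every point.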

Example 8 with Cluster

Use of org.apache.hadoop.mapreduce.Cluster in the project hadoop by apache.

From the class TestExternalCall, method testCleanup:

/**
 * Tests the run and execute methods of the DistCp class via a simple file copy.
 * @throws Exception
 */
@Test
public void testCleanup() throws Exception {
    Configuration conf = getConf();
    Path stagingDir = JobSubmissionFiles.getStagingDir(new Cluster(conf), conf);
    stagingDir.getFileSystem(conf).mkdirs(stagingDir);
    Path source = createFile("tmp.txt");
    Path target = createFile("target.txt");
    DistCp distcp = new DistCp(conf, null);
    String[] arg = { source.toString(), target.toString() };
    distcp.run(arg);
    Assert.assertTrue(fs.exists(target));
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), Cluster (org.apache.hadoop.mapreduce.Cluster), Test (org.junit.Test)

Example 9 with Cluster

Use of org.apache.hadoop.mapreduce.Cluster in the project hadoop by apache.

From the class TestIntegration, method testCleanup:

@Test(timeout = 100000)
public void testCleanup() {
    try {
        Path sourcePath = new Path("noscheme:///file");
        List<Path> sources = new ArrayList<Path>();
        sources.add(sourcePath);
        DistCpOptions options = new DistCpOptions(sources, target);
        Configuration conf = getConf();
        Path stagingDir = JobSubmissionFiles.getStagingDir(new Cluster(conf), conf);
        stagingDir.getFileSystem(conf).mkdirs(stagingDir);
        try {
            new DistCp(conf, options).execute();
        } catch (Throwable t) {
            Assert.assertEquals(0, stagingDir.getFileSystem(conf).listStatus(stagingDir).length);
        }
    } catch (Exception e) {
        LOG.error("Exception encountered ", e);
        Assert.fail("testCleanup failed " + e.getMessage());
    }
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), ArrayList (java.util.ArrayList), Cluster (org.apache.hadoop.mapreduce.Cluster), IOException (java.io.IOException), Test (org.junit.Test)
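The assertion in the catch block above checks that a failed DistCp run leaves the staging directory empty. The same cleanup-on-failure contract can be sketched with plain `java.nio`, with no Hadoop dependency; `StagingCleanup` and `runWithCleanup` are illustrative helpers, not part of DistCp:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StagingCleanup {

    // Run a task that writes into stagingDir; if it fails, delete whatever it
    // wrote so the staging directory is left empty, as the test expects.
    static void runWithCleanup(Path stagingDir, Runnable task) {
        try {
            task.run();
        } catch (RuntimeException e) {
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(stagingDir)) {
                for (Path entry : entries) {
                    Files.delete(entry);
                }
            } catch (IOException io) {
                throw new UncheckedIOException(io);
            }
            throw e;  // propagate the original failure after cleaning up
        }
    }

    static boolean isEmpty(Path dir) {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            return !entries.iterator().hasNext();
        } catch (IOException io) {
            throw new UncheckedIOException(io);
        }
    }
}
```

The original failure is rethrown after cleanup, so callers still see the error while the staging area is guaranteed to be empty.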

Example 10 with Cluster

Use of org.apache.hadoop.mapreduce.Cluster in the project hbase by apache.

From the class MapReduceBackupCopyJob, method cancel:

@Override
public void cancel(String jobId) throws IOException {
    JobID id = JobID.forName(jobId);
    Cluster cluster = new Cluster(this.getConf());
    try {
        Job job = cluster.getJob(id);
        if (job == null) {
            LOG.error("No job found for " + id);
            // should we throw an exception here instead?
            return;
        }
        if (job.isComplete() || job.isRetired()) {
            return;
        }
        job.killJob();
        LOG.debug("Killed copy job " + id);
    } catch (InterruptedException e) {
        throw new IOException(e);
    }
}
Also used: Cluster (org.apache.hadoop.mapreduce.Cluster), IOException (java.io.IOException), BackupCopyJob (org.apache.hadoop.hbase.backup.BackupCopyJob), Job (org.apache.hadoop.mapreduce.Job), JobID (org.apache.hadoop.mapreduce.JobID)
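cancel is deliberately a no-op when the job is missing, complete, or retired, and only kills a live job. That guard logic can be sketched without a Hadoop cluster; `CancelGuard`, `JobState`, and `cancelIfRunning` are illustrative names, not the Cluster API:

```java
public class CancelGuard {

    enum JobState { RUNNING, COMPLETE, RETIRED }

    // Returns true only when the job would actually be killed, mirroring
    // MapReduceBackupCopyJob.cancel: missing, complete, and retired jobs
    // are left alone.
    static boolean cancelIfRunning(JobState state) {
        if (state == null) {
            // no job found for this id; the original logs an error and returns
            return false;
        }
        if (state == JobState.COMPLETE || state == JobState.RETIRED) {
            // nothing to kill; cancelling a finished job is a no-op
            return false;
        }
        // a live job would be killed here (job.killJob() in the original)
        return true;
    }
}
```

Making cancel idempotent this way means callers can safely invoke it without first checking the job's state themselves.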

Aggregations

Cluster (org.apache.hadoop.mapreduce.Cluster): 22
Test (org.junit.Test): 17
Configuration (org.apache.hadoop.conf.Configuration): 12
Job (org.apache.hadoop.mapreduce.Job): 11
Path (org.apache.hadoop.fs.Path): 5
IOException (java.io.IOException): 4
JobID (org.apache.hadoop.mapreduce.JobID): 4
TaskReport (org.apache.hadoop.mapreduce.TaskReport): 4
ArrayList (java.util.ArrayList): 2
ByteArrayOutputStream (java.io.ByteArrayOutputStream): 1
PrintWriter (java.io.PrintWriter): 1
Random (java.util.Random): 1
FileStatus (org.apache.hadoop.fs.FileStatus): 1
FileSystem (org.apache.hadoop.fs.FileSystem): 1
HarFileSystem (org.apache.hadoop.fs.HarFileSystem): 1
FsPermission (org.apache.hadoop.fs.permission.FsPermission): 1
BackupCopyJob (org.apache.hadoop.hbase.backup.BackupCopyJob): 1
SequenceFile (org.apache.hadoop.io.SequenceFile): 1
JobStatus (org.apache.hadoop.mapreduce.JobStatus): 1
TaskAttemptID (org.apache.hadoop.mapreduce.TaskAttemptID): 1