Example 1 with NullDBWritable

Use of org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable in project hadoop by apache.

From the class TestDBInputFormat, method testDBRecordReader.

/**
   * Tests DBRecordReader. The reader should create keys and values and
   * track its read position.
   */
@SuppressWarnings("unchecked")
@Test(timeout = 5000)
public void testDBRecordReader() throws Exception {
    // Mock the job and DB configurations; the JDBC connection comes
    // from the stub driver used by these tests.
    JobConf job = mock(JobConf.class);
    DBConfiguration dbConfig = mock(DBConfiguration.class);
    String[] fields = { "field1", "field2" };
    @SuppressWarnings("rawtypes")
    DBRecordReader reader = new DBInputFormat<NullDBWritable>().new DBRecordReader(
        new DBInputSplit(), NullDBWritable.class, job,
        DriverForTest.getConnection(), dbConfig, "condition", fields, "table");
    // A fresh reader starts at key 0 and position 0, produces
    // NullDBWritable values, and returns no rows from the stub connection.
    LongWritable key = reader.createKey();
    assertEquals(0, key.get());
    DBWritable value = reader.createValue();
    assertEquals("org.apache.hadoop.mapred.lib.db.DBInputFormat$NullDBWritable",
        value.getClass().getName());
    assertEquals(0, reader.getPos());
    assertFalse(reader.next(key, value));
}
Also used:
  org.apache.hadoop.mapred.lib.db.DBConfiguration
  org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
  org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
  org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable
  org.apache.hadoop.io.LongWritable
  org.apache.hadoop.mapred.JobConf
  org.apache.hadoop.mapreduce.lib.db.DriverForTest
  org.junit.Test
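
NullDBWritable is the no-op implementation: all of its Writable and DBWritable methods read and write nothing, which is why the reader above yields empty values. For contrast, below is a minimal sketch of a concrete DBWritable; the UserRecord name and the two-column table layout (an int id and a String name) are assumptions for illustration, not part of the Hadoop sources.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.lib.db.DBWritable;

// Sketch of a concrete DBWritable; the class name and two-column
// layout (int id, String name) are assumed for illustration.
public class UserRecord implements Writable, DBWritable {
    private int id;
    private String name;

    // DBWritable side: map the current JDBC row to fields.
    @Override
    public void readFields(ResultSet resultSet) throws SQLException {
        id = resultSet.getInt(1);
        name = resultSet.getString(2);
    }

    // DBWritable side: bind fields to an INSERT/UPDATE statement.
    @Override
    public void write(PreparedStatement statement) throws SQLException {
        statement.setInt(1, id);
        statement.setString(2, name);
    }

    // Writable side: Hadoop's own serialization between tasks.
    @Override
    public void readFields(DataInput in) throws IOException {
        id = in.readInt();
        name = in.readUTF();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(id);
        out.writeUTF(name);
    }
}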

Example 2 with NullDBWritable

Use of org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable in project hadoop by apache.

From the class TestDBInputFormat, method testDBInputFormat.

/**
   * Tests the DBInputFormat class. The format should split the result
   * set into chunks.
   * @throws Exception
   */
@Test(timeout = 10000)
public void testDBInputFormat() throws Exception {
    JobConf configuration = new JobConf();
    setupDriver(configuration);
    DBInputFormat<NullDBWritable> format = new DBInputFormat<NullDBWritable>();
    // setConf is called twice; a repeated call must leave the format usable.
    format.setConf(configuration);
    format.setConf(configuration);
    DBInputFormat.DBInputSplit splitter = new DBInputFormat.DBInputSplit(1, 10);
    Reporter reporter = mock(Reporter.class);
    RecordReader<LongWritable, NullDBWritable> reader =
        format.getRecordReader(splitter, configuration, reporter);
    // With 3 map tasks, the row count reported by the stub driver is
    // divided into 3 splits; the first split covers 5 rows.
    configuration.setInt(MRJobConfig.NUM_MAPS, 3);
    InputSplit[] lSplits = format.getSplits(configuration, 3);
    assertEquals(5, lSplits[0].getLength());
    assertEquals(3, lSplits.length);
    // Simple sanity checks on the reader.
    assertEquals(LongWritable.class, reader.createKey().getClass());
    assertEquals(0, reader.getPos());
    assertEquals(0, reader.getProgress(), 0.001);
    reader.close();
}
Also used:
  org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
  org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable
  org.apache.hadoop.mapred.Reporter
  org.apache.hadoop.io.LongWritable
  org.apache.hadoop.mapred.JobConf
  org.apache.hadoop.mapred.InputSplit
  org.apache.hadoop.mapreduce.lib.db.DriverForTest
  org.junit.Test
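
The asserted split sizes follow from DBInputFormat's chunking arithmetic: the total row count is divided by the number of map tasks, and the last split runs to the end of the table so a non-divisible count loses no rows. Below is a minimal sketch of that arithmetic; the totalRecords value of 15 is an assumption, implied by the asserts above (3 splits, the first of length 5), not something the listing states.

// Sketch of DBInputFormat's split arithmetic. totalRecords = 15 is
// assumed, implied by the asserts above (3 splits, first of length 5).
public class SplitArithmetic {
    public static void main(String[] args) {
        long totalRecords = 15; // assumed COUNT(*) from the stub driver
        int chunks = 3;         // number of map tasks
        long chunkSize = totalRecords / chunks;
        for (int i = 0; i < chunks; i++) {
            long start = i * chunkSize;
            // The last chunk extends to totalRecords so that any
            // remainder rows are still covered.
            long end = (i == chunks - 1) ? totalRecords : start + chunkSize;
            System.out.printf("split %d: rows [%d, %d), length %d%n",
                    i, start, end, end - start);
        }
    }
}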

Aggregations

  LongWritable (org.apache.hadoop.io.LongWritable) - 2 uses
  JobConf (org.apache.hadoop.mapred.JobConf) - 2 uses
  DBInputSplit (org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit) - 2 uses
  NullDBWritable (org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable) - 2 uses
  DriverForTest (org.apache.hadoop.mapreduce.lib.db.DriverForTest) - 2 uses
  Test (org.junit.Test) - 2 uses
  InputSplit (org.apache.hadoop.mapred.InputSplit) - 1 use
  Reporter (org.apache.hadoop.mapred.Reporter) - 1 use
  DBConfiguration (org.apache.hadoop.mapred.lib.db.DBConfiguration) - 1 use
  DBRecordReader (org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader) - 1 use