
Example 1 with Row

Use of org.apache.hadoop.hive.llap.Row in the Apache Hive project.

From the processQuery method of the TestJdbcWithMiniLlap class:

private int processQuery(String query, int numSplits, RowProcessor rowProcessor) throws Exception {
    String url = miniHS2.getJdbcURL();
    String user = System.getProperty("user.name");
    String pwd = user;
    LlapRowInputFormat inputFormat = new LlapRowInputFormat();
    // Get splits
    JobConf job = new JobConf(conf);
    job.set(LlapBaseInputFormat.URL_KEY, url);
    job.set(LlapBaseInputFormat.USER_KEY, user);
    job.set(LlapBaseInputFormat.PWD_KEY, pwd);
    job.set(LlapBaseInputFormat.QUERY_KEY, query);
    InputSplit[] splits = inputFormat.getSplits(job, numSplits);
    assertTrue(splits.length > 0);
    // Fetch rows from splits
    int rowCount = 0;
    for (InputSplit split : splits) {
        // getLocations() returns a String[]; print it readably
        System.out.println("Processing split " + java.util.Arrays.toString(split.getLocations()));
        RecordReader<NullWritable, Row> reader = inputFormat.getRecordReader(split, job, null);
        Row row = reader.createValue();
        while (reader.next(NullWritable.get(), row)) {
            rowProcessor.process(row);
            ++rowCount;
        }
        reader.close();
    }
    return rowCount;
}
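The method drives a RowProcessor callback once per fetched row. A minimal sketch of a caller, assuming RowProcessor is the test-local interface with a single `void process(Row row)` method, that `Row.getValue(int)` reads a column by index, and using a hypothetical table name:

```java
import org.apache.hadoop.hive.llap.Row;
import java.util.ArrayList;
import java.util.List;

// Collect the first column of every row returned across all LLAP splits.
final List<Object> firstColumnValues = new ArrayList<>();
int rowCount = processQuery(
        "select * from testtab",  // hypothetical test table
        1,                        // request a single split
        new RowProcessor() {
            @Override
            public void process(Row row) {
                // Copy values out of the Row: the same Row instance is
                // reused across reader.next() calls, so references to it
                // must not be retained beyond this callback.
                firstColumnValues.add(row.getValue(0));
            }
        });
assertEquals(firstColumnValues.size(), rowCount);
```

Note that processQuery creates the Row once via `reader.createValue()` and reuses it for every `next()` call, which is why the callback should extract the values it needs rather than hold onto the Row itself.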
Also used: LlapRowInputFormat (org.apache.hadoop.hive.llap.LlapRowInputFormat), Row (org.apache.hadoop.hive.llap.Row), JobConf (org.apache.hadoop.mapred.JobConf), InputSplit (org.apache.hadoop.mapred.InputSplit), NullWritable (org.apache.hadoop.io.NullWritable)
