
Example 1 with CarbonInputFormat

Use of org.apache.carbondata.hadoop.CarbonInputFormat in the apache/carbondata project.

From the class CarbonInputFormat_FT, method testGetFilteredSplits:

@Test
public void testGetFilteredSplits() throws Exception {
    CarbonInputFormat carbonInputFormat = new CarbonInputFormat();
    JobConf jobConf = new JobConf(new Configuration());
    Job job = Job.getInstance(jobConf);
    FileInputFormat.addInputPath(job, new Path("/opt/carbonstore/db/table1"));
    job.getConfiguration().set(CarbonInputFormat.INPUT_SEGMENT_NUMBERS, "1,2");
    Expression expression = new EqualToExpression(new ColumnExpression("c1", DataType.STRING), new LiteralExpression("a", DataType.STRING));
    CarbonInputFormat.setFilterPredicates(job.getConfiguration(), expression);
    List splits = carbonInputFormat.getSplits(job);
    Assert.assertNotNull(splits);
    Assert.assertFalse(splits.isEmpty());
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), Expression (org.apache.carbondata.core.scan.expression.Expression), EqualToExpression (org.apache.carbondata.core.scan.expression.conditional.EqualToExpression), ColumnExpression (org.apache.carbondata.core.scan.expression.ColumnExpression), LiteralExpression (org.apache.carbondata.core.scan.expression.LiteralExpression), CarbonInputFormat (org.apache.carbondata.hadoop.CarbonInputFormat), List (java.util.List), Job (org.apache.hadoop.mapreduce.Job), JobConf (org.apache.hadoop.mapred.JobConf), Test (org.junit.Test)
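As the test above shows, CarbonInputFormat.INPUT_SEGMENT_NUMBERS is set as a plain comma-separated string of segment ids ("1,2"). A minimal JDK-only sketch of splitting such a value into individual segment ids (CarbonInputFormat's actual internal parsing may differ):

```java
import java.util.Arrays;
import java.util.List;

public class SegmentNumbersDemo {

    // Split a comma-separated segment-id string, as set on the job configuration.
    static List<String> parseSegments(String value) {
        return Arrays.asList(value.split(","));
    }

    public static void main(String[] args) {
        List<String> segments = parseSegments("1,2");
        System.out.println(segments); // [1, 2]
    }
}
```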

Example 2 with CarbonInputFormat

Use of org.apache.carbondata.hadoop.CarbonInputFormat in the apache/carbondata project.

From the class CarbonInputFormat_FT, method testGetSplits:

@Test
public void testGetSplits() throws Exception {
    CarbonInputFormat carbonInputFormat = new CarbonInputFormat();
    JobConf jobConf = new JobConf(new Configuration());
    Job job = Job.getInstance(jobConf);
    FileInputFormat.addInputPath(job, new Path("/opt/carbonstore/db/table1"));
    job.getConfiguration().set(CarbonInputFormat.INPUT_SEGMENT_NUMBERS, "1,2");
    List splits = carbonInputFormat.getSplits(job);
    Assert.assertNotNull(splits);
    Assert.assertFalse(splits.isEmpty());
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), CarbonInputFormat (org.apache.carbondata.hadoop.CarbonInputFormat), List (java.util.List), Job (org.apache.hadoop.mapreduce.Job), JobConf (org.apache.hadoop.mapred.JobConf), Test (org.junit.Test)

Example 3 with CarbonInputFormat

Use of org.apache.carbondata.hadoop.CarbonInputFormat in the apache/carbondata project.

From the class CarbonInputFormatUtil, method createCarbonInputFormat:

public static <V> CarbonInputFormat<V> createCarbonInputFormat(AbsoluteTableIdentifier identifier, Job job) throws IOException {
    CarbonInputFormat<V> carbonInputFormat = new CarbonInputFormat<>();
    FileInputFormat.addInputPath(job, new Path(identifier.getTablePath()));
    return carbonInputFormat;
}
Also used: Path (org.apache.hadoop.fs.Path), CarbonInputFormat (org.apache.carbondata.hadoop.CarbonInputFormat)

Example 4 with CarbonInputFormat

Use of org.apache.carbondata.hadoop.CarbonInputFormat in the apache/carbondata project.

From the class InputFilesTest, method testGetSplits:

@Test
public void testGetSplits() throws Exception {
    CarbonInputFormat carbonInputFormat = new CarbonInputFormat();
    JobConf jobConf = new JobConf(new Configuration());
    Job job = Job.getInstance(jobConf);
    job.getConfiguration().set("query.id", UUID.randomUUID().toString());
    String tblPath = StoreCreator.getAbsoluteTableIdentifier().getTablePath();
    FileInputFormat.addInputPath(job, new Path(tblPath));
    job.getConfiguration().set(CarbonInputFormat.INPUT_SEGMENT_NUMBERS, "0");
    // list files to get the carbondata file
    File segmentDir = new File(tblPath + File.separator + "Fact" + File.separator + "Part0" + File.separator + "Segment_0");
    if (segmentDir.exists() && segmentDir.isDirectory()) {
        File[] files = segmentDir.listFiles(new FileFilter() {

            @Override
            public boolean accept(File pathname) {
                return pathname.getName().endsWith("carbondata");
            }
        });
        if (files != null && files.length > 0) {
            job.getConfiguration().set(CarbonInputFormat.INPUT_FILES, files[0].getName());
        }
    }
    List splits = carbonInputFormat.getSplits(job);
    Assert.assertNotNull(splits);
    Assert.assertEquals(1, splits.size());
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), CarbonInputFormat (org.apache.carbondata.hadoop.CarbonInputFormat), List (java.util.List), Job (org.apache.hadoop.mapreduce.Job), FileFilter (java.io.FileFilter), JobConf (org.apache.hadoop.mapred.JobConf), File (java.io.File), Test (org.junit.Test)
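The anonymous FileFilter in this test selects the .carbondata file from the segment directory by filename suffix. The same predicate can be exercised in isolation with only the JDK; the directory and file names below are hypothetical, and no CarbonData classes are involved:

```java
import java.io.File;
import java.io.FileFilter;
import java.io.IOException;
import java.nio.file.Files;

public class SuffixFilterDemo {

    // Same predicate as the anonymous FileFilter in the test above,
    // written as a lambda (FileFilter is a functional interface).
    static final FileFilter CARBONDATA_FILES =
            pathname -> pathname.getName().endsWith("carbondata");

    // List only the data files inside a segment directory.
    static File[] listCarbondataFiles(File segmentDir) {
        return segmentDir.listFiles(CARBONDATA_FILES);
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical segment directory holding a data file and an index file.
        File segmentDir = Files.createTempDirectory("Segment_0").toFile();
        new File(segmentDir, "part-0-0.carbondata").createNewFile();
        new File(segmentDir, "part-0-0.carbonindex").createNewFile();

        File[] files = listCarbondataFiles(segmentDir);
        System.out.println(files.length);       // 1
        System.out.println(files[0].getName()); // part-0-0.carbondata
    }
}
```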

Aggregations

CarbonInputFormat (org.apache.carbondata.hadoop.CarbonInputFormat): 4 uses
Path (org.apache.hadoop.fs.Path): 4 uses
List (java.util.List): 3 uses
Configuration (org.apache.hadoop.conf.Configuration): 3 uses
JobConf (org.apache.hadoop.mapred.JobConf): 3 uses
Job (org.apache.hadoop.mapreduce.Job): 3 uses
Test (org.junit.Test): 3 uses
File (java.io.File): 1 use
FileFilter (java.io.FileFilter): 1 use
ColumnExpression (org.apache.carbondata.core.scan.expression.ColumnExpression): 1 use
Expression (org.apache.carbondata.core.scan.expression.Expression): 1 use
LiteralExpression (org.apache.carbondata.core.scan.expression.LiteralExpression): 1 use
EqualToExpression (org.apache.carbondata.core.scan.expression.conditional.EqualToExpression): 1 use