Use of org.apache.drill.metastore.MetastoreColumn in project drill by apache.
Class TestFilterTransformer, method testToFilterConditionsFour:
@Test
public void testToFilterConditionsFour() {
  Map<MetastoreColumn, Object> conditions = new LinkedHashMap<>();
  conditions.put(MetastoreColumn.STORAGE_PLUGIN, "dfs");
  conditions.put(MetastoreColumn.WORKSPACE, "tmp");
  conditions.put(MetastoreColumn.TABLE_NAME, "nation");
  conditions.put(MetastoreColumn.ROW_GROUP_INDEX, 4);

  Expression expected = Expressions.and(
      Expressions.equal(MetastoreColumn.STORAGE_PLUGIN.columnName(), "dfs"),
      Expressions.equal(MetastoreColumn.WORKSPACE.columnName(), "tmp"),
      Expressions.equal(MetastoreColumn.TABLE_NAME.columnName(), "nation"),
      Expressions.equal(MetastoreColumn.ROW_GROUP_INDEX.columnName(), 4));

  assertEquals(expected.toString(), transformer.transform(conditions).toString());
}
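The test above verifies that an ordered map of column/value conditions folds into a single AND of equality predicates, comparing the two expression trees by their string form. As a rough illustration of that fold (a hypothetical, self-contained sketch, not Drill's FilterTransformer or Iceberg's Expressions API), the same idea can be shown with plain strings:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: fold ordered equality conditions into one AND expression
// string, mirroring the shape the test above checks via toString() comparison.
public class ConditionFold {

  static String fold(Map<String, Object> conditions) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, Object> e : conditions.entrySet()) {
      if (sb.length() > 0) {
        sb.append(" and ");
      }
      sb.append(e.getKey()).append(" == ").append(e.getValue());
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    Map<String, Object> conditions = new LinkedHashMap<>();
    conditions.put("storagePlugin", "dfs");
    conditions.put("workspace", "tmp");
    conditions.put("rowGroupIndex", 4);
    // LinkedHashMap preserves insertion order, so the output is deterministic,
    // which is why the real test can safely compare toString() results.
    System.out.println(fold(conditions));
  }
}
```

The use of LinkedHashMap in both the test and this sketch matters: a plain HashMap would make the predicate order, and hence the string comparison, nondeterministic.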
Use of org.apache.drill.metastore.MetastoreColumn in project drill by apache.
Class IcebergRead, method internalExecute:
@Override
protected List<T> internalExecute() {
  String[] selectedColumns = columns.isEmpty()
      ? defaultColumns
      : columns.stream().map(MetastoreColumn::columnName).toArray(String[]::new);

  FilterTransformer filterTransformer = context.transformer().filter();
  Expression rowFilter = filterTransformer.combine(
      filterTransformer.transform(metadataTypes),
      filterTransformer.transform(filter));

  Iterable<Record> records = IcebergGenerics.read(context.table())
      .select(selectedColumns)
      .where(rowFilter)
      .build();

  return context.transformer().outputData()
      .columns(selectedColumns)
      .records(Lists.newArrayList(records))
      .execute();
}
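The method above follows a select/where pattern: choose the output columns, build a row filter, scan, then transform the matching records. As a self-contained sketch of that pattern (hypothetical in-memory stand-in, not Iceberg's scan builder), the same flow can be shown with maps and a predicate:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical stand-in for the select(...).where(...).build() pattern above:
// filter in-memory "records" with a predicate, then project only the
// selected columns from each surviving record.
public class SelectWhereSketch {

  static List<Map<String, Object>> read(List<Map<String, Object>> records,
                                        List<String> selectedColumns,
                                        Predicate<Map<String, Object>> rowFilter) {
    return records.stream()
        .filter(rowFilter)
        .map(r -> {
          Map<String, Object> projected = new LinkedHashMap<>();
          for (String c : selectedColumns) {
            projected.put(c, r.get(c));
          }
          return projected;
        })
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Map<String, Object>> records = List.of(
        Map.of("tableName", "nation", "rowGroupIndex", 4),
        Map.of("tableName", "region", "rowGroupIndex", 1));
    // Keep only rows matching the filter, projecting just "tableName".
    List<Map<String, Object>> out = read(records,
        List.of("tableName"),
        r -> "nation".equals(r.get("tableName")));
    System.out.println(out);
  }
}
```

In the real method, combining the metadata-type filter with the user filter before the scan pushes all predicate evaluation into the single `where` clause, so only matching rows are materialized.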