Example 91 with Filter

Use of org.apache.hadoop.hbase.filter.Filter in project cxf by apache.

The visit method of the HBaseQueryVisitor class:

public void visit(SearchCondition<T> sc) {
    PrimitiveStatement statement = sc.getStatement();
    if (statement != null) {
        if (statement.getProperty() != null) {
            queryStack.peek().add(buildSimpleQuery(sc.getConditionType(), statement.getProperty(), statement.getValue()));
        }
    } else {
        queryStack.push(new ArrayList<>());
        for (SearchCondition<T> condition : sc.getSearchConditions()) {
            condition.accept(this);
        }
        boolean orCondition = sc.getConditionType() == ConditionType.OR;
        List<Filter> queries = queryStack.pop();
        queryStack.peek().add(createCompositeQuery(queries, orCondition));
    }
}
Also used: PrimitiveStatement(org.apache.cxf.jaxrs.ext.search.PrimitiveStatement) Filter(org.apache.hadoop.hbase.filter.Filter) SingleColumnValueFilter(org.apache.hadoop.hbase.filter.SingleColumnValueFilter)
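The visitor builds the filter tree with a stack: a composite condition pushes a fresh list, visits its children, then pops the list and folds the children into one node on the enclosing level. A minimal plain-Java sketch of that pattern, with filters modeled as strings (names here are illustrative, not CXF or HBase API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class VisitorSketch {
    // Each stack frame collects the filters built at one nesting level.
    private final Deque<List<String>> stack = new ArrayDeque<>();

    VisitorSketch() { stack.push(new ArrayList<>()); }

    // Leaf condition: add a simple filter to the current level.
    void leaf(String filter) { stack.peek().add(filter); }

    // Composite condition: push a new level, visit children, then fold.
    void composite(boolean or, Runnable children) {
        stack.push(new ArrayList<>());
        children.run();                     // visit nested conditions
        List<String> parts = stack.pop();
        stack.peek().add((or ? "OR" : "AND") + parts);
    }

    String result() { return stack.peek().get(0); }

    public static void main(String[] args) {
        VisitorSketch v = new VisitorSketch();
        v.composite(false, () -> { v.leaf("a=1"); v.leaf("b=2"); });
        System.out.println(v.result()); // AND[a=1, b=2]
    }
}
```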

Example 92 with Filter

Use of org.apache.hadoop.hbase.filter.Filter in project cxf by apache.

The createCompositeQuery method of the HBaseQueryVisitor class:

private Filter createCompositeQuery(List<Filter> queries, boolean orCondition) {
    FilterList.Operator oper = orCondition ? FilterList.Operator.MUST_PASS_ONE : FilterList.Operator.MUST_PASS_ALL;
    FilterList list = new FilterList(oper);
    for (Filter query : queries) {
        list.addFilter(query);
    }
    return list;
}
Also used: Filter(org.apache.hadoop.hbase.filter.Filter) SingleColumnValueFilter(org.apache.hadoop.hbase.filter.SingleColumnValueFilter) FilterList(org.apache.hadoop.hbase.filter.FilterList)
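FilterList's MUST_PASS_ALL and MUST_PASS_ONE operators correspond to logical AND and OR over the child filters. A minimal sketch of the same semantics with java.util.function.Predicate (illustrative only, not HBase API):

```java
import java.util.List;
import java.util.function.Predicate;

public class CompositePredicate {
    // orCondition=true behaves like MUST_PASS_ONE (OR),
    // orCondition=false like MUST_PASS_ALL (AND).
    static <T> Predicate<T> combine(List<Predicate<T>> preds, boolean orCondition) {
        return value -> orCondition
                ? preds.stream().anyMatch(p -> p.test(value))
                : preds.stream().allMatch(p -> p.test(value));
    }

    public static void main(String[] args) {
        List<Predicate<Integer>> preds = List.of(n -> n > 0, n -> n % 2 == 0);
        System.out.println(combine(preds, false).test(4));  // AND: true
        System.out.println(combine(preds, true).test(-2));  // OR: true
    }
}
```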

Example 93 with Filter

Use of org.apache.hadoop.hbase.filter.Filter in project janusgraph by JanusGraph.

The getFilter method of the HBaseKeyColumnValueStore class:

public static Filter getFilter(SliceQuery query) {
    byte[] colStartBytes = query.getSliceStart().length() > 0 ? query.getSliceStart().as(StaticBuffer.ARRAY_FACTORY) : null;
    byte[] colEndBytes = query.getSliceEnd().length() > 0 ? query.getSliceEnd().as(StaticBuffer.ARRAY_FACTORY) : null;
    Filter filter = new ColumnRangeFilter(colStartBytes, true, colEndBytes, false);
    if (query.hasLimit()) {
        filter = new FilterList(FilterList.Operator.MUST_PASS_ALL, filter, new ColumnPaginationFilter(query.getLimit(), 0));
    }
    logger.debug("Generated HBase Filter {}", filter);
    return filter;
}
Also used: ColumnPaginationFilter(org.apache.hadoop.hbase.filter.ColumnPaginationFilter) Filter(org.apache.hadoop.hbase.filter.Filter) ColumnRangeFilter(org.apache.hadoop.hbase.filter.ColumnRangeFilter) FilterList(org.apache.hadoop.hbase.filter.FilterList)
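The ColumnRangeFilter above is built with an inclusive start and an exclusive end, and ColumnPaginationFilter caps how many columns come back. A plain-Java sketch of those slice semantics over a sorted map (illustrative, not HBase or JanusGraph code):

```java
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class SliceSketch {
    // Returns the column names in [start, end), capped at limit entries,
    // mirroring ColumnRangeFilter(start, true, end, false) plus pagination.
    static List<String> slice(NavigableMap<String, byte[]> row,
                              String start, String end, int limit) {
        return row.subMap(start, true, end, false)  // [start, end)
                  .keySet().stream()
                  .limit(limit)                     // pagination cap
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        NavigableMap<String, byte[]> row = new TreeMap<>();
        for (String c : List.of("a", "b", "c", "d")) row.put(c, new byte[0]);
        System.out.println(slice(row, "b", "d", 10)); // [b, c]
        System.out.println(slice(row, "a", "d", 2));  // [a, b]
    }
}
```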

Example 94 with Filter

Use of org.apache.hadoop.hbase.filter.Filter in project cdap by caskdata.

The preScannerOpen method of the DequeueScanObserver class:

@Override
public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s) throws IOException {
    ConsumerConfig consumerConfig = DequeueScanAttributes.getConsumerConfig(scan);
    Transaction tx = DequeueScanAttributes.getTx(scan);
    if (consumerConfig == null || tx == null) {
        return super.preScannerOpen(e, scan, s);
    }
    Filter dequeueFilter = new DequeueFilter(consumerConfig, tx);
    Filter existing = scan.getFilter();
    if (existing != null) {
        Filter combined = new FilterList(FilterList.Operator.MUST_PASS_ALL, existing, dequeueFilter);
        scan.setFilter(combined);
    } else {
        scan.setFilter(dequeueFilter);
    }
    return super.preScannerOpen(e, scan, s);
}
Also used: Transaction(org.apache.tephra.Transaction) Filter(org.apache.hadoop.hbase.filter.Filter) ConsumerConfig(co.cask.cdap.data2.queue.ConsumerConfig) FilterList(org.apache.hadoop.hbase.filter.FilterList)
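Note that the coprocessor does not clobber a filter the Scan may already carry: it wraps the existing filter and the new DequeueFilter together in a MUST_PASS_ALL FilterList. A null-safe sketch of that combination pattern with plain Predicates (names illustrative, not CDAP API):

```java
import java.util.function.Predicate;

public class CombineSketch {
    // If a predicate is already set, AND the new one with it (like
    // MUST_PASS_ALL); otherwise just install the new predicate.
    static <T> Predicate<T> withAdded(Predicate<T> existing, Predicate<T> added) {
        return existing == null ? added : existing.and(added);
    }

    public static void main(String[] args) {
        Predicate<String> existing = s -> s.startsWith("q");
        Predicate<String> added = s -> s.length() > 3;
        System.out.println(withAdded(existing, added).test("queue")); // true
        System.out.println(withAdded(null, added).test("row"));       // false
    }
}
```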

Example 95 with Filter

Use of org.apache.hadoop.hbase.filter.Filter in project hive by apache.

The getPartitionCount method of the HBaseReadWrite class:

int getPartitionCount() throws IOException {
    Filter fil = new FirstKeyOnlyFilter();
    Iterator<Result> iter = scan(PART_TABLE, fil);
    return Iterators.size(iter);
}
Also used: RowFilter(org.apache.hadoop.hbase.filter.RowFilter) FirstKeyOnlyFilter(org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter) Filter(org.apache.hadoop.hbase.filter.Filter) CompareFilter(org.apache.hadoop.hbase.filter.CompareFilter) BloomFilter(org.apache.hive.common.util.BloomFilter) Result(org.apache.hadoop.hbase.client.Result)
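Iterators.size here is Guava's helper: it simply drains the iterator and counts the elements. Because FirstKeyOnlyFilter makes the scan return at most one cell per row, that count equals the number of rows in the partition table. A plain-Java equivalent of the counting step (a sketch, not the Hive code):

```java
import java.util.Iterator;
import java.util.List;

public class IterCount {
    // Drain the iterator and count its elements, like Guava's Iterators.size.
    static int size(Iterator<?> it) {
        int n = 0;
        while (it.hasNext()) {
            it.next();
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // Each element stands in for one Result, i.e. one row of the scan.
        System.out.println(size(List.of("r1", "r2", "r3").iterator())); // 3
    }
}
```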

Aggregations

Filter (org.apache.hadoop.hbase.filter.Filter): 179
Test (org.junit.Test): 97
Scan (org.apache.hadoop.hbase.client.Scan): 95
BaseConnectionlessQueryTest (org.apache.phoenix.query.BaseConnectionlessQueryTest): 77
SkipScanFilter (org.apache.phoenix.filter.SkipScanFilter): 76
RowKeyComparisonFilter (org.apache.phoenix.filter.RowKeyComparisonFilter): 74
SingleKeyValueComparisonFilter (org.apache.phoenix.filter.SingleKeyValueComparisonFilter): 45
TestUtil.rowKeyFilter (org.apache.phoenix.util.TestUtil.rowKeyFilter): 45
FilterList (org.apache.hadoop.hbase.filter.FilterList): 43
RowFilter (org.apache.hadoop.hbase.filter.RowFilter): 40
PhoenixConnection (org.apache.phoenix.jdbc.PhoenixConnection): 37
TestUtil.multiEncodedKVFilter (org.apache.phoenix.util.TestUtil.multiEncodedKVFilter): 33
TestUtil.singleKVFilter (org.apache.phoenix.util.TestUtil.singleKVFilter): 33
PhoenixPreparedStatement (org.apache.phoenix.jdbc.PhoenixPreparedStatement): 31
FirstKeyOnlyFilter (org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter): 27
SingleColumnValueFilter (org.apache.hadoop.hbase.filter.SingleColumnValueFilter): 25
CompareFilter (org.apache.hadoop.hbase.filter.CompareFilter): 24
PrefixFilter (org.apache.hadoop.hbase.filter.PrefixFilter): 24
ArrayList (java.util.ArrayList): 22
RegexStringComparator (org.apache.hadoop.hbase.filter.RegexStringComparator): 18