Example 56 with ExtractionDimensionSpec

use of org.apache.druid.query.dimension.ExtractionDimensionSpec in project druid by druid-io.

In the class GroupByQueryRunnerTest, method testSubqueryWithExtractionFnInOuterQuery:

@Test
public void testSubqueryWithExtractionFnInOuterQuery() {
    // https://github.com/apache/druid/issues/2556
    GroupByQuery subquery = makeQueryBuilder()
        .setDataSource(QueryRunnerTestHelper.DATA_SOURCE)
        .setQuerySegmentSpec(QueryRunnerTestHelper.FIRST_TO_THIRD)
        .setDimensions(new DefaultDimensionSpec("quality", "alias"))
        .setDimFilter(new JavaScriptDimFilter("quality", "function(dim){ return true; }", null, JavaScriptConfig.getEnabledInstance()))
        .setAggregatorSpecs(
            QueryRunnerTestHelper.ROWS_COUNT,
            new LongSumAggregatorFactory("idx", "index"),
            new LongSumAggregatorFactory("indexMaxPlusTen", "indexMaxPlusTen"))
        .setGranularity(QueryRunnerTestHelper.DAY_GRAN)
        .build();
    GroupByQuery query = makeQueryBuilder()
        .setDataSource(subquery)
        .setQuerySegmentSpec(new MultipleIntervalSegmentSpec(ImmutableList.of(Intervals.of("2011-04-01T00:00:00.000Z/2011-04-03T00:00:00.000Z"))))
        .setDimensions(new ExtractionDimensionSpec("alias", "alias", new RegexDimExtractionFn("(a).*", true, "a")))
        .setAggregatorSpecs(
            new LongSumAggregatorFactory("rows", "rows"),
            new LongSumAggregatorFactory("idx", "idx"))
        .setGranularity(QueryRunnerTestHelper.DAY_GRAN)
        .build();
    List<ResultRow> expectedResults = Arrays.asList(
        makeRow(query, "2011-04-01", "alias", "a", "rows", 13L, "idx", 6619L),
        makeRow(query, "2011-04-02", "alias", "a", "rows", 13L, "idx", 5827L));
    // Subqueries are handled by the ToolChest
    Iterable<ResultRow> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
    TestHelper.assertExpectedObjects(expectedResults, results, "subquery-extractionfn");
}
Also used : LongSumAggregatorFactory(org.apache.druid.query.aggregation.LongSumAggregatorFactory) JavaScriptDimFilter(org.apache.druid.query.filter.JavaScriptDimFilter) MultipleIntervalSegmentSpec(org.apache.druid.query.spec.MultipleIntervalSegmentSpec) DefaultDimensionSpec(org.apache.druid.query.dimension.DefaultDimensionSpec) RegexDimExtractionFn(org.apache.druid.query.extraction.RegexDimExtractionFn) ExtractionDimensionSpec(org.apache.druid.query.dimension.ExtractionDimensionSpec) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test)
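The key piece in this test is the outer query's dimension spec. The sketch below pulls just that construction out of the test harness so the constructor arguments are visible on their own; the class name RegexSpecSketch and its main method are invented for illustration, and the argument meanings in the comments are my reading of the call site and of the expected results, not quoted Druid documentation.

import org.apache.druid.query.dimension.ExtractionDimensionSpec;
import org.apache.druid.query.extraction.RegexDimExtractionFn;

// Hypothetical standalone sketch of the dimension spec used in the outer query above.
public class RegexSpecSketch {
    public static void main(String[] args) {
        // RegexDimExtractionFn(expr, replaceMissingValue, replaceMissingValueWith):
        // the extraction keeps the first capture group of "(a).*"; values with no match
        // are replaced with "a" because replaceMissingValue is true, which is why every
        // inner-query row collapses into the single alias "a" in the expected results.
        RegexDimExtractionFn regexFn = new RegexDimExtractionFn("(a).*", true, "a");

        // ExtractionDimensionSpec(dimension, outputName, extractionFn): read the inner
        // query's "alias" column and re-emit the transformed value under the same name.
        ExtractionDimensionSpec spec = new ExtractionDimensionSpec("alias", "alias", regexFn);

        System.out.println(spec.getDimension() + " -> " + spec.getOutputName());
    }
}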

Example 57 with ExtractionDimensionSpec

use of org.apache.druid.query.dimension.ExtractionDimensionSpec in project druid by druid-io.

In the class GroupByQueryRunnerTest, method testGroupByStringOutputAsLong:

@Test
public void testGroupByStringOutputAsLong() {
    // Cannot vectorize due to extraction dimension spec.
    cannotVectorize();
    if (config.getDefaultStrategy().equals(GroupByStrategySelector.STRATEGY_V1)) {
        expectedException.expect(UnsupportedOperationException.class);
        expectedException.expectMessage("GroupBy v1 only supports dimensions with an outputType of STRING.");
    }
    ExtractionFn strlenFn = StrlenExtractionFn.instance();
    GroupByQuery query = makeQueryBuilder()
        .setDataSource(QueryRunnerTestHelper.DATA_SOURCE)
        .setQuerySegmentSpec(QueryRunnerTestHelper.FIRST_TO_THIRD)
        .setDimensions(new ExtractionDimensionSpec(QueryRunnerTestHelper.QUALITY_DIMENSION, "alias", ColumnType.LONG, strlenFn))
        .setDimFilter(new SelectorDimFilter(QueryRunnerTestHelper.QUALITY_DIMENSION, "entertainment", null))
        .setAggregatorSpecs(QueryRunnerTestHelper.ROWS_COUNT, new LongSumAggregatorFactory("idx", "index"))
        .setGranularity(QueryRunnerTestHelper.DAY_GRAN)
        .build();
    List<ResultRow> expectedResults = Arrays.asList(
        makeRow(query, "2011-04-01", "alias", 13L, "rows", 1L, "idx", 158L),
        makeRow(query, "2011-04-02", "alias", 13L, "rows", 1L, "idx", 166L));
    Iterable<ResultRow> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
    TestHelper.assertExpectedObjects(expectedResults, results, "string-long");
}
Also used : RegexDimExtractionFn(org.apache.druid.query.extraction.RegexDimExtractionFn) StringFormatExtractionFn(org.apache.druid.query.extraction.StringFormatExtractionFn) LookupExtractionFn(org.apache.druid.query.lookup.LookupExtractionFn) CascadeExtractionFn(org.apache.druid.query.extraction.CascadeExtractionFn) StrlenExtractionFn(org.apache.druid.query.extraction.StrlenExtractionFn) SubstringDimExtractionFn(org.apache.druid.query.extraction.SubstringDimExtractionFn) ExtractionFn(org.apache.druid.query.extraction.ExtractionFn) DimExtractionFn(org.apache.druid.query.extraction.DimExtractionFn) JavaScriptExtractionFn(org.apache.druid.query.extraction.JavaScriptExtractionFn) SearchQuerySpecDimExtractionFn(org.apache.druid.query.extraction.SearchQuerySpecDimExtractionFn) TimeFormatExtractionFn(org.apache.druid.query.extraction.TimeFormatExtractionFn) SelectorDimFilter(org.apache.druid.query.filter.SelectorDimFilter) LongSumAggregatorFactory(org.apache.druid.query.aggregation.LongSumAggregatorFactory) ExtractionDimensionSpec(org.apache.druid.query.dimension.ExtractionDimensionSpec) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test)
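What makes this test interesting is the four-argument ExtractionDimensionSpec constructor, which declares a non-string output type. Below is a minimal sketch of that piece on its own; StrlenSpecSketch is an invented class name, the literal "quality" stands in for QueryRunnerTestHelper.QUALITY_DIMENSION, the ColumnType import path is assumed, and the comments reflect my reading of the test rather than official documentation.

import org.apache.druid.query.dimension.ExtractionDimensionSpec;
import org.apache.druid.query.extraction.StrlenExtractionFn;
import org.apache.druid.segment.column.ColumnType;

// Hypothetical sketch of the long-typed extraction dimension used above.
public class StrlenSpecSketch {
    public static void main(String[] args) {
        // StrlenExtractionFn maps each string to its length ("entertainment" -> 13),
        // which matches the expected "alias" value of 13L in the test.
        StrlenExtractionFn strlenFn = StrlenExtractionFn.instance();

        // ExtractionDimensionSpec(dimension, outputName, outputType, extractionFn):
        // declaring ColumnType.LONG makes the grouping column numeric, which is also
        // why GroupBy v1 rejects the query ("only supports ... outputType of STRING").
        ExtractionDimensionSpec spec = new ExtractionDimensionSpec("quality", "alias", ColumnType.LONG, strlenFn);

        System.out.println(strlenFn.apply("entertainment"));  // expected to print "13"
    }
}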

Example 58 with ExtractionDimensionSpec

use of org.apache.druid.query.dimension.ExtractionDimensionSpec in project druid by druid-io.

In the class GroupByQueryRunnerTest, method testGroupByWithSimpleRenameAndMissingString:

@Test
public void testGroupByWithSimpleRenameAndMissingString() {
    Map<String, String> map = new HashMap<>();
    map.put("automotive", "automotive0");
    map.put("business", "business0");
    map.put("entertainment", "entertainment0");
    map.put("health", "health0");
    map.put("mezzanine", "mezzanine0");
    map.put("news", "news0");
    map.put("premium", "premium0");
    map.put("technology", "technology0");
    map.put("travel", "travel0");
    GroupByQuery query = makeQueryBuilder()
        .setDataSource(QueryRunnerTestHelper.DATA_SOURCE)
        .setQuerySegmentSpec(QueryRunnerTestHelper.FIRST_TO_THIRD)
        .setDimensions(new ExtractionDimensionSpec("quality", "alias", new LookupExtractionFn(new MapLookupExtractor(map, false), false, "MISSING", true, false)))
        .setAggregatorSpecs(QueryRunnerTestHelper.ROWS_COUNT, new LongSumAggregatorFactory("idx", "index"))
        .setGranularity(QueryRunnerTestHelper.DAY_GRAN)
        .build();
    List<ResultRow> expectedResults = Arrays.asList(
        makeRow(query, "2011-04-01", "alias", "automotive0", "rows", 1L, "idx", 135L),
        makeRow(query, "2011-04-01", "alias", "business0", "rows", 1L, "idx", 118L),
        makeRow(query, "2011-04-01", "alias", "entertainment0", "rows", 1L, "idx", 158L),
        makeRow(query, "2011-04-01", "alias", "health0", "rows", 1L, "idx", 120L),
        makeRow(query, "2011-04-01", "alias", "mezzanine0", "rows", 3L, "idx", 2870L),
        makeRow(query, "2011-04-01", "alias", "news0", "rows", 1L, "idx", 121L),
        makeRow(query, "2011-04-01", "alias", "premium0", "rows", 3L, "idx", 2900L),
        makeRow(query, "2011-04-01", "alias", "technology0", "rows", 1L, "idx", 78L),
        makeRow(query, "2011-04-01", "alias", "travel0", "rows", 1L, "idx", 119L),
        makeRow(query, "2011-04-02", "alias", "automotive0", "rows", 1L, "idx", 147L),
        makeRow(query, "2011-04-02", "alias", "business0", "rows", 1L, "idx", 112L),
        makeRow(query, "2011-04-02", "alias", "entertainment0", "rows", 1L, "idx", 166L),
        makeRow(query, "2011-04-02", "alias", "health0", "rows", 1L, "idx", 113L),
        makeRow(query, "2011-04-02", "alias", "mezzanine0", "rows", 3L, "idx", 2447L),
        makeRow(query, "2011-04-02", "alias", "news0", "rows", 1L, "idx", 114L),
        makeRow(query, "2011-04-02", "alias", "premium0", "rows", 3L, "idx", 2505L),
        makeRow(query, "2011-04-02", "alias", "technology0", "rows", 1L, "idx", 97L),
        makeRow(query, "2011-04-02", "alias", "travel0", "rows", 1L, "idx", 126L));
    Iterable<ResultRow> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
    TestHelper.assertExpectedObjects(expectedResults, results, "rename-and-missing-string");
}
Also used : LookupExtractionFn(org.apache.druid.query.lookup.LookupExtractionFn) HashMap(java.util.HashMap) LongSumAggregatorFactory(org.apache.druid.query.aggregation.LongSumAggregatorFactory) MapLookupExtractor(org.apache.druid.query.extraction.MapLookupExtractor) ExtractionDimensionSpec(org.apache.druid.query.dimension.ExtractionDimensionSpec) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test)
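The five LookupExtractionFn constructor arguments are easy to misread inside a single builder chain. The sketch below spells them out; LookupRenameSketch is an invented class name, and the per-argument comments are how I read the call site and the test's behaviour, not quoted documentation.

import com.google.common.collect.ImmutableMap;

import org.apache.druid.query.dimension.ExtractionDimensionSpec;
import org.apache.druid.query.extraction.MapLookupExtractor;
import org.apache.druid.query.lookup.LookupExtractionFn;

// Hypothetical sketch of the rename-with-default lookup used above.
public class LookupRenameSketch {
    public static void main(String[] args) {
        LookupExtractionFn fn = new LookupExtractionFn(
            new MapLookupExtractor(ImmutableMap.of("automotive", "automotive0", "business", "business0"), false),
            false,     // retainMissingValue: unmapped values are NOT passed through as-is...
            "MISSING", // ...they are replaced with this literal instead
            true,      // injective: declares the rename one-to-one, which permits some optimizations
            false      // optimize: left false here, as in the call site above
        );

        ExtractionDimensionSpec spec = new ExtractionDimensionSpec("quality", "alias", fn);
        System.out.println(fn.apply("automotive") + " / " + fn.apply("unknown-category"));
        // expected output: automotive0 / MISSING
    }
}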

Example 59 with ExtractionDimensionSpec

use of org.apache.druid.query.dimension.ExtractionDimensionSpec in project druid by druid-io.

In the class GroupByQueryRunnerTest, method testGroupByWithLookupAndLimitAndSortByDimsFirst:

@Test
public void testGroupByWithLookupAndLimitAndSortByDimsFirst() {
    // Cannot vectorize due to extraction dimension spec.
    cannotVectorize();
    Map<String, String> map = new HashMap<>();
    map.put("automotive", "9");
    map.put("business", "8");
    map.put("entertainment", "7");
    map.put("health", "6");
    map.put("mezzanine", "5");
    map.put("news", "4");
    map.put("premium", "3");
    map.put("technology", "2");
    map.put("travel", "1");
    GroupByQuery query = makeQueryBuilder()
        .setDataSource(QueryRunnerTestHelper.DATA_SOURCE)
        .setQuerySegmentSpec(QueryRunnerTestHelper.FIRST_TO_THIRD)
        .setDimensions(new ExtractionDimensionSpec("quality", "alias", new LookupExtractionFn(new MapLookupExtractor(map, false), false, null, false, false)))
        .setAggregatorSpecs(QueryRunnerTestHelper.ROWS_COUNT, new LongSumAggregatorFactory("idx", "index"))
        .setLimitSpec(new DefaultLimitSpec(Collections.singletonList(new OrderByColumnSpec("alias", null, StringComparators.ALPHANUMERIC)), 11))
        .setGranularity(QueryRunnerTestHelper.DAY_GRAN)
        .overrideContext(ImmutableMap.of("sortByDimsFirst", true))
        .build();
    List<ResultRow> expectedResults = Arrays.asList(
        makeRow(query, "2011-04-01", "alias", "1", "rows", 1L, "idx", 119L),
        makeRow(query, "2011-04-02", "alias", "1", "rows", 1L, "idx", 126L),
        makeRow(query, "2011-04-01", "alias", "2", "rows", 1L, "idx", 78L),
        makeRow(query, "2011-04-02", "alias", "2", "rows", 1L, "idx", 97L),
        makeRow(query, "2011-04-01", "alias", "3", "rows", 3L, "idx", 2900L),
        makeRow(query, "2011-04-02", "alias", "3", "rows", 3L, "idx", 2505L),
        makeRow(query, "2011-04-01", "alias", "4", "rows", 1L, "idx", 121L),
        makeRow(query, "2011-04-02", "alias", "4", "rows", 1L, "idx", 114L),
        makeRow(query, "2011-04-01", "alias", "5", "rows", 3L, "idx", 2870L),
        makeRow(query, "2011-04-02", "alias", "5", "rows", 3L, "idx", 2447L),
        makeRow(query, "2011-04-01", "alias", "6", "rows", 1L, "idx", 120L));
    Iterable<ResultRow> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
    TestHelper.assertExpectedObjects(expectedResults, results, "lookup-limit");
}
Also used : DefaultLimitSpec(org.apache.druid.query.groupby.orderby.DefaultLimitSpec) HashMap(java.util.HashMap) LongSumAggregatorFactory(org.apache.druid.query.aggregation.LongSumAggregatorFactory) LookupExtractionFn(org.apache.druid.query.lookup.LookupExtractionFn) OrderByColumnSpec(org.apache.druid.query.groupby.orderby.OrderByColumnSpec) MapLookupExtractor(org.apache.druid.query.extraction.MapLookupExtractor) ExtractionDimensionSpec(org.apache.druid.query.dimension.ExtractionDimensionSpec) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test)
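Beyond the lookup, this test combines a DefaultLimitSpec, an alphanumeric ordering, and the sortByDimsFirst context flag, which is why the expected rows interleave the two days for each alias value. The sketch below isolates the ordering and limit part; LimitSpecSketch is an invented class name, the StringComparators import path is assumed, and the comments are my interpretation of the call site.

import java.util.Collections;

import org.apache.druid.query.groupby.orderby.DefaultLimitSpec;
import org.apache.druid.query.groupby.orderby.OrderByColumnSpec;
import org.apache.druid.query.ordering.StringComparators;

// Hypothetical sketch of the ordering and limit used above.
public class LimitSpecSketch {
    public static void main(String[] args) {
        // OrderByColumnSpec(dimension, direction, comparator): a null direction takes the
        // default, and StringComparators.ALPHANUMERIC compares runs of digits numerically
        // rather than lexicographically.
        OrderByColumnSpec orderBy = new OrderByColumnSpec("alias", null, StringComparators.ALPHANUMERIC);

        // DefaultLimitSpec(columns, limit): keep only the first 11 ordered rows, which is
        // why the expected results stop at alias "6" on 2011-04-01.
        DefaultLimitSpec limitSpec = new DefaultLimitSpec(Collections.singletonList(orderBy), 11);

        System.out.println(limitSpec);
    }
}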

Example 60 with ExtractionDimensionSpec

use of org.apache.druid.query.dimension.ExtractionDimensionSpec in project druid by druid-io.

In the class GroupByQueryRunnerTest, method testGroupByWithSimpleRenameRetainMissingNonInjective:

@Test
public void testGroupByWithSimpleRenameRetainMissingNonInjective() {
    // Cannot vectorize due to extraction dimension spec.
    cannotVectorize();
    Map<String, String> map = new HashMap<>();
    map.put("automotive", "automotive0");
    map.put("business", "business0");
    map.put("entertainment", "entertainment0");
    map.put("health", "health0");
    map.put("mezzanine", "mezzanine0");
    map.put("news", "news0");
    map.put("premium", "premium0");
    map.put("technology", "technology0");
    map.put("travel", "travel0");
    GroupByQuery query = makeQueryBuilder()
        .setDataSource(QueryRunnerTestHelper.DATA_SOURCE)
        .setQuerySegmentSpec(QueryRunnerTestHelper.FIRST_TO_THIRD)
        .setDimensions(new ExtractionDimensionSpec("quality", "alias", new LookupExtractionFn(new MapLookupExtractor(map, false), true, null, false, false)))
        .setAggregatorSpecs(QueryRunnerTestHelper.ROWS_COUNT, new LongSumAggregatorFactory("idx", "index"))
        .setGranularity(QueryRunnerTestHelper.DAY_GRAN)
        .build();
    List<ResultRow> expectedResults = Arrays.asList(
        makeRow(query, "2011-04-01", "alias", "automotive0", "rows", 1L, "idx", 135L),
        makeRow(query, "2011-04-01", "alias", "business0", "rows", 1L, "idx", 118L),
        makeRow(query, "2011-04-01", "alias", "entertainment0", "rows", 1L, "idx", 158L),
        makeRow(query, "2011-04-01", "alias", "health0", "rows", 1L, "idx", 120L),
        makeRow(query, "2011-04-01", "alias", "mezzanine0", "rows", 3L, "idx", 2870L),
        makeRow(query, "2011-04-01", "alias", "news0", "rows", 1L, "idx", 121L),
        makeRow(query, "2011-04-01", "alias", "premium0", "rows", 3L, "idx", 2900L),
        makeRow(query, "2011-04-01", "alias", "technology0", "rows", 1L, "idx", 78L),
        makeRow(query, "2011-04-01", "alias", "travel0", "rows", 1L, "idx", 119L),
        makeRow(query, "2011-04-02", "alias", "automotive0", "rows", 1L, "idx", 147L),
        makeRow(query, "2011-04-02", "alias", "business0", "rows", 1L, "idx", 112L),
        makeRow(query, "2011-04-02", "alias", "entertainment0", "rows", 1L, "idx", 166L),
        makeRow(query, "2011-04-02", "alias", "health0", "rows", 1L, "idx", 113L),
        makeRow(query, "2011-04-02", "alias", "mezzanine0", "rows", 3L, "idx", 2447L),
        makeRow(query, "2011-04-02", "alias", "news0", "rows", 1L, "idx", 114L),
        makeRow(query, "2011-04-02", "alias", "premium0", "rows", 3L, "idx", 2505L),
        makeRow(query, "2011-04-02", "alias", "technology0", "rows", 1L, "idx", 97L),
        makeRow(query, "2011-04-02", "alias", "travel0", "rows", 1L, "idx", 126L));
    Iterable<ResultRow> results = GroupByQueryRunnerTestHelper.runQuery(factory, runner, query);
    TestHelper.assertExpectedObjects(expectedResults, results, "non-injective");
}
Also used : LookupExtractionFn(org.apache.druid.query.lookup.LookupExtractionFn) HashMap(java.util.HashMap) LongSumAggregatorFactory(org.apache.druid.query.aggregation.LongSumAggregatorFactory) MapLookupExtractor(org.apache.druid.query.extraction.MapLookupExtractor) ExtractionDimensionSpec(org.apache.druid.query.dimension.ExtractionDimensionSpec) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test)
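The only differences from Example 58 are the second and fourth LookupExtractionFn arguments: retainMissingValue is true, so unmapped values pass through unchanged instead of becoming a replacement string, and injective is false, which is what the "NonInjective" in the test name refers to. A minimal sketch under those assumptions follows; RetainMissingSketch is an invented class name and the comments are my reading of the call site.

import com.google.common.collect.ImmutableMap;

import org.apache.druid.query.extraction.MapLookupExtractor;
import org.apache.druid.query.lookup.LookupExtractionFn;

// Hypothetical sketch of the retain-missing, non-injective lookup used above.
public class RetainMissingSketch {
    public static void main(String[] args) {
        LookupExtractionFn fn = new LookupExtractionFn(
            new MapLookupExtractor(ImmutableMap.of("automotive", "automotive0"), false),
            true,   // retainMissingValue: keys absent from the map keep their original value
            null,   // replaceMissingValueWith: null here, as in the call site above
            false,  // injective: the rename is not declared one-to-one
            false   // optimize
        );

        System.out.println(fn.apply("automotive") + " / " + fn.apply("sports"));
        // expected output: automotive0 / sports
    }
}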

Aggregations

ExtractionDimensionSpec (org.apache.druid.query.dimension.ExtractionDimensionSpec): 87 usages
Test (org.junit.Test): 82 usages
InitializedNullHandlingTest (org.apache.druid.testing.InitializedNullHandlingTest): 62 usages
LookupExtractionFn (org.apache.druid.query.lookup.LookupExtractionFn): 40 usages
RegexDimExtractionFn (org.apache.druid.query.extraction.RegexDimExtractionFn): 32 usages
Result (org.apache.druid.query.Result): 30 usages
TimeFormatExtractionFn (org.apache.druid.query.extraction.TimeFormatExtractionFn): 29 usages
LongSumAggregatorFactory (org.apache.druid.query.aggregation.LongSumAggregatorFactory): 27 usages
DefaultDimensionSpec (org.apache.druid.query.dimension.DefaultDimensionSpec): 26 usages
JavaScriptExtractionFn (org.apache.druid.query.extraction.JavaScriptExtractionFn): 22 usages
SubstringDimExtractionFn (org.apache.druid.query.extraction.SubstringDimExtractionFn): 22 usages
StrlenExtractionFn (org.apache.druid.query.extraction.StrlenExtractionFn): 21 usages
ExtractionFn (org.apache.druid.query.extraction.ExtractionFn): 20 usages
MapLookupExtractor (org.apache.druid.query.extraction.MapLookupExtractor): 20 usages
StringFormatExtractionFn (org.apache.druid.query.extraction.StringFormatExtractionFn): 20 usages
DimExtractionFn (org.apache.druid.query.extraction.DimExtractionFn): 19 usages
SelectorDimFilter (org.apache.druid.query.filter.SelectorDimFilter): 13 usages
CascadeExtractionFn (org.apache.druid.query.extraction.CascadeExtractionFn): 10 usages
SearchQuerySpecDimExtractionFn (org.apache.druid.query.extraction.SearchQuerySpecDimExtractionFn): 10 usages
HashMap (java.util.HashMap): 8 usages