Example 6 with TimeseriesQueryRunnerFactory

Use of io.druid.query.timeseries.TimeseriesQueryRunnerFactory in project druid by druid-io.

From the class IndexMergerV9WithSpatialIndexTest, method testSpatialQueryWithOtherSpatialDim:

@Test
public void testSpatialQueryWithOtherSpatialDim() {
    TimeseriesQuery query = Druids.newTimeseriesQueryBuilder()
        .dataSource("test")
        .granularity(Granularities.ALL)
        .intervals(Arrays.asList(new Interval("2013-01-01/2013-01-07")))
        .filters(new SpatialDimFilter("spatialIsRad", new RadiusBound(new float[] { 0.0f, 0.0f }, 5)))
        .aggregators(Arrays.<AggregatorFactory>asList(new CountAggregatorFactory("rows"), new LongSumAggregatorFactory("val", "val")))
        .build();
    List<Result<TimeseriesResultValue>> expectedResults = Arrays.asList(new Result<>(
        new DateTime("2013-01-01T00:00:00.000Z"),
        new TimeseriesResultValue(ImmutableMap.<String, Object>builder().put("rows", 1L).put("val", 13L).build())
    ));
    try {
        TimeseriesQueryRunnerFactory factory = new TimeseriesQueryRunnerFactory(
            new TimeseriesQueryQueryToolChest(QueryRunnerTestHelper.NoopIntervalChunkingQueryRunnerDecorator()),
            new TimeseriesQueryEngine(),
            QueryRunnerTestHelper.NOOP_QUERYWATCHER
        );
        QueryRunner runner = new FinalizeResultsQueryRunner(factory.createRunner(segment), factory.getToolchest());
        TestHelper.assertExpectedResults(expectedResults, runner.run(query, Maps.newHashMap()));
    } catch (Exception e) {
        throw Throwables.propagate(e);
    }
}
Also used : TimeseriesResultValue(io.druid.query.timeseries.TimeseriesResultValue) TimeseriesQuery(io.druid.query.timeseries.TimeseriesQuery) LongSumAggregatorFactory(io.druid.query.aggregation.LongSumAggregatorFactory) TimeseriesQueryQueryToolChest(io.druid.query.timeseries.TimeseriesQueryQueryToolChest) AggregatorFactory(io.druid.query.aggregation.AggregatorFactory) CountAggregatorFactory(io.druid.query.aggregation.CountAggregatorFactory) DateTime(org.joda.time.DateTime) FinalizeResultsQueryRunner(io.druid.query.FinalizeResultsQueryRunner) QueryRunner(io.druid.query.QueryRunner) IOException(java.io.IOException) Result(io.druid.query.Result) TimeseriesQueryEngine(io.druid.query.timeseries.TimeseriesQueryEngine) SpatialDimFilter(io.druid.query.filter.SpatialDimFilter) TimeseriesQueryRunnerFactory(io.druid.query.timeseries.TimeseriesQueryRunnerFactory) RadiusBound(io.druid.collections.spatial.search.RadiusBound) Interval(org.joda.time.Interval) Test(org.junit.Test)
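
For comparison, SpatialDimFilter is not limited to radius bounds: a rectangular bound can be passed in the same position, assuming the RectangularBound class that ships alongside RadiusBound in io.druid.collections.spatial.search. The sketch below is illustrative only; the coordinates are made up and are not taken from IndexMergerV9WithSpatialIndexTest.

SpatialDimFilter rectFilter = new SpatialDimFilter(
    "spatialIsRad",
    // Illustrative min/max coordinates for the bounding rectangle.
    new RectangularBound(new float[] { 0.0f, 0.0f }, new float[] { 9.0f, 9.0f })
);
TimeseriesQuery rectQuery = Druids.newTimeseriesQueryBuilder()
    .dataSource("test")
    .granularity(Granularities.ALL)
    .intervals(Arrays.asList(new Interval("2013-01-01/2013-01-07")))
    .filters(rectFilter)
    .aggregators(Arrays.<AggregatorFactory>asList(new CountAggregatorFactory("rows"), new LongSumAggregatorFactory("val", "val")))
    .build();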

Example 7 with TimeseriesQueryRunnerFactory

Use of io.druid.query.timeseries.TimeseriesQueryRunnerFactory in project druid by druid-io.

From the class AggregationTestHelper, method createTimeseriesQueryAggregationTestHelper:

public static final AggregationTestHelper createTimeseriesQueryAggregationTestHelper(List<? extends Module> jsonModulesToRegister, TemporaryFolder tempFolder) {
    ObjectMapper mapper = new DefaultObjectMapper();
    TimeseriesQueryQueryToolChest toolchest = new TimeseriesQueryQueryToolChest(QueryRunnerTestHelper.NoopIntervalChunkingQueryRunnerDecorator());
    TimeseriesQueryRunnerFactory factory = new TimeseriesQueryRunnerFactory(toolchest, new TimeseriesQueryEngine(), QueryRunnerTestHelper.NOOP_QUERYWATCHER);
    IndexIO indexIO = new IndexIO(mapper, new ColumnConfig() {

        @Override
        public int columnCacheSizeBytes() {
            return 0;
        }
    });
    return new AggregationTestHelper(mapper, new IndexMerger(mapper, indexIO), indexIO, toolchest, factory, tempFolder, jsonModulesToRegister);
}
Also used : IndexMerger(io.druid.segment.IndexMerger) TimeseriesQueryEngine(io.druid.query.timeseries.TimeseriesQueryEngine) TimeseriesQueryRunnerFactory(io.druid.query.timeseries.TimeseriesQueryRunnerFactory) IndexIO(io.druid.segment.IndexIO) ColumnConfig(io.druid.segment.column.ColumnConfig) DefaultObjectMapper(io.druid.jackson.DefaultObjectMapper) TimeseriesQueryQueryToolChest(io.druid.query.timeseries.TimeseriesQueryQueryToolChest) ObjectMapper(com.fasterxml.jackson.databind.ObjectMapper)
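
A minimal sketch of how this helper might be obtained from a test class, assuming JUnit's TemporaryFolder rule and no extra Jackson modules; the test class name and fields below are hypothetical, not part of AggregationTestHelper.

public class MyTimeseriesAggregationTest {

    @Rule
    public final TemporaryFolder tempFolder = new TemporaryFolder();

    private AggregationTestHelper helper;

    @Before
    public void setUp() {
        // Empty module list: no extra Jackson modules are registered for custom aggregators.
        helper = AggregationTestHelper.createTimeseriesQueryAggregationTestHelper(
            ImmutableList.<Module>of(),
            tempFolder
        );
    }
}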

Example 8 with TimeseriesQueryRunnerFactory

Use of io.druid.query.timeseries.TimeseriesQueryRunnerFactory in project druid by druid-io.

From the class SchemaEvolutionTest, method testNumericEvolutionFiltering:

@Test
public void testNumericEvolutionFiltering() {
    final TimeseriesQueryRunnerFactory factory = QueryRunnerTestHelper.newTimeseriesQueryRunnerFactory();
    // "c1" changes from string(1) -> long(2) -> float(3) -> nonexistent(4)
    // test behavior of filtering
    final TimeseriesQuery query = Druids.newTimeseriesQueryBuilder()
        .dataSource(DATA_SOURCE)
        .intervals("1000/3000")
        .filters(new BoundDimFilter("c1", "9", "11", false, false, null, null, StringComparators.NUMERIC))
        .aggregators(ImmutableList.of(
            new LongSumAggregatorFactory("a", "c1"),
            new DoubleSumAggregatorFactory("b", "c1"),
            new CountAggregatorFactory("c")
        ))
        .build();
    // Only string(1) -- which we can filter but not aggregate
    Assert.assertEquals(timeseriesResult(ImmutableMap.of("a", 0L, "b", 0.0, "c", 2L)), runQuery(query, factory, ImmutableList.of(index1)));
    // Only long(2) -- which we can filter and aggregate
    Assert.assertEquals(timeseriesResult(ImmutableMap.of("a", 19L, "b", 19.0, "c", 2L)), runQuery(query, factory, ImmutableList.of(index2)));
    // Only float(3) -- which we can't filter, but can aggregate
    Assert.assertEquals(timeseriesResult(ImmutableMap.of("a", 19L, "b", 19.100000381469727, "c", 2L)), runQuery(query, factory, ImmutableList.of(index3)));
    // Only nonexistent(4)
    Assert.assertEquals(timeseriesResult(ImmutableMap.of("a", 0L, "b", 0.0, "c", 0L)), runQuery(query, factory, ImmutableList.of(index4)));
    // string(1) + long(2) + float(3) + nonexistent(4)
    Assert.assertEquals(timeseriesResult(ImmutableMap.of("a", 38L, "b", 38.10000038146973, "c", 6L)), runQuery(query, factory, ImmutableList.of(index1, index2, index3, index4)));
}
Also used : TimeseriesQueryRunnerFactory(io.druid.query.timeseries.TimeseriesQueryRunnerFactory) BoundDimFilter(io.druid.query.filter.BoundDimFilter) TimeseriesQuery(io.druid.query.timeseries.TimeseriesQuery) DoubleSumAggregatorFactory(io.druid.query.aggregation.DoubleSumAggregatorFactory) CountAggregatorFactory(io.druid.query.aggregation.CountAggregatorFactory) LongSumAggregatorFactory(io.druid.query.aggregation.LongSumAggregatorFactory) Test(org.junit.Test)
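
The timeseriesResult(...) helper used in the assertions belongs to SchemaEvolutionTest and is not reproduced on this page. A hypothetical version could look like the sketch below: it wraps a map of aggregator values in a single-row timeseries result (the timestamp here is illustrative, not necessarily the one the real helper uses).

private static List<Result<TimeseriesResultValue>> timeseriesResult(final Map<String, Object> map) {
    // One result row whose value map holds the aggregator outputs keyed by name.
    return ImmutableList.of(new Result<>(new DateTime("2000-01-01"), new TimeseriesResultValue(map)));
}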

Example 9 with TimeseriesQueryRunnerFactory

Use of io.druid.query.timeseries.TimeseriesQueryRunnerFactory in project druid by druid-io.

From the class SchemaEvolutionTest, method testHyperUniqueEvolutionTimeseries:

@Test
public void testHyperUniqueEvolutionTimeseries() {
    final TimeseriesQueryRunnerFactory factory = QueryRunnerTestHelper.newTimeseriesQueryRunnerFactory();
    final TimeseriesQuery query = Druids.newTimeseriesQueryBuilder()
        .dataSource(DATA_SOURCE)
        .intervals("1000/3000")
        .aggregators(ImmutableList.<AggregatorFactory>of(new HyperUniquesAggregatorFactory("uniques", "uniques")))
        .build();
    // index1 has no "uniques" column
    Assert.assertEquals(timeseriesResult(ImmutableMap.of("uniques", 0)), runQuery(query, factory, ImmutableList.of(index1)));
    // index1 (no uniques) + index2 and index3 (yes uniques); we should be able to combine
    Assert.assertEquals(timeseriesResult(ImmutableMap.of("uniques", 4.003911343725148d)), runQuery(query, factory, ImmutableList.of(index1, index2, index3)));
}
Also used : TimeseriesQueryRunnerFactory(io.druid.query.timeseries.TimeseriesQueryRunnerFactory) TimeseriesQuery(io.druid.query.timeseries.TimeseriesQuery) HyperUniquesAggregatorFactory(io.druid.query.aggregation.hyperloglog.HyperUniquesAggregatorFactory) AggregatorFactory(io.druid.query.aggregation.AggregatorFactory) CountAggregatorFactory(io.druid.query.aggregation.CountAggregatorFactory) DoubleSumAggregatorFactory(io.druid.query.aggregation.DoubleSumAggregatorFactory) LongSumAggregatorFactory(io.druid.query.aggregation.LongSumAggregatorFactory) Test(org.junit.Test)
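
As a follow-up to the assertions above, individual metrics can also be read straight off a result row. The sketch below is illustrative: it assumes runQuery returns the same List<Result<TimeseriesResultValue>> the assertions compare against, and uses TimeseriesResultValue's getDoubleMetric accessor.

// Illustrative only: pull the HyperLogLog cardinality estimate out of the merged result row.
Result<TimeseriesResultValue> row = runQuery(query, factory, ImmutableList.of(index1, index2, index3)).get(0);
double estimate = row.getValue().getDoubleMetric("uniques");
Assert.assertEquals(4.003911343725148d, estimate, 0.0001);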

Example 10 with TimeseriesQueryRunnerFactory

Use of io.druid.query.timeseries.TimeseriesQueryRunnerFactory in project druid by druid-io.

From the class FilteredAggregatorBenchmark, method setup:

@Setup
public void setup() throws IOException {
    log.info("SETUP CALLED AT " + System.currentTimeMillis());
    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
        ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
    }
    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schema);
    BenchmarkDataGenerator gen = new BenchmarkDataGenerator(schemaInfo.getColumnSchemas(), RNG_SEED, schemaInfo.getDataInterval(), rowsPerSegment);
    incIndex = makeIncIndex(schemaInfo.getAggsArray());
    filter = new OrDimFilter(Arrays.asList(
        new BoundDimFilter("dimSequential", "-1", "-1", true, true, null, null, StringComparators.ALPHANUMERIC),
        new JavaScriptDimFilter("dimSequential", "function(x) { return false }", null, JavaScriptConfig.getEnabledInstance()),
        new RegexDimFilter("dimSequential", "X", null),
        new SearchQueryDimFilter("dimSequential", new ContainsSearchQuerySpec("X", false), null),
        new InDimFilter("dimSequential", Arrays.asList("X"), null)
    ));
    filteredMetrics = new AggregatorFactory[1];
    filteredMetrics[0] = new FilteredAggregatorFactory(new CountAggregatorFactory("rows"), filter);
    incIndexFilteredAgg = makeIncIndex(filteredMetrics);
    inputRows = new ArrayList<>();
    for (int j = 0; j < rowsPerSegment; j++) {
        InputRow row = gen.nextRow();
        if (j % 10000 == 0) {
            log.info(j + " rows generated.");
        }
        incIndex.add(row);
        inputRows.add(row);
    }
    tmpDir = Files.createTempDir();
    log.info("Using temp dir: " + tmpDir.getAbsolutePath());
    indexFile = INDEX_MERGER_V9.persist(incIndex, tmpDir, new IndexSpec());
    qIndex = INDEX_IO.loadIndex(indexFile);
    factory = new TimeseriesQueryRunnerFactory(
        new TimeseriesQueryQueryToolChest(QueryBenchmarkUtil.NoopIntervalChunkingQueryRunnerDecorator()),
        new TimeseriesQueryEngine(),
        QueryBenchmarkUtil.NOOP_QUERYWATCHER
    );
    BenchmarkSchemaInfo basicSchema = BenchmarkSchemas.SCHEMA_MAP.get("basic");
    QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
    List<AggregatorFactory> queryAggs = new ArrayList<>();
    queryAggs.add(filteredMetrics[0]);
    query = Druids.newTimeseriesQueryBuilder()
        .dataSource("blah")
        .granularity(Granularities.ALL)
        .intervals(intervalSpec)
        .aggregators(queryAggs)
        .descending(false)
        .build();
}
Also used : FilteredAggregatorFactory(io.druid.query.aggregation.FilteredAggregatorFactory) RegexDimFilter(io.druid.query.filter.RegexDimFilter) IndexSpec(io.druid.segment.IndexSpec) BoundDimFilter(io.druid.query.filter.BoundDimFilter) ContainsSearchQuerySpec(io.druid.query.search.search.ContainsSearchQuerySpec) BenchmarkDataGenerator(io.druid.benchmark.datagen.BenchmarkDataGenerator) ArrayList(java.util.ArrayList) HyperUniquesSerde(io.druid.query.aggregation.hyperloglog.HyperUniquesSerde) MultipleIntervalSegmentSpec(io.druid.query.spec.MultipleIntervalSegmentSpec) TimeseriesQueryQueryToolChest(io.druid.query.timeseries.TimeseriesQueryQueryToolChest) CountAggregatorFactory(io.druid.query.aggregation.CountAggregatorFactory) AggregatorFactory(io.druid.query.aggregation.AggregatorFactory) TimeseriesQueryEngine(io.druid.query.timeseries.TimeseriesQueryEngine) TimeseriesQueryRunnerFactory(io.druid.query.timeseries.TimeseriesQueryRunnerFactory) BenchmarkSchemaInfo(io.druid.benchmark.datagen.BenchmarkSchemaInfo) OrDimFilter(io.druid.query.filter.OrDimFilter) InDimFilter(io.druid.query.filter.InDimFilter) InputRow(io.druid.data.input.InputRow) SearchQueryDimFilter(io.druid.query.filter.SearchQueryDimFilter) JavaScriptDimFilter(io.druid.query.filter.JavaScriptDimFilter) QuerySegmentSpec(io.druid.query.spec.QuerySegmentSpec) Setup(org.openjdk.jmh.annotations.Setup)
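
setup() builds the factory and the query but does not run anything itself; a benchmark method would typically follow the runner pattern from Example 6. The sketch below is hypothetical: the method name, the "qIndex" segment identifier, and the use of the Sequences/Lists/Maps utilities are assumptions, not code from FilteredAggregatorBenchmark.

@Benchmark
public void queryFilteredAggQueryableIndex(Blackhole blackhole) {
    // Wrap the per-segment runner so finalized results come back, as in Example 6.
    QueryRunner runner = new FinalizeResultsQueryRunner(
        factory.createRunner(new QueryableIndexSegment("qIndex", qIndex)),
        factory.getToolchest()
    );
    List<Result<TimeseriesResultValue>> results = Sequences.toList(
        runner.run(query, Maps.newHashMap()),
        Lists.<Result<TimeseriesResultValue>>newArrayList()
    );
    blackhole.consume(results);
}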

Aggregations

TimeseriesQueryRunnerFactory (io.druid.query.timeseries.TimeseriesQueryRunnerFactory): 19
TimeseriesQuery (io.druid.query.timeseries.TimeseriesQuery): 16
TimeseriesQueryEngine (io.druid.query.timeseries.TimeseriesQueryEngine): 16
TimeseriesQueryQueryToolChest (io.druid.query.timeseries.TimeseriesQueryQueryToolChest): 16
CountAggregatorFactory (io.druid.query.aggregation.CountAggregatorFactory): 15
LongSumAggregatorFactory (io.druid.query.aggregation.LongSumAggregatorFactory): 15
Test (org.junit.Test): 15
AggregatorFactory (io.druid.query.aggregation.AggregatorFactory): 13
FinalizeResultsQueryRunner (io.druid.query.FinalizeResultsQueryRunner): 12
Result (io.druid.query.Result): 12
TimeseriesResultValue (io.druid.query.timeseries.TimeseriesResultValue): 12
Interval (org.joda.time.Interval): 12
QueryRunner (io.druid.query.QueryRunner): 10
SpatialDimFilter (io.druid.query.filter.SpatialDimFilter): 9
IOException (java.io.IOException): 9
DateTime (org.joda.time.DateTime): 9
DoubleSumAggregatorFactory (io.druid.query.aggregation.DoubleSumAggregatorFactory): 6
FilteredAggregatorFactory (io.druid.query.aggregation.FilteredAggregatorFactory): 6
HashMap (java.util.HashMap): 6
RadiusBound (io.druid.collections.spatial.search.RadiusBound): 5