
Example 16 with TimeseriesQueryQueryToolChest

Use of io.druid.query.timeseries.TimeseriesQueryQueryToolChest in project druid by druid-io.

From the class AggregationTestHelper, the method createTimeseriesQueryAggregationTestHelper. It wires a DefaultObjectMapper, an IndexIO with a zero-byte column cache, an IndexMerger, and a timeseries tool chest and runner factory into an AggregationTestHelper for timeseries aggregation tests.

public static final AggregationTestHelper createTimeseriesQueryAggregationTestHelper(List<? extends Module> jsonModulesToRegister, TemporaryFolder tempFolder) {
    ObjectMapper mapper = new DefaultObjectMapper();
    TimeseriesQueryQueryToolChest toolchest = new TimeseriesQueryQueryToolChest(QueryRunnerTestHelper.NoopIntervalChunkingQueryRunnerDecorator());
    TimeseriesQueryRunnerFactory factory = new TimeseriesQueryRunnerFactory(toolchest, new TimeseriesQueryEngine(), QueryRunnerTestHelper.NOOP_QUERYWATCHER);
    IndexIO indexIO = new IndexIO(mapper, new ColumnConfig() {

        @Override
        public int columnCacheSizeBytes() {
            return 0;
        }
    });
    return new AggregationTestHelper(mapper, new IndexMerger(mapper, indexIO), indexIO, toolchest, factory, tempFolder, jsonModulesToRegister);
}
Also used : IndexMerger(io.druid.segment.IndexMerger) TimeseriesQueryEngine(io.druid.query.timeseries.TimeseriesQueryEngine) TimeseriesQueryRunnerFactory(io.druid.query.timeseries.TimeseriesQueryRunnerFactory) IndexIO(io.druid.segment.IndexIO) ColumnConfig(io.druid.segment.column.ColumnConfig) DefaultObjectMapper(io.druid.jackson.DefaultObjectMapper) TimeseriesQueryQueryToolChest(io.druid.query.timeseries.TimeseriesQueryQueryToolChest) ObjectMapper(com.fasterxml.jackson.databind.ObjectMapper)
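
A minimal usage sketch for the helper factory above, written against the signature shown (List<? extends Module>, TemporaryFolder). The test class name, the empty module list, and the import path for AggregationTestHelper are assumptions for illustration, not code from the project.

import java.util.Collections;

import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TemporaryFolder;

import com.fasterxml.jackson.databind.Module;

// Assumed location of the helper within the project's test sources.
import io.druid.query.aggregation.AggregationTestHelper;

public class MyTimeseriesAggregationTest {

    // JUnit creates and cleans up the temporary segment directory.
    @Rule
    public final TemporaryFolder tempFolder = new TemporaryFolder();

    private AggregationTestHelper helper;

    @Before
    public void setUp() {
        // No extra Jackson modules are registered in this sketch, so pass an empty list.
        helper = AggregationTestHelper.createTimeseriesQueryAggregationTestHelper(Collections.<Module>emptyList(), tempFolder);
    }
}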

Example 17 with TimeseriesQueryQueryToolChest

Use of io.druid.query.timeseries.TimeseriesQueryQueryToolChest in project druid by druid-io.

From the class RetryQueryRunnerTest, the method testRunWithMissingSegments. With zero retries configured and partial results allowed, the inner runner records a missing segment in the response context and the RetryQueryRunner returns an empty sequence instead of failing.

@Test
public void testRunWithMissingSegments() throws Exception {
    Map<String, Object> context = new MapMaker().makeMap();
    context.put(Result.MISSING_SEGMENTS_KEY, Lists.newArrayList());
    RetryQueryRunner<Result<TimeseriesResultValue>> runner = new RetryQueryRunner<>(new QueryRunner<Result<TimeseriesResultValue>>() {

        @Override
        public Sequence<Result<TimeseriesResultValue>> run(Query query, Map context) {
            ((List) context.get(Result.MISSING_SEGMENTS_KEY)).add(new SegmentDescriptor(new Interval(178888, 1999999), "test", 1));
            return Sequences.empty();
        }
    }, (QueryToolChest) new TimeseriesQueryQueryToolChest(QueryRunnerTestHelper.NoopIntervalChunkingQueryRunnerDecorator()), new RetryQueryRunnerConfig() {

        @Override
        public int getNumTries() {
            return 0;
        }

        @Override
        public boolean isReturnPartialResults() {
            return true;
        }
    }, jsonMapper);
    Iterable<Result<TimeseriesResultValue>> actualResults = Sequences.toList(runner.run(query, context), Lists.<Result<TimeseriesResultValue>>newArrayList());
    Assert.assertTrue("Should have one entry in the list of missing segments", ((List) context.get(Result.MISSING_SEGMENTS_KEY)).size() == 1);
    Assert.assertTrue("Should return an empty sequence as a result", ((List) actualResults).size() == 0);
}
Also used : TimeseriesResultValue(io.druid.query.timeseries.TimeseriesResultValue) TimeseriesQuery(io.druid.query.timeseries.TimeseriesQuery) MapMaker(com.google.common.collect.MapMaker) Sequence(io.druid.java.util.common.guava.Sequence) TimeseriesQueryQueryToolChest(io.druid.query.timeseries.TimeseriesQueryQueryToolChest) List(java.util.List) Map(java.util.Map) Interval(org.joda.time.Interval) Test(org.junit.Test)
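
The query field used by this test is defined elsewhere in RetryQueryRunnerTest. Below is a hedged sketch of a comparable timeseries query, built with the same Druids builder, MultipleIntervalSegmentSpec, Granularities and CountAggregatorFactory calls that appear in Example 20 further down; the data source name, interval and aggregator name are placeholders.

// Placeholder values throughout; only the builder and spec calls are taken from the examples on this page.
QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(new Interval("2012-01-01/2012-01-02")));
TimeseriesQuery query = Druids.newTimeseriesQueryBuilder().dataSource("testDatasource").granularity(Granularities.ALL).intervals(intervalSpec).aggregators(Arrays.<AggregatorFactory>asList(new CountAggregatorFactory("rows"))).descending(false).build();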

Example 18 with TimeseriesQueryQueryToolChest

Use of io.druid.query.timeseries.TimeseriesQueryQueryToolChest in project druid by druid-io.

From the class RetryQueryRunnerTest, the method testException. The inner runner reports a missing segment on every attempt and partial results are not returned, so the RetryQueryRunner is expected to throw SegmentMissingException.

@Test(expected = SegmentMissingException.class)
public void testException() throws Exception {
    Map<String, Object> context = new MapMaker().makeMap();
    context.put(Result.MISSING_SEGMENTS_KEY, Lists.newArrayList());
    RetryQueryRunner<Result<TimeseriesResultValue>> runner = new RetryQueryRunner<>(new QueryRunner<Result<TimeseriesResultValue>>() {

        @Override
        public Sequence<Result<TimeseriesResultValue>> run(Query<Result<TimeseriesResultValue>> query, Map<String, Object> context) {
            ((List) context.get(Result.MISSING_SEGMENTS_KEY)).add(new SegmentDescriptor(new Interval(178888, 1999999), "test", 1));
            return Sequences.empty();
        }
    }, (QueryToolChest) new TimeseriesQueryQueryToolChest(QueryRunnerTestHelper.NoopIntervalChunkingQueryRunnerDecorator()), new RetryQueryRunnerConfig() {

        private int numTries = 1;

        private boolean returnPartialResults = false;

        public int getNumTries() {
            return numTries;
        }

        public boolean returnPartialResults() {
            return returnPartialResults;
        }
    }, jsonMapper);
    Iterable<Result<TimeseriesResultValue>> actualResults = Sequences.toList(runner.run(query, context), Lists.<Result<TimeseriesResultValue>>newArrayList());
    Assert.assertTrue("Should have one entry in the list of missing segments", ((List) context.get(Result.MISSING_SEGMENTS_KEY)).size() == 1);
}
Also used : TimeseriesResultValue(io.druid.query.timeseries.TimeseriesResultValue) MapMaker(com.google.common.collect.MapMaker) Sequence(io.druid.java.util.common.guava.Sequence) TimeseriesQueryQueryToolChest(io.druid.query.timeseries.TimeseriesQueryQueryToolChest) List(java.util.List) Interval(org.joda.time.Interval) Test(org.junit.Test)
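
Note that returnPartialResults() in the anonymous config above carries no @Override and does not match the isReturnPartialResults() accessor that Example 17 overrides, so it never replaces the base configuration's accessor; the expected SegmentMissingException is consistent with partial results being disallowed by default. A config that overrides both accessors explicitly would look like the following sketch (the return values are illustrative):

RetryQueryRunnerConfig config = new RetryQueryRunnerConfig() {

    @Override
    public int getNumTries() {
        return 1;
    }

    @Override
    public boolean isReturnPartialResults() {
        return false;
    }
};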

Example 19 with TimeseriesQueryQueryToolChest

Use of io.druid.query.timeseries.TimeseriesQueryQueryToolChest in project druid by druid-io.

From the class RetryQueryRunnerTest, the method testRetry. The inner runner reports a missing segment on the first attempt and succeeds on the retry, so the test expects exactly one result and an empty missing-segments list afterwards.

@Test
public void testRetry() throws Exception {
    Map<String, Object> context = new MapMaker().makeMap();
    context.put("count", 0);
    context.put(Result.MISSING_SEGMENTS_KEY, Lists.newArrayList());
    RetryQueryRunner<Result<TimeseriesResultValue>> runner = new RetryQueryRunner<>(new QueryRunner<Result<TimeseriesResultValue>>() {

        @Override
        public Sequence<Result<TimeseriesResultValue>> run(Query<Result<TimeseriesResultValue>> query, Map<String, Object> context) {
            if ((int) context.get("count") == 0) {
                ((List) context.get(Result.MISSING_SEGMENTS_KEY)).add(new SegmentDescriptor(new Interval(178888, 1999999), "test", 1));
                context.put("count", 1);
                return Sequences.empty();
            } else {
                return Sequences.simple(Arrays.asList(new Result<>(new DateTime(), new TimeseriesResultValue(Maps.<String, Object>newHashMap()))));
            }
        }
    }, (QueryToolChest) new TimeseriesQueryQueryToolChest(QueryRunnerTestHelper.NoopIntervalChunkingQueryRunnerDecorator()), new RetryQueryRunnerConfig() {

        private int numTries = 1;

        private boolean returnPartialResults = true;

        public int getNumTries() {
            return numTries;
        }

        public boolean returnPartialResults() {
            return returnPartialResults;
        }
    }, jsonMapper);
    Iterable<Result<TimeseriesResultValue>> actualResults = Sequences.toList(runner.run(query, context), Lists.<Result<TimeseriesResultValue>>newArrayList());
    Assert.assertTrue("Should return a list with one element", ((List) actualResults).size() == 1);
    Assert.assertTrue("Should have nothing in missingSegment list", ((List) context.get(Result.MISSING_SEGMENTS_KEY)).size() == 0);
}
Also used : TimeseriesResultValue(io.druid.query.timeseries.TimeseriesResultValue) MapMaker(com.google.common.collect.MapMaker) Sequence(io.druid.java.util.common.guava.Sequence) TimeseriesQueryQueryToolChest(io.druid.query.timeseries.TimeseriesQueryQueryToolChest) DateTime(org.joda.time.DateTime) List(java.util.List) Interval(org.joda.time.Interval) Test(org.junit.Test)
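
Taken together, Examples 17 through 19 pin down the retry contract: run the delegate once, inspect the missing-segments list in the response context, retry up to getNumTries() times, and then either return (possibly partial) results or fail with SegmentMissingException. The following schematic sketch mirrors that observed behavior in plain Java; it is not the actual RetryQueryRunner source, and the missingSegmentsKey parameter stands in for Result.MISSING_SEGMENTS_KEY.

// Schematic only: uses java.util plus java.util.function.Function, no Druid types.
public static <T> List<T> runWithRetries(Function<Map<String, Object>, List<T>> runOnce, Map<String, Object> context, String missingSegmentsKey, int numTries, boolean returnPartialResults) {
    // First attempt always happens.
    List<T> results = runOnce.apply(context);
    for (int i = 0; i < numTries; i++) {
        List<?> missing = (List<?>) context.get(missingSegmentsKey);
        if (missing == null || missing.isEmpty()) {
            // Nothing was reported missing, so no retry is needed.
            break;
        }
        // Reset the list before retrying, as Example 19 implies (it ends up empty on success).
        context.put(missingSegmentsKey, new ArrayList<Object>());
        results = runOnce.apply(context);
    }
    List<?> missing = (List<?>) context.get(missingSegmentsKey);
    if (missing != null && !missing.isEmpty() && !returnPartialResults) {
        // Stands in for SegmentMissingException in the real runner (Example 18).
        throw new IllegalStateException("segments still missing after retries");
    }
    // Possibly partial, as in Example 17.
    return results;
}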

Example 20 with TimeseriesQueryQueryToolChest

Use of io.druid.query.timeseries.TimeseriesQueryQueryToolChest in project druid by druid-io.

From the class FilteredAggregatorBenchmark, the method setup. It registers the hyperUnique serde if necessary, generates benchmark rows into an incremental index, persists and reloads the segment, and prepares a TimeseriesQueryRunnerFactory together with a timeseries query whose only aggregator is a FilteredAggregatorFactory wrapping a count.

@Setup
public void setup() throws IOException {
    log.info("SETUP CALLED AT " + System.currentTimeMillis());
    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
        ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
    }
    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schema);
    BenchmarkDataGenerator gen = new BenchmarkDataGenerator(schemaInfo.getColumnSchemas(), RNG_SEED, schemaInfo.getDataInterval(), rowsPerSegment);
    incIndex = makeIncIndex(schemaInfo.getAggsArray());
    filter = new OrDimFilter(Arrays.asList(new BoundDimFilter("dimSequential", "-1", "-1", true, true, null, null, StringComparators.ALPHANUMERIC), new JavaScriptDimFilter("dimSequential", "function(x) { return false }", null, JavaScriptConfig.getEnabledInstance()), new RegexDimFilter("dimSequential", "X", null), new SearchQueryDimFilter("dimSequential", new ContainsSearchQuerySpec("X", false), null), new InDimFilter("dimSequential", Arrays.asList("X"), null)));
    filteredMetrics = new AggregatorFactory[1];
    filteredMetrics[0] = new FilteredAggregatorFactory(new CountAggregatorFactory("rows"), filter);
    incIndexFilteredAgg = makeIncIndex(filteredMetrics);
    inputRows = new ArrayList<>();
    for (int j = 0; j < rowsPerSegment; j++) {
        InputRow row = gen.nextRow();
        if (j % 10000 == 0) {
            log.info(j + " rows generated.");
        }
        incIndex.add(row);
        inputRows.add(row);
    }
    tmpDir = Files.createTempDir();
    log.info("Using temp dir: " + tmpDir.getAbsolutePath());
    indexFile = INDEX_MERGER_V9.persist(incIndex, tmpDir, new IndexSpec());
    qIndex = INDEX_IO.loadIndex(indexFile);
    factory = new TimeseriesQueryRunnerFactory(new TimeseriesQueryQueryToolChest(QueryBenchmarkUtil.NoopIntervalChunkingQueryRunnerDecorator()), new TimeseriesQueryEngine(), QueryBenchmarkUtil.NOOP_QUERYWATCHER);
    BenchmarkSchemaInfo basicSchema = BenchmarkSchemas.SCHEMA_MAP.get("basic");
    QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
    List<AggregatorFactory> queryAggs = new ArrayList<>();
    queryAggs.add(filteredMetrics[0]);
    query = Druids.newTimeseriesQueryBuilder().dataSource("blah").granularity(Granularities.ALL).intervals(intervalSpec).aggregators(queryAggs).descending(false).build();
}
Also used : FilteredAggregatorFactory(io.druid.query.aggregation.FilteredAggregatorFactory) RegexDimFilter(io.druid.query.filter.RegexDimFilter) IndexSpec(io.druid.segment.IndexSpec) BoundDimFilter(io.druid.query.filter.BoundDimFilter) ContainsSearchQuerySpec(io.druid.query.search.search.ContainsSearchQuerySpec) BenchmarkDataGenerator(io.druid.benchmark.datagen.BenchmarkDataGenerator) ArrayList(java.util.ArrayList) HyperUniquesSerde(io.druid.query.aggregation.hyperloglog.HyperUniquesSerde) MultipleIntervalSegmentSpec(io.druid.query.spec.MultipleIntervalSegmentSpec) TimeseriesQueryQueryToolChest(io.druid.query.timeseries.TimeseriesQueryQueryToolChest) CountAggregatorFactory(io.druid.query.aggregation.CountAggregatorFactory) AggregatorFactory(io.druid.query.aggregation.AggregatorFactory) FilteredAggregatorFactory(io.druid.query.aggregation.FilteredAggregatorFactory) TimeseriesQueryEngine(io.druid.query.timeseries.TimeseriesQueryEngine) TimeseriesQueryRunnerFactory(io.druid.query.timeseries.TimeseriesQueryRunnerFactory) CountAggregatorFactory(io.druid.query.aggregation.CountAggregatorFactory) BenchmarkSchemaInfo(io.druid.benchmark.datagen.BenchmarkSchemaInfo) OrDimFilter(io.druid.query.filter.OrDimFilter) InDimFilter(io.druid.query.filter.InDimFilter) InputRow(io.druid.data.input.InputRow) SearchQueryDimFilter(io.druid.query.filter.SearchQueryDimFilter) JavaScriptDimFilter(io.druid.query.filter.JavaScriptDimFilter) QuerySegmentSpec(io.druid.query.spec.QuerySegmentSpec) Setup(org.openjdk.jmh.annotations.Setup)
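
A hedged sketch of how the factory and query prepared in setup() might be exercised in a benchmark method. The single-segment runner and the QueryableIndexSegment(String, QueryableIndex) constructor are assumptions about this Druid version's APIs, not code taken from FilteredAggregatorBenchmark; the field names factory, query and qIndex refer to the setup above.

// Runs the prepared timeseries query against the single persisted segment; the segment identifier string is illustrative.
QueryRunner<Result<TimeseriesResultValue>> runner = factory.createRunner(new QueryableIndexSegment("qIndex", qIndex));
List<Result<TimeseriesResultValue>> results = Sequences.toList(runner.run(query, new HashMap<String, Object>()), Lists.<Result<TimeseriesResultValue>>newArrayList());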

Aggregations

TimeseriesQueryQueryToolChest (io.druid.query.timeseries.TimeseriesQueryQueryToolChest): 29
Test (org.junit.Test): 25
Interval (org.joda.time.Interval): 24
FinalizeResultsQueryRunner (io.druid.query.FinalizeResultsQueryRunner): 19
TimeseriesResultValue (io.druid.query.timeseries.TimeseriesResultValue): 19
DateTime (org.joda.time.DateTime): 19
QueryRunner (io.druid.query.QueryRunner): 17
TimeseriesQuery (io.druid.query.timeseries.TimeseriesQuery): 16
TimeseriesQueryEngine (io.druid.query.timeseries.TimeseriesQueryEngine): 16
TimeseriesQueryRunnerFactory (io.druid.query.timeseries.TimeseriesQueryRunnerFactory): 16
Result (io.druid.query.Result): 13
CountAggregatorFactory (io.druid.query.aggregation.CountAggregatorFactory): 13
LongSumAggregatorFactory (io.druid.query.aggregation.LongSumAggregatorFactory): 13
AggregatorFactory (io.druid.query.aggregation.AggregatorFactory): 12
IOException (java.io.IOException): 10
SpatialDimFilter (io.druid.query.filter.SpatialDimFilter): 9
HashMap (java.util.HashMap): 9
List (java.util.List): 9
ArrayList (java.util.ArrayList): 8
Druids (io.druid.query.Druids): 7