
Example 91 with Sequence

use of org.apache.druid.java.util.common.guava.Sequence in project druid by druid-io.

the class HashJoinSegmentStorageAdapter method makeCursors.

@Override
public Sequence<Cursor> makeCursors(
    @Nullable final Filter filter,
    @Nonnull final Interval interval,
    @Nonnull final VirtualColumns virtualColumns,
    @Nonnull final Granularity gran,
    final boolean descending,
    @Nullable final QueryMetrics<?> queryMetrics
) {
    final Filter combinedFilter = baseFilterAnd(filter);
    if (clauses.isEmpty()) {
        return baseAdapter.makeCursors(combinedFilter, interval, virtualColumns, gran, descending, queryMetrics);
    }
    // Filter pre-analysis key implied by the call to "makeCursors". We need to sanity-check that it matches
    // the actual pre-analysis that was done. Note: we can't infer a rewrite config from the "makeCursors" call
    // (it requires access to the query context), so we skip sanity-checking the rewrite config by re-using the
    // one present in the cached key.
    final JoinFilterPreAnalysisKey keyIn = new JoinFilterPreAnalysisKey(
        joinFilterPreAnalysis.getKey().getRewriteConfig(),
        clauses,
        virtualColumns,
        combinedFilter
    );
    final JoinFilterPreAnalysisKey keyCached = joinFilterPreAnalysis.getKey();
    if (!keyIn.equals(keyCached)) {
        // It is a bug if this happens. The implied key and the cached key should always match.
        throw new ISE("Pre-analysis mismatch, cannot execute query");
    }
    final List<VirtualColumn> preJoinVirtualColumns = new ArrayList<>();
    final List<VirtualColumn> postJoinVirtualColumns = new ArrayList<>();
    determineBaseColumnsWithPreAndPostJoinVirtualColumns(virtualColumns, preJoinVirtualColumns, postJoinVirtualColumns);
    // Merge the base-table filter specified by the user with the base-table filter pushed down from the join.
    JoinFilterSplit joinFilterSplit = JoinFilterAnalyzer.splitFilter(joinFilterPreAnalysis, baseFilter);
    preJoinVirtualColumns.addAll(joinFilterSplit.getPushDownVirtualColumns());
    final Sequence<Cursor> baseCursorSequence = baseAdapter.makeCursors(
        joinFilterSplit.getBaseTableFilter().orElse(null),
        interval,
        VirtualColumns.create(preJoinVirtualColumns),
        gran,
        descending,
        queryMetrics
    );
    Closer joinablesCloser = Closer.create();
    return Sequences.<Cursor, Cursor>map(baseCursorSequence, cursor -> {
        assert cursor != null;
        Cursor retVal = cursor;
        for (JoinableClause clause : clauses) {
            retVal = HashJoinEngine.makeJoinCursor(retVal, clause, descending, joinablesCloser);
        }
        return PostJoinCursor.wrap(retVal, VirtualColumns.create(postJoinVirtualColumns), joinFilterSplit.getJoinTableFilter().orElse(null));
    }).withBaggage(joinablesCloser);
}
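The sequence returned here is lazy: the join cursors are created only as the caller consumes the sequence, and withBaggage ties joinablesCloser to that consumption, closing it when iteration completes or fails. A minimal consumption sketch, assuming a StorageAdapter named adapter (hypothetical here) and only the Sequence API used above:

// Counting rows drives the whole sequence; when accumulate returns (or
// throws), the registered baggage (joinablesCloser) is closed automatically.
Sequence<Cursor> cursors = adapter.makeCursors(
    null, Intervals.ETERNITY, VirtualColumns.EMPTY, Granularities.ALL, false, null);
int rows = cursors.accumulate(0, (count, cursor) -> {
    while (!cursor.isDone()) {
        count++;
        cursor.advance();
    }
    return count;
});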
Also used : Closer(org.apache.druid.java.util.common.io.Closer) Indexed(org.apache.druid.segment.data.Indexed) Arrays(java.util.Arrays) Granularity(org.apache.druid.java.util.common.granularity.Granularity) QueryMetrics(org.apache.druid.query.QueryMetrics) Metadata(org.apache.druid.segment.Metadata) StorageAdapter(org.apache.druid.segment.StorageAdapter) ArrayList(java.util.ArrayList) JoinFilterSplit(org.apache.druid.segment.join.filter.JoinFilterSplit) HashSet(java.util.HashSet) VectorCursor(org.apache.druid.segment.vector.VectorCursor) Interval(org.joda.time.Interval) Lists(com.google.common.collect.Lists) ListIndexed(org.apache.druid.segment.data.ListIndexed) JoinFilterPreAnalysisKey(org.apache.druid.segment.join.filter.JoinFilterPreAnalysisKey) Nonnull(javax.annotation.Nonnull) Sequences(org.apache.druid.java.util.common.guava.Sequences) Nullable(javax.annotation.Nullable) LinkedHashSet(java.util.LinkedHashSet) Sequence(org.apache.druid.java.util.common.guava.Sequence) VirtualColumns(org.apache.druid.segment.VirtualColumns) VirtualColumn(org.apache.druid.segment.VirtualColumn) DateTime(org.joda.time.DateTime) Set(java.util.Set) ISE(org.apache.druid.java.util.common.ISE) JoinFilterPreAnalysis(org.apache.druid.segment.join.filter.JoinFilterPreAnalysis) List(java.util.List) Cursor(org.apache.druid.segment.Cursor) ColumnCapabilities(org.apache.druid.segment.column.ColumnCapabilities) Optional(java.util.Optional) JoinFilterAnalyzer(org.apache.druid.segment.join.filter.JoinFilterAnalyzer) Filters(org.apache.druid.segment.filter.Filters) Filter(org.apache.druid.query.filter.Filter)

Example 92 with Sequence

use of org.apache.druid.java.util.common.guava.Sequence in project druid by druid-io.

the class ExpressionSelectorsTest method test_incrementalIndexStringSelector.

@Test
public void test_incrementalIndexStringSelector() throws IndexSizeExceededException {
    // This test covers a regression caused by ColumnCapabilities.isDictionaryEncoded not matching the value of
    // DimensionSelector.nameLookupPossibleInAdvance in the indexers of an IncrementalIndex, which resulted in an
    // exception when trying to make an optimized string expression selector that was not appropriate for the
    // underlying dimension selector.
    // This occurred during schemaless ingestion with sparse dimension values and no explicit null rows; those
    // conditions are replicated by this test. See https://github.com/apache/druid/pull/10248 for details.
    IncrementalIndexSchema schema = new IncrementalIndexSchema(
        0,
        new TimestampSpec("time", "millis", DateTimes.nowUtc()),
        Granularities.NONE,
        VirtualColumns.EMPTY,
        DimensionsSpec.EMPTY,
        new AggregatorFactory[]{new CountAggregatorFactory("count")},
        true
    );
    IncrementalIndex index = new OnheapIncrementalIndex.Builder().setMaxRowCount(100).setIndexSchema(schema).build();
    index.add(new MapBasedInputRow(DateTimes.nowUtc().getMillis(), ImmutableList.of("x"), ImmutableMap.of("x", "foo")));
    index.add(new MapBasedInputRow(DateTimes.nowUtc().plusMillis(1000).getMillis(), ImmutableList.of("y"), ImmutableMap.of("y", "foo")));
    IncrementalIndexStorageAdapter adapter = new IncrementalIndexStorageAdapter(index);
    Sequence<Cursor> cursors = adapter.makeCursors(null, Intervals.ETERNITY, VirtualColumns.EMPTY, Granularities.ALL, false, null);
    int rowsProcessed = cursors.map(cursor -> {
        DimensionSelector xExprSelector = ExpressionSelectors.makeDimensionSelector(cursor.getColumnSelectorFactory(), Parser.parse("concat(x, 'foo')", ExprMacroTable.nil()), null);
        DimensionSelector yExprSelector = ExpressionSelectors.makeDimensionSelector(cursor.getColumnSelectorFactory(), Parser.parse("concat(y, 'foo')", ExprMacroTable.nil()), null);
        int rowCount = 0;
        while (!cursor.isDone()) {
            Object x = xExprSelector.getObject();
            Object y = yExprSelector.getObject();
            List<String> expectedFoo = Collections.singletonList("foofoo");
            List<String> expectedNull = NullHandling.replaceWithDefault() ? Collections.singletonList("foo") : Collections.singletonList(null);
            if (rowCount == 0) {
                Assert.assertEquals(expectedFoo, x);
                Assert.assertEquals(expectedNull, y);
            } else {
                Assert.assertEquals(expectedNull, x);
                Assert.assertEquals(expectedFoo, y);
            }
            rowCount++;
            cursor.advance();
        }
        return rowCount;
    }).accumulate(0, (acc, in) -> acc + in);
    Assert.assertEquals(2, rowsProcessed);
}
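The map-then-accumulate idiom above folds over a lazy Sequence without materializing it. A minimal, self-contained sketch of the same idiom, using Sequences.simple with illustrative values:

// map transforms each element lazily; accumulate folds the results.
Sequence<Integer> lengths = Sequences.simple(ImmutableList.of("foo", "barbaz"))
    .map(String::length);
int total = lengths.accumulate(0, (acc, len) -> acc + len);
Assert.assertEquals(9, total);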
Also used : SegmentGenerator(org.apache.druid.segment.generator.SegmentGenerator) ColumnValueSelector(org.apache.druid.segment.ColumnValueSelector) TimestampSpec(org.apache.druid.data.input.impl.TimestampSpec) StorageAdapter(org.apache.druid.segment.StorageAdapter) DefaultDimensionSpec(org.apache.druid.query.dimension.DefaultDimensionSpec) GeneratorBasicSchemas(org.apache.druid.segment.generator.GeneratorBasicSchemas) ColumnSelectorFactory(org.apache.druid.segment.ColumnSelectorFactory) IncrementalIndexStorageAdapter(org.apache.druid.segment.incremental.IncrementalIndexStorageAdapter) Expr(org.apache.druid.math.expr.Expr) DateTimes(org.apache.druid.java.util.common.DateTimes) Sequence(org.apache.druid.java.util.common.guava.Sequence) AfterClass(org.junit.AfterClass) ImmutableMap(com.google.common.collect.ImmutableMap) Closer(org.apache.druid.java.util.common.io.Closer) AggregatorFactory(org.apache.druid.query.aggregation.AggregatorFactory) QueryableIndex(org.apache.druid.segment.QueryableIndex) TestExprMacroTable(org.apache.druid.query.expression.TestExprMacroTable) ExprEval(org.apache.druid.math.expr.ExprEval) BaseSingleValueDimensionSelector(org.apache.druid.segment.BaseSingleValueDimensionSelector) TestObjectColumnSelector(org.apache.druid.segment.TestObjectColumnSelector) IncrementalIndexSchema(org.apache.druid.segment.incremental.IncrementalIndexSchema) ExprMacroTable(org.apache.druid.math.expr.ExprMacroTable) IndexSizeExceededException(org.apache.druid.segment.incremental.IndexSizeExceededException) List(java.util.List) LinearShardSpec(org.apache.druid.timeline.partition.LinearShardSpec) DataSegment(org.apache.druid.timeline.DataSegment) ColumnCapabilities(org.apache.druid.segment.column.ColumnCapabilities) BeforeClass(org.junit.BeforeClass) Intervals(org.apache.druid.java.util.common.Intervals) RuntimeShapeInspector(org.apache.druid.query.monomorphicprocessing.RuntimeShapeInspector) Supplier(com.google.common.base.Supplier) MapBasedInputRow(org.apache.druid.data.input.MapBasedInputRow) Parser(org.apache.druid.math.expr.Parser) ArrayList(java.util.ArrayList) ImmutableList(com.google.common.collect.ImmutableList) IncrementalIndex(org.apache.druid.segment.incremental.IncrementalIndex) SettableSupplier(org.apache.druid.common.guava.SettableSupplier) DimensionSelector(org.apache.druid.segment.DimensionSelector) OnheapIncrementalIndex(org.apache.druid.segment.incremental.OnheapIncrementalIndex) CountAggregatorFactory(org.apache.druid.query.aggregation.CountAggregatorFactory) QueryableIndexStorageAdapter(org.apache.druid.segment.QueryableIndexStorageAdapter) VirtualColumns(org.apache.druid.segment.VirtualColumns) GeneratorSchemaInfo(org.apache.druid.segment.generator.GeneratorSchemaInfo) DimensionsSpec(org.apache.druid.data.input.impl.DimensionsSpec) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test) Granularities(org.apache.druid.java.util.common.granularity.Granularities) Cursor(org.apache.druid.segment.Cursor) NullHandling(org.apache.druid.common.config.NullHandling) Assert(org.junit.Assert) CloseableUtils(org.apache.druid.utils.CloseableUtils) Collections(java.util.Collections)

Example 93 with Sequence

use of org.apache.druid.java.util.common.guava.Sequence in project druid by druid-io.

the class VectorizedVirtualColumnTest method testTimeseriesNoVirtual.

private void testTimeseriesNoVirtual(ColumnCapabilities capabilities, Map<String, Object> context) {
    TimeseriesQuery query = Druids.newTimeseriesQueryBuilder()
        .intervals("2000/2030")
        .dataSource(QueryRunnerTestHelper.DATA_SOURCE)
        .granularity(Granularities.ALL)
        .virtualColumns()
        .aggregators(new CountAggregatorFactory(COUNT))
        .context(context)
        .build();
    Sequence seq = timeseriesTestHelper.runQueryOnSegmentsObjs(segments, query);
    List<Result<TimeseriesResultValue>> expectedResults = ImmutableList.of(
        new Result<>(DateTimes.of("2011-01-12T00:00:00.000Z"), new TimeseriesResultValue(ImmutableMap.of(COUNT, 2418L)))
    );
    TestHelper.assertExpectedObjects(expectedResults, seq.toList(), "failed");
}
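In contrast to the accumulate-based examples, seq.toList() materializes the entire result sequence eagerly, which is fine for small test outputs. A minimal sketch of the same materialize-and-assert pattern, with illustrative values:

Sequence<String> seq = Sequences.simple(ImmutableList.of("a", "b"));
Assert.assertEquals(ImmutableList.of("a", "b"), seq.toList());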
Also used : TimeseriesResultValue(org.apache.druid.query.timeseries.TimeseriesResultValue) TimeseriesQuery(org.apache.druid.query.timeseries.TimeseriesQuery) CountAggregatorFactory(org.apache.druid.query.aggregation.CountAggregatorFactory) Sequence(org.apache.druid.java.util.common.guava.Sequence) Result(org.apache.druid.query.Result)

Example 94 with Sequence

use of org.apache.druid.java.util.common.guava.Sequence in project druid by druid-io.

the class ServerManagerForQueryErrorTest method buildQueryRunnerForSegment.

@Override
protected <T> QueryRunner<T> buildQueryRunnerForSegment(
    Query<T> query,
    SegmentDescriptor descriptor,
    QueryRunnerFactory<T, Query<T>> factory,
    QueryToolChest<T, Query<T>> toolChest,
    VersionedIntervalTimeline<String, ReferenceCountingSegment> timeline,
    Function<SegmentReference, SegmentReference> segmentMapFn,
    AtomicLong cpuTimeAccumulator,
    Optional<byte[]> cacheKeyPrefix
) {
    if (query.getContextBoolean(QUERY_RETRY_TEST_CONTEXT_KEY, false)) {
        final MutableBoolean isIgnoreSegment = new MutableBoolean(false);
        queryToIgnoredSegments.compute(query.getMostSpecificId(), (queryId, ignoredSegments) -> {
            if (ignoredSegments == null) {
                ignoredSegments = new HashSet<>();
            }
            if (ignoredSegments.size() < MAX_NUM_FALSE_MISSING_SEGMENTS_REPORTS) {
                ignoredSegments.add(descriptor);
                isIgnoreSegment.setTrue();
            }
            return ignoredSegments;
        });
        if (isIgnoreSegment.isTrue()) {
            LOG.info("Pretending I don't have segment[%s]", descriptor);
            return new ReportTimelineMissingSegmentQueryRunner<>(descriptor);
        }
    } else if (query.getContextBoolean(QUERY_TIMEOUT_TEST_CONTEXT_KEY, false)) {
        return (queryPlus, responseContext) -> new Sequence<T>() {

            @Override
            public <OutType> OutType accumulate(OutType initValue, Accumulator<OutType, T> accumulator) {
                throw new QueryTimeoutException("query timeout test");
            }

            @Override
            public <OutType> Yielder<OutType> toYielder(OutType initValue, YieldingAccumulator<OutType, T> accumulator) {
                throw new QueryTimeoutException("query timeout test");
            }
        };
    } else if (query.getContextBoolean(QUERY_CAPACITY_EXCEEDED_TEST_CONTEXT_KEY, false)) {
        return (queryPlus, responseContext) -> new Sequence<T>() {

            @Override
            public <OutType> OutType accumulate(OutType initValue, Accumulator<OutType, T> accumulator) {
                throw QueryCapacityExceededException.withErrorMessageAndResolvedHost("query capacity exceeded test");
            }

            @Override
            public <OutType> Yielder<OutType> toYielder(OutType initValue, YieldingAccumulator<OutType, T> accumulator) {
                throw QueryCapacityExceededException.withErrorMessageAndResolvedHost("query capacity exceeded test");
            }
        };
    } else if (query.getContextBoolean(QUERY_UNSUPPORTED_TEST_CONTEXT_KEY, false)) {
        return (queryPlus, responseContext) -> new Sequence<T>() {

            @Override
            public <OutType> OutType accumulate(OutType initValue, Accumulator<OutType, T> accumulator) {
                throw new QueryUnsupportedException("query unsupported test");
            }

            @Override
            public <OutType> Yielder<OutType> toYielder(OutType initValue, YieldingAccumulator<OutType, T> accumulator) {
                throw new QueryUnsupportedException("query unsupported test");
            }
        };
    } else if (query.getContextBoolean(RESOURCE_LIMIT_EXCEEDED_TEST_CONTEXT_KEY, false)) {
        return (queryPlus, responseContext) -> new Sequence<T>() {

            @Override
            public <OutType> OutType accumulate(OutType initValue, Accumulator<OutType, T> accumulator) {
                throw new ResourceLimitExceededException("resource limit exceeded test");
            }

            @Override
            public <OutType> Yielder<OutType> toYielder(OutType initValue, YieldingAccumulator<OutType, T> accumulator) {
                throw new ResourceLimitExceededException("resource limit exceeded test");
            }
        };
    } else if (query.getContextBoolean(QUERY_FAILURE_TEST_CONTEXT_KEY, false)) {
        return (queryPlus, responseContext) -> new Sequence<T>() {

            @Override
            public <OutType> OutType accumulate(OutType initValue, Accumulator<OutType, T> accumulator) {
                throw new RuntimeException("query failure test");
            }

            @Override
            public <OutType> Yielder<OutType> toYielder(OutType initValue, YieldingAccumulator<OutType, T> accumulator) {
                throw new RuntimeException("query failure test");
            }
        };
    }
    return super.buildQueryRunnerForSegment(query, descriptor, factory, toolChest, timeline, segmentMapFn, cpuTimeAccumulator, cacheKeyPrefix);
}
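The five branches above differ only in the exception their stub Sequence throws. A hypothetical helper (not part of the actual ServerManagerForQueryErrorTest) could collapse that repetition; a sketch:

// Hypothetical factory: a Sequence that fails with the supplied exception on
// any attempt to consume it, whether via accumulate or via a Yielder.
private static <T> Sequence<T> failingSequence(java.util.function.Supplier<RuntimeException> error) {
    return new Sequence<T>() {

        @Override
        public <OutType> OutType accumulate(OutType initValue, Accumulator<OutType, T> accumulator) {
            throw error.get();
        }

        @Override
        public <OutType> Yielder<OutType> toYielder(OutType initValue, YieldingAccumulator<OutType, T> accumulator) {
            throw error.get();
        }
    };
}

// Usage, e.g. for the timeout branch:
// return (queryPlus, responseContext) -> failingSequence(() -> new QueryTimeoutException("query timeout test"));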
Also used : Logger(org.apache.druid.java.util.common.logger.Logger) SegmentManager(org.apache.druid.server.SegmentManager) Inject(com.google.inject.Inject) Smile(org.apache.druid.guice.annotations.Smile) QueryProcessingPool(org.apache.druid.query.QueryProcessingPool) JoinableFactory(org.apache.druid.segment.join.JoinableFactory) Function(java.util.function.Function) QueryCapacityExceededException(org.apache.druid.query.QueryCapacityExceededException) HashSet(java.util.HashSet) SegmentReference(org.apache.druid.segment.SegmentReference) Query(org.apache.druid.query.Query) QueryRunner(org.apache.druid.query.QueryRunner) CachePopulator(org.apache.druid.client.cache.CachePopulator) Yielder(org.apache.druid.java.util.common.guava.Yielder) Sequence(org.apache.druid.java.util.common.guava.Sequence) YieldingAccumulator(org.apache.druid.java.util.common.guava.YieldingAccumulator) VersionedIntervalTimeline(org.apache.druid.timeline.VersionedIntervalTimeline) ServerConfig(org.apache.druid.server.initialization.ServerConfig) ObjectMapper(com.fasterxml.jackson.databind.ObjectMapper) ReportTimelineMissingSegmentQueryRunner(org.apache.druid.query.ReportTimelineMissingSegmentQueryRunner) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) CacheConfig(org.apache.druid.client.cache.CacheConfig) QueryRunnerFactoryConglomerate(org.apache.druid.query.QueryRunnerFactoryConglomerate) QueryToolChest(org.apache.druid.query.QueryToolChest) Set(java.util.Set) ReferenceCountingSegment(org.apache.druid.segment.ReferenceCountingSegment) AtomicLong(java.util.concurrent.atomic.AtomicLong) QueryTimeoutException(org.apache.druid.query.QueryTimeoutException) ServiceEmitter(org.apache.druid.java.util.emitter.service.ServiceEmitter) QueryRunnerFactory(org.apache.druid.query.QueryRunnerFactory) ResourceLimitExceededException(org.apache.druid.query.ResourceLimitExceededException) Optional(java.util.Optional) MutableBoolean(org.apache.commons.lang3.mutable.MutableBoolean) SegmentDescriptor(org.apache.druid.query.SegmentDescriptor) Cache(org.apache.druid.client.cache.Cache) Accumulator(org.apache.druid.java.util.common.guava.Accumulator) QueryUnsupportedException(org.apache.druid.query.QueryUnsupportedException)

Example 95 with Sequence

use of org.apache.druid.java.util.common.guava.Sequence in project druid by druid-io.

the class TimewarpOperatorTest method testPostProcessWithTimezonesAndDstShift.

@Test
public void testPostProcessWithTimezonesAndDstShift() {
    QueryRunner<Result<TimeseriesResultValue>> queryRunner = testOperator.postProcess(new QueryRunner<Result<TimeseriesResultValue>>() {

        @Override
        public Sequence<Result<TimeseriesResultValue>> run(QueryPlus<Result<TimeseriesResultValue>> queryPlus, ResponseContext responseContext) {
            return Sequences.simple(
                ImmutableList.of(
                    new Result<>(DateTimes.of("2014-01-09T-08"), new TimeseriesResultValue(ImmutableMap.of("metric", 2))),
                    new Result<>(DateTimes.of("2014-01-11T-08"), new TimeseriesResultValue(ImmutableMap.of("metric", 3))),
                    new Result<>(queryPlus.getQuery().getIntervals().get(0).getEnd(), new TimeseriesResultValue(ImmutableMap.of("metric", 5)))
                )
            );
        }
    }, DateTimes.of("2014-08-02T-07").getMillis());
    final Query<Result<TimeseriesResultValue>> query = Druids.newTimeseriesQueryBuilder()
        .dataSource("dummy")
        .intervals("2014-07-31T-07/2014-08-05T-07")
        .granularity(new PeriodGranularity(new Period("P1D"), null, DateTimes.inferTzFromString("America/Los_Angeles")))
        .aggregators(Collections.singletonList(new CountAggregatorFactory("count")))
        .build();
    Assert.assertEquals(
        Lists.newArrayList(
            new Result<>(DateTimes.of("2014-07-31T-07"), new TimeseriesResultValue(ImmutableMap.of("metric", 2))),
            new Result<>(DateTimes.of("2014-08-02T-07"), new TimeseriesResultValue(ImmutableMap.of("metric", 3))),
            new Result<>(DateTimes.of("2014-08-02T-07"), new TimeseriesResultValue(ImmutableMap.of("metric", 5)))
        ),
        queryRunner.run(QueryPlus.wrap(query)).toList()
    );
}
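Since QueryRunner is a single-method interface (the lambdas in Example 94 rely on this), the anonymous class above could equally be written as a lambda; a sketch with one of the stubbed results:

QueryRunner<Result<TimeseriesResultValue>> stub = (queryPlus, responseContext) ->
    Sequences.simple(ImmutableList.of(
        new Result<>(DateTimes.of("2014-01-09T-08"), new TimeseriesResultValue(ImmutableMap.of("metric", 2)))));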
Also used : TimeseriesResultValue(org.apache.druid.query.timeseries.TimeseriesResultValue) CountAggregatorFactory(org.apache.druid.query.aggregation.CountAggregatorFactory) ResponseContext(org.apache.druid.query.context.ResponseContext) PeriodGranularity(org.apache.druid.java.util.common.granularity.PeriodGranularity) Period(org.joda.time.Period) Sequence(org.apache.druid.java.util.common.guava.Sequence) Test(org.junit.Test)

Aggregations

Sequence (org.apache.druid.java.util.common.guava.Sequence)102 Test (org.junit.Test)53 List (java.util.List)44 DefaultDimensionSpec (org.apache.druid.query.dimension.DefaultDimensionSpec)37 ResponseContext (org.apache.druid.query.context.ResponseContext)32 ImmutableList (com.google.common.collect.ImmutableList)29 Intervals (org.apache.druid.java.util.common.Intervals)28 Granularities (org.apache.druid.java.util.common.granularity.Granularities)28 QueryRunner (org.apache.druid.query.QueryRunner)28 ArrayList (java.util.ArrayList)27 VirtualColumns (org.apache.druid.segment.VirtualColumns)26 Cursor (org.apache.druid.segment.Cursor)25 QueryPlus (org.apache.druid.query.QueryPlus)24 Result (org.apache.druid.query.Result)24 NullHandling (org.apache.druid.common.config.NullHandling)22 InitializedNullHandlingTest (org.apache.druid.testing.InitializedNullHandlingTest)22 MultipleIntervalSegmentSpec (org.apache.druid.query.spec.MultipleIntervalSegmentSpec)21 QueryableIndexStorageAdapter (org.apache.druid.segment.QueryableIndexStorageAdapter)20 DataSegment (org.apache.druid.timeline.DataSegment)20 ImmutableMap (com.google.common.collect.ImmutableMap)18