
Example 11 with DataSource

Use of org.apache.druid.query.DataSource in project druid by druid-io.

The class DruidQueryTest, method test_filtration_joinDataSource_intervalInBaseTableFilter_left.

@Test
public void test_filtration_joinDataSource_intervalInBaseTableFilter_left() {
    DataSource dataSource = join(JoinType.LEFT, filterWithInterval);
    DataSource expectedDataSource = join(JoinType.LEFT, selectorFilter);
    Pair<DataSource, Filtration> pair = DruidQuery.getFiltration(dataSource, otherFilter, VirtualColumnRegistry.create(RowSignature.empty(), TestExprMacroTable.INSTANCE));
    verify(pair, expectedDataSource, otherFilter, Intervals.utc(100, 200));
}
Also used : Filtration(org.apache.druid.sql.calcite.filtration.Filtration) DataSource(org.apache.druid.query.DataSource) TableDataSource(org.apache.druid.query.TableDataSource) JoinDataSource(org.apache.druid.query.JoinDataSource) Test(org.junit.Test)
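The test above asserts that getFiltration rewrites the join's base-table filter: the interval condition is lifted out as a scan interval, leaving the selector as the residual filter. A minimal standalone sketch of that split, using illustrative types rather than Druid's actual Filtration API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative types only: not Druid's actual Filtration API.
record Interval(long startMillis, long endMillis) {}

interface Filter {}
record IntervalFilter(Interval interval) implements Filter {}
record SelectorFilter(String column, String value) implements Filter {}
record AndFilter(List<Filter> children) implements Filter {}

final class Filtration {
    final List<Interval> intervals = new ArrayList<>(); // becomes the query's scan intervals
    final List<Filter> residual = new ArrayList<>();    // what is left to evaluate per row

    // Split a conjunctive filter into scan intervals and a residual row filter.
    static Filtration create(Filter filter) {
        Filtration result = new Filtration();
        result.visit(filter);
        return result;
    }

    private void visit(Filter filter) {
        if (filter instanceof IntervalFilter f) {
            intervals.add(f.interval());
        } else if (filter instanceof AndFilter and) {
            and.children().forEach(this::visit);
        } else {
            residual.add(filter);
        }
    }
}
```

With an input equivalent to filterWithInterval (interval AND selector), the result carries the interval as a scan bound and only the selector as the row filter, mirroring the expectedDataSource/otherFilter pair the test verifies.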

Example 12 with DataSource

Use of org.apache.druid.query.DataSource in project druid by druid-io.

The class ServerManagerTest, method testGetQueryRunnerForSegmentsForUnknownQueryThrowingException.

@Test
public void testGetQueryRunnerForSegmentsForUnknownQueryThrowingException() {
    final Interval interval = Intervals.of("P1d/2011-04-01");
    final List<SegmentDescriptor> descriptors = Collections.singletonList(new SegmentDescriptor(interval, "1", 0));
    expectedException.expect(QueryUnsupportedException.class);
    expectedException.expectMessage("Unknown query type");
    serverManager.getQueryRunnerForSegments(new BaseQuery<Object>(new TableDataSource("test"), new MultipleSpecificSegmentSpec(descriptors), false, new HashMap<>()) {

        @Override
        public boolean hasFilters() {
            return false;
        }

        @Override
        public DimFilter getFilter() {
            return null;
        }

        @Override
        public String getType() {
            return null;
        }

        @Override
        public Query<Object> withOverriddenContext(Map<String, Object> contextOverride) {
            return null;
        }

        @Override
        public Query<Object> withQuerySegmentSpec(QuerySegmentSpec spec) {
            return null;
        }

        @Override
        public Query<Object> withDataSource(DataSource dataSource) {
            return null;
        }
    }, descriptors);
}
Also used : MultipleSpecificSegmentSpec(org.apache.druid.query.spec.MultipleSpecificSegmentSpec) BaseQuery(org.apache.druid.query.BaseQuery) Query(org.apache.druid.query.Query) SearchQuery(org.apache.druid.query.search.SearchQuery) HashMap(java.util.HashMap) DataSource(org.apache.druid.query.DataSource) TableDataSource(org.apache.druid.query.TableDataSource) TableDataSource(org.apache.druid.query.TableDataSource) SegmentDescriptor(org.apache.druid.query.SegmentDescriptor) QuerySegmentSpec(org.apache.druid.query.spec.QuerySegmentSpec) DimFilter(org.apache.druid.query.filter.DimFilter) Interval(org.joda.time.Interval) Test(org.junit.Test)
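The anonymous BaseQuery subclass above returns null from getType(), so the server cannot resolve a runner for it and throws QueryUnsupportedException. A hedged sketch of that lookup-by-type guard (the registry and names here are illustrative, not ServerManager's real internals):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: the server keeps runner factories keyed by query
// type; a query whose getType() is unknown (here, null) cannot be matched.
final class QueryUnsupportedException extends RuntimeException {
    QueryUnsupportedException(String message) { super(message); }
}

final class RunnerRegistry {
    private final Map<String, String> factories = new HashMap<>();

    void register(String queryType, String factoryName) {
        factories.put(queryType, factoryName);
    }

    // Look up a factory for the query type, failing for unregistered types.
    String runnerFor(String queryType) {
        String factory = factories.get(queryType);
        if (factory == null) {
            throw new QueryUnsupportedException("Unknown query type");
        }
        return factory;
    }
}
```

The test's expectMessage("Unknown query type") pins down exactly this failure path, rather than some other exception raised later in query processing.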

Example 13 with DataSource

Use of org.apache.druid.query.DataSource in project druid by druid-io.

The class SegmentManagerBroadcastJoinIndexedTableTest, method testLoadMultipleIndexedTable.

@Test
public void testLoadMultipleIndexedTable() throws IOException, SegmentLoadingException {
    final DataSource dataSource = new GlobalTableDataSource(TABLE_NAME);
    Assert.assertFalse(joinableFactory.isDirectlyJoinable(dataSource));
    final String version = DateTimes.nowUtc().toString();
    final String version2 = DateTimes.nowUtc().plus(1000L).toString();
    final String interval = "2011-01-12T00:00:00.000Z/2011-05-01T00:00:00.000Z";
    final String interval2 = "2011-01-12T00:00:00.000Z/2011-03-28T00:00:00.000Z";
    IncrementalIndex data = TestIndex.makeRealtimeIndex("druid.sample.numeric.tsv.bottom");
    IncrementalIndex data2 = TestIndex.makeRealtimeIndex("druid.sample.numeric.tsv.top");
    Assert.assertTrue(segmentManager.loadSegment(createSegment(data, interval, version), false, SegmentLazyLoadFailCallback.NOOP));
    Assert.assertTrue(joinableFactory.isDirectlyJoinable(dataSource));
    Optional<Joinable> maybeJoinable = makeJoinable(dataSource);
    Assert.assertTrue(maybeJoinable.isPresent());
    Joinable joinable = maybeJoinable.get();
    // Cardinality is currently tied to the number of rows.
    Assert.assertEquals(733, joinable.getCardinality("market"));
    Assert.assertEquals(733, joinable.getCardinality("placement"));
    Assert.assertEquals(Optional.of(ImmutableSet.of("preferred")), joinable.getCorrelatedColumnValues("market", "spot", "placement", Long.MAX_VALUE, false));
    // Add another segment with a smaller interval; it only partially overshadows the first, so there will be 2 segments in the timeline.
    Assert.assertTrue(segmentManager.loadSegment(createSegment(data2, interval2, version2), false, SegmentLazyLoadFailCallback.NOOP));
    expectedException.expect(ISE.class);
    expectedException.expectMessage(StringUtils.format("Currently only single segment datasources are supported for broadcast joins, dataSource[%s] has multiple segments. Reingest the data so that it is entirely contained within a single segment to use in JOIN queries.", TABLE_NAME));
    // This will throw because the datasource has multiple segments, which is an invalid state for the joinable factory.
    makeJoinable(dataSource);
}
Also used : IncrementalIndex(org.apache.druid.segment.incremental.IncrementalIndex) Joinable(org.apache.druid.segment.join.Joinable) GlobalTableDataSource(org.apache.druid.query.GlobalTableDataSource) DataSource(org.apache.druid.query.DataSource) GlobalTableDataSource(org.apache.druid.query.GlobalTableDataSource) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test)
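The test depends on the invariant stated in the error message: a broadcast-join datasource must be backed by exactly one segment. A minimal sketch of that guard, with illustrative names rather than Druid's actual broadcast-table classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a broadcast join table is only usable while its
// timeline holds exactly one segment.
final class BroadcastTable {
    private final String dataSource;
    private final List<String> segmentIds = new ArrayList<>();

    BroadcastTable(String dataSource) { this.dataSource = dataSource; }

    void addSegment(String segmentId) { segmentIds.add(segmentId); }

    // Mirrors the guard that makes makeJoinable throw in the test above.
    String makeJoinable() {
        if (segmentIds.size() != 1) {
            throw new IllegalStateException(String.format(
                "Currently only single segment datasources are supported for "
                + "broadcast joins, dataSource[%s] has multiple segments.",
                dataSource));
        }
        return segmentIds.get(0);
    }
}
```

This is why the test succeeds in joining after loading the first segment but expects an ISE once the second, partially overshadowing segment leaves two entries in the timeline.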

Example 14 with DataSource

Use of org.apache.druid.query.DataSource in project druid by druid-io.

The class SegmentManagerBroadcastJoinIndexedTableTest, method emptyCacheKeyForUnsupportedCondition.

@Test
public void emptyCacheKeyForUnsupportedCondition() {
    final DataSource dataSource = new GlobalTableDataSource(TABLE_NAME);
    JoinConditionAnalysis condition = EasyMock.mock(JoinConditionAnalysis.class);
    EasyMock.expect(condition.canHashJoin()).andReturn(false);
    EasyMock.replay(condition);
    Assert.assertNull(joinableFactory.build(dataSource, condition).orElse(null));
}
Also used : GlobalTableDataSource(org.apache.druid.query.GlobalTableDataSource) JoinConditionAnalysis(org.apache.druid.segment.join.JoinConditionAnalysis) DataSource(org.apache.druid.query.DataSource) GlobalTableDataSource(org.apache.druid.query.GlobalTableDataSource) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test)
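Note the contract being tested: when the join condition cannot be executed as a hash join, the factory signals "unsupported" by returning an empty Optional rather than throwing. A tiny sketch of that contract, with hypothetical types standing in for Druid's JoinableFactory and JoinConditionAnalysis:

```java
import java.util.Optional;

// Hypothetical single-method condition type, standing in for
// JoinConditionAnalysis in this sketch.
interface JoinCondition {
    boolean canHashJoin();
}

// Illustrative factory: only hash-joinable conditions produce a joinable.
final class HashJoinOnlyFactory {
    Optional<String> build(String dataSource, JoinCondition condition) {
        if (!condition.canHashJoin()) {
            return Optional.empty();
        }
        return Optional.of("joinable[" + dataSource + "]");
    }
}
```

Returning Optional.empty() lets callers fall back to other factories or fail gracefully, which is what the assertNull(...orElse(null)) in the test checks.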

Example 15 with DataSource

Use of org.apache.druid.query.DataSource in project druid by druid-io.

The class SegmentManagerBroadcastJoinIndexedTableTest, method testLoadMultipleIndexedTableOverwrite.

@Test
public void testLoadMultipleIndexedTableOverwrite() throws IOException, SegmentLoadingException {
    final DataSource dataSource = new GlobalTableDataSource(TABLE_NAME);
    Assert.assertFalse(joinableFactory.isDirectlyJoinable(dataSource));
    // The newer version with the larger interval overwrites the older segment.
    final String version = DateTimes.nowUtc().toString();
    final String version2 = DateTimes.nowUtc().plus(1000L).toString();
    final String interval = "2011-01-12T00:00:00.000Z/2011-03-28T00:00:00.000Z";
    final String interval2 = "2011-01-12T00:00:00.000Z/2011-05-01T00:00:00.000Z";
    IncrementalIndex data = TestIndex.makeRealtimeIndex("druid.sample.numeric.tsv.top");
    IncrementalIndex data2 = TestIndex.makeRealtimeIndex("druid.sample.numeric.tsv.bottom");
    DataSegment segment1 = createSegment(data, interval, version);
    DataSegment segment2 = createSegment(data2, interval2, version2);
    Assert.assertTrue(segmentManager.loadSegment(segment1, false, SegmentLazyLoadFailCallback.NOOP));
    Assert.assertTrue(segmentManager.loadSegment(segment2, false, SegmentLazyLoadFailCallback.NOOP));
    Assert.assertTrue(joinableFactory.isDirectlyJoinable(dataSource));
    Optional<Joinable> maybeJoinable = makeJoinable(dataSource);
    Assert.assertTrue(maybeJoinable.isPresent());
    Joinable joinable = maybeJoinable.get();
    // Cardinality is currently tied to the number of rows.
    Assert.assertEquals(733, joinable.getCardinality("market"));
    Assert.assertEquals(733, joinable.getCardinality("placement"));
    Assert.assertEquals(Optional.of(ImmutableSet.of("preferred")), joinable.getCorrelatedColumnValues("market", "spot", "placement", Long.MAX_VALUE, false));
    Optional<byte[]> cacheKey = joinableFactory.computeJoinCacheKey(dataSource, JOIN_CONDITION_ANALYSIS);
    Assert.assertTrue(cacheKey.isPresent());
    assertSegmentIdEquals(segment2.getId(), cacheKey.get());
    segmentManager.dropSegment(segment2);
    // If the new segment is dropped (which should rarely, if ever, happen), the old table should still exist.
    maybeJoinable = makeJoinable(dataSource);
    Assert.assertTrue(maybeJoinable.isPresent());
    joinable = maybeJoinable.get();
    // Cardinality is currently tied to the number of rows.
    Assert.assertEquals(478, joinable.getCardinality("market"));
    Assert.assertEquals(478, joinable.getCardinality("placement"));
    Assert.assertEquals(Optional.of(ImmutableSet.of("preferred")), joinable.getCorrelatedColumnValues("market", "spot", "placement", Long.MAX_VALUE, false));
    cacheKey = joinableFactory.computeJoinCacheKey(dataSource, JOIN_CONDITION_ANALYSIS);
    Assert.assertTrue(cacheKey.isPresent());
    assertSegmentIdEquals(segment1.getId(), cacheKey.get());
}
Also used : IncrementalIndex(org.apache.druid.segment.incremental.IncrementalIndex) Joinable(org.apache.druid.segment.join.Joinable) GlobalTableDataSource(org.apache.druid.query.GlobalTableDataSource) DataSegment(org.apache.druid.timeline.DataSegment) DataSource(org.apache.druid.query.DataSource) GlobalTableDataSource(org.apache.druid.query.GlobalTableDataSource) InitializedNullHandlingTest(org.apache.druid.testing.InitializedNullHandlingTest) Test(org.junit.Test)
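The overwrite-then-drop sequence works because segment visibility is resolved by version: the newest version overshadows older ones, and dropping it re-exposes the previous segment. A simplified sketch (not Druid's VersionedIntervalTimeline), assuming versions compare lexicographically, which holds for the ISO-timestamp versions used above:

```java
import java.util.Optional;
import java.util.TreeMap;

// Illustrative sketch of version-based overshadowing: the segment with the
// greatest version string wins, and dropping it restores the older one.
final class VersionedTable {
    private final TreeMap<String, String> segmentsByVersion = new TreeMap<>();

    void load(String version, String segmentId) {
        segmentsByVersion.put(version, segmentId);
    }

    void drop(String version) {
        segmentsByVersion.remove(version);
    }

    // The visible segment is the one with the greatest version, if any.
    Optional<String> visibleSegment() {
        return segmentsByVersion.isEmpty()
            ? Optional.empty()
            : Optional.of(segmentsByVersion.lastEntry().getValue());
    }
}
```

This matches the test's cache-key assertions: while both segments are loaded, the key is derived from segment2 (the newer version); after dropSegment(segment2), it falls back to segment1.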

Aggregations

DataSource (org.apache.druid.query.DataSource): 36 uses
TableDataSource (org.apache.druid.query.TableDataSource): 23 uses
Test (org.junit.Test): 18 uses
JoinDataSource (org.apache.druid.query.JoinDataSource): 17 uses
QueryDataSource (org.apache.druid.query.QueryDataSource): 16 uses
GlobalTableDataSource (org.apache.druid.query.GlobalTableDataSource): 14 uses
Filtration (org.apache.druid.sql.calcite.filtration.Filtration): 12 uses
ArrayList (java.util.ArrayList): 10 uses
InlineDataSource (org.apache.druid.query.InlineDataSource): 7 uses
HashMap (java.util.HashMap): 6 uses
Optional (java.util.Optional): 6 uses
LookupDataSource (org.apache.druid.query.LookupDataSource): 6 uses
UnionDataSource (org.apache.druid.query.UnionDataSource): 6 uses
GroupByQuery (org.apache.druid.query.groupby.GroupByQuery): 6 uses
List (java.util.List): 5 uses
Nullable (javax.annotation.Nullable): 5 uses
DimFilter (org.apache.druid.query.filter.DimFilter): 5 uses
ImmutableMap (com.google.common.collect.ImmutableMap): 4 uses
IntArrayList (it.unimi.dsi.fastutil.ints.IntArrayList): 4 uses
ISE (org.apache.druid.java.util.common.ISE): 4 uses