
Example 6 with QueryRunnerFactoryConglomerate

Use of org.apache.druid.query.QueryRunnerFactoryConglomerate in project druid by druid-io.

From the class SqlVsNativeBenchmark, method setup:

@Setup(Level.Trial)
public void setup() {
    this.closer = Closer.create();
    final GeneratorSchemaInfo schemaInfo = GeneratorBasicSchemas.SCHEMA_MAP.get("basic");
    final DataSegment dataSegment = DataSegment.builder().dataSource("foo").interval(schemaInfo.getDataInterval()).version("1").shardSpec(new LinearShardSpec(0)).size(0).build();
    final SegmentGenerator segmentGenerator = closer.register(new SegmentGenerator());
    log.info("Starting benchmark setup using tmpDir[%s], rows[%,d].", segmentGenerator.getCacheDir(), rowsPerSegment);
    final QueryableIndex index = segmentGenerator.generate(dataSegment, schemaInfo, Granularities.NONE, rowsPerSegment);
    final QueryRunnerFactoryConglomerate conglomerate = QueryStackTests.createQueryRunnerFactoryConglomerate(closer);
    final PlannerConfig plannerConfig = new PlannerConfig();
    this.walker = closer.register(new SpecificSegmentsQuerySegmentWalker(conglomerate).add(dataSegment, index));
    final DruidSchemaCatalog rootSchema = CalciteTests.createMockRootSchema(conglomerate, walker, plannerConfig, AuthTestUtils.TEST_AUTHORIZER_MAPPER);
    plannerFactory = new PlannerFactory(
        rootSchema,
        CalciteTests.createMockQueryMakerFactory(walker, conglomerate),
        CalciteTests.createOperatorTable(),
        CalciteTests.createExprMacroTable(),
        plannerConfig,
        AuthTestUtils.TEST_AUTHORIZER_MAPPER,
        CalciteTests.getJsonMapper(),
        CalciteTests.DRUID_SCHEMA_NAME
    );
    groupByQuery = GroupByQuery.builder()
        .setDataSource("foo")
        .setInterval(Intervals.ETERNITY)
        .setDimensions(
            new DefaultDimensionSpec("dimZipf", "d0"),
            new DefaultDimensionSpec("dimSequential", "d1")
        )
        .setAggregatorSpecs(new CountAggregatorFactory("c"))
        .setGranularity(Granularities.ALL)
        .build();
    sqlQuery = "SELECT\n"
        + "  dimZipf AS d0,"
        + "  dimSequential AS d1,\n"
        + "  COUNT(*) AS c\n"
        + "FROM druid.foo\n"
        + "GROUP BY dimZipf, dimSequential";
}
Also used : SegmentGenerator(org.apache.druid.segment.generator.SegmentGenerator) QueryRunnerFactoryConglomerate(org.apache.druid.query.QueryRunnerFactoryConglomerate) SpecificSegmentsQuerySegmentWalker(org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker) CountAggregatorFactory(org.apache.druid.query.aggregation.CountAggregatorFactory) LinearShardSpec(org.apache.druid.timeline.partition.LinearShardSpec) QueryableIndex(org.apache.druid.segment.QueryableIndex) GeneratorSchemaInfo(org.apache.druid.segment.generator.GeneratorSchemaInfo) PlannerConfig(org.apache.druid.sql.calcite.planner.PlannerConfig) DruidSchemaCatalog(org.apache.druid.sql.calcite.schema.DruidSchemaCatalog) PlannerFactory(org.apache.druid.sql.calcite.planner.PlannerFactory) DataSegment(org.apache.druid.timeline.DataSegment) DefaultDimensionSpec(org.apache.druid.query.dimension.DefaultDimensionSpec) Setup(org.openjdk.jmh.annotations.Setup)
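
The benchmark body itself is not shown on this page. As a rough sketch of how the fields built in setup() could be exercised, the hypothetical method below runs the native groupByQuery through the walker and hands the last row to a JMH Blackhole; it assumes the imports listed above plus org.openjdk.jmh.annotations.Benchmark, org.openjdk.jmh.infra.Blackhole, and org.apache.druid.query.groupby.ResultRow, and it is not the actual SqlVsNativeBenchmark code. The SQL half of the comparison would go through plannerFactory instead.

@Benchmark
public void queryNative(Blackhole blackhole) {
    // Hypothetical sketch: run the native groupBy query built in setup() against the walker.
    final Sequence<ResultRow> results = QueryPlus.wrap(groupByQuery).run(walker, ResponseContext.createEmpty());
    // Accumulate the sequence so the whole result set is materialized, then keep JMH from eliding it.
    final ResultRow lastRow = results.accumulate(null, (accumulated, in) -> in);
    blackhole.consume(lastRow);
}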

Example 7 with QueryRunnerFactoryConglomerate

Use of org.apache.druid.query.QueryRunnerFactoryConglomerate in project druid by druid-io.

From the class SqlExpressionBenchmark, method setup:

@Setup(Level.Trial)
public void setup() {
    final GeneratorSchemaInfo schemaInfo = GeneratorBasicSchemas.SCHEMA_MAP.get("expression-testbench");
    final DataSegment dataSegment = DataSegment.builder().dataSource("foo").interval(schemaInfo.getDataInterval()).version("1").shardSpec(new LinearShardSpec(0)).size(0).build();
    final PlannerConfig plannerConfig = new PlannerConfig();
    final SegmentGenerator segmentGenerator = closer.register(new SegmentGenerator());
    log.info("Starting benchmark setup using cacheDir[%s], rows[%,d].", segmentGenerator.getCacheDir(), rowsPerSegment);
    final QueryableIndex index = segmentGenerator.generate(dataSegment, schemaInfo, Granularities.NONE, rowsPerSegment);
    final QueryRunnerFactoryConglomerate conglomerate = QueryStackTests.createQueryRunnerFactoryConglomerate(closer, PROCESSING_CONFIG);
    final SpecificSegmentsQuerySegmentWalker walker = new SpecificSegmentsQuerySegmentWalker(conglomerate).add(dataSegment, index);
    closer.register(walker);
    final DruidSchemaCatalog rootSchema = CalciteTests.createMockRootSchema(conglomerate, walker, plannerConfig, AuthTestUtils.TEST_AUTHORIZER_MAPPER);
    plannerFactory = new PlannerFactory(
        rootSchema,
        CalciteTests.createMockQueryMakerFactory(walker, conglomerate),
        CalciteTests.createOperatorTable(),
        CalciteTests.createExprMacroTable(),
        plannerConfig,
        AuthTestUtils.TEST_AUTHORIZER_MAPPER,
        CalciteTests.getJsonMapper(),
        CalciteTests.DRUID_SCHEMA_NAME
    );
    try {
        SqlVectorizedExpressionSanityTest.sanityTestVectorizedSqlQueries(plannerFactory, QUERIES.get(Integer.parseInt(query)));
    } catch (Throwable ignored) {
    // the show must go on
    }
}
Also used : SegmentGenerator(org.apache.druid.segment.generator.SegmentGenerator) QueryRunnerFactoryConglomerate(org.apache.druid.query.QueryRunnerFactoryConglomerate) SpecificSegmentsQuerySegmentWalker(org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker) LinearShardSpec(org.apache.druid.timeline.partition.LinearShardSpec) QueryableIndex(org.apache.druid.segment.QueryableIndex) GeneratorSchemaInfo(org.apache.druid.segment.generator.GeneratorSchemaInfo) PlannerConfig(org.apache.druid.sql.calcite.planner.PlannerConfig) DruidSchemaCatalog(org.apache.druid.sql.calcite.schema.DruidSchemaCatalog) PlannerFactory(org.apache.druid.sql.calcite.planner.PlannerFactory) DataSegment(org.apache.druid.timeline.DataSegment) Setup(org.openjdk.jmh.annotations.Setup)

Example 8 with QueryRunnerFactoryConglomerate

Use of org.apache.druid.query.QueryRunnerFactoryConglomerate in project druid by druid-io.

From the class RealtimeIndexTaskTest, method makeToolbox:

private TaskToolbox makeToolbox(final Task task, final TaskStorage taskStorage, final IndexerMetadataStorageCoordinator mdc, final File directory) {
    final TaskConfig taskConfig = new TaskConfig(directory.getPath(), null, null, 50000, null, true, null, null, null, false, false, TaskConfig.BATCH_PROCESSING_MODE_DEFAULT.name());
    final TaskLockbox taskLockbox = new TaskLockbox(taskStorage, mdc);
    try {
        taskStorage.insert(task, TaskStatus.running(task.getId()));
    } catch (EntryExistsException e) {
    // suppress
    }
    taskLockbox.syncFromStorage();
    final TaskActionToolbox taskActionToolbox = new TaskActionToolbox(taskLockbox, taskStorage, mdc, EMITTER, EasyMock.createMock(SupervisorManager.class));
    final TaskActionClientFactory taskActionClientFactory = new LocalTaskActionClientFactory(taskStorage, taskActionToolbox, new TaskAuditLogConfig(false));
    final QueryRunnerFactoryConglomerate conglomerate = new DefaultQueryRunnerFactoryConglomerate(
        ImmutableMap.of(
            TimeseriesQuery.class,
            new TimeseriesQueryRunnerFactory(
                new TimeseriesQueryQueryToolChest(),
                new TimeseriesQueryEngine(),
                new QueryWatcher() {
                    @Override
                    public void registerQueryFuture(Query query, ListenableFuture future) {
                        // do nothing
                    }
                }
            )
        )
    );
    handOffCallbacks = new ConcurrentHashMap<>();
    final SegmentHandoffNotifierFactory handoffNotifierFactory = new SegmentHandoffNotifierFactory() {

        @Override
        public SegmentHandoffNotifier createSegmentHandoffNotifier(String dataSource) {
            return new SegmentHandoffNotifier() {

                @Override
                public boolean registerSegmentHandoffCallback(SegmentDescriptor descriptor, Executor exec, Runnable handOffRunnable) {
                    handOffCallbacks.put(descriptor, new Pair<>(exec, handOffRunnable));
                    return true;
                }

                @Override
                public void start() {
                // Noop
                }

                @Override
                public void close() {
                // Noop
                }
            };
        }
    };
    final TestUtils testUtils = new TestUtils();
    final TaskToolboxFactory toolboxFactory = new TaskToolboxFactory(
        taskConfig,
        null, // taskExecutorNode
        taskActionClientFactory,
        EMITTER,
        new TestDataSegmentPusher(),
        new TestDataSegmentKiller(),
        null, // DataSegmentMover
        null, // DataSegmentArchiver
        new TestDataSegmentAnnouncer(),
        EasyMock.createNiceMock(DataSegmentServerAnnouncer.class),
        handoffNotifierFactory,
        () -> conglomerate,
        DirectQueryProcessingPool.INSTANCE,
        NoopJoinableFactory.INSTANCE,
        () -> EasyMock.createMock(MonitorScheduler.class),
        new SegmentCacheManagerFactory(testUtils.getTestObjectMapper()),
        testUtils.getTestObjectMapper(),
        testUtils.getTestIndexIO(),
        MapCache.create(1024),
        new CacheConfig(),
        new CachePopulatorStats(),
        testUtils.getTestIndexMergerV9(),
        EasyMock.createNiceMock(DruidNodeAnnouncer.class),
        EasyMock.createNiceMock(DruidNode.class),
        new LookupNodeService("tier"),
        new DataNodeService("tier", 1000, ServerType.INDEXER_EXECUTOR, 0),
        new NoopTestTaskReportFileWriter(),
        null,
        AuthTestUtils.TEST_AUTHORIZER_MAPPER,
        new NoopChatHandlerProvider(),
        testUtils.getRowIngestionMetersFactory(),
        new TestAppenderatorsManager(),
        new NoopIndexingServiceClient(),
        null,
        null,
        null
    );
    return toolboxFactory.build(task);
}
Also used : TimeseriesQuery(org.apache.druid.query.timeseries.TimeseriesQuery) Query(org.apache.druid.query.Query) QueryWatcher(org.apache.druid.query.QueryWatcher) TaskActionClientFactory(org.apache.druid.indexing.common.actions.TaskActionClientFactory) LocalTaskActionClientFactory(org.apache.druid.indexing.common.actions.LocalTaskActionClientFactory) TestDataSegmentAnnouncer(org.apache.druid.indexing.test.TestDataSegmentAnnouncer) TaskConfig(org.apache.druid.indexing.common.config.TaskConfig) DruidNodeAnnouncer(org.apache.druid.discovery.DruidNodeAnnouncer) TimeseriesQueryQueryToolChest(org.apache.druid.query.timeseries.TimeseriesQueryQueryToolChest) TaskAuditLogConfig(org.apache.druid.indexing.common.actions.TaskAuditLogConfig) AuthTestUtils(org.apache.druid.server.security.AuthTestUtils) TestUtils(org.apache.druid.indexing.common.TestUtils) QueryRunnerFactoryConglomerate(org.apache.druid.query.QueryRunnerFactoryConglomerate) DefaultQueryRunnerFactoryConglomerate(org.apache.druid.query.DefaultQueryRunnerFactoryConglomerate) TimeseriesQueryEngine(org.apache.druid.query.timeseries.TimeseriesQueryEngine) Executor(java.util.concurrent.Executor) NoopIndexingServiceClient(org.apache.druid.client.indexing.NoopIndexingServiceClient) TaskToolboxFactory(org.apache.druid.indexing.common.TaskToolboxFactory) SegmentDescriptor(org.apache.druid.query.SegmentDescriptor) CachePopulatorStats(org.apache.druid.client.cache.CachePopulatorStats) TaskActionToolbox(org.apache.druid.indexing.common.actions.TaskActionToolbox) LocalTaskActionClientFactory(org.apache.druid.indexing.common.actions.LocalTaskActionClientFactory) CacheConfig(org.apache.druid.client.cache.CacheConfig) TestDataSegmentPusher(org.apache.druid.indexing.test.TestDataSegmentPusher) TimeseriesQuery(org.apache.druid.query.timeseries.TimeseriesQuery) MonitorScheduler(org.apache.druid.java.util.metrics.MonitorScheduler) NoopChatHandlerProvider(org.apache.druid.segment.realtime.firehose.NoopChatHandlerProvider) SegmentHandoffNotifier(org.apache.druid.segment.handoff.SegmentHandoffNotifier) SegmentCacheManagerFactory(org.apache.druid.indexing.common.SegmentCacheManagerFactory) DefaultQueryRunnerFactoryConglomerate(org.apache.druid.query.DefaultQueryRunnerFactoryConglomerate) EntryExistsException(org.apache.druid.metadata.EntryExistsException) LookupNodeService(org.apache.druid.discovery.LookupNodeService) TestDataSegmentKiller(org.apache.druid.indexing.test.TestDataSegmentKiller) DataSegmentServerAnnouncer(org.apache.druid.server.coordination.DataSegmentServerAnnouncer) SegmentHandoffNotifierFactory(org.apache.druid.segment.handoff.SegmentHandoffNotifierFactory) SupervisorManager(org.apache.druid.indexing.overlord.supervisor.SupervisorManager) TimeseriesQueryRunnerFactory(org.apache.druid.query.timeseries.TimeseriesQueryRunnerFactory) TaskLockbox(org.apache.druid.indexing.overlord.TaskLockbox) ListenableFuture(com.google.common.util.concurrent.ListenableFuture) DruidNode(org.apache.druid.server.DruidNode) DataNodeService(org.apache.druid.discovery.DataNodeService)
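
Because the stub SegmentHandoffNotifier above only records callbacks into handOffCallbacks, a test built on this toolbox has to trigger handoff itself. Below is a minimal sketch of that step, assuming the same handOffCallbacks field and Druid's Pair type with public lhs/rhs fields; it is illustrative and not code copied from RealtimeIndexTaskTest.

// Simulate coordinator-driven handoff by running every callback the stub notifier captured.
for (Map.Entry<SegmentDescriptor, Pair<Executor, Runnable>> entry : handOffCallbacks.entrySet()) {
    final Pair<Executor, Runnable> executorRunnablePair = entry.getValue();
    executorRunnablePair.lhs.execute(executorRunnablePair.rhs);
}
handOffCallbacks.clear();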

Example 9 with QueryRunnerFactoryConglomerate

Use of org.apache.druid.query.QueryRunnerFactoryConglomerate in project druid by druid-io.

From the class TaskLifecycleTest, method setUp:

@Before
public void setUp() throws Exception {
    // mock things
    queryRunnerFactoryConglomerate = EasyMock.createStrictMock(QueryRunnerFactoryConglomerate.class);
    monitorScheduler = EasyMock.createStrictMock(MonitorScheduler.class);
    // initialize variables
    announcedSinks = 0;
    pushedSegments = 0;
    indexSpec = new IndexSpec();
    emitter = newMockEmitter();
    EmittingLogger.registerEmitter(emitter);
    mapper = TEST_UTILS.getTestObjectMapper();
    handOffCallbacks = new ConcurrentHashMap<>();
    // Set things up; the order matters because the Precondition checks in the
    // respective setUp* methods will fail if it is wrong.
    // For creating a customized TaskQueue, see the testRealtimeIndexTaskFailure test.
    taskStorage = setUpTaskStorage();
    handoffNotifierFactory = setUpSegmentHandOffNotifierFactory();
    dataSegmentPusher = setUpDataSegmentPusher();
    mdc = setUpMetadataStorageCoordinator();
    tb = setUpTaskToolboxFactory(dataSegmentPusher, handoffNotifierFactory, mdc);
    taskRunner = setUpThreadPoolTaskRunner(tb);
    taskQueue = setUpTaskQueue(taskStorage, taskRunner);
}
Also used : QueryRunnerFactoryConglomerate(org.apache.druid.query.QueryRunnerFactoryConglomerate) IndexSpec(org.apache.druid.segment.IndexSpec) MonitorScheduler(org.apache.druid.java.util.metrics.MonitorScheduler) Before(org.junit.Before)

Example 10 with QueryRunnerFactoryConglomerate

Use of org.apache.druid.query.QueryRunnerFactoryConglomerate in project druid by druid-io.

From the class DumpSegment, method executeQuery:

@VisibleForTesting
static <T> Sequence<T> executeQuery(final Injector injector, final QueryableIndex index, final Query<T> query) {
    final QueryRunnerFactoryConglomerate conglomerate = injector.getInstance(QueryRunnerFactoryConglomerate.class);
    final QueryRunnerFactory<T, Query<T>> factory = conglomerate.findFactory(query);
    final QueryRunner<T> runner = factory.createRunner(new QueryableIndexSegment(index, SegmentId.dummy("segment")));
    return factory.getToolchest().mergeResults(factory.mergeRunners(DirectQueryProcessingPool.INSTANCE, ImmutableList.of(runner))).run(QueryPlus.wrap(query), ResponseContext.createEmpty());
}
Also used : QueryableIndexSegment(org.apache.druid.segment.QueryableIndexSegment) QueryRunnerFactoryConglomerate(org.apache.druid.query.QueryRunnerFactoryConglomerate) SegmentMetadataQuery(org.apache.druid.query.metadata.metadata.SegmentMetadataQuery) Query(org.apache.druid.query.Query) VisibleForTesting(com.google.common.annotations.VisibleForTesting)
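
A short usage sketch for executeQuery(), assuming a SegmentMetadataQuery built with Druids.newSegmentMetadataQueryBuilder() and the SegmentAnalysis result type; the query construction here is illustrative and not taken from DumpSegment itself.

// Run a segment-metadata query over a loaded QueryableIndex and materialize the lazy Sequence.
final SegmentMetadataQuery query = Druids.newSegmentMetadataQueryBuilder()
    .dataSource("segment") // any table name works; the runner is bound directly to the index
    .build();
final Sequence<SegmentAnalysis> results = executeQuery(injector, index, query);
final List<SegmentAnalysis> analyses = results.toList();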

Aggregations

QueryRunnerFactoryConglomerate (org.apache.druid.query.QueryRunnerFactoryConglomerate) 11
Query (org.apache.druid.query.Query) 4
SpecificSegmentsQuerySegmentWalker (org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker) 4
DataSegment (org.apache.druid.timeline.DataSegment) 4
CacheConfig (org.apache.druid.client.cache.CacheConfig) 3
CachePopulatorStats (org.apache.druid.client.cache.CachePopulatorStats) 3
DefaultQueryRunnerFactoryConglomerate (org.apache.druid.query.DefaultQueryRunnerFactoryConglomerate) 3
TimeseriesQueryEngine (org.apache.druid.query.timeseries.TimeseriesQueryEngine) 3
TimeseriesQueryQueryToolChest (org.apache.druid.query.timeseries.TimeseriesQueryQueryToolChest) 3
TimeseriesQueryRunnerFactory (org.apache.druid.query.timeseries.TimeseriesQueryRunnerFactory) 3
QueryableIndex (org.apache.druid.segment.QueryableIndex) 3
LinearShardSpec (org.apache.druid.timeline.partition.LinearShardSpec) 3
ListenableFuture (com.google.common.util.concurrent.ListenableFuture) 2
CountDownLatch (java.util.concurrent.CountDownLatch) 2
DefaultObjectMapper (org.apache.druid.jackson.DefaultObjectMapper) 2
MonitorScheduler (org.apache.druid.java.util.metrics.MonitorScheduler) 2
QueryRunnerFactory (org.apache.druid.query.QueryRunnerFactory) 2
CountAggregatorFactory (org.apache.druid.query.aggregation.CountAggregatorFactory) 2
GeneratorSchemaInfo (org.apache.druid.segment.generator.GeneratorSchemaInfo) 2
SegmentGenerator (org.apache.druid.segment.generator.SegmentGenerator) 2