Use of org.apache.druid.indexing.common.task.batch.partition.RangePartitionAnalysis in project druid by druid-io.
Class RangePartitionCachingLocalSegmentAllocatorTest, method setup:
@Before
public void setup() throws IOException
{
  final TaskToolbox toolbox = createToolbox(
      INTERVAL_TO_VERSION.keySet()
                         .stream()
                         .map(RangePartitionCachingLocalSegmentAllocatorTest::createTaskLock)
                         .collect(Collectors.toList())
  );
  final RangePartitionAnalysis partitionAnalysis = new RangePartitionAnalysis(
      new DimensionRangePartitionsSpec(null, 1, PARTITION_DIMENSIONS, false)
  );
  INTERVAL_TO_PARTITIONS.forEach(partitionAnalysis::updateBucket);
  target = SegmentAllocators.forNonLinearPartitioning(
      toolbox,
      DATASOURCE,
      TASKID,
      new UniformGranularitySpec(Granularities.HOUR, Granularities.NONE, ImmutableList.of()),
      new SupervisorTaskAccessWithNullClient(SUPERVISOR_TASKID),
      partitionAnalysis
  );
  sequenceNameFunction = ((CachingLocalSegmentAllocator) target).getSequenceNameFunction();
}
Use of org.apache.druid.indexing.common.task.batch.partition.RangePartitionAnalysis in project druid by druid-io.
Class PartialRangeSegmentGenerateTask, method createSegmentAllocator:
@Override
SegmentAllocatorForBatch createSegmentAllocator(TaskToolbox toolbox, ParallelIndexSupervisorTaskClient taskClient)
    throws IOException
{
  final RangePartitionAnalysis partitionAnalysis = new RangePartitionAnalysis(
      (DimensionRangePartitionsSpec) ingestionSchema.getTuningConfig().getPartitionsSpec()
  );
  intervalToPartitions.forEach(partitionAnalysis::updateBucket);
  return SegmentAllocators.forNonLinearPartitioning(
      toolbox,
      getDataSource(),
      getSubtaskSpecId(),
      ingestionSchema.getDataSchema().getGranularitySpec(),
      new SupervisorTaskAccess(supervisorTaskId, taskClient),
      partitionAnalysis
  );
}
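Both snippets share the same shape: build a RangePartitionAnalysis, populate one bucket per time interval with `Map.forEach(analysis::updateBucket)` (the map's key/value pairs line up with the two-argument `updateBucket` method reference), then pass the analysis to `SegmentAllocators.forNonLinearPartitioning`. The following minimal, self-contained sketch illustrates that map-to-analysis pattern; the `Analysis` class and `String` intervals are hypothetical stand-ins, not Druid types.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for RangePartitionAnalysis, used only to show how
// Map.forEach feeds each (interval, value) pair into updateBucket.
class Analysis
{
  private final Map<String, Integer> buckets = new HashMap<>();

  // Two-argument method matching the shape of Map.forEach's BiConsumer,
  // analogous to RangePartitionAnalysis#updateBucket.
  void updateBucket(String interval, int numPartitions)
  {
    buckets.put(interval, numPartitions);
  }

  int numTimePartitions()
  {
    return buckets.size();
  }
}

public class Sketch
{
  public static void main(String[] args)
  {
    // Stand-in for intervalToPartitions / INTERVAL_TO_PARTITIONS.
    Map<String, Integer> intervalToPartitions = new HashMap<>();
    intervalToPartitions.put("2024-01-01T00/PT1H", 2);
    intervalToPartitions.put("2024-01-01T01/PT1H", 3);

    Analysis analysis = new Analysis();
    // Same shape as intervalToPartitions.forEach(partitionAnalysis::updateBucket):
    // each map entry becomes one updateBucket(interval, numPartitions) call.
    intervalToPartitions.forEach(analysis::updateBucket);

    System.out.println(analysis.numTimePartitions()); // prints 2
  }
}
```

In the real code the populated analysis is then consumed by the segment allocator, which uses one bucket per interval to decide how many range partitions to allocate.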