
Example 16 with SingleDimensionPartitionsSpec

Use of org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec in project druid by druid-io.

From class ParallelIndexSupervisorTaskSerdeTest, method forceGuaranteedRollupWithSingleDimPartitionsValid.

@Test
public void forceGuaranteedRollupWithSingleDimPartitionsValid() {
    ParallelIndexSupervisorTask task = new ParallelIndexSupervisorTaskBuilder()
            .ingestionSpec(
                new ParallelIndexIngestionSpecBuilder()
                    .forceGuaranteedRollup(true)
                    // targetRowsPerSegment = 1, maxRowsPerSegment = null,
                    // partitionDimension = "a", assumeGrouped = true
                    .partitionsSpec(new SingleDimensionPartitionsSpec(1, null, "a", true))
                    .inputIntervals(INTERVALS)
                    .build())
            .build();
    PartitionsSpec partitionsSpec = task.getIngestionSchema().getTuningConfig().getPartitionsSpec();
    Assert.assertThat(partitionsSpec, CoreMatchers.instanceOf(SingleDimensionPartitionsSpec.class));
}
Also used : PartitionsSpec(org.apache.druid.indexer.partitions.PartitionsSpec) HashedPartitionsSpec(org.apache.druid.indexer.partitions.HashedPartitionsSpec) SingleDimensionPartitionsSpec(org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec) Test(org.junit.Test)
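
The four positional constructor arguments above are easy to misread. Here is a minimal standalone sketch of what they mean; the argument order (targetRowsPerSegment, maxRowsPerSegment, partitionDimension, assumeGrouped) follows from the tests on this page, while the getter names are assumptions based on Druid's usual bean naming.

import org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec;

public class SingleDimArgsSketch {
    public static void main(String[] args) {
        // targetRowsPerSegment = 1, maxRowsPerSegment = null,
        // partitionDimension = "a", assumeGrouped = true
        SingleDimensionPartitionsSpec spec = new SingleDimensionPartitionsSpec(1, null, "a", true);
        // Getter names below are assumed, not taken from this page.
        System.out.println(spec.getPartitionDimension());   // a
        System.out.println(spec.getTargetRowsPerSegment()); // 1
    }
}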

Example 17 with SingleDimensionPartitionsSpec

Use of org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec in project druid by druid-io.

From class ParallelIndexSupervisorTaskSerdeTest, method forceGuaranteedRollupWithSingleDimPartitionsMissingDimension.

@Test
public void forceGuaranteedRollupWithSingleDimPartitionsMissingDimension() {
    expectedException.expect(IllegalArgumentException.class);
    expectedException.expectMessage("partitionDimensions must be specified");
    new ParallelIndexSupervisorTaskBuilder()
            .ingestionSpec(
                new ParallelIndexIngestionSpecBuilder()
                    .forceGuaranteedRollup(true)
                    // partitionDimension is null here, which the builder must reject
                    .partitionsSpec(new SingleDimensionPartitionsSpec(1, null, null, true))
                    .inputIntervals(INTERVALS)
                    .build())
            .build();
}
Also used : SingleDimensionPartitionsSpec(org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec) Test(org.junit.Test)
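
On JUnit 4.13+, the same expectation can be written with Assert.assertThrows instead of the ExpectedException rule. A minimal sketch, assuming the same test fixtures as above (ParallelIndexSupervisorTaskBuilder, ParallelIndexIngestionSpecBuilder, INTERVALS) are in scope:

@Test
public void forceGuaranteedRollupWithSingleDimPartitionsMissingDimensionAlt() {
    IllegalArgumentException e = Assert.assertThrows(
        IllegalArgumentException.class,
        () -> new ParallelIndexSupervisorTaskBuilder()
            .ingestionSpec(
                new ParallelIndexIngestionSpecBuilder()
                    .forceGuaranteedRollup(true)
                    .partitionsSpec(new SingleDimensionPartitionsSpec(1, null, null, true))
                    .inputIntervals(INTERVALS)
                    .build())
            .build());
    // The failure message names the missing field.
    Assert.assertTrue(e.getMessage().contains("partitionDimensions must be specified"));
}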

Example 18 with SingleDimensionPartitionsSpec

Use of org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec in project druid by druid-io.

From class CompactionTaskParallelRunTest, method testRunParallelWithRangePartitioning.

@Test
public void testRunParallelWithRangePartitioning() throws Exception {
    // Range partitioning is not supported with segment lock yet
    Assume.assumeFalse(lockGranularity == LockGranularity.SEGMENT);
    runIndexTask(null, true);
    final Builder builder = new Builder(DATA_SOURCE, getSegmentCacheManagerFactory(), RETRY_POLICY_FACTORY);
    final CompactionTask compactionTask = builder
            .inputSpec(new CompactionIntervalSpec(INTERVAL_TO_INDEX, null))
            .tuningConfig(newTuningConfig(new SingleDimensionPartitionsSpec(7, null, "dim", false), 2, true))
            .build();
    final Set<DataSegment> compactedSegments = runTask(compactionTask);
    for (DataSegment segment : compactedSegments) {
        // Expect compaction state to exist, since storing compaction state is enabled by default
        Map<String, String> expectedLongSumMetric = new HashMap<>();
        expectedLongSumMetric.put("type", "longSum");
        expectedLongSumMetric.put("name", "val");
        expectedLongSumMetric.put("fieldName", "val");
        expectedLongSumMetric.put("expression", null);
        Assert.assertSame(SingleDimensionShardSpec.class, segment.getShardSpec().getClass());
        CompactionState expectedState = new CompactionState(
                new SingleDimensionPartitionsSpec(7, null, "dim", false),
                new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("ts", "dim"))),
                ImmutableList.of(expectedLongSumMetric),
                null,
                compactionTask.getTuningConfig().getIndexSpec().asMap(getObjectMapper()),
                getObjectMapper().readValue(
                        getObjectMapper().writeValueAsString(
                                new UniformGranularitySpec(
                                        Granularities.HOUR,
                                        Granularities.MINUTE,
                                        true,
                                        ImmutableList.of(segment.getInterval()))),
                        Map.class));
        Assert.assertEquals(expectedState, segment.getLastCompactionState());
    }
}
Also used : HashMap(java.util.HashMap) Builder(org.apache.druid.indexing.common.task.CompactionTask.Builder) SingleDimensionPartitionsSpec(org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec) DataSegment(org.apache.druid.timeline.DataSegment) UniformGranularitySpec(org.apache.druid.segment.indexing.granularity.UniformGranularitySpec) DimensionsSpec(org.apache.druid.data.input.impl.DimensionsSpec) CompactionState(org.apache.druid.timeline.CompactionState) Map(java.util.Map) ImmutableMap(com.google.common.collect.ImmutableMap) AbstractParallelIndexSupervisorTaskTest(org.apache.druid.indexing.common.task.batch.parallel.AbstractParallelIndexSupervisorTaskTest) Test(org.junit.Test)
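
For reference, the partitionsSpec used in this test corresponds to the "single_dim" JSON form documented for Druid range partitioning. Below is a minimal serialization sketch; it assumes a plain Jackson ObjectMapper can resolve the subtype through the annotations on PartitionsSpec (Druid tests normally use the injected mapper via getObjectMapper() instead):

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec;

public class RangePartitionsSpecJsonSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Same arguments as the tuningConfig above: target 7 rows per segment on "dim".
        String json = mapper.writeValueAsString(new SingleDimensionPartitionsSpec(7, null, "dim", false));
        System.out.println(json);
        // Abridged expected shape, per the Druid docs:
        // {"type":"single_dim","partitionDimension":"dim","targetRowsPerSegment":7,"assumeGrouped":false,...}
    }
}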

Example 19 with SingleDimensionPartitionsSpec

Use of org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec in project druid by druid-io.

From class HadoopDruidIndexerConfigTest, method testGetTargetPartitionSizeWithSingleDimensionPartitionsTargetRowsPerSegment.

@Test
public void testGetTargetPartitionSizeWithSingleDimensionPartitionsTargetRowsPerSegment() {
    int targetRowsPerSegment = 123;
    SingleDimensionPartitionsSpec partitionsSpec = new SingleDimensionPartitionsSpec(targetRowsPerSegment, null, null, false);
    HadoopIngestionSpec spec = new HadoopIngestionSpecBuilder().partitionsSpec(partitionsSpec).build();
    HadoopDruidIndexerConfig config = new HadoopDruidIndexerConfig(spec);
    int targetPartitionSize = config.getTargetPartitionSize();
    Assert.assertEquals(targetRowsPerSegment, targetPartitionSize);
}
Also used : SingleDimensionPartitionsSpec(org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec) Test(org.junit.Test)

Example 20 with SingleDimensionPartitionsSpec

Use of org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec in project druid by druid-io.

From class HadoopDruidIndexerConfigTest, method testGetTargetPartitionSizeWithSingleDimensionPartitionsMaxRowsPerSegment.

@Test
public void testGetTargetPartitionSizeWithSingleDimensionPartitionsMaxRowsPerSegment() {
    int maxRowsPerSegment = 456;
    SingleDimensionPartitionsSpec partitionsSpec = new SingleDimensionPartitionsSpec(null, maxRowsPerSegment, null, false);
    HadoopIngestionSpec spec = new HadoopIngestionSpecBuilder().partitionsSpec(partitionsSpec).build();
    HadoopDruidIndexerConfig config = new HadoopDruidIndexerConfig(spec);
    int targetPartitionSize = config.getTargetPartitionSize();
    Assert.assertEquals(maxRowsPerSegment, targetPartitionSize);
}
Also used : SingleDimensionPartitionsSpec(org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec) Test(org.junit.Test)
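
Taken together, Examples 19 and 20 show how HadoopDruidIndexerConfig resolves the target partition size: targetRowsPerSegment wins when set, otherwise maxRowsPerSegment is used. A minimal sketch combining both cases, assuming the HadoopIngestionSpecBuilder fixture from the same test class is available:

@Test
public void testTargetPartitionSizeResolution() {
    // Only targetRowsPerSegment set: used directly.
    HadoopDruidIndexerConfig withTarget = new HadoopDruidIndexerConfig(
        new HadoopIngestionSpecBuilder()
            .partitionsSpec(new SingleDimensionPartitionsSpec(123, null, null, false))
            .build());
    Assert.assertEquals(123, withTarget.getTargetPartitionSize());

    // Only maxRowsPerSegment set: it becomes the target.
    HadoopDruidIndexerConfig withMax = new HadoopDruidIndexerConfig(
        new HadoopIngestionSpecBuilder()
            .partitionsSpec(new SingleDimensionPartitionsSpec(null, 456, null, false))
            .build());
    Assert.assertEquals(456, withMax.getTargetPartitionSize());
}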

Aggregations

SingleDimensionPartitionsSpec (org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec): 21 uses
Test (org.junit.Test): 18 uses
List (java.util.List): 7 uses
Map (java.util.Map): 6 uses
DataSegment (org.apache.druid.timeline.DataSegment): 6 uses
Interval (org.joda.time.Interval): 6 uses
ImmutableList (com.google.common.collect.ImmutableList): 5 uses
ArrayList (java.util.ArrayList): 5 uses
HashedPartitionsSpec (org.apache.druid.indexer.partitions.HashedPartitionsSpec): 5 uses
PartitionsSpec (org.apache.druid.indexer.partitions.PartitionsSpec): 5 uses
ImmutableMap (com.google.common.collect.ImmutableMap): 4 uses
HashMap (java.util.HashMap): 4 uses
DimensionsSpec (org.apache.druid.data.input.impl.DimensionsSpec): 4 uses
DynamicPartitionsSpec (org.apache.druid.indexer.partitions.DynamicPartitionsSpec): 4 uses
SingleDimensionShardSpec (org.apache.druid.timeline.partition.SingleDimensionShardSpec): 4 uses
IOException (java.io.IOException): 3 uses
TaskStatus (org.apache.druid.indexer.TaskStatus): 3 uses
TaskToolbox (org.apache.druid.indexing.common.TaskToolbox): 3 uses
TaskActionClient (org.apache.druid.indexing.common.actions.TaskActionClient): 3 uses
Builder (org.apache.druid.indexing.common.task.CompactionTask.Builder): 3 uses