
Example 6 with LinearShardSpec

Use of org.apache.druid.timeline.partition.LinearShardSpec in project druid by druid-io.

In the class IndexerSQLMetadataStorageCoordinatorTest, the method testTransactionalAnnounceFailSegmentDropFailWithoutRetry:

@Test
public void testTransactionalAnnounceFailSegmentDropFailWithoutRetry() throws IOException {
    insertUsedSegments(ImmutableSet.of(existingSegment1, existingSegment2));
    Assert.assertEquals(ImmutableList.of(existingSegment1.getId().toString(), existingSegment2.getId().toString()), retrieveUsedSegmentIds());
    DataSegment dataSegmentBar = DataSegment.builder()
        .dataSource("bar")
        .interval(Intervals.of("2001/P1D"))
        .shardSpec(new LinearShardSpec(1))
        .version("b")
        .size(0)
        .build();
    Set<DataSegment> dropSegments = ImmutableSet.of(existingSegment1, existingSegment2, dataSegmentBar);
    final SegmentPublishResult result1 = coordinator.announceHistoricalSegments(SEGMENTS, dropSegments, null, null);
    Assert.assertEquals(SegmentPublishResult.fail("java.lang.RuntimeException: Aborting transaction!"), result1);
    // Should only be tried once, since dropSegmentsWithHandle will return FAILURE (not TRY_AGAIN)
    // because the set of segments to drop contains more than one datasource.
    Assert.assertEquals(1, segmentTableDropUpdateCounter.get());
    Assert.assertEquals(ImmutableList.of(existingSegment1.getId().toString(), existingSegment2.getId().toString()), retrieveUsedSegmentIds());
}
Also used: SegmentPublishResult (org.apache.druid.indexing.overlord.SegmentPublishResult), LinearShardSpec (org.apache.druid.timeline.partition.LinearShardSpec), DataSegment (org.apache.druid.timeline.DataSegment), Test (org.junit.Test)
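
For context, a LinearShardSpec carries only a partition number and does not constrain which rows land in which partition. A minimal, self-contained sketch (the datasource, interval, and version below are illustrative values, not taken from the test above):

import org.apache.druid.java.util.common.Intervals;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;

public class LinearShardSpecSketch {
    public static void main(String[] args) {
        // Illustrative values only; any datasource/interval/version would do.
        DataSegment segment = DataSegment.builder()
            .dataSource("bar")
            .interval(Intervals.of("2001/P1D"))
            .version("b")
            .shardSpec(new LinearShardSpec(1)) // partition 1 within the interval
            .size(0)
            .build();
        // A LinearShardSpec imposes no partitioning dimension or hashing scheme;
        // its partition number simply becomes part of the segment identity.
        System.out.println(segment.getId());
        System.out.println(segment.getShardSpec().getPartitionNum()); // prints 1
    }
}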

Example 7 with LinearShardSpec

Use of org.apache.druid.timeline.partition.LinearShardSpec in project druid by druid-io.

In the class ActionBasedUsedSegmentCheckerTest, the method testBasic:

@Test
public void testBasic() throws IOException {
    final TaskActionClient taskActionClient = EasyMock.createMock(TaskActionClient.class);
    EasyMock.expect(taskActionClient.submit(new RetrieveUsedSegmentsAction("bar", Intervals.of("2002/P1D"), null, Segments.ONLY_VISIBLE)))
        .andReturn(ImmutableList.of(
            DataSegment.builder().dataSource("bar").interval(Intervals.of("2002/P1D")).shardSpec(new LinearShardSpec(0)).version("b").size(0).build(),
            DataSegment.builder().dataSource("bar").interval(Intervals.of("2002/P1D")).shardSpec(new LinearShardSpec(1)).version("b").size(0).build()));
    EasyMock.expect(taskActionClient.submit(new RetrieveUsedSegmentsAction("foo", null, ImmutableList.of(Intervals.of("2000/P1D"), Intervals.of("2001/P1D")), Segments.ONLY_VISIBLE)))
        .andReturn(ImmutableList.of(
            DataSegment.builder().dataSource("foo").interval(Intervals.of("2000/P1D")).shardSpec(new LinearShardSpec(0)).version("a").size(0).build(),
            DataSegment.builder().dataSource("foo").interval(Intervals.of("2000/P1D")).shardSpec(new LinearShardSpec(1)).version("a").size(0).build(),
            DataSegment.builder().dataSource("foo").interval(Intervals.of("2001/P1D")).shardSpec(new LinearShardSpec(1)).version("b").size(0).build(),
            DataSegment.builder().dataSource("foo").interval(Intervals.of("2002/P1D")).shardSpec(new LinearShardSpec(1)).version("b").size(0).build()));
    EasyMock.replay(taskActionClient);
    final UsedSegmentChecker checker = new ActionBasedUsedSegmentChecker(taskActionClient);
    final Set<DataSegment> segments = checker.findUsedSegments(ImmutableSet.of(
        new SegmentIdWithShardSpec("foo", Intervals.of("2000/P1D"), "a", new LinearShardSpec(1)),
        new SegmentIdWithShardSpec("foo", Intervals.of("2001/P1D"), "b", new LinearShardSpec(0)),
        new SegmentIdWithShardSpec("bar", Intervals.of("2002/P1D"), "b", new LinearShardSpec(0))));
    Assert.assertEquals(
        ImmutableSet.of(
            DataSegment.builder().dataSource("foo").interval(Intervals.of("2000/P1D")).shardSpec(new LinearShardSpec(1)).version("a").size(0).build(),
            DataSegment.builder().dataSource("bar").interval(Intervals.of("2002/P1D")).shardSpec(new LinearShardSpec(0)).version("b").size(0).build()),
        segments);
    EasyMock.verify(taskActionClient);
}
Also used: TaskActionClient (org.apache.druid.indexing.common.actions.TaskActionClient), LinearShardSpec (org.apache.druid.timeline.partition.LinearShardSpec), RetrieveUsedSegmentsAction (org.apache.druid.indexing.common.actions.RetrieveUsedSegmentsAction), UsedSegmentChecker (org.apache.druid.segment.realtime.appenderator.UsedSegmentChecker), DataSegment (org.apache.druid.timeline.DataSegment), SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec), Test (org.junit.Test)
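
Only two of the three requested identifiers come back because the checker matches on datasource, interval, version, and partition number: foo 2000/P1D version "a" partition 1 and bar 2002/P1D version "b" partition 0 have used counterparts, while foo 2001/P1D version "b" is requested as partition 0 but the only used segment for that interval and version is partition 1. A hedged sketch of that matching idea, expressed directly over DataSegment getters rather than the checker's internals (the class and method names below are illustrative):

import java.util.Set;
import java.util.stream.Collectors;
import com.google.common.collect.ImmutableSet;
import org.apache.druid.java.util.common.Intervals;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;
import org.joda.time.Interval;

public class UsedSegmentMatchSketch {
    // Sketch of the identity the checker effectively matches on.
    static boolean matches(DataSegment used, String dataSource, Interval interval, String version, int partitionNum) {
        return used.getDataSource().equals(dataSource)
            && used.getInterval().equals(interval)
            && used.getVersion().equals(version)
            && used.getShardSpec().getPartitionNum() == partitionNum;
    }

    public static void main(String[] args) {
        // The only used "foo" segment for 2001/P1D at version "b" is partition 1 ...
        Set<DataSegment> used = ImmutableSet.of(
            DataSegment.builder().dataSource("foo").interval(Intervals.of("2001/P1D")).shardSpec(new LinearShardSpec(1)).version("b").size(0).build());
        // ... so asking for partition 0 of that interval and version finds nothing,
        // which is why that identifier is absent from the expected set in the test above.
        Set<DataSegment> found = used.stream()
            .filter(s -> matches(s, "foo", Intervals.of("2001/P1D"), "b", 0))
            .collect(Collectors.toSet());
        System.out.println(found.isEmpty()); // prints true
    }
}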

Example 8 with LinearShardSpec

Use of org.apache.druid.timeline.partition.LinearShardSpec in project druid by druid-io.

In the class SequenceMetadataTest, the method testPublishAnnotatedSegmentsThrowExceptionIfDropSegmentsNotNullAndNotEmpty:

@Test
public void testPublishAnnotatedSegmentsThrowExceptionIfDropSegmentsNotNullAndNotEmpty() throws Exception {
    DataSegment dataSegment = DataSegment.builder()
        .dataSource("foo")
        .interval(Intervals.of("2001/P1D"))
        .shardSpec(new LinearShardSpec(1))
        .version("b")
        .size(0)
        .build();
    Set<DataSegment> notNullNotEmptySegment = ImmutableSet.of(dataSegment);
    SequenceMetadata<Integer, Integer> sequenceMetadata = new SequenceMetadata<>(1, "test", ImmutableMap.of(), ImmutableMap.of(), true, ImmutableSet.of());
    TransactionalSegmentPublisher transactionalSegmentPublisher = sequenceMetadata.createPublisher(mockSeekableStreamIndexTaskRunner, mockTaskToolbox, true);
    expectedException.expect(ISE.class);
    expectedException.expectMessage("Stream ingestion task unexpectedly attempted to drop segments: " + SegmentUtils.commaSeparatedIdentifiers(notNullNotEmptySegment));
    transactionalSegmentPublisher.publishAnnotatedSegments(null, notNullNotEmptySegment, ImmutableSet.of(), null);
}
Also used: TransactionalSegmentPublisher (org.apache.druid.segment.realtime.appenderator.TransactionalSegmentPublisher), LinearShardSpec (org.apache.druid.timeline.partition.LinearShardSpec), DataSegment (org.apache.druid.timeline.DataSegment), Test (org.junit.Test)
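
The assertion above exercises the guard that a stream ingestion publisher applies to drop requests. A hedged sketch of that guard, offered as an illustration rather than the actual SequenceMetadata code (import locations are assumed from the Druid codebase):

import java.util.Set;
import org.apache.druid.java.util.common.ISE;
import org.apache.druid.segment.SegmentUtils;
import org.apache.druid.timeline.DataSegment;

public class DropSegmentsGuardSketch {
    // Stream ingestion tasks never drop segments, so a non-null, non-empty
    // drop set is rejected outright with the message the test asserts on.
    static void validateNoDropSegments(Set<DataSegment> segmentsToDrop) {
        if (segmentsToDrop != null && !segmentsToDrop.isEmpty()) {
            throw new ISE(
                "Stream ingestion task unexpectedly attempted to drop segments: %s",
                SegmentUtils.commaSeparatedIdentifiers(segmentsToDrop)
            );
        }
    }
}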

Example 9 with LinearShardSpec

Use of org.apache.druid.timeline.partition.LinearShardSpec in project druid by druid-io.

In the class TDigestSketchSqlAggregatorTest, the method createQuerySegmentWalker:

@Override
public SpecificSegmentsQuerySegmentWalker createQuerySegmentWalker() throws IOException {
    TDigestSketchModule.registerSerde();
    final QueryableIndex index = IndexBuilder
        .create(CalciteTests.getJsonMapper())
        .tmpDir(temporaryFolder.newFolder())
        .segmentWriteOutMediumFactory(OffHeapMemorySegmentWriteOutMediumFactory.instance())
        .schema(
            new IncrementalIndexSchema.Builder()
                .withMetrics(new CountAggregatorFactory("cnt"), new DoubleSumAggregatorFactory("m1", "m1"), new TDigestSketchAggregatorFactory("qsketch_m1", "m1", 128))
                .withRollup(false)
                .build())
        .rows(CalciteTests.ROWS1)
        .buildMMappedIndex();
    return new SpecificSegmentsQuerySegmentWalker(conglomerate).add(
        DataSegment.builder().dataSource(CalciteTests.DATASOURCE1).interval(index.getDataInterval()).version("1").shardSpec(new LinearShardSpec(0)).size(0).build(),
        index);
}
Also used: TDigestSketchAggregatorFactory (org.apache.druid.query.aggregation.tdigestsketch.TDigestSketchAggregatorFactory), CountAggregatorFactory (org.apache.druid.query.aggregation.CountAggregatorFactory), DoubleSumAggregatorFactory (org.apache.druid.query.aggregation.DoubleSumAggregatorFactory), SpecificSegmentsQuerySegmentWalker (org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker), QueryableIndex (org.apache.druid.segment.QueryableIndex), LinearShardSpec (org.apache.druid.timeline.partition.LinearShardSpec), IndexBuilder (org.apache.druid.segment.IndexBuilder)

Example 10 with LinearShardSpec

Use of org.apache.druid.timeline.partition.LinearShardSpec in project druid by druid-io.

In the class QuantileSqlAggregatorTest, the method createQuerySegmentWalker:

@Override
public SpecificSegmentsQuerySegmentWalker createQuerySegmentWalker() throws IOException {
    ApproximateHistogramDruidModule.registerSerde();
    final QueryableIndex index = IndexBuilder
        .create(CalciteTests.getJsonMapper())
        .tmpDir(temporaryFolder.newFolder())
        .segmentWriteOutMediumFactory(OffHeapMemorySegmentWriteOutMediumFactory.instance())
        .schema(
            new IncrementalIndexSchema.Builder()
                .withMetrics(new CountAggregatorFactory("cnt"), new DoubleSumAggregatorFactory("m1", "m1"), new ApproximateHistogramAggregatorFactory("hist_m1", "m1", null, null, null, null, false))
                .withRollup(false)
                .build())
        .rows(CalciteTests.ROWS1)
        .buildMMappedIndex();
    return new SpecificSegmentsQuerySegmentWalker(conglomerate).add(
        DataSegment.builder().dataSource(CalciteTests.DATASOURCE1).interval(index.getDataInterval()).version("1").shardSpec(new LinearShardSpec(0)).size(0).build(),
        index);
}
Also used: CountAggregatorFactory (org.apache.druid.query.aggregation.CountAggregatorFactory), DoubleSumAggregatorFactory (org.apache.druid.query.aggregation.DoubleSumAggregatorFactory), SpecificSegmentsQuerySegmentWalker (org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker), QueryableIndex (org.apache.druid.segment.QueryableIndex), LinearShardSpec (org.apache.druid.timeline.partition.LinearShardSpec), IndexBuilder (org.apache.druid.segment.IndexBuilder), ApproximateHistogramAggregatorFactory (org.apache.druid.query.aggregation.histogram.ApproximateHistogramAggregatorFactory)
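
Examples 9 and 10 wire the segment walker identically and differ only in the sketch aggregator they register. A hedged sketch of that shared pattern, factored into a hypothetical helper (the helper name and parameters are illustrative, and import locations are assumed from the Druid codebase):

import java.io.File;
import java.io.IOException;
import org.apache.druid.query.QueryRunnerFactoryConglomerate;
import org.apache.druid.query.aggregation.AggregatorFactory;
import org.apache.druid.segment.IndexBuilder;
import org.apache.druid.segment.QueryableIndex;
import org.apache.druid.segment.incremental.IncrementalIndexSchema;
import org.apache.druid.segment.writeout.OffHeapMemorySegmentWriteOutMediumFactory;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;

public class SqlAggregatorWalkerSketch {
    // Build one mmapped segment containing the aggregator(s) under test and
    // register it as partition 0 of the test datasource via LinearShardSpec(0).
    static SpecificSegmentsQuerySegmentWalker buildWalker(
        QueryRunnerFactoryConglomerate conglomerate,
        File tmpDir,
        AggregatorFactory... metrics) throws IOException {
        QueryableIndex index = IndexBuilder
            .create(CalciteTests.getJsonMapper())
            .tmpDir(tmpDir)
            .segmentWriteOutMediumFactory(OffHeapMemorySegmentWriteOutMediumFactory.instance())
            .schema(
                new IncrementalIndexSchema.Builder()
                    .withMetrics(metrics) // e.g. cnt, m1, plus the sketch aggregator under test
                    .withRollup(false)
                    .build())
            .rows(CalciteTests.ROWS1)
            .buildMMappedIndex();
        DataSegment segment = DataSegment.builder()
            .dataSource(CalciteTests.DATASOURCE1)
            .interval(index.getDataInterval())
            .version("1")
            .shardSpec(new LinearShardSpec(0)) // single linear partition
            .size(0)
            .build();
        return new SpecificSegmentsQuerySegmentWalker(conglomerate).add(segment, index);
    }
}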

Aggregations

LinearShardSpec (org.apache.druid.timeline.partition.LinearShardSpec): 42 usages
DataSegment (org.apache.druid.timeline.DataSegment): 30 usages
Test (org.junit.Test): 18 usages
QueryableIndex (org.apache.druid.segment.QueryableIndex): 14 usages
Interval (org.joda.time.Interval): 14 usages
GeneratorSchemaInfo (org.apache.druid.segment.generator.GeneratorSchemaInfo): 12 usages
SegmentGenerator (org.apache.druid.segment.generator.SegmentGenerator): 12 usages
SpecificSegmentsQuerySegmentWalker (org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker): 12 usages
CountAggregatorFactory (org.apache.druid.query.aggregation.CountAggregatorFactory): 11 usages
DoubleSumAggregatorFactory (org.apache.druid.query.aggregation.DoubleSumAggregatorFactory): 9 usages
Setup (org.openjdk.jmh.annotations.Setup): 9 usages
SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec): 8 usages
MetadataStorageTablesConfig (org.apache.druid.metadata.MetadataStorageTablesConfig): 7 usages
IndexBuilder (org.apache.druid.segment.IndexBuilder): 7 usages
DataSegmentPusher (org.apache.druid.segment.loading.DataSegmentPusher): 7 usages
HdfsDataSegmentPusher (org.apache.druid.storage.hdfs.HdfsDataSegmentPusher): 7 usages
HdfsDataSegmentPusherConfig (org.apache.druid.storage.hdfs.HdfsDataSegmentPusherConfig): 7 usages
LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem): 7 usages
Path (org.apache.hadoop.fs.Path): 7 usages
File (java.io.File): 6 usages