
Example 16 with SegmentIdWithShardSpec

Use of org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec in project druid by druid-io.

The class RangePartitionCachingLocalSegmentAllocatorTest, method testAllocate.

private void testAllocate(InputRow row, Interval interval, int bucketId, @Nullable StringTuple partitionStart, @Nullable StringTuple partitionEnd) {
    String sequenceName = sequenceNameFunction.getSequenceName(interval, row);
    SegmentIdWithShardSpec segmentIdWithShardSpec = allocate(row, sequenceName);
    Assert.assertEquals(SegmentId.of(DATASOURCE, interval, INTERVAL_TO_VERSION.get(interval), bucketId), segmentIdWithShardSpec.asSegmentId());
    DimensionRangeBucketShardSpec shardSpec = (DimensionRangeBucketShardSpec) segmentIdWithShardSpec.getShardSpec();
    Assert.assertEquals(PARTITION_DIMENSIONS, shardSpec.getDimensions());
    Assert.assertEquals(bucketId, shardSpec.getBucketId());
    Assert.assertEquals(partitionStart, shardSpec.getStart());
    Assert.assertEquals(partitionEnd, shardSpec.getEnd());
}
Also used: SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec), DimensionRangeBucketShardSpec (org.apache.druid.timeline.partition.DimensionRangeBucketShardSpec)
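
The allocate(row, sequenceName) call above goes through the test's caching allocator; as a rough illustration of what the assertions compare against, the expected identifier could also be built by hand. This is only a sketch: the DimensionRangeBucketShardSpec argument order (bucketId, dimensions, start, end) and the SegmentIdWithShardSpec constructor used here are assumptions, and the datasource, version, dimensions, and range values are placeholders rather than the test's constants.

import com.google.common.collect.ImmutableList;
import org.apache.druid.data.input.StringTuple;
import org.apache.druid.java.util.common.Intervals;
import org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec;
import org.apache.druid.timeline.SegmentId;
import org.apache.druid.timeline.partition.DimensionRangeBucketShardSpec;
import org.joda.time.Interval;

public class SegmentIdWithShardSpecSketch {
    public static void main(String[] args) {
        final Interval interval = Intervals.of("2020-01-01/P1D");
        // Placeholder range bucket: bucket 0, two partition dimensions, open start, end at ("x", "y").
        final DimensionRangeBucketShardSpec shardSpec = new DimensionRangeBucketShardSpec(
            0,                                 // bucketId (assumed first argument)
            ImmutableList.of("dim1", "dim2"),  // partition dimensions
            null,                              // open start of the range
            StringTuple.create("x", "y")       // end of the range
        );
        final SegmentIdWithShardSpec id = new SegmentIdWithShardSpec("datasource", interval, "version", shardSpec);
        // asSegmentId() drops the shard spec details, keeping datasource, interval, version, and partition number,
        // which is what the test above compares against SegmentId.of(...).
        final SegmentId plainId = id.asSegmentId();
        System.out.println(plainId);
    }
}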

Example 17 with SegmentIdWithShardSpec

Use of org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec in project druid by druid-io.

The class IndexerSQLMetadataStorageCoordinatorTest, method testAddNumberedShardSpecAfterSingleDimensionsShardSpecWithUnknownCorePartitionSize.

@Test
public void testAddNumberedShardSpecAfterSingleDimensionsShardSpecWithUnknownCorePartitionSize() throws IOException {
    final String datasource = "datasource";
    final Interval interval = Intervals.of("2020-01-01/P1D");
    final String version = "version";
    final List<String> dimensions = ImmutableList.of("dim");
    final List<String> metrics = ImmutableList.of("met");
    final Set<DataSegment> originalSegments = new HashSet<>();
    for (int i = 0; i < 6; i++) {
        final String start = i == 0 ? null : String.valueOf(i - 1);
        final String end = i == 5 ? null : String.valueOf(i);
        originalSegments.add(new DataSegment(
            datasource, interval, version, ImmutableMap.of(), dimensions, metrics,
            new SingleDimensionShardSpec("dim", start, end, i,
                null /* emulate shardSpecs created in older versions of Druid */), 9, 10L));
    }
    coordinator.announceHistoricalSegments(originalSegments);
    final SegmentIdWithShardSpec id = coordinator.allocatePendingSegment(datasource, "seq", null, interval, NumberedPartialShardSpec.instance(), version, false);
    Assert.assertNull(id);
}
Also used: DataSegment (org.apache.druid.timeline.DataSegment), SingleDimensionShardSpec (org.apache.druid.timeline.partition.SingleDimensionShardSpec), SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec), Interval (org.joda.time.Interval), HashSet (java.util.HashSet), Test (org.junit.Test)
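
The null passed as the last SingleDimensionShardSpec argument above is what marks the core-partition count as unknown, and that appears to be why allocatePendingSegment returns null: the coordinator cannot tell how many core partitions the interval already has. For contrast, a minimal sketch of the same shard spec with an explicit core-partition count; the constructor order (dimension, start, end, partitionNum, numCorePartitions) follows the call in the test above, and the wrapper class and values here are placeholders.

import org.apache.druid.timeline.partition.SingleDimensionShardSpec;

public class SingleDimensionShardSpecSketch {
    public static void main(String[] args) {
        // Same shape as the segments announced above, but with the core-partition count set to 6
        // instead of null, i.e. the shard spec explicitly states that the interval has six core partitions.
        final SingleDimensionShardSpec withKnownCoreSize =
            new SingleDimensionShardSpec("dim", null, "0", 0, 6);
        System.out.println(withKnownCoreSize);
    }
}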

Example 18 with SegmentIdWithShardSpec

Use of org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec in project druid by druid-io.

The class IndexerSQLMetadataStorageCoordinatorTest, method testDeletePendingSegment.

@Test
public void testDeletePendingSegment() throws InterruptedException {
    final PartialShardSpec partialShardSpec = NumberedPartialShardSpec.instance();
    final String dataSource = "ds";
    final Interval interval = Intervals.of("2017-01-01/2017-02-01");
    String prevSegmentId = null;
    final DateTime begin = DateTimes.nowUtc();
    for (int i = 0; i < 10; i++) {
        final SegmentIdWithShardSpec identifier = coordinator.allocatePendingSegment(dataSource, "seq", prevSegmentId, interval, partialShardSpec, "version", false);
        prevSegmentId = identifier.toString();
    }
    Thread.sleep(100);
    final DateTime secondBegin = DateTimes.nowUtc();
    for (int i = 0; i < 5; i++) {
        final SegmentIdWithShardSpec identifier = coordinator.allocatePendingSegment(dataSource, "seq", prevSegmentId, interval, partialShardSpec, "version", false);
        prevSegmentId = identifier.toString();
    }
    final int numDeleted = coordinator.deletePendingSegmentsCreatedInInterval(dataSource, new Interval(begin, secondBegin));
    Assert.assertEquals(10, numDeleted);
}
Also used: HashBasedNumberedPartialShardSpec (org.apache.druid.timeline.partition.HashBasedNumberedPartialShardSpec), PartialShardSpec (org.apache.druid.timeline.partition.PartialShardSpec), NumberedPartialShardSpec (org.apache.druid.timeline.partition.NumberedPartialShardSpec), NumberedOverwritePartialShardSpec (org.apache.druid.timeline.partition.NumberedOverwritePartialShardSpec), SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec), DateTime (org.joda.time.DateTime), Interval (org.joda.time.Interval), Test (org.junit.Test)

Example 19 with SegmentIdWithShardSpec

Use of org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec in project druid by druid-io.

The class IndexerSQLMetadataStorageCoordinatorTest, method testAllocatePendingSegmentsWithOvershadowingSegments.

@Test
public void testAllocatePendingSegmentsWithOvershadowingSegments() throws IOException {
    final String dataSource = "ds";
    final Interval interval = Intervals.of("2017-01-01/2017-02-01");
    String prevSegmentId = null;
    for (int i = 0; i < 10; i++) {
        final SegmentIdWithShardSpec identifier = coordinator.allocatePendingSegment(dataSource, "seq", prevSegmentId, interval, new NumberedOverwritePartialShardSpec(0, 1, (short) (i + 1)), "version", false);
        Assert.assertEquals(StringUtils.format("ds_2017-01-01T00:00:00.000Z_2017-02-01T00:00:00.000Z_version%s", "_" + (i + PartitionIds.NON_ROOT_GEN_START_PARTITION_ID)), identifier.toString());
        prevSegmentId = identifier.toString();
        final Set<DataSegment> toBeAnnounced = Collections.singleton(new DataSegment(
            identifier.getDataSource(), identifier.getInterval(), identifier.getVersion(), null,
            Collections.emptyList(), Collections.emptyList(),
            ((NumberedOverwriteShardSpec) identifier.getShardSpec()).withAtomicUpdateGroupSize(1), 0, 10L));
        final Set<DataSegment> announced = coordinator.announceHistoricalSegments(toBeAnnounced);
        Assert.assertEquals(toBeAnnounced, announced);
    }
    final Collection<DataSegment> visibleSegments = coordinator.retrieveUsedSegmentsForInterval(dataSource, interval, Segments.ONLY_VISIBLE);
    Assert.assertEquals(1, visibleSegments.size());
    Assert.assertEquals(
        new DataSegment(dataSource, interval, "version", null, Collections.emptyList(), Collections.emptyList(),
            new NumberedOverwriteShardSpec(9 + PartitionIds.NON_ROOT_GEN_START_PARTITION_ID, 0, 1, (short) 9, (short) 1), 0, 10L),
        Iterables.getOnlyElement(visibleSegments));
}
Also used: NumberedOverwritePartialShardSpec (org.apache.druid.timeline.partition.NumberedOverwritePartialShardSpec), NumberedOverwriteShardSpec (org.apache.druid.timeline.partition.NumberedOverwriteShardSpec), SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec), DataSegment (org.apache.druid.timeline.DataSegment), Interval (org.joda.time.Interval), Test (org.junit.Test)

Example 20 with SegmentIdWithShardSpec

Use of org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec in project druid by druid-io.

The class IndexerSQLMetadataStorageCoordinatorTest, method testAllocatePendingSegment.

@Test
public void testAllocatePendingSegment() {
    final PartialShardSpec partialShardSpec = NumberedPartialShardSpec.instance();
    final String dataSource = "ds";
    final Interval interval = Intervals.of("2017-01-01/2017-02-01");
    final SegmentIdWithShardSpec identifier = coordinator.allocatePendingSegment(dataSource, "seq", null, interval, partialShardSpec, "version", false);
    Assert.assertEquals("ds_2017-01-01T00:00:00.000Z_2017-02-01T00:00:00.000Z_version", identifier.toString());
    final SegmentIdWithShardSpec identifier1 = coordinator.allocatePendingSegment(dataSource, "seq", identifier.toString(), interval, partialShardSpec, identifier.getVersion(), false);
    Assert.assertEquals("ds_2017-01-01T00:00:00.000Z_2017-02-01T00:00:00.000Z_version_1", identifier1.toString());
    final SegmentIdWithShardSpec identifier2 = coordinator.allocatePendingSegment(dataSource, "seq", identifier1.toString(), interval, partialShardSpec, identifier1.getVersion(), false);
    Assert.assertEquals("ds_2017-01-01T00:00:00.000Z_2017-02-01T00:00:00.000Z_version_2", identifier2.toString());
    final SegmentIdWithShardSpec identifier3 = coordinator.allocatePendingSegment(dataSource, "seq", identifier1.toString(), interval, partialShardSpec, identifier1.getVersion(), false);
    Assert.assertEquals("ds_2017-01-01T00:00:00.000Z_2017-02-01T00:00:00.000Z_version_2", identifier3.toString());
    Assert.assertEquals(identifier2, identifier3);
    final SegmentIdWithShardSpec identifier4 = coordinator.allocatePendingSegment(dataSource, "seq1", null, interval, partialShardSpec, "version", false);
    Assert.assertEquals("ds_2017-01-01T00:00:00.000Z_2017-02-01T00:00:00.000Z_version_3", identifier4.toString());
}
Also used: HashBasedNumberedPartialShardSpec (org.apache.druid.timeline.partition.HashBasedNumberedPartialShardSpec), PartialShardSpec (org.apache.druid.timeline.partition.PartialShardSpec), NumberedPartialShardSpec (org.apache.druid.timeline.partition.NumberedPartialShardSpec), NumberedOverwritePartialShardSpec (org.apache.druid.timeline.partition.NumberedOverwritePartialShardSpec), SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec), Interval (org.joda.time.Interval), Test (org.junit.Test)
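
The string assertions in this test rely on the segment-id naming convention that partition 0 carries no numeric suffix while later partitions append "_<partitionNum>". A minimal sketch of that convention using SegmentId directly, assuming SegmentId.of(...) and its toString() behave as the assertions above suggest; the datasource and version strings are placeholders.

import org.apache.druid.java.util.common.Intervals;
import org.apache.druid.timeline.SegmentId;
import org.joda.time.Interval;

public class SegmentIdNamingSketch {
    public static void main(String[] args) {
        final Interval interval = Intervals.of("2017-01-01/2017-02-01");
        // Expected (per the assertions above): ds_2017-01-01T00:00:00.000Z_2017-02-01T00:00:00.000Z_version
        System.out.println(SegmentId.of("ds", interval, "version", 0));
        // Expected: ds_2017-01-01T00:00:00.000Z_2017-02-01T00:00:00.000Z_version_3
        System.out.println(SegmentId.of("ds", interval, "version", 3));
    }
}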

Aggregations

SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec): 36 uses
Test (org.junit.Test): 23 uses
DataSegment (org.apache.druid.timeline.DataSegment): 14 uses
Interval (org.joda.time.Interval): 14 uses
NoopTask (org.apache.druid.indexing.common.task.NoopTask): 12 uses
Task (org.apache.druid.indexing.common.task.Task): 12 uses
PartialShardSpec (org.apache.druid.timeline.partition.PartialShardSpec): 11 uses
HashBasedNumberedPartialShardSpec (org.apache.druid.timeline.partition.HashBasedNumberedPartialShardSpec): 10 uses
NumberedPartialShardSpec (org.apache.druid.timeline.partition.NumberedPartialShardSpec): 10 uses
HashBasedNumberedShardSpec (org.apache.druid.timeline.partition.HashBasedNumberedShardSpec): 9 uses
LinearShardSpec (org.apache.druid.timeline.partition.LinearShardSpec): 9 uses
NumberedShardSpec (org.apache.druid.timeline.partition.NumberedShardSpec): 8 uses
NumberedOverwritePartialShardSpec (org.apache.druid.timeline.partition.NumberedOverwritePartialShardSpec): 7 uses
IOException (java.io.IOException): 6 uses
HashSet (java.util.HashSet): 6 uses
Map (java.util.Map): 6 uses
DateTime (org.joda.time.DateTime): 6 uses
ObjectMapper (com.fasterxml.jackson.databind.ObjectMapper): 5 uses
Iterables (com.google.common.collect.Iterables): 5 uses
List (java.util.List): 5 uses