Example 1 with HashBucketShardSpec

Use of org.apache.druid.timeline.partition.HashBucketShardSpec in project druid by druid-io.

From the class HashPartitionCachingLocalSegmentAllocatorTest, method allocatesCorrectShardSpec:

@Test
public void allocatesCorrectShardSpec() throws IOException {
    InputRow row = createInputRow();
    String sequenceName = sequenceNameFunction.getSequenceName(INTERVAL, row);
    SegmentIdWithShardSpec segmentIdWithShardSpec = target.allocate(row, sequenceName, null, false);
    Assert.assertEquals(SegmentId.of(DATASOURCE, INTERVAL, VERSION, PARTITION_NUM), segmentIdWithShardSpec.asSegmentId());
    HashBucketShardSpec shardSpec = (HashBucketShardSpec) segmentIdWithShardSpec.getShardSpec();
    Assert.assertEquals(PARTITION_DIMENSIONS, shardSpec.getPartitionDimensions());
    Assert.assertEquals(NUM_PARTITONS, shardSpec.getNumBuckets());
    Assert.assertEquals(PARTITION_NUM, shardSpec.getBucketId());
}
Also used: HashBucketShardSpec (org.apache.druid.timeline.partition.HashBucketShardSpec), MapBasedInputRow (org.apache.druid.data.input.MapBasedInputRow), InputRow (org.apache.druid.data.input.InputRow), SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec), Test (org.junit.Test)
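The test above checks that the allocator hands back a hash-bucketed shard spec with the expected bucket id. The core idea behind bucket assignment can be sketched without Druid on the classpath; the hash below is a simplified stand-in for `HashPartitionFunction.MURMUR3_32_ABS`, not Druid's actual Murmur3 implementation, and the class name is ours:

```java
import java.util.Arrays;
import java.util.List;

public class HashBucketSketch {
    // Simplified stand-in for MURMUR3_32_ABS: hash the row's
    // partition-dimension values, then take the absolute value
    // modulo the bucket count to pick a bucket id in [0, numBuckets).
    static int bucketId(List<String> partitionDimensionValues, int numBuckets) {
        int hash = Arrays.hashCode(partitionDimensionValues.toArray());
        return Math.abs(hash % numBuckets);
    }

    public static void main(String[] args) {
        // Identical dimension values always land in the same bucket,
        // which is what makes the allocation in the test deterministic.
        int b1 = bucketId(List.of("visitor-0", "iphone"), 3);
        int b2 = bucketId(List.of("visitor-0", "iphone"), 3);
        System.out.println(b1 == b2); // deterministic: prints true
    }
}
```

The important property is determinism: the same partition-dimension values must always map to the same bucket, so that rows for one bucket end up in one segment.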

Example 2 with HashBucketShardSpec

Use of org.apache.druid.timeline.partition.HashBucketShardSpec in project druid by druid-io.

From the class ShardSpecsTest, method testShardSpecSelectionWithNullPartitionDimension:

@Test
public void testShardSpecSelectionWithNullPartitionDimension() {
    HashBucketShardSpec spec1 = new HashBucketShardSpec(0, 2, null, HashPartitionFunction.MURMUR3_32_ABS, jsonMapper);
    HashBucketShardSpec spec2 = new HashBucketShardSpec(1, 2, null, HashPartitionFunction.MURMUR3_32_ABS, jsonMapper);
    Map<Interval, List<BucketNumberedShardSpec<?>>> shardSpecMap = new HashMap<>();
    shardSpecMap.put(Intervals.of("2014-01-01T00:00:00.000Z/2014-01-02T00:00:00.000Z"), ImmutableList.of(spec1, spec2));
    ShardSpecs shardSpecs = new ShardSpecs(shardSpecMap, Granularities.HOUR);
    String visitorId = "visitorId";
    String clientType = "clientType";
    long timestamp1 = DateTimes.of("2014-01-01T00:00:00.000Z").getMillis();
    InputRow row1 = new MapBasedInputRow(timestamp1, Lists.newArrayList(visitorId, clientType), ImmutableMap.of(visitorId, "0", clientType, "iphone"));
    long timestamp2 = DateTimes.of("2014-01-01T00:30:20.456Z").getMillis();
    InputRow row2 = new MapBasedInputRow(timestamp2, Lists.newArrayList(visitorId, clientType), ImmutableMap.of(visitorId, "0", clientType, "iphone"));
    long timestamp3 = DateTimes.of("2014-01-01T10:10:20.456Z").getMillis();
    InputRow row3 = new MapBasedInputRow(timestamp3, Lists.newArrayList(visitorId, clientType), ImmutableMap.of(visitorId, "0", clientType, "iphone"));
    ShardSpec spec3 = shardSpecs.getShardSpec(Intervals.of("2014-01-01T00:00:00.000Z/2014-01-02T00:00:00.000Z"), row1);
    ShardSpec spec4 = shardSpecs.getShardSpec(Intervals.of("2014-01-01T00:00:00.000Z/2014-01-02T00:00:00.000Z"), row2);
    ShardSpec spec5 = shardSpecs.getShardSpec(Intervals.of("2014-01-01T00:00:00.000Z/2014-01-02T00:00:00.000Z"), row3);
    Assert.assertSame(spec3, spec4);
    Assert.assertNotSame(spec3, spec5);
}
Also used: HashMap (java.util.HashMap), HashBucketShardSpec (org.apache.druid.timeline.partition.HashBucketShardSpec), MapBasedInputRow (org.apache.druid.data.input.MapBasedInputRow), InputRow (org.apache.druid.data.input.InputRow), List (java.util.List), ImmutableList (com.google.common.collect.ImmutableList), ShardSpec (org.apache.druid.timeline.partition.ShardSpec), BucketNumberedShardSpec (org.apache.druid.timeline.partition.BucketNumberedShardSpec), Interval (org.joda.time.Interval), Test (org.junit.Test)
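The assertions in this test hinge on the `Granularities.HOUR` argument: rows 1 and 2 both fall in the 00:00 hour bucket, so they resolve to the same shard spec, while row 3 falls in the 10:00 hour and resolves to a different one. The time-truncation step can be sketched in plain Java; this mirrors the grouping loosely and is not Druid's `Granularity` API:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

public class GranularityBucketSketch {
    // Truncate an epoch-millis timestamp to the start of its UTC hour,
    // loosely mirroring how Granularities.HOUR groups rows into time
    // buckets before a shard spec is selected within each bucket.
    static long hourBucket(long timestampMillis) {
        return Instant.ofEpochMilli(timestampMillis)
                .atOffset(ZoneOffset.UTC)
                .truncatedTo(ChronoUnit.HOURS)
                .toInstant()
                .toEpochMilli();
    }

    public static void main(String[] args) {
        long t1 = Instant.parse("2014-01-01T00:00:00.000Z").toEpochMilli();
        long t2 = Instant.parse("2014-01-01T00:30:20.456Z").toEpochMilli();
        long t3 = Instant.parse("2014-01-01T10:10:20.456Z").toEpochMilli();
        System.out.println(hourBucket(t1) == hourBucket(t2)); // same hour: true
        System.out.println(hourBucket(t1) == hourBucket(t3)); // different hour: false
    }
}
```

Only rows within the same time bucket compete for the same set of hash buckets, which is why the test can assert identity on the returned shard specs.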

Example 3 with HashBucketShardSpec

Use of org.apache.druid.timeline.partition.HashBucketShardSpec in project druid by druid-io.

From the class SegmentPublisherHelperTest, method testAnnotateShardSpecThrowingExceptionForBucketNumberedShardSpec:

@Test
public void testAnnotateShardSpecThrowingExceptionForBucketNumberedShardSpec() {
    final Set<DataSegment> segments = ImmutableSet.of(
        newSegment(new HashBucketShardSpec(0, 3, null, HashPartitionFunction.MURMUR3_32_ABS, new ObjectMapper())),
        newSegment(new HashBucketShardSpec(1, 3, null, HashPartitionFunction.MURMUR3_32_ABS, new ObjectMapper())),
        newSegment(new HashBucketShardSpec(2, 3, null, HashPartitionFunction.MURMUR3_32_ABS, new ObjectMapper())));
    expectedException.expect(IllegalStateException.class);
    expectedException.expectMessage("Cannot publish segments with shardSpec");
    SegmentPublisherHelper.annotateShardSpec(segments);
}
Also used: HashBucketShardSpec (org.apache.druid.timeline.partition.HashBucketShardSpec), DataSegment (org.apache.druid.timeline.DataSegment), ObjectMapper (com.fasterxml.jackson.databind.ObjectMapper), Test (org.junit.Test)
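This test encodes the rule that a bucket-numbered shard spec is a build-time artifact: its bucket id must be converted into a concrete partition number before segments can be published. The guard pattern can be sketched as below; all names here are hypothetical stand-ins, not Druid's `SegmentPublisherHelper` API:

```java
import java.util.List;

public class PublishGuardSketch {
    // Hypothetical marker mirroring the distinction between a build-time
    // bucket id and a publishable partition number.
    interface Spec {
        boolean isBucketNumbered();
    }

    record BucketSpec(int bucketId) implements Spec {
        public boolean isBucketNumbered() { return true; }
    }

    record NumberedSpec(int partitionNum) implements Spec {
        public boolean isBucketNumbered() { return false; }
    }

    // Refuse to publish any segment still carrying a build-time bucket id,
    // analogous to the IllegalStateException asserted in the test above.
    static void publish(List<Spec> specs) {
        for (Spec spec : specs) {
            if (spec.isBucketNumbered()) {
                throw new IllegalStateException("Cannot publish segments with shardSpec " + spec);
            }
        }
        // ... actual publishing would proceed here ...
    }
}
```

The point of the guard is fail-fast behavior: publishing a segment whose bucket id was never mapped to a partition number would corrupt the timeline's partition numbering, so the helper rejects the whole set up front.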

Aggregations

HashBucketShardSpec (org.apache.druid.timeline.partition.HashBucketShardSpec): 3
Test (org.junit.Test): 3
InputRow (org.apache.druid.data.input.InputRow): 2
MapBasedInputRow (org.apache.druid.data.input.MapBasedInputRow): 2
ObjectMapper (com.fasterxml.jackson.databind.ObjectMapper): 1
ImmutableList (com.google.common.collect.ImmutableList): 1
HashMap (java.util.HashMap): 1
List (java.util.List): 1
SegmentIdWithShardSpec (org.apache.druid.segment.realtime.appenderator.SegmentIdWithShardSpec): 1
DataSegment (org.apache.druid.timeline.DataSegment): 1
BucketNumberedShardSpec (org.apache.druid.timeline.partition.BucketNumberedShardSpec): 1
ShardSpec (org.apache.druid.timeline.partition.ShardSpec): 1
Interval (org.joda.time.Interval): 1