
Example 21 with LinearShardSpec

Use of io.druid.timeline.partition.LinearShardSpec in project hive by apache.

From the class TestDruidStorageHandler, the method testCommitInsertIntoWithNonExtendableSegment verifies that inserting into an interval already covered by a non-extendable (NoneShardSpec) segment fails with IllegalStateException.

@Test(expected = IllegalStateException.class)
public void testCommitInsertIntoWithNonExtendableSegment() throws MetaException, IOException {
    DerbyConnectorTestUtility connector = derbyConnectorRule.getConnector();
    MetadataStorageTablesConfig metadataStorageTablesConfig = derbyConnectorRule.metadataTablesConfigSupplier().get();
    druidStorageHandler.preCreateTable(tableMock);
    LocalFileSystem localFileSystem = FileSystem.getLocal(config);
    Path taskDirPath = new Path(tableWorkingPath, druidStorageHandler.makeStagingName());
    // Publish three existing segments; the first interval is covered by a
    // NoneShardSpec segment, which cannot take additional partitions.
    List<DataSegment> existingSegments = Arrays.asList(
        createSegment(new Path(taskDirPath, "index_old_1.zip").toString(),
            new Interval(100, 150, DateTimeZone.UTC), "v0", new NoneShardSpec()),
        createSegment(new Path(taskDirPath, "index_old_2.zip").toString(),
            new Interval(200, 250, DateTimeZone.UTC), "v0", new LinearShardSpec(0)),
        createSegment(new Path(taskDirPath, "index_old_3.zip").toString(),
            new Interval(250, 300, DateTimeZone.UTC), "v0", new LinearShardSpec(0)));
    HdfsDataSegmentPusherConfig pusherConfig = new HdfsDataSegmentPusherConfig();
    pusherConfig.setStorageDirectory(taskDirPath.toString());
    DataSegmentPusher dataSegmentPusher = new HdfsDataSegmentPusher(pusherConfig, config, DruidStorageHandlerUtils.JSON_MAPPER);
    DruidStorageHandlerUtils.publishSegmentsAndCommit(connector, metadataStorageTablesConfig, DATA_SOURCE_NAME, existingSegments, true, config, dataSegmentPusher);
    // Try appending to the non-extendable shard spec: the new segment overlaps
    // the NoneShardSpec interval, so the commit below is expected to throw.
    DataSegment conflictingSegment = createSegment(new Path(taskDirPath, DruidStorageHandlerUtils.INDEX_ZIP).toString(),
        new Interval(100, 150, DateTimeZone.UTC), "v1", new LinearShardSpec(0));
    Path descriptorPath = DruidStorageHandlerUtils.makeSegmentDescriptorOutputPath(conflictingSegment,
        new Path(taskDirPath, DruidStorageHandler.SEGMENTS_DESCRIPTOR_DIR_NAME));
    DruidStorageHandlerUtils.writeSegmentDescriptor(localFileSystem, conflictingSegment, descriptorPath);
    druidStorageHandler.commitInsertTable(tableMock, false);
}
Also used: Path (org.apache.hadoop.fs.Path), MetadataStorageTablesConfig (io.druid.metadata.MetadataStorageTablesConfig), HdfsDataSegmentPusher (io.druid.storage.hdfs.HdfsDataSegmentPusher), DataSegmentPusher (io.druid.segment.loading.DataSegmentPusher), LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), LinearShardSpec (io.druid.timeline.partition.LinearShardSpec), HdfsDataSegmentPusherConfig (io.druid.storage.hdfs.HdfsDataSegmentPusherConfig), NoneShardSpec (io.druid.timeline.partition.NoneShardSpec), DataSegment (io.druid.timeline.DataSegment), Interval (org.joda.time.Interval), Test (org.junit.Test)
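
Why this commit fails, in isolation: LinearShardSpec carries only a partition number, so an interval already holding linear partitions can be extended by appending the next partition number, while NoneShardSpec marks an unpartitioned segment and leaves no valid way to add another partition to its interval. The sketch below is a minimal, hypothetical illustration of that rule, assuming only the LinearShardSpec and NoneShardSpec constructors and the ShardSpec.getPartitionNum() accessor seen in this example; the nextPartition helper is not part of Druid or Hive.

import io.druid.timeline.partition.LinearShardSpec;
import io.druid.timeline.partition.NoneShardSpec;
import io.druid.timeline.partition.ShardSpec;

public class ShardSpecSketch {

    // Hypothetical helper: compute the shard spec for a segment appended to
    // an interval whose latest segment has the given shard spec.
    static ShardSpec nextPartition(ShardSpec existing) {
        if (existing instanceof LinearShardSpec) {
            // Linear partitions extend by incrementing the partition number.
            return new LinearShardSpec(existing.getPartitionNum() + 1);
        }
        // Mirrors the failure the test expects: a NoneShardSpec segment
        // (or any non-linear spec) cannot take additional partitions.
        throw new IllegalStateException("Cannot append to non-extendable shard spec: " + existing);
    }

    public static void main(String[] args) {
        System.out.println(nextPartition(new LinearShardSpec(0))); // partition number 1
        System.out.println(nextPartition(new NoneShardSpec()));    // IllegalStateException
    }
}

In the test above, the same conflict surfaces from druidStorageHandler.commitInsertTable: the conflicting segment targets Interval(100, 150), which is already covered by the NoneShardSpec segment, so the commit ends in the IllegalStateException the @Test annotation expects.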

Aggregations

LinearShardSpec (io.druid.timeline.partition.LinearShardSpec): 21 usages
Interval (org.joda.time.Interval): 17 usages
DataSegment (io.druid.timeline.DataSegment): 12 usages
Test (org.junit.Test): 12 usages
DataSegmentPusher (io.druid.segment.loading.DataSegmentPusher): 8 usages
File (java.io.File): 8 usages
MetadataStorageTablesConfig (io.druid.metadata.MetadataStorageTablesConfig): 7 usages
HdfsDataSegmentPusher (io.druid.storage.hdfs.HdfsDataSegmentPusher): 7 usages
HdfsDataSegmentPusherConfig (io.druid.storage.hdfs.HdfsDataSegmentPusherConfig): 7 usages
LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem): 7 usages
Path (org.apache.hadoop.fs.Path): 7 usages
CountAggregatorFactory (io.druid.query.aggregation.CountAggregatorFactory): 5 usages
SegmentIdentifier (io.druid.segment.realtime.appenderator.SegmentIdentifier): 5 usages
AggregatorFactory (io.druid.query.aggregation.AggregatorFactory): 4 usages
IOException (java.io.IOException): 4 usages
LongSumAggregatorFactory (io.druid.query.aggregation.LongSumAggregatorFactory): 3 usages
QueryableIndex (io.druid.segment.QueryableIndex): 3 usages
DateTime (org.joda.time.DateTime): 3 usages
ObjectMapper (com.fasterxml.jackson.databind.ObjectMapper): 2 usages
InputRow (io.druid.data.input.InputRow): 2 usages