Example 11 with PartitionsSpec

Use of org.apache.druid.indexer.partitions.PartitionsSpec in project druid by druid-io.

From class PartialRangeSegmentGenerateTaskTest, method requiresMultiDimensionPartitions.

@Test
public void requiresMultiDimensionPartitions() {
    exception.expect(IllegalArgumentException.class);
    exception.expectMessage("range or single_dim partitionsSpec required");
    PartitionsSpec partitionsSpec = new HashedPartitionsSpec(null, 1, null);
    ParallelIndexTuningConfig tuningConfig = new ParallelIndexTestingFactory.TuningConfigBuilder().partitionsSpec(partitionsSpec).build();
    new PartialRangeSegmentGenerateTaskBuilder().tuningConfig(tuningConfig).build();
}
Also used : HashedPartitionsSpec(org.apache.druid.indexer.partitions.HashedPartitionsSpec) DynamicPartitionsSpec(org.apache.druid.indexer.partitions.DynamicPartitionsSpec) PartitionsSpec(org.apache.druid.indexer.partitions.PartitionsSpec) Test(org.junit.Test)
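
For contrast, here is a minimal sketch of the accepted path, reusing the builders from the test above. The SingleDimensionPartitionsSpec argument order (targetRowsPerSegment, maxRowsPerSegment, partitionDimension, assumeGrouped) and the dimension name "dim" are assumptions for illustration, not taken from this test.

// Hypothetical happy-path counterpart to the test above: a single_dim spec
// should satisfy the "range or single_dim partitionsSpec required" check.
PartitionsSpec singleDimSpec = new SingleDimensionPartitionsSpec(1000, null, "dim", false);
ParallelIndexTuningConfig validTuningConfig =
    new ParallelIndexTestingFactory.TuningConfigBuilder().partitionsSpec(singleDimSpec).build();
// Expected: construction completes without an IllegalArgumentException.
new PartialRangeSegmentGenerateTaskBuilder().tuningConfig(validTuningConfig).build();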

Example 12 with PartitionsSpec

Use of org.apache.druid.indexer.partitions.PartitionsSpec in project druid by druid-io.

From class ParallelIndexSupervisorTaskSerdeTest, method forceGuaranteedRollupWithHashPartitionsValid.

@Test
public void forceGuaranteedRollupWithHashPartitionsValid() {
    Integer numShards = 2;
    ParallelIndexSupervisorTask task = new ParallelIndexSupervisorTaskBuilder()
        .ingestionSpec(
            new ParallelIndexIngestionSpecBuilder()
                .forceGuaranteedRollup(true)
                .partitionsSpec(new HashedPartitionsSpec(null, numShards, null))
                .inputIntervals(INTERVALS)
                .build())
        .build();
    PartitionsSpec partitionsSpec = task.getIngestionSchema().getTuningConfig().getPartitionsSpec();
    Assert.assertThat(partitionsSpec, CoreMatchers.instanceOf(HashedPartitionsSpec.class));
}
Also used : HashedPartitionsSpec(org.apache.druid.indexer.partitions.HashedPartitionsSpec) PartitionsSpec(org.apache.druid.indexer.partitions.PartitionsSpec) SingleDimensionPartitionsSpec(org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec) Test(org.junit.Test)
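
The test above exercises the valid combination; the following is a hedged sketch of the rejected counterpart, assuming the same builders and that validation throws IllegalArgumentException when forceGuaranteedRollup is paired with a best-effort (dynamic) spec. The DynamicPartitionsSpec arguments (maxRowsPerSegment, maxTotalRows) are illustrative.

// Contrasting sketch: guaranteed rollup cannot be promised with dynamic partitioning.
try {
    new ParallelIndexSupervisorTaskBuilder()
        .ingestionSpec(
            new ParallelIndexIngestionSpecBuilder()
                .forceGuaranteedRollup(true)
                .partitionsSpec(new DynamicPartitionsSpec(1000, null))
                .inputIntervals(INTERVALS)
                .build())
        .build();
    Assert.fail("expected IllegalArgumentException");
} catch (IllegalArgumentException e) {
    // expected: a dynamic spec is not guaranteed-rollup compatible
}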

Example 13 with PartitionsSpec

Use of org.apache.druid.indexer.partitions.PartitionsSpec in project druid by druid-io.

From class ITAutoCompactionUpgradeTest, method testUpgradeAutoCompactionConfigurationWhenConfigurationFromOlderVersionAlreadyExist.

@Test
public void testUpgradeAutoCompactionConfigurationWhenConfigurationFromOlderVersionAlreadyExist() throws Exception {
    // Verify that the compaction config already exists. This config was inserted manually into the
    // database using a SQL script; the auto-compaction configuration payload is from Druid 0.21.0.
    CoordinatorCompactionConfig coordinatorCompactionConfig = compactionResource.getCoordinatorCompactionConfigs();
    DataSourceCompactionConfig foundDataSourceCompactionConfig = null;
    for (DataSourceCompactionConfig dataSourceCompactionConfig : coordinatorCompactionConfig.getCompactionConfigs()) {
        if (dataSourceCompactionConfig.getDataSource().equals(UPGRADE_DATASOURCE_NAME)) {
            foundDataSourceCompactionConfig = dataSourceCompactionConfig;
        }
    }
    Assert.assertNotNull(foundDataSourceCompactionConfig);
    // Now submit a new auto compaction configuration
    PartitionsSpec newPartitionsSpec = new DynamicPartitionsSpec(4000, null);
    Period newSkipOffset = Period.seconds(0);
    DataSourceCompactionConfig compactionConfig = new DataSourceCompactionConfig(
        UPGRADE_DATASOURCE_NAME,
        null,
        null,
        null,
        newSkipOffset,
        new UserCompactionTaskQueryTuningConfig(
            null, null, null,
            new MaxSizeSplitHintSpec(null, 1),
            newPartitionsSpec,
            null, null, null, null, null,
            1,
            null, null, null, null, null,
            1
        ),
        new UserCompactionTaskGranularityConfig(Granularities.YEAR, null, null),
        null,
        null,
        null,
        new UserCompactionTaskIOConfig(true),
        null
    );
    compactionResource.submitCompactionConfig(compactionConfig);
    // Wait for compaction config to persist
    Thread.sleep(2000);
    // Verify that compaction was successfully updated
    coordinatorCompactionConfig = compactionResource.getCoordinatorCompactionConfigs();
    foundDataSourceCompactionConfig = null;
    for (DataSourceCompactionConfig dataSourceCompactionConfig : coordinatorCompactionConfig.getCompactionConfigs()) {
        if (dataSourceCompactionConfig.getDataSource().equals(UPGRADE_DATASOURCE_NAME)) {
            foundDataSourceCompactionConfig = dataSourceCompactionConfig;
        }
    }
    Assert.assertNotNull(foundDataSourceCompactionConfig);
    Assert.assertNotNull(foundDataSourceCompactionConfig.getTuningConfig());
    Assert.assertEquals(foundDataSourceCompactionConfig.getTuningConfig().getPartitionsSpec(), newPartitionsSpec);
    Assert.assertEquals(foundDataSourceCompactionConfig.getSkipOffsetFromLatest(), newSkipOffset);
}
Also used : CoordinatorCompactionConfig(org.apache.druid.server.coordinator.CoordinatorCompactionConfig) DynamicPartitionsSpec(org.apache.druid.indexer.partitions.DynamicPartitionsSpec) DataSourceCompactionConfig(org.apache.druid.server.coordinator.DataSourceCompactionConfig) PartitionsSpec(org.apache.druid.indexer.partitions.PartitionsSpec) UserCompactionTaskIOConfig(org.apache.druid.server.coordinator.UserCompactionTaskIOConfig) Period(org.joda.time.Period) UserCompactionTaskQueryTuningConfig(org.apache.druid.server.coordinator.UserCompactionTaskQueryTuningConfig) UserCompactionTaskGranularityConfig(org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig) MaxSizeSplitHintSpec(org.apache.druid.data.input.MaxSizeSplitHintSpec) Test(org.testng.annotations.Test) AbstractIndexerTest(org.apache.druid.tests.indexer.AbstractIndexerTest)
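
The fixed Thread.sleep(2000) above can race the coordinator. Here is a sketch of a polling alternative using only the accessors already shown in this test; the 30-second deadline and 500 ms poll interval are illustrative assumptions.

// Poll until the updated config is visible, instead of sleeping a fixed interval.
// Checked exceptions are covered by the enclosing "throws Exception", as in the test above.
long deadline = System.currentTimeMillis() + 30_000;
DataSourceCompactionConfig updated = null;
while (System.currentTimeMillis() < deadline && updated == null) {
    for (DataSourceCompactionConfig c
            : compactionResource.getCoordinatorCompactionConfigs().getCompactionConfigs()) {
        if (c.getDataSource().equals(UPGRADE_DATASOURCE_NAME)
                && c.getTuningConfig() != null
                && newPartitionsSpec.equals(c.getTuningConfig().getPartitionsSpec())) {
            updated = c;
        }
    }
    if (updated == null) {
        Thread.sleep(500);
    }
}
Assert.assertNotNull(updated);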

Example 14 with PartitionsSpec

Use of org.apache.druid.indexer.partitions.PartitionsSpec in project druid by druid-io.

From class ITAppendBatchIndexTest, method submitIngestionTaskAndVerify.

private void submitIngestionTaskAndVerify(String indexDatasource, PartitionsSpec partitionsSpec, boolean appendToExisting, Pair<Boolean, Boolean> segmentAvailabilityConfirmationPair) throws Exception {
    InputFormatDetails inputFormatDetails = InputFormatDetails.JSON;
    Map<String, Object> inputFormatMap = new ImmutableMap.Builder<String, Object>().put("type", inputFormatDetails.getInputFormatType()).build();
    final Function<String, String> sqlInputSourcePropsTransform = spec -> {
        try {
            spec = StringUtils.replace(spec, "%%PARTITIONS_SPEC%%", jsonMapper.writeValueAsString(partitionsSpec));
            spec = StringUtils.replace(spec, "%%INPUT_SOURCE_FILTER%%", "*" + inputFormatDetails.getFileExtension());
            spec = StringUtils.replace(spec, "%%INPUT_SOURCE_BASE_DIR%%", "/resources/data/batch_index" + inputFormatDetails.getFolderSuffix());
            spec = StringUtils.replace(spec, "%%INPUT_FORMAT%%", jsonMapper.writeValueAsString(inputFormatMap));
            spec = StringUtils.replace(spec, "%%APPEND_TO_EXISTING%%", jsonMapper.writeValueAsString(appendToExisting));
            spec = StringUtils.replace(spec, "%%DROP_EXISTING%%", jsonMapper.writeValueAsString(false));
            if (partitionsSpec instanceof DynamicPartitionsSpec) {
                spec = StringUtils.replace(spec, "%%FORCE_GUARANTEED_ROLLUP%%", jsonMapper.writeValueAsString(false));
            } else if (partitionsSpec instanceof HashedPartitionsSpec || partitionsSpec instanceof SingleDimensionPartitionsSpec) {
                spec = StringUtils.replace(spec, "%%FORCE_GUARANTEED_ROLLUP%%", jsonMapper.writeValueAsString(true));
            }
            return spec;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    };
    doIndexTest(indexDatasource, INDEX_TASK, sqlInputSourcePropsTransform, null, false, false, true, segmentAvailabilityConfirmationPair);
}
Also used : Logger(org.apache.druid.java.util.common.logger.Logger) DataProvider(org.testng.annotations.DataProvider) ImmutableMap(com.google.common.collect.ImmutableMap) StringUtils(org.apache.druid.java.util.common.StringUtils) DruidTestModuleFactory(org.apache.druid.testing.guice.DruidTestModuleFactory) HashedPartitionsSpec(org.apache.druid.indexer.partitions.HashedPartitionsSpec) Test(org.testng.annotations.Test) UUID(java.util.UUID) Function(java.util.function.Function) Guice(org.testng.annotations.Guice) Pair(org.apache.druid.java.util.common.Pair) List(java.util.List) ImmutableList(com.google.common.collect.ImmutableList) SingleDimensionPartitionsSpec(org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec) TestNGGroup(org.apache.druid.tests.TestNGGroup) Closeable(java.io.Closeable) Map(java.util.Map) DynamicPartitionsSpec(org.apache.druid.indexer.partitions.DynamicPartitionsSpec) PartitionsSpec(org.apache.druid.indexer.partitions.PartitionsSpec)
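
The transform above performs simple token substitution on a JSON spec template. Here is a standalone sketch of the same pattern; the template string and the DynamicPartitionsSpec values are assumptions for illustration, not the actual test resource.

// Hypothetical template fragment; real templates live in the test resources.
String template = "{\"partitionsSpec\": %%PARTITIONS_SPEC%%, \"appendToExisting\": %%APPEND_TO_EXISTING%%}";
// Serialize a dynamic spec capped at 5,000,000 rows per segment (illustrative values).
String rendered = StringUtils.replace(template, "%%PARTITIONS_SPEC%%",
        jsonMapper.writeValueAsString(new DynamicPartitionsSpec(5_000_000, null)));
rendered = StringUtils.replace(rendered, "%%APPEND_TO_EXISTING%%", jsonMapper.writeValueAsString(true));
// rendered is now a JSON fragment ready to embed in an ingestion spec;
// writeValueAsString's checked exception is covered by the enclosing "throws Exception".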

Example 15 with PartitionsSpec

Use of org.apache.druid.indexer.partitions.PartitionsSpec in project druid by druid-io.

From class ITBestEffortRollupParallelIndexTest, method testIndexDataAwaitSegmentAvailabilityFailsButTaskSucceeds.

/**
 * Test a non-zero value for awaitSegmentAvailabilityTimeoutMillis. Sets the config value to 1 millisecond
 * and pauses coordination to confirm that the task still succeeds even if the job could not confirm that
 * the segments were loaded by the time the timeout occurred.
 *
 * @param partitionsSpec the partitions spec under test; must not be guaranteed-rollup compatible
 * @throws Exception if the indexing task or segment-load verification fails
 */
@Test(dataProvider = "resources")
public void testIndexDataAwaitSegmentAvailabilityFailsButTaskSucceeds(PartitionsSpec partitionsSpec) throws Exception {
    try (final Closeable ignored1 = unloader(INDEX_DATASOURCE + config.getExtraDatasourceNameSuffix())) {
        coordinatorClient.postDynamicConfig(DYNAMIC_CONFIG_PAUSED);
        boolean forceGuaranteedRollup = partitionsSpec.isForceGuaranteedRollupCompatible();
        Assert.assertFalse(forceGuaranteedRollup, "partitionsSpec does not support best-effort rollup");
        final Function<String, String> rollupTransform = spec -> {
            try {
                spec = StringUtils.replace(spec, "%%FORCE_GUARANTEED_ROLLUP%%", Boolean.toString(false));
                spec = StringUtils.replace(spec, "%%SEGMENT_AVAIL_TIMEOUT_MILLIS%%", jsonMapper.writeValueAsString("1"));
                return StringUtils.replace(spec, "%%PARTITIONS_SPEC%%", jsonMapper.writeValueAsString(partitionsSpec));
            } catch (JsonProcessingException e) {
                throw new RuntimeException(e);
            }
        };
        doIndexTest(INDEX_DATASOURCE, INDEX_TASK, rollupTransform, INDEX_QUERIES_RESOURCE, false, false, false, new Pair<>(true, false));
        coordinatorClient.postDynamicConfig(DYNAMIC_CONFIG_DEFAULT);
        ITRetryUtil.retryUntilTrue(() -> coordinator.areSegmentsLoaded(INDEX_DATASOURCE + config.getExtraDatasourceNameSuffix()), "Segment Load");
    }
}
Also used : DataProvider(org.testng.annotations.DataProvider) ITRetryUtil(org.apache.druid.testing.utils.ITRetryUtil) Inject(com.google.inject.Inject) StringUtils(org.apache.druid.java.util.common.StringUtils) DruidTestModuleFactory(org.apache.druid.testing.guice.DruidTestModuleFactory) JsonProcessingException(com.fasterxml.jackson.core.JsonProcessingException) Test(org.testng.annotations.Test) Function(java.util.function.Function) Guice(org.testng.annotations.Guice) Pair(org.apache.druid.java.util.common.Pair) CoordinatorDynamicConfig(org.apache.druid.server.coordinator.CoordinatorDynamicConfig) CoordinatorResourceTestClient(org.apache.druid.testing.clients.CoordinatorResourceTestClient) TestNGGroup(org.apache.druid.tests.TestNGGroup) Assert(org.testng.Assert) Closeable(java.io.Closeable) DynamicPartitionsSpec(org.apache.druid.indexer.partitions.DynamicPartitionsSpec) PartitionsSpec(org.apache.druid.indexer.partitions.PartitionsSpec)
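
A hedged sketch of what the "resources" data provider referenced by @Test(dataProvider = "resources") might supply: since the assertion above requires isForceGuaranteedRollupCompatible() to be false, only best-effort (dynamic) specs qualify. The exact values are assumptions; the real provider lives elsewhere in the test class hierarchy.

// Hypothetical provider; values are illustrative.
@DataProvider(name = "resources")
public static Object[][] resources() {
    return new Object[][]{
            {new DynamicPartitionsSpec(null, null)},
            {new DynamicPartitionsSpec(1_000_000, null)}
    };
}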

Aggregations

PartitionsSpec (org.apache.druid.indexer.partitions.PartitionsSpec): 34
Test (org.junit.Test): 19
Map (java.util.Map): 17
ArrayList (java.util.ArrayList): 16
DataSegment (org.apache.druid.timeline.DataSegment): 16
Period (org.joda.time.Period): 16
ImmutableMap (com.google.common.collect.ImmutableMap): 15
HashedPartitionsSpec (org.apache.druid.indexer.partitions.HashedPartitionsSpec): 15
IndexSpec (org.apache.druid.segment.IndexSpec): 15
CompactionState (org.apache.druid.timeline.CompactionState): 14
DynamicPartitionsSpec (org.apache.druid.indexer.partitions.DynamicPartitionsSpec): 11
UserCompactionTaskGranularityConfig (org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig): 11
SingleDimensionPartitionsSpec (org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec): 10
StringUtils (org.apache.druid.java.util.common.StringUtils): 9
Function (java.util.function.Function): 8
IOException (java.io.IOException): 7
List (java.util.List): 7
Pair (org.apache.druid.java.util.common.Pair): 5
Interval (org.joda.time.Interval): 5
Test (org.testng.annotations.Test): 5