
Example 16 with UserCompactionTaskGranularityConfig

use of org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig in project druid by druid-io.

the class ITAutoCompactionTest method testAutoCompactionDutyWithSegmentGranularityAndExistingCompactedSegmentsHaveSameSegmentGranularity.

@Test
public void testAutoCompactionDutyWithSegmentGranularityAndExistingCompactedSegmentsHaveSameSegmentGranularity() throws Exception {
    loadData(INDEX_TASK);
    try (final Closeable ignored = unloader(fullDatasourceName)) {
        final List<String> intervalsBeforeCompaction = coordinator.getSegmentIntervals(fullDatasourceName);
        intervalsBeforeCompaction.sort(null);
        // 4 segments across 2 days (4 total)...
        verifySegmentsCount(4);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        // Compacted without SegmentGranularity in auto compaction config
        submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, NO_SKIP_OFFSET);
        forceTriggerAutoCompaction(2);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        verifySegmentsCompacted(2, MAX_ROWS_PER_SEGMENT_COMPACTED);
        List<TaskResponseObject> compactTasksBefore = indexer.getCompleteTasksForDataSource(fullDatasourceName);
        // Segments were compacted and already have DAY granularity since the data was initially ingested with DAY granularity.
        // Now set auto compaction with DAY granularity in the granularitySpec
        Granularity newGranularity = Granularities.DAY;
        submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null, null));
        forceTriggerAutoCompaction(2);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        verifySegmentsCompacted(2, MAX_ROWS_PER_SEGMENT_COMPACTED);
        // There should be no new compaction task since segmentGranularity is already DAY
        List<TaskResponseObject> compactTasksAfter = indexer.getCompleteTasksForDataSource(fullDatasourceName);
        Assert.assertEquals(compactTasksAfter.size(), compactTasksBefore.size());
    }
}
Also used : TaskResponseObject(org.apache.druid.testing.clients.TaskResponseObject) Closeable(java.io.Closeable) UserCompactionTaskGranularityConfig(org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig) Granularity(org.apache.druid.java.util.common.granularity.Granularity) Test(org.testng.annotations.Test) AbstractIndexerTest(org.apache.druid.tests.indexer.AbstractIndexerTest) AbstractITBatchIndexTest(org.apache.druid.tests.indexer.AbstractITBatchIndexTest)

Example 17 with UserCompactionTaskGranularityConfig

use of org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig in project druid by druid-io.

the class ITAutoCompactionTest method testAutoCompactionDutyWithSegmentGranularityAndMixedVersion.

@Test
public void testAutoCompactionDutyWithSegmentGranularityAndMixedVersion() throws Exception {
    loadData(INDEX_TASK);
    try (final Closeable ignored = unloader(fullDatasourceName)) {
        final List<String> intervalsBeforeCompaction = coordinator.getSegmentIntervals(fullDatasourceName);
        intervalsBeforeCompaction.sort(null);
        // 4 segments across 2 days (4 total)...
        verifySegmentsCount(4);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, Period.days(1));
        // ...compacted into 1 new segment for 1 day. 1 day compacted and 1 day skipped/remains uncompacted. (3 total)
        forceTriggerAutoCompaction(3);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        verifySegmentsCompacted(1, MAX_ROWS_PER_SEGMENT_COMPACTED);
        Granularity newGranularity = Granularities.YEAR;
        submitCompactionConfig(1000, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null, null));
        LOG.info("Auto compaction test with YEAR segment granularity");
        List<String> expectedIntervalAfterCompaction = new ArrayList<>();
        for (String interval : intervalsBeforeCompaction) {
            for (Interval newinterval : newGranularity.getIterable(new Interval(interval, ISOChronology.getInstanceUTC()))) {
                expectedIntervalAfterCompaction.add(newinterval.toString());
            }
        }
        // Since the new segmentGranularity is YEAR, it will have mixed versions inside the same time chunk
        // There will be an old version (for the first day interval) from the initial ingestion and
        // a newer version (for the second day interval) from the first compaction
        forceTriggerAutoCompaction(1);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        verifySegmentsCompacted(1, 1000);
        checkCompactionIntervals(expectedIntervalAfterCompaction);
    }
}
Also used : Closeable(java.io.Closeable) ArrayList(java.util.ArrayList) UserCompactionTaskGranularityConfig(org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig) Granularity(org.apache.druid.java.util.common.granularity.Granularity) Interval(org.joda.time.Interval) Test(org.testng.annotations.Test) AbstractIndexerTest(org.apache.druid.tests.indexer.AbstractIndexerTest) AbstractITBatchIndexTest(org.apache.druid.tests.indexer.AbstractITBatchIndexTest)

Example 18 with UserCompactionTaskGranularityConfig

use of org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig in project druid by druid-io.

the class ITAutoCompactionTest method testAutoCompactionDutyWithSegmentGranularityAndSmallerSegmentGranularityCoveringMultipleSegmentsInTimelineAndDropExistingTrue.

@Test
public void testAutoCompactionDutyWithSegmentGranularityAndSmallerSegmentGranularityCoveringMultipleSegmentsInTimelineAndDropExistingTrue() throws Exception {
    loadData(INDEX_TASK);
    try (final Closeable ignored = unloader(fullDatasourceName)) {
        final List<String> intervalsBeforeCompaction = coordinator.getSegmentIntervals(fullDatasourceName);
        intervalsBeforeCompaction.sort(null);
        // 4 segments across 2 days (4 total)...
        verifySegmentsCount(4);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        Granularity newGranularity = Granularities.YEAR;
        // Set dropExisting to true
        submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null, null), true);
        List<String> expectedIntervalAfterCompaction = new ArrayList<>();
        // We will have one segment with interval of 2013-01-01/2014-01-01 (compacted with YEAR)
        for (String interval : intervalsBeforeCompaction) {
            for (Interval newinterval : newGranularity.getIterable(new Interval(interval, ISOChronology.getInstanceUTC()))) {
                expectedIntervalAfterCompaction.add(newinterval.toString());
            }
        }
        forceTriggerAutoCompaction(1);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        verifySegmentsCompacted(1, MAX_ROWS_PER_SEGMENT_COMPACTED);
        checkCompactionIntervals(expectedIntervalAfterCompaction);
        loadData(INDEX_TASK);
        verifySegmentsCount(5);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        // 5 segments. 1 compacted YEAR segment and 4 newly ingested DAY segments across 2 days
        // We will have one segment with interval of 2013-01-01/2014-01-01 (compacted with YEAR) from the earlier compaction,
        // two segments with interval of 2013-08-31/2013-09-01 (newly ingested with DAY)
        // and two segments with interval of 2013-09-01/2013-09-02 (newly ingested with DAY)
        expectedIntervalAfterCompaction.addAll(intervalsBeforeCompaction);
        checkCompactionIntervals(expectedIntervalAfterCompaction);
        newGranularity = Granularities.MONTH;
        // Set dropExisting to true
        submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null, null), true);
        // Since dropExisting is set to true...
        // This will submit a single compaction task for interval of 2013-01-01/2014-01-01 with MONTH granularity
        expectedIntervalAfterCompaction = new ArrayList<>();
        // and one segment with interval of 2013-10-01/2013-11-01 (compacted with MONTH)
        for (String interval : intervalsBeforeCompaction) {
            for (Interval newinterval : Granularities.MONTH.getIterable(new Interval(interval, ISOChronology.getInstanceUTC()))) {
                expectedIntervalAfterCompaction.add(newinterval.toString());
            }
        }
        forceTriggerAutoCompaction(2);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        verifySegmentsCompacted(2, MAX_ROWS_PER_SEGMENT_COMPACTED);
        checkCompactionIntervals(expectedIntervalAfterCompaction);
    }
}
Also used : Closeable(java.io.Closeable) ArrayList(java.util.ArrayList) UserCompactionTaskGranularityConfig(org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig) Granularity(org.apache.druid.java.util.common.granularity.Granularity) Interval(org.joda.time.Interval) Test(org.testng.annotations.Test) AbstractIndexerTest(org.apache.druid.tests.indexer.AbstractIndexerTest) AbstractITBatchIndexTest(org.apache.druid.tests.indexer.AbstractITBatchIndexTest)

Example 19 with UserCompactionTaskGranularityConfig

use of org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig in project druid by druid-io.

the class ITAutoCompactionTest method testAutoCompactionDutyWithSegmentGranularityAndWithDropExistingFalse.

@Test
public void testAutoCompactionDutyWithSegmentGranularityAndWithDropExistingFalse() throws Exception {
    loadData(INDEX_TASK);
    try (final Closeable ignored = unloader(fullDatasourceName)) {
        final List<String> intervalsBeforeCompaction = coordinator.getSegmentIntervals(fullDatasourceName);
        intervalsBeforeCompaction.sort(null);
        // 4 segments across 2 days (4 total)...
        verifySegmentsCount(4);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        Granularity newGranularity = Granularities.YEAR;
        // Set dropExisting to false
        submitCompactionConfig(1000, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null, null), false);
        LOG.info("Auto compaction test with YEAR segment granularity");
        List<String> expectedIntervalAfterCompaction = new ArrayList<>();
        for (String interval : intervalsBeforeCompaction) {
            for (Interval newinterval : newGranularity.getIterable(new Interval(interval, ISOChronology.getInstanceUTC()))) {
                expectedIntervalAfterCompaction.add(newinterval.toString());
            }
        }
        forceTriggerAutoCompaction(1);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        verifySegmentsCompacted(1, 1000);
        checkCompactionIntervals(expectedIntervalAfterCompaction);
        newGranularity = Granularities.DAY;
        // Set dropExisting to false
        submitCompactionConfig(1000, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null, null), false);
        LOG.info("Auto compaction test with DAY segment granularity");
        // The expected intervals now include both the DAY intervals and the earlier YEAR interval
        // (2013-08-31 to 2013-09-01, 2013-09-01 to 2013-09-02 and 2013-01-01 to 2014-01-01)
        for (String interval : intervalsBeforeCompaction) {
            for (Interval newinterval : newGranularity.getIterable(new Interval(interval, ISOChronology.getInstanceUTC()))) {
                expectedIntervalAfterCompaction.add(newinterval.toString());
            }
        }
        forceTriggerAutoCompaction(3);
        verifyQuery(INDEX_QUERIES_RESOURCE);
        verifySegmentsCompacted(3, 1000);
        checkCompactionIntervals(expectedIntervalAfterCompaction);
    }
}
Also used : Closeable(java.io.Closeable) ArrayList(java.util.ArrayList) UserCompactionTaskGranularityConfig(org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig) Granularity(org.apache.druid.java.util.common.granularity.Granularity) Interval(org.joda.time.Interval) Test(org.testng.annotations.Test) AbstractIndexerTest(org.apache.druid.tests.indexer.AbstractIndexerTest) AbstractITBatchIndexTest(org.apache.druid.tests.indexer.AbstractITBatchIndexTest)

Example 20 with UserCompactionTaskGranularityConfig

use of org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig in project druid by druid-io.

the class ITAutoCompactionTest method testAutoCompactionDutyWithRollup.

@Test
public void testAutoCompactionDutyWithRollup() throws Exception {
    final ISOChronology chrono = ISOChronology.getInstance(DateTimes.inferTzFromString("America/Los_Angeles"));
    Map<String, Object> specs = ImmutableMap.of("%%GRANULARITYSPEC%%", new UniformGranularitySpec(Granularities.DAY, Granularities.DAY, false, ImmutableList.of(new Interval("2013-08-31/2013-09-02", chrono))));
    loadData(INDEX_TASK_WITH_GRANULARITY_SPEC, specs);
    try (final Closeable ignored = unloader(fullDatasourceName)) {
        Map<String, Object> expectedResult = ImmutableMap.of("%%FIELD_TO_QUERY%%", "added", "%%EXPECTED_COUNT_RESULT%%", 2, "%%EXPECTED_SCAN_RESULT%%", ImmutableList.of(ImmutableMap.of("events", ImmutableList.of(ImmutableList.of(57.0), ImmutableList.of(459.0)))));
        verifyQuery(INDEX_ROLLUP_QUERIES_RESOURCE, expectedResult);
        submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(null, null, true), false);
        forceTriggerAutoCompaction(2);
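        // With rollup enabled, the two pre-compaction rows (57.0 and 459.0) are expected to
        // roll up into a single row with added = 516.0, which the next query verifies.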
        expectedResult = ImmutableMap.of("%%FIELD_TO_QUERY%%", "added", "%%EXPECTED_COUNT_RESULT%%", 1, "%%EXPECTED_SCAN_RESULT%%", ImmutableList.of(ImmutableMap.of("events", ImmutableList.of(ImmutableList.of(516.0)))));
        verifyQuery(INDEX_ROLLUP_QUERIES_RESOURCE, expectedResult);
        verifySegmentsCompacted(2, MAX_ROWS_PER_SEGMENT_COMPACTED);
        List<TaskResponseObject> compactTasksBefore = indexer.getCompleteTasksForDataSource(fullDatasourceName);
        // Verify that rollup segments do not get compacted again
        forceTriggerAutoCompaction(2);
        List<TaskResponseObject> compactTasksAfter = indexer.getCompleteTasksForDataSource(fullDatasourceName);
        Assert.assertEquals(compactTasksAfter.size(), compactTasksBefore.size());
    }
}
Also used : ISOChronology(org.joda.time.chrono.ISOChronology) UniformGranularitySpec(org.apache.druid.segment.indexing.granularity.UniformGranularitySpec) TaskResponseObject(org.apache.druid.testing.clients.TaskResponseObject) Closeable(java.io.Closeable) TaskResponseObject(org.apache.druid.testing.clients.TaskResponseObject) UserCompactionTaskGranularityConfig(org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig) Interval(org.joda.time.Interval) Test(org.testng.annotations.Test) AbstractIndexerTest(org.apache.druid.tests.indexer.AbstractIndexerTest) AbstractITBatchIndexTest(org.apache.druid.tests.indexer.AbstractITBatchIndexTest)

Aggregations

UserCompactionTaskGranularityConfig (org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig): 36
Period (org.joda.time.Period): 27
Test (org.junit.Test): 26
ArrayList (java.util.ArrayList): 24
DataSegment (org.apache.druid.timeline.DataSegment): 18
PartitionsSpec (org.apache.druid.indexer.partitions.PartitionsSpec): 10
DataSourceCompactionConfig (org.apache.druid.server.coordinator.DataSourceCompactionConfig): 10
AbstractIndexerTest (org.apache.druid.tests.indexer.AbstractIndexerTest): 10
Test (org.testng.annotations.Test): 10
ImmutableMap (com.google.common.collect.ImmutableMap): 9
Closeable (java.io.Closeable): 9
Map (java.util.Map): 9
IndexSpec (org.apache.druid.segment.IndexSpec): 9
AbstractITBatchIndexTest (org.apache.druid.tests.indexer.AbstractITBatchIndexTest): 9
CompactionState (org.apache.druid.timeline.CompactionState): 9
Granularity (org.apache.druid.java.util.common.granularity.Granularity): 7
Interval (org.joda.time.Interval): 7
CoordinatorCompactionConfig (org.apache.druid.server.coordinator.CoordinatorCompactionConfig): 6
UserCompactionTaskQueryTuningConfig (org.apache.druid.server.coordinator.UserCompactionTaskQueryTuningConfig): 5
ImmutableList (com.google.common.collect.ImmutableList): 4
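
For reference, here is a minimal sketch of the two UserCompactionTaskGranularityConfig patterns exercised in the examples above. It assumes only the three-argument constructor (segment granularity, query granularity, rollup) visible in the example calls; the class and variable names of the sketch itself are illustrative and not taken from the Druid test suite.

import org.apache.druid.java.util.common.granularity.Granularities;
import org.apache.druid.server.coordinator.UserCompactionTaskGranularityConfig;

public class GranularityConfigSketch {
    public static void main(String[] args) {
        // Change only the segment granularity; leave query granularity and rollup unset (null),
        // as in the YEAR/MONTH/DAY examples above.
        UserCompactionTaskGranularityConfig segmentGranularityOnly =
            new UserCompactionTaskGranularityConfig(Granularities.YEAR, null, null);
        // Enable rollup without touching either granularity, as in the rollup example above.
        UserCompactionTaskGranularityConfig rollupOnly =
            new UserCompactionTaskGranularityConfig(null, null, true);
        // In the tests, one of these configs is passed to submitCompactionConfig(...) together
        // with a max-rows-per-segment limit, a skip offset, and (optionally) a dropExisting flag.
        System.out.println(segmentGranularityOnly);
        System.out.println(rollupOnly);
    }
}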