
Example 1 with DruidCluster

use of org.apache.druid.server.coordinator.DruidCluster in project druid by druid-io.

From class MarkAsUnusedOvershadowedSegments, the run method:

@Override
public DruidCoordinatorRuntimeParams run(DruidCoordinatorRuntimeParams params) {
    // Mark as unused overshadowed segments only if we've had enough time to make sure we aren't flapping with old data.
    if (!params.coordinatorIsLeadingEnoughTimeToMarkAsUnusedOvershadowedSegements()) {
        return params;
    }
    CoordinatorStats stats = new CoordinatorStats();
    DruidCluster cluster = params.getDruidCluster();
    Map<String, VersionedIntervalTimeline<String, DataSegment>> timelines = new HashMap<>();
    for (SortedSet<ServerHolder> serverHolders : cluster.getSortedHistoricalsByTier()) {
        for (ServerHolder serverHolder : serverHolders) {
            addSegmentsFromServer(serverHolder, timelines);
        }
    }
    for (ServerHolder serverHolder : cluster.getBrokers()) {
        addSegmentsFromServer(serverHolder, timelines);
    }
    // Mark all segments as unused in db that are overshadowed by served segments
    for (DataSegment dataSegment : params.getUsedSegments()) {
        VersionedIntervalTimeline<String, DataSegment> timeline = timelines.get(dataSegment.getDataSource());
        if (timeline != null && timeline.isOvershadowed(dataSegment.getInterval(), dataSegment.getVersion(), dataSegment)) {
            coordinator.markSegmentAsUnused(dataSegment);
            stats.addToGlobalStat("overShadowedCount", 1);
        }
    }
    return params.buildFromExisting().withCoordinatorStats(stats).build();
}
Also used: CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats), ServerHolder (org.apache.druid.server.coordinator.ServerHolder), HashMap (java.util.HashMap), VersionedIntervalTimeline (org.apache.druid.timeline.VersionedIntervalTimeline), DruidCluster (org.apache.druid.server.coordinator.DruidCluster), DataSegment (org.apache.druid.timeline.DataSegment)
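The addSegmentsFromServer helper that run relies on is not shown on this page. Below is a minimal sketch of what it plausibly does, assuming ServerHolder.getServer and the static VersionedIntervalTimeline.addSegments utility: it folds every segment the server is serving into a per-datasource timeline, which is what the later isOvershadowed check queries. Treat the body as illustrative rather than the exact Druid source.

// Plausible sketch only; additionally requires java.util.Comparator,
// org.apache.druid.client.ImmutableDruidServer and org.apache.druid.client.ImmutableDruidDataSource.
private void addSegmentsFromServer(
    ServerHolder serverHolder,
    Map<String, VersionedIntervalTimeline<String, DataSegment>> timelines)
{
    ImmutableDruidServer server = serverHolder.getServer();
    for (ImmutableDruidDataSource dataSource : server.getDataSources()) {
        VersionedIntervalTimeline<String, DataSegment> timeline = timelines.computeIfAbsent(
            dataSource.getName(),
            name -> new VersionedIntervalTimeline<>(Comparator.naturalOrder())
        );
        // Fold every served segment of this datasource into the timeline.
        VersionedIntervalTimeline.addSegments(timeline, dataSource.getSegments().iterator());
    }
}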

Example 2 with DruidCluster

use of org.apache.druid.server.coordinator.DruidCluster in project druid by druid-io.

From class LoadRuleTest, the testDrop method:

@Test
public void testDrop() {
    final LoadQueuePeon mockPeon = createEmptyPeon();
    mockPeon.dropSegment(EasyMock.anyObject(), EasyMock.anyObject());
    EasyMock.expectLastCall().atLeastOnce();
    EasyMock.expect(mockBalancerStrategy.pickServersToDrop(EasyMock.anyObject(), EasyMock.anyObject()))
            .andDelegateTo(balancerStrategy)
            .times(4);
    EasyMock.replay(throttler, mockPeon, mockBalancerStrategy);
    LoadRule rule = createLoadRule(ImmutableMap.of("hot", 0, DruidServer.DEFAULT_TIER, 0));
    final DataSegment segment = createDataSegment("foo");
    DruidServer server1 = new DruidServer("serverHot", "hostHot", null, 1000, ServerType.HISTORICAL, "hot", 0);
    server1.addDataSegment(segment);
    DruidServer server2 = new DruidServer("serverNorm", "hostNorm", null, 1000, ServerType.HISTORICAL, DruidServer.DEFAULT_TIER, 0);
    server2.addDataSegment(segment);
    DruidServer server3 = new DruidServer("serverNormNotServing", "hostNorm", null, 10, ServerType.HISTORICAL, DruidServer.DEFAULT_TIER, 0);
    DruidCluster druidCluster = DruidClusterBuilder
        .newBuilder()
        .addTier("hot", new ServerHolder(server1.toImmutableDruidServer(), mockPeon))
        .addTier(
            DruidServer.DEFAULT_TIER,
            new ServerHolder(server2.toImmutableDruidServer(), mockPeon),
            new ServerHolder(server3.toImmutableDruidServer(), mockPeon)
        )
        .build();
    CoordinatorStats stats = rule.run(null, makeCoordinatorRuntimeParams(druidCluster, segment), segment);
    Assert.assertEquals(1L, stats.getTieredStat("droppedCount", "hot"));
    Assert.assertEquals(1L, stats.getTieredStat("droppedCount", DruidServer.DEFAULT_TIER));
    EasyMock.verify(throttler, mockPeon);
}
Also used: CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats), ServerHolder (org.apache.druid.server.coordinator.ServerHolder), LoadQueuePeon (org.apache.druid.server.coordinator.LoadQueuePeon), DruidServer (org.apache.druid.client.DruidServer), ImmutableDruidServer (org.apache.druid.client.ImmutableDruidServer), DruidCluster (org.apache.druid.server.coordinator.DruidCluster), DataSegment (org.apache.druid.timeline.DataSegment), Test (org.junit.Test)
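createEmptyPeon and createDataSegment are private fixtures of LoadRuleTest and are not reproduced on this page. The following is a hedged sketch of what they might look like, assuming an EasyMock mock for the peon and the nine-argument DataSegment constructor; the interval, version, and size values are illustrative only.

// Illustrative sketch of the test fixtures; also requires java.util.HashSet.
private static LoadQueuePeon createEmptyPeon()
{
    final LoadQueuePeon mockPeon = EasyMock.createMock(LoadQueuePeon.class);
    // A peon with empty queues, so the rule sees spare capacity on every server.
    EasyMock.expect(mockPeon.getSegmentsToLoad()).andReturn(new HashSet<>()).anyTimes();
    EasyMock.expect(mockPeon.getSegmentsMarkedToDrop()).andReturn(new HashSet<>()).anyTimes();
    EasyMock.expect(mockPeon.getLoadQueueSize()).andReturn(0L).anyTimes();
    return mockPeon;
}

private static DataSegment createDataSegment(String dataSource)
{
    // Arguments: interval, version, loadSpec, dimensions, metrics, shardSpec, binaryVersion, size.
    return new DataSegment(
        dataSource,
        Intervals.of("2020-01-01/2021-01-01"),
        DateTimes.nowUtc().toString(),
        new HashMap<>(),
        new ArrayList<>(),
        new ArrayList<>(),
        NoneShardSpec.instance(),
        0,
        0
    );
}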

Example 3 with DruidCluster

use of org.apache.druid.server.coordinator.DruidCluster in project druid by druid-io.

From class LoadRuleTest, the testLoadDecommissioning method:

/**
 * 2 servers in different tiers, the first is decommissioning.
 * Should not load a segment to the server that is decommissioning.
 */
@Test
public void testLoadDecommissioning() {
    final LoadQueuePeon mockPeon1 = createEmptyPeon();
    final LoadQueuePeon mockPeon2 = createOneCallPeonMock();
    LoadRule rule = createLoadRule(ImmutableMap.of("tier1", 1, "tier2", 1));
    final DataSegment segment = createDataSegment("foo");
    EasyMock.expect(mockBalancerStrategy.findNewSegmentHomeReplicator(EasyMock.anyObject(), EasyMock.anyObject()))
            .andDelegateTo(balancerStrategy)
            .times(1);
    EasyMock.replay(mockPeon1, mockPeon2, mockBalancerStrategy);
    DruidCluster druidCluster = DruidClusterBuilder
        .newBuilder()
        .addTier("tier1", createServerHolder("tier1", mockPeon1, true))
        .addTier("tier2", createServerHolder("tier2", mockPeon2, false))
        .build();
    CoordinatorStats stats = rule.run(null, makeCoordinatorRuntimeParams(druidCluster, segment), segment);
    Assert.assertEquals(1L, stats.getTieredStat(LoadRule.ASSIGNED_COUNT, "tier2"));
    EasyMock.verify(mockPeon1, mockPeon2, mockBalancerStrategy);
}
Also used: CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats), LoadQueuePeon (org.apache.druid.server.coordinator.LoadQueuePeon), DruidCluster (org.apache.druid.server.coordinator.DruidCluster), DataSegment (org.apache.druid.timeline.DataSegment), Test (org.junit.Test)
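createOneCallPeonMock and createServerHolder are also private helpers of the test class. Here is a sketch under the same assumptions as the createEmptyPeon sketch above: the one-call peon expects exactly one loadSegment invocation, and the holder's third constructor argument is the decommissioning flag seen in the other examples. The counter, server names, and sizes are made up for illustration.

// Illustrative sketch; also requires java.util.concurrent.atomic.AtomicInteger.
private static final AtomicInteger SERVER_ID = new AtomicInteger();

private static LoadQueuePeon createOneCallPeonMock()
{
    final LoadQueuePeon mockPeon = createEmptyPeon();
    // Expect exactly one segment load on this peon.
    mockPeon.loadSegment(EasyMock.anyObject(), EasyMock.anyObject());
    EasyMock.expectLastCall().once();
    return mockPeon;
}

private static ServerHolder createServerHolder(String tier, LoadQueuePeon peon, boolean isDecommissioning)
{
    final DruidServer server = new DruidServer(
        "server_" + SERVER_ID.incrementAndGet(),
        "host_" + tier,
        null,
        1000,
        ServerType.HISTORICAL,
        tier,
        0
    );
    return new ServerHolder(server.toImmutableDruidServer(), peon, isDecommissioning);
}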

Example 4 with DruidCluster

use of org.apache.druid.server.coordinator.DruidCluster in project druid by druid-io.

From class LoadRuleTest, the testLoadReplicaDuringDecommissioning method:

/**
 * 2 tiers, 2 servers each, 1 server of the second tier is decommissioning.
 * Should not load a segment to the server that is decommissioning.
 */
@Test
public void testLoadReplicaDuringDecommissioning() {
    EasyMock.expect(throttler.canCreateReplicant(EasyMock.anyString())).andReturn(true).anyTimes();
    final LoadQueuePeon mockPeon1 = createEmptyPeon();
    final LoadQueuePeon mockPeon2 = createOneCallPeonMock();
    final LoadQueuePeon mockPeon3 = createOneCallPeonMock();
    final LoadQueuePeon mockPeon4 = createOneCallPeonMock();
    LoadRule rule = createLoadRule(ImmutableMap.of("tier1", 2, "tier2", 2));
    final DataSegment segment = createDataSegment("foo");
    throttler.registerReplicantCreation(EasyMock.eq("tier2"), EasyMock.anyObject(), EasyMock.anyObject());
    EasyMock.expectLastCall().times(2);
    ServerHolder holder1 = createServerHolder("tier1", mockPeon1, true);
    ServerHolder holder2 = createServerHolder("tier1", mockPeon2, false);
    ServerHolder holder3 = createServerHolder("tier2", mockPeon3, false);
    ServerHolder holder4 = createServerHolder("tier2", mockPeon4, false);
    EasyMock.expect(mockBalancerStrategy.findNewSegmentHomeReplicator(segment, ImmutableList.of(holder2))).andReturn(holder2);
    EasyMock.expect(mockBalancerStrategy.findNewSegmentHomeReplicator(segment, ImmutableList.of(holder4, holder3))).andReturn(holder3);
    EasyMock.expect(mockBalancerStrategy.findNewSegmentHomeReplicator(segment, ImmutableList.of(holder4))).andReturn(holder4);
    EasyMock.replay(throttler, mockPeon1, mockPeon2, mockPeon3, mockPeon4, mockBalancerStrategy);
    DruidCluster druidCluster = DruidClusterBuilder
        .newBuilder()
        .addTier("tier1", holder1, holder2)
        .addTier("tier2", holder3, holder4)
        .build();
    CoordinatorStats stats = rule.run(null, makeCoordinatorRuntimeParams(druidCluster, segment), segment);
    Assert.assertEquals(1L, stats.getTieredStat(LoadRule.ASSIGNED_COUNT, "tier1"));
    Assert.assertEquals(2L, stats.getTieredStat(LoadRule.ASSIGNED_COUNT, "tier2"));
    EasyMock.verify(throttler, mockPeon1, mockPeon2, mockPeon3, mockPeon4, mockBalancerStrategy);
}
Also used: CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats), ServerHolder (org.apache.druid.server.coordinator.ServerHolder), LoadQueuePeon (org.apache.druid.server.coordinator.LoadQueuePeon), DruidCluster (org.apache.druid.server.coordinator.DruidCluster), DataSegment (org.apache.druid.timeline.DataSegment), Test (org.junit.Test)
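Every LoadRuleTest example hands the cluster to the rule via makeCoordinatorRuntimeParams. The following is only a guess at its shape, assuming the CoordinatorRuntimeParamsTestHelpers builder listed under Aggregations below; the exact builder method names and the SegmentReplicantLookup.make signature differ between Druid versions, so treat every call here as an assumption rather than the real test code.

// Speculative sketch; builder methods and SegmentReplicantLookup.make arguments vary by Druid version.
private DruidCoordinatorRuntimeParams makeCoordinatorRuntimeParams(
    DruidCluster druidCluster,
    DataSegment... usedSegments)
{
    return CoordinatorRuntimeParamsTestHelpers
        .newBuilder()
        .withDruidCluster(druidCluster)
        .withSegmentReplicantLookup(SegmentReplicantLookup.make(druidCluster, false))
        .withReplicationManager(throttler)
        .withBalancerStrategy(mockBalancerStrategy)
        .withUsedSegmentsInTest(usedSegments)
        .build();
}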

Example 5 with DruidCluster

use of org.apache.druid.server.coordinator.DruidCluster in project druid by druid-io.

From class LoadRuleTest, the testRedundantReplicaDropDuringDecommissioning method:

/**
 * 3 servers hosting 3 replicas of the segment.
 * 1 server is decommissioning.
 * 1 replica is redundant.
 * Should drop from the decommissioning server.
 */
@Test
public void testRedundantReplicaDropDuringDecommissioning() {
    final LoadQueuePeon mockPeon1 = new LoadQueuePeonTester();
    final LoadQueuePeon mockPeon2 = new LoadQueuePeonTester();
    final LoadQueuePeon mockPeon3 = new LoadQueuePeonTester();
    EasyMock.expect(mockBalancerStrategy.pickServersToDrop(EasyMock.anyObject(), EasyMock.anyObject()))
            .andDelegateTo(balancerStrategy)
            .times(4);
    EasyMock.replay(throttler, mockBalancerStrategy);
    LoadRule rule = createLoadRule(ImmutableMap.of("tier1", 2));
    final DataSegment segment1 = createDataSegment("foo1");
    DruidServer server1 = createServer("tier1");
    server1.addDataSegment(segment1);
    DruidServer server2 = createServer("tier1");
    server2.addDataSegment(segment1);
    DruidServer server3 = createServer("tier1");
    server3.addDataSegment(segment1);
    DruidCluster druidCluster = DruidClusterBuilder
        .newBuilder()
        .addTier(
            "tier1",
            new ServerHolder(server1.toImmutableDruidServer(), mockPeon1, false),
            new ServerHolder(server2.toImmutableDruidServer(), mockPeon2, true),
            new ServerHolder(server3.toImmutableDruidServer(), mockPeon3, false)
        )
        .build();
    CoordinatorStats stats = rule.run(null, makeCoordinatorRuntimeParams(druidCluster, segment1), segment1);
    Assert.assertEquals(1L, stats.getTieredStat("droppedCount", "tier1"));
    Assert.assertEquals(0, mockPeon1.getSegmentsToDrop().size());
    Assert.assertEquals(1, mockPeon2.getSegmentsToDrop().size());
    Assert.assertEquals(0, mockPeon3.getSegmentsToDrop().size());
    EasyMock.verify(throttler);
}
Also used: CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats), ServerHolder (org.apache.druid.server.coordinator.ServerHolder), LoadQueuePeon (org.apache.druid.server.coordinator.LoadQueuePeon), DruidServer (org.apache.druid.client.DruidServer), ImmutableDruidServer (org.apache.druid.client.ImmutableDruidServer), DruidCluster (org.apache.druid.server.coordinator.DruidCluster), LoadQueuePeonTester (org.apache.druid.server.coordinator.LoadQueuePeonTester), DataSegment (org.apache.druid.timeline.DataSegment), Test (org.junit.Test)
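Unlike the earlier tests, this one uses concrete LoadQueuePeonTester instances, so the assertions can read getSegmentsToDrop() directly instead of verifying mock expectations. The createServer helper is again not shown; here is a minimal sketch that mirrors the DruidServer constructor calls from Example 2. The name, host, and size are illustrative, and the SERVER_ID counter is the one from the createServerHolder sketch above (declared once per test class).

// Illustrative sketch; reuses the SERVER_ID counter from the createServerHolder sketch.
private static DruidServer createServer(String tier)
{
    return new DruidServer(
        "server_" + SERVER_ID.incrementAndGet(),
        "host_" + tier,
        null,
        1000,
        ServerType.HISTORICAL,
        tier,
        0
    );
}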

Aggregations

DruidCluster (org.apache.druid.server.coordinator.DruidCluster): 22 usages
CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats): 21 usages
DataSegment (org.apache.druid.timeline.DataSegment): 21 usages
ServerHolder (org.apache.druid.server.coordinator.ServerHolder): 20 usages
Test (org.junit.Test): 17 usages
DruidServer (org.apache.druid.client.DruidServer): 15 usages
LoadQueuePeon (org.apache.druid.server.coordinator.LoadQueuePeon): 14 usages
ImmutableDruidServer (org.apache.druid.client.ImmutableDruidServer): 13 usages
DruidCoordinatorRuntimeParams (org.apache.druid.server.coordinator.DruidCoordinatorRuntimeParams): 7 usages
HashMap (java.util.HashMap): 5 usages
LoadQueuePeonTester (org.apache.druid.server.coordinator.LoadQueuePeonTester): 5 usages
ArrayList (java.util.ArrayList): 3 usages
List (java.util.List): 3 usages
DateTimes (org.apache.druid.java.util.common.DateTimes): 3 usages
Intervals (org.apache.druid.java.util.common.Intervals): 3 usages
ServerType (org.apache.druid.server.coordination.ServerType): 3 usages
CoordinatorRuntimeParamsTestHelpers (org.apache.druid.server.coordinator.CoordinatorRuntimeParamsTestHelpers): 3 usages
DruidClusterBuilder (org.apache.druid.server.coordinator.DruidClusterBuilder): 3 usages
SegmentReplicantLookup (org.apache.druid.server.coordinator.SegmentReplicantLookup): 3 usages
NoneShardSpec (org.apache.druid.timeline.partition.NoneShardSpec): 3 usages