
Example 1 with LoadQueuePeonTester

Use of org.apache.druid.server.coordinator.LoadQueuePeonTester in project druid by druid-io.

From the class LoadRuleTest, method testRedundantReplicaDropDuringDecommissioning.

/**
 * Three servers host three replicas of the segment.
 * One server is decommissioning.
 * One replica is redundant.
 * The rule should drop the redundant replica from the decommissioning server.
 */
@Test
public void testRedundantReplicaDropDuringDecommissioning() {
    final LoadQueuePeon mockPeon1 = new LoadQueuePeonTester();
    final LoadQueuePeon mockPeon2 = new LoadQueuePeonTester();
    final LoadQueuePeon mockPeon3 = new LoadQueuePeonTester();
    EasyMock.expect(mockBalancerStrategy.pickServersToDrop(EasyMock.anyObject(), EasyMock.anyObject())).andDelegateTo(balancerStrategy).times(4);
    EasyMock.replay(throttler, mockBalancerStrategy);
    LoadRule rule = createLoadRule(ImmutableMap.of("tier1", 2));
    final DataSegment segment1 = createDataSegment("foo1");
    DruidServer server1 = createServer("tier1");
    server1.addDataSegment(segment1);
    DruidServer server2 = createServer("tier1");
    server2.addDataSegment(segment1);
    DruidServer server3 = createServer("tier1");
    server3.addDataSegment(segment1);
    DruidCluster druidCluster = DruidClusterBuilder
        .newBuilder()
        .addTier(
            "tier1",
            new ServerHolder(server1.toImmutableDruidServer(), mockPeon1, false),
            new ServerHolder(server2.toImmutableDruidServer(), mockPeon2, true),
            new ServerHolder(server3.toImmutableDruidServer(), mockPeon3, false)
        )
        .build();
    CoordinatorStats stats = rule.run(null, makeCoordinatorRuntimeParams(druidCluster, segment1), segment1);
    Assert.assertEquals(1L, stats.getTieredStat("droppedCount", "tier1"));
    Assert.assertEquals(0, mockPeon1.getSegmentsToDrop().size());
    Assert.assertEquals(1, mockPeon2.getSegmentsToDrop().size());
    Assert.assertEquals(0, mockPeon3.getSegmentsToDrop().size());
    EasyMock.verify(throttler);
}
Also used: CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats), ServerHolder (org.apache.druid.server.coordinator.ServerHolder), LoadQueuePeon (org.apache.druid.server.coordinator.LoadQueuePeon), DruidServer (org.apache.druid.client.DruidServer), ImmutableDruidServer (org.apache.druid.client.ImmutableDruidServer), DruidCluster (org.apache.druid.server.coordinator.DruidCluster), LoadQueuePeonTester (org.apache.druid.server.coordinator.LoadQueuePeonTester), DataSegment (org.apache.druid.timeline.DataSegment), Test (org.junit.Test)
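The test above works because LoadQueuePeonTester replaces the real load queue with an in-memory recorder, so assertions can inspect getSegmentsToDrop() without any coordinator-to-server round-trips. The self-contained sketch below illustrates that test-double pattern; FakeLoadQueuePeon and its methods are simplified stand-ins for illustration, not real Druid APIs.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of the LoadQueuePeonTester idea: instead of
// pushing load/drop requests to a server, record them in memory so a test
// can assert on exactly what a rule asked each server to do.
public class FakeLoadQueuePeon {
    private final List<String> segmentsToLoad = new ArrayList<>();
    private final List<String> segmentsToDrop = new ArrayList<>();

    public void loadSegment(String segmentId) {
        segmentsToLoad.add(segmentId); // record instead of loading
    }

    public void dropSegment(String segmentId) {
        segmentsToDrop.add(segmentId); // record instead of dropping
    }

    public List<String> getSegmentsToLoad() {
        return segmentsToLoad;
    }

    public List<String> getSegmentsToDrop() {
        return segmentsToDrop;
    }

    public static void main(String[] args) {
        FakeLoadQueuePeon peon = new FakeLoadQueuePeon();
        peon.dropSegment("foo1_2012-01-01/2012-01-02_v1");
        // One drop was queued, no loads — the same shape of assertion the
        // test makes against mockPeon2 on the decommissioning server.
        System.out.println(peon.getSegmentsToDrop().size()); // 1
        System.out.println(peon.getSegmentsToLoad().size()); // 0
    }
}
```

This is why the test can pass three distinct peons to the three ServerHolders and then check that only the decommissioning server's peon recorded a drop.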

Example 2 with LoadQueuePeonTester

Use of org.apache.druid.server.coordinator.LoadQueuePeonTester in project druid by druid-io.

From the class CachingCostBalancerStrategyTest, method createServerHolder.

private ServerHolder createServerHolder(String name, String host, int maxSize, int numberOfSegments, Random random, DateTime referenceTime) {
    DruidServer druidServer = new DruidServer(name, host, null, maxSize, ServerType.HISTORICAL, "normal", 0);
    createDataSegments(numberOfSegments, random, referenceTime).forEach(druidServer::addDataSegment);
    return new ServerHolder(druidServer.toImmutableDruidServer(), new LoadQueuePeonTester());
}
Also used: ServerHolder (org.apache.druid.server.coordinator.ServerHolder), DruidServer (org.apache.druid.client.DruidServer), LoadQueuePeonTester (org.apache.druid.server.coordinator.LoadQueuePeonTester)

Example 3 with LoadQueuePeonTester

Use of org.apache.druid.server.coordinator.LoadQueuePeonTester in project druid by druid-io.

From the class BroadcastDistributionRuleTest, method setUp.

@Before
public void setUp() {
    smallSegment = new DataSegment(
        "small_source",
        Intervals.of("0/1000"),
        DateTimes.nowUtc().toString(),
        new HashMap<>(),
        new ArrayList<>(),
        new ArrayList<>(),
        NoneShardSpec.instance(),
        0,
        0
    );
    for (int i = 0; i < 3; i++) {
        largeSegments.add(new DataSegment(
            "large_source",
            Intervals.of((i * 1000) + "/" + ((i + 1) * 1000)),
            DateTimes.nowUtc().toString(),
            new HashMap<>(),
            new ArrayList<>(),
            new ArrayList<>(),
            NoneShardSpec.instance(),
            0,
            100
        ));
    }
    for (int i = 0; i < 2; i++) {
        largeSegments2.add(new DataSegment(
            "large_source2",
            Intervals.of((i * 1000) + "/" + ((i + 1) * 1000)),
            DateTimes.nowUtc().toString(),
            new HashMap<>(),
            new ArrayList<>(),
            new ArrayList<>(),
            NoneShardSpec.instance(),
            0,
            100
        ));
    }
    holderOfSmallSegment = new ServerHolder(
        new DruidServer("serverHot2", "hostHot2", null, 1000, ServerType.HISTORICAL, "hot", 0)
            .addDataSegment(smallSegment)
            .toImmutableDruidServer(),
        new LoadQueuePeonTester()
    );
    holdersOfLargeSegments.add(new ServerHolder(
        new DruidServer("serverHot1", "hostHot1", null, 1000, ServerType.HISTORICAL, "hot", 0)
            .addDataSegment(largeSegments.get(0))
            .toImmutableDruidServer(),
        new LoadQueuePeonTester()
    ));
    holdersOfLargeSegments.add(new ServerHolder(
        new DruidServer("serverNorm1", "hostNorm1", null, 1000, ServerType.HISTORICAL, DruidServer.DEFAULT_TIER, 0)
            .addDataSegment(largeSegments.get(1))
            .toImmutableDruidServer(),
        new LoadQueuePeonTester()
    ));
    holdersOfLargeSegments.add(new ServerHolder(
        new DruidServer("serverNorm2", "hostNorm2", null, 100, ServerType.HISTORICAL, DruidServer.DEFAULT_TIER, 0)
            .addDataSegment(largeSegments.get(2))
            .toImmutableDruidServer(),
        new LoadQueuePeonTester()
    ));
    holdersOfLargeSegments2.add(new ServerHolder(
        new DruidServer("serverHot3", "hostHot3", null, 1000, ServerType.HISTORICAL, "hot", 0)
            .addDataSegment(largeSegments2.get(0))
            .toImmutableDruidServer(),
        new LoadQueuePeonTester()
    ));
    holdersOfLargeSegments2.add(new ServerHolder(
        new DruidServer("serverNorm3", "hostNorm3", null, 100, ServerType.HISTORICAL, DruidServer.DEFAULT_TIER, 0)
            .addDataSegment(largeSegments2.get(1))
            .toImmutableDruidServer(),
        new LoadQueuePeonTester()
    ));
    activeServer = new ServerHolder(
        new DruidServer("active", "host1", null, 100, ServerType.HISTORICAL, "tier1", 0)
            .addDataSegment(largeSegments.get(0))
            .toImmutableDruidServer(),
        new LoadQueuePeonTester()
    );
    decommissioningServer1 = new ServerHolder(
        new DruidServer("decommissioning1", "host2", null, 100, ServerType.HISTORICAL, "tier1", 0)
            .addDataSegment(smallSegment)
            .toImmutableDruidServer(),
        new LoadQueuePeonTester(),
        true
    );
    decommissioningServer2 = new ServerHolder(
        new DruidServer("decommissioning2", "host3", null, 100, ServerType.HISTORICAL, "tier1", 0)
            .addDataSegment(largeSegments.get(1))
            .toImmutableDruidServer(),
        new LoadQueuePeonTester(),
        true
    );
    druidCluster = DruidClusterBuilder
        .newBuilder()
        .addTier(
            "hot",
            holdersOfLargeSegments.get(0),
            holderOfSmallSegment,
            holdersOfLargeSegments2.get(0)
        )
        .addTier(
            DruidServer.DEFAULT_TIER,
            holdersOfLargeSegments.get(1),
            holdersOfLargeSegments.get(2),
            holdersOfLargeSegments2.get(1)
        )
        .build();
    secondCluster = DruidClusterBuilder
        .newBuilder()
        .addTier("tier1", activeServer, decommissioningServer1, decommissioningServer2)
        .build();
}
Also used: ServerHolder (org.apache.druid.server.coordinator.ServerHolder), HashMap (java.util.HashMap), ArrayList (java.util.ArrayList), DruidServer (org.apache.druid.client.DruidServer), DataSegment (org.apache.druid.timeline.DataSegment), LoadQueuePeonTester (org.apache.druid.server.coordinator.LoadQueuePeonTester), Before (org.junit.Before)

Example 4 with LoadQueuePeonTester

Use of org.apache.druid.server.coordinator.LoadQueuePeonTester in project druid by druid-io.

From the class LoadRuleTest, method testMaxLoadingQueueSize.

@Test
public void testMaxLoadingQueueSize() {
    EasyMock.expect(mockBalancerStrategy.findNewSegmentHomeReplicator(EasyMock.anyObject(), EasyMock.anyObject())).andDelegateTo(balancerStrategy).times(2);
    EasyMock.replay(throttler, mockBalancerStrategy);
    final LoadQueuePeonTester peon = new LoadQueuePeonTester();
    LoadRule rule = createLoadRule(ImmutableMap.of("hot", 1));
    DruidCluster druidCluster = DruidClusterBuilder
        .newBuilder()
        .addTier(
            "hot",
            new ServerHolder(
                new DruidServer("serverHot", "hostHot", null, 1000, ServerType.HISTORICAL, "hot", 0).toImmutableDruidServer(),
                peon
            )
        )
        .build();
    DataSegment dataSegment1 = createDataSegment("ds1");
    DataSegment dataSegment2 = createDataSegment("ds2");
    DataSegment dataSegment3 = createDataSegment("ds3");
    DruidCoordinatorRuntimeParams params = CoordinatorRuntimeParamsTestHelpers
        .newBuilder()
        .withDruidCluster(druidCluster)
        .withSegmentReplicantLookup(SegmentReplicantLookup.make(druidCluster, false))
        .withReplicationManager(throttler)
        .withBalancerStrategy(mockBalancerStrategy)
        .withUsedSegmentsInTest(dataSegment1, dataSegment2, dataSegment3)
        .withDynamicConfigs(CoordinatorDynamicConfig.builder().withMaxSegmentsInNodeLoadingQueue(2).build())
        .build();
    CoordinatorStats stats1 = rule.run(null, params, dataSegment1);
    CoordinatorStats stats2 = rule.run(null, params, dataSegment2);
    CoordinatorStats stats3 = rule.run(null, params, dataSegment3);
    Assert.assertEquals(1L, stats1.getTieredStat(LoadRule.ASSIGNED_COUNT, "hot"));
    Assert.assertEquals(1L, stats2.getTieredStat(LoadRule.ASSIGNED_COUNT, "hot"));
    Assert.assertFalse(stats3.getTiers(LoadRule.ASSIGNED_COUNT).contains("hot"));
    EasyMock.verify(throttler, mockBalancerStrategy);
}
Also used: DruidCoordinatorRuntimeParams (org.apache.druid.server.coordinator.DruidCoordinatorRuntimeParams), CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats), ServerHolder (org.apache.druid.server.coordinator.ServerHolder), DruidServer (org.apache.druid.client.DruidServer), ImmutableDruidServer (org.apache.druid.client.ImmutableDruidServer), DruidCluster (org.apache.druid.server.coordinator.DruidCluster), LoadQueuePeonTester (org.apache.druid.server.coordinator.LoadQueuePeonTester), DataSegment (org.apache.druid.timeline.DataSegment), Test (org.junit.Test)
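The behavior this test exercises via withMaxSegmentsInNodeLoadingQueue(2) amounts to a guard on each server's loading queue: once the queue holds the configured maximum, further assignments to that server are skipped. The class below is an illustrative, self-contained simplification of that guard, not Druid's actual assignment logic; the name LoadingQueueCap and its methods are invented for this sketch.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: a per-server loading queue that rejects new
// assignments once it reaches maxSegmentsInLoadingQueue, mirroring the
// cap the test configures with withMaxSegmentsInNodeLoadingQueue(2).
public class LoadingQueueCap {
    private final Deque<String> loadingQueue = new ArrayDeque<>();
    private final int maxSegmentsInLoadingQueue;

    public LoadingQueueCap(int maxSegmentsInLoadingQueue) {
        this.maxSegmentsInLoadingQueue = maxSegmentsInLoadingQueue;
    }

    /** Returns true if the segment was queued, false if the cap was hit. */
    public boolean tryAssign(String segmentId) {
        if (loadingQueue.size() >= maxSegmentsInLoadingQueue) {
            return false; // queue full: this server is skipped for assignment
        }
        loadingQueue.add(segmentId);
        return true;
    }

    public static void main(String[] args) {
        LoadingQueueCap server = new LoadingQueueCap(2);
        // Same shape as the test: two segments are assigned, the third is not.
        System.out.println(server.tryAssign("ds1")); // true
        System.out.println(server.tryAssign("ds2")); // true
        System.out.println(server.tryAssign("ds3")); // false: cap of 2 reached
    }
}
```

This matches the assertions above: stats1 and stats2 each report one assignment to "hot", while stats3 reports none because the single hot server's queue is already at its limit.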

Example 5 with LoadQueuePeonTester

Use of org.apache.druid.server.coordinator.LoadQueuePeonTester in project druid by druid-io.

From the class UnloadUnusedSegmentsTest, method setUp.

@Before
public void setUp() {
    coordinator = EasyMock.createMock(DruidCoordinator.class);
    historicalServer = EasyMock.createMock(ImmutableDruidServer.class);
    historicalServerTier2 = EasyMock.createMock(ImmutableDruidServer.class);
    brokerServer = EasyMock.createMock(ImmutableDruidServer.class);
    indexerServer = EasyMock.createMock(ImmutableDruidServer.class);
    // segment1 and segment2 are real DataSegment instances, created below.
    databaseRuleManager = EasyMock.createMock(MetadataRuleManager.class);
    DateTime start1 = DateTimes.of("2012-01-01");
    DateTime start2 = DateTimes.of("2012-02-01");
    DateTime version = DateTimes.of("2012-05-01");
    segment1 = new DataSegment(
        "datasource1",
        new Interval(start1, start1.plusHours(1)),
        version.toString(),
        new HashMap<>(),
        new ArrayList<>(),
        new ArrayList<>(),
        NoneShardSpec.instance(),
        0,
        11L
    );
    segment2 = new DataSegment(
        "datasource2",
        new Interval(start1, start1.plusHours(1)),
        version.toString(),
        new HashMap<>(),
        new ArrayList<>(),
        new ArrayList<>(),
        NoneShardSpec.instance(),
        0,
        7L
    );
    realtimeOnlySegment = new DataSegment(
        "datasource2",
        new Interval(start2, start2.plusHours(1)),
        version.toString(),
        new HashMap<>(),
        new ArrayList<>(),
        new ArrayList<>(),
        NoneShardSpec.instance(),
        0,
        7L
    );
    broadcastSegment = new DataSegment(
        "broadcastDatasource",
        new Interval(start1, start1.plusHours(1)),
        version.toString(),
        new HashMap<>(),
        new ArrayList<>(),
        new ArrayList<>(),
        NoneShardSpec.instance(),
        0,
        7L
    );
    segments = new ArrayList<>();
    segments.add(segment1);
    segments.add(segment2);
    segments.add(broadcastSegment);
    segmentsForRealtime = new ArrayList<>();
    segmentsForRealtime.add(realtimeOnlySegment);
    segmentsForRealtime.add(broadcastSegment);
    historicalPeon = new LoadQueuePeonTester();
    historicalTier2Peon = new LoadQueuePeonTester();
    brokerPeon = new LoadQueuePeonTester();
    indexerPeon = new LoadQueuePeonTester();
    dataSource1 = new ImmutableDruidDataSource("datasource1", Collections.emptyMap(), Collections.singleton(segment1));
    dataSource2 = new ImmutableDruidDataSource("datasource2", Collections.emptyMap(), Collections.singleton(segment2));
    broadcastDatasourceNames = Collections.singleton("broadcastDatasource");
    broadcastDatasource = new ImmutableDruidDataSource("broadcastDatasource", Collections.emptyMap(), Collections.singleton(broadcastSegment));
    dataSources = ImmutableList.of(dataSource1, dataSource2, broadcastDatasource);
    // This simulates a task that is ingesting to an existing non-broadcast datasource, with unpublished segments,
    // while also having a broadcast segment loaded.
    dataSource2ForRealtime = new ImmutableDruidDataSource("datasource2", Collections.emptyMap(), Collections.singleton(realtimeOnlySegment));
    dataSourcesForRealtime = ImmutableList.of(dataSource2ForRealtime, broadcastDatasource);
}
Also used: MetadataRuleManager (org.apache.druid.metadata.MetadataRuleManager), ImmutableDruidDataSource (org.apache.druid.client.ImmutableDruidDataSource), HashMap (java.util.HashMap), ArrayList (java.util.ArrayList), DataSegment (org.apache.druid.timeline.DataSegment), LoadQueuePeonTester (org.apache.druid.server.coordinator.LoadQueuePeonTester), DruidCoordinator (org.apache.druid.server.coordinator.DruidCoordinator), ImmutableDruidServer (org.apache.druid.client.ImmutableDruidServer), DateTime (org.joda.time.DateTime), Interval (org.joda.time.Interval), Before (org.junit.Before)

Aggregations

LoadQueuePeonTester (org.apache.druid.server.coordinator.LoadQueuePeonTester): 5 uses
DruidServer (org.apache.druid.client.DruidServer): 4 uses
ServerHolder (org.apache.druid.server.coordinator.ServerHolder): 4 uses
DataSegment (org.apache.druid.timeline.DataSegment): 4 uses
ImmutableDruidServer (org.apache.druid.client.ImmutableDruidServer): 3 uses
ArrayList (java.util.ArrayList): 2 uses
HashMap (java.util.HashMap): 2 uses
CoordinatorStats (org.apache.druid.server.coordinator.CoordinatorStats): 2 uses
DruidCluster (org.apache.druid.server.coordinator.DruidCluster): 2 uses
Before (org.junit.Before): 2 uses
Test (org.junit.Test): 2 uses
ImmutableDruidDataSource (org.apache.druid.client.ImmutableDruidDataSource): 1 use
MetadataRuleManager (org.apache.druid.metadata.MetadataRuleManager): 1 use
DruidCoordinator (org.apache.druid.server.coordinator.DruidCoordinator): 1 use
DruidCoordinatorRuntimeParams (org.apache.druid.server.coordinator.DruidCoordinatorRuntimeParams): 1 use
LoadQueuePeon (org.apache.druid.server.coordinator.LoadQueuePeon): 1 use
DateTime (org.joda.time.DateTime): 1 use
Interval (org.joda.time.Interval): 1 use