
Example 6 with RasNode

use of org.apache.storm.scheduler.resource.RasNode in project storm by apache.

the class SchedulingSearcherState method logNodeCompAssignments.

/**
 * Use this method to log the current component assignments on the Node.
 * Useful for debugging and tests.
 */
public void logNodeCompAssignments() {
    if (nodeCompAssignmentCnts == null || nodeCompAssignmentCnts.isEmpty()) {
        LOG.info("Topology {} NodeCompAssignment is empty", topoName);
        return;
    }
    StringBuilder sb = new StringBuilder();
    int cntAllNodes = 0;
    int cntFilledNodes = 0;
    for (RasNode node : new TreeSet<>(nodeCompAssignmentCnts.keySet())) {
        cntAllNodes++;
        Map<String, Integer> oneMap = nodeCompAssignmentCnts.get(node);
        if (oneMap.isEmpty()) {
            continue;
        }
        cntFilledNodes++;
        String oneMapJoined = oneMap.entrySet().stream().map(e -> String.format("%s: %s", e.getKey(), e.getValue())).collect(Collectors.joining(","));
        sb.append(String.format("\n\t(%d) Node %s: %s", cntFilledNodes, node.getId(), oneMapJoined));
    }
    LOG.info("Topology {} NodeCompAssignments available for {} of {} nodes {}", topoName, cntFilledNodes, cntAllNodes, sb);
    LOG.info("Topology {} Executors assignments attempted (cnt={}) are: \n\t{}", topoName, execs.size(), execs.stream().map(ExecutorDetails::toString).collect(Collectors.joining(",")));
}
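For illustration, here is a minimal, self-contained sketch of the formatting step the method applies to each node's component-count map; the component names and counts below are hypothetical:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class NodeCompAssignmentFormatSketch {
    public static void main(String[] args) {
        // hypothetical per-node component counts, standing in for one value of nodeCompAssignmentCnts
        Map<String, Integer> oneMap = new LinkedHashMap<>();
        oneMap.put("spout", 2);
        oneMap.put("split-bolt", 3);
        // the same stream pipeline the method uses to build each node's summary line
        String oneMapJoined = oneMap.entrySet().stream()
                .map(e -> String.format("%s: %s", e.getKey(), e.getValue()))
                .collect(Collectors.joining(","));
        System.out.println(oneMapJoined); // prints: spout: 2,split-bolt: 3
    }
}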
Also used : Acker(org.apache.storm.daemon.Acker) Logger(org.slf4j.Logger) RasNode(org.apache.storm.scheduler.resource.RasNode) TopologyDetails(org.apache.storm.scheduler.TopologyDetails) LoggerFactory(org.slf4j.LoggerFactory) Set(java.util.Set) HashMap(java.util.HashMap) SchedulingStatus(org.apache.storm.scheduler.resource.SchedulingStatus) Collectors(java.util.stream.Collectors) TreeSet(java.util.TreeSet) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) Time(org.apache.storm.utils.Time) SchedulingResult(org.apache.storm.scheduler.resource.SchedulingResult) List(java.util.List) ObjectReader(org.apache.storm.utils.ObjectReader) Map(java.util.Map) WorkerSlot(org.apache.storm.scheduler.WorkerSlot) Config(org.apache.storm.Config) LinkedList(java.util.LinkedList) ExecutorDetails(org.apache.storm.scheduler.ExecutorDetails)

Example 7 with RasNode

use of org.apache.storm.scheduler.resource.RasNode in project storm by apache.

the class NodeSorterHostProximity method sortHosts.

/**
 * Hosts are sorted by two criteria.
 *
 * <p>1) the number of executors of the topology to be scheduled that are already on the host, in
 * descending order. The reasoning behind criterion 1 is that we want to schedule the rest of a topology on the same hosts as its
 * existing executors.
 *
 * <p>2) the subordinate/subservient resource availability percentage of a host, in descending
 * order. We calculate the resource availability percentage by dividing the resource availability of the host by the resource
 * availability of the entire rack. By doing this calculation, hosts that have exhausted or have very little of one of the
 * resources mentioned above will be ranked after hosts with more balanced resource availability, so we are less likely to pick a
 * host that has a lot of one resource but a low amount of another.
 *
 * @param availHosts     a collection of all the hosts we want to sort
 * @param exec           the executor to be scheduled
 * @param rackId         the rack id availHosts are a part of
 * @param scheduledCount a map from hostname to the number of executors of this topology already scheduled on it
 * @return an iterable of sorted hosts.
 */
private Iterable<ObjectResourcesItem> sortHosts(Collection<String> availHosts, ExecutorDetails exec, String rackId, Map<String, AtomicInteger> scheduledCount) {
    ObjectResourcesSummary rackResourcesSummary = new ObjectResourcesSummary("RACK");
    availHosts.forEach(h -> {
        ObjectResourcesItem hostItem = new ObjectResourcesItem(h);
        for (RasNode x : hostnameToNodes.get(h)) {
            hostItem.add(new ObjectResourcesItem(x.getId(), x.getTotalAvailableResources(), x.getTotalResources(), 0, 0));
        }
        rackResourcesSummary.addObjectResourcesItem(hostItem);
    });
    LOG.debug("Rack {}: Overall Avail [ {} ] Total [ {} ]", rackId, rackResourcesSummary.getAvailableResourcesOverall(), rackResourcesSummary.getTotalResourcesOverall());
    return sortObjectResources(rackResourcesSummary, exec, (hostId) -> {
        AtomicInteger count = scheduledCount.get(hostId);
        if (count == null) {
            return 0;
        }
        return count.get();
    });
}
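To make the two-criteria ordering from the javadoc concrete, here is a minimal, self-contained sketch. The Host class and its fields are hypothetical stand-ins for Storm's ObjectResourcesItem, and minAvailPct is a simplified proxy for the subordinate-resource availability percentage:

import java.util.Comparator;
import java.util.List;

public class HostOrderingSketch {
    static final class Host {
        final String id;
        final int existingExecutors;  // executors of this topology already on the host (criterion 1)
        final double minAvailPct;     // most-constrained resource availability, 0..1 (criterion 2)
        Host(String id, int existingExecutors, double minAvailPct) {
            this.id = id;
            this.existingExecutors = existingExecutors;
            this.minAvailPct = minAvailPct;
        }
    }

    public static void main(String[] args) {
        List<Host> hosts = List.of(
                new Host("host-1", 0, 0.9),
                new Host("host-2", 3, 0.2),
                new Host("host-3", 0, 0.5));
        // criterion 1 first (more existing executors wins), then criterion 2 (more balanced availability wins)
        hosts.stream()
                .sorted(Comparator.comparingInt((Host h) -> h.existingExecutors).reversed()
                        .thenComparing(Comparator.comparingDouble((Host h) -> h.minAvailPct).reversed()))
                .forEach(h -> System.out.println(h.id)); // prints: host-2, host-1, host-3
    }
}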
Also used : ObjectResourcesSummary(org.apache.storm.scheduler.resource.strategies.scheduling.ObjectResourcesSummary) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) ObjectResourcesItem(org.apache.storm.scheduler.resource.strategies.scheduling.ObjectResourcesItem) RasNode(org.apache.storm.scheduler.resource.RasNode)

Example 8 with RasNode

use of org.apache.storm.scheduler.resource.RasNode in project storm by apache.

the class BaseResourceAwareStrategy method createSearcherState.

/**
 * Create an instance of {@link SchedulingSearcherState}. This method is called by
 * {@link #prepareForScheduling(Cluster, TopologyDetails)} and depends on variables initialized therein.
 *
 * @return a new instance of {@link SchedulingSearcherState}.
 */
private SchedulingSearcherState createSearcherState() {
    Map<WorkerSlot, Map<String, Integer>> workerCompCnts = new HashMap<>();
    Map<RasNode, Map<String, Integer>> nodeCompCnts = new HashMap<>();
    // populate with existing assignments
    SchedulerAssignment existingAssignment = cluster.getAssignmentById(topologyDetails.getId());
    if (existingAssignment != null) {
        existingAssignment.getExecutorToSlot().forEach((exec, ws) -> {
            String compId = execToComp.get(exec);
            RasNode node = nodes.getNodeById(ws.getNodeId());
            Map<String, Integer> compCnts = nodeCompCnts.computeIfAbsent(node, (k) -> new HashMap<>());
            // increment
            compCnts.put(compId, compCnts.getOrDefault(compId, 0) + 1);
            // populate worker to comp assignments
            compCnts = workerCompCnts.computeIfAbsent(ws, (k) -> new HashMap<>());
            // increment
            compCnts.put(compId, compCnts.getOrDefault(compId, 0) + 1);
        });
    }
    LinkedList<ExecutorDetails> unassignedAckers = new LinkedList<>();
    if (compToExecs.containsKey(Acker.ACKER_COMPONENT_ID)) {
        for (ExecutorDetails acker : compToExecs.get(Acker.ACKER_COMPONENT_ID)) {
            if (unassignedExecutors.contains(acker)) {
                unassignedAckers.add(acker);
            }
        }
    }
    return new SchedulingSearcherState(workerCompCnts, nodeCompCnts, maxStateSearch, maxSchedulingTimeMs, new ArrayList<>(unassignedExecutors), unassignedAckers, topologyDetails, execToComp);
}
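A minimal, self-contained sketch of the nested counting pattern above; the getOrDefault-plus-put increment can equivalently be written with Map.merge. The String keys here stand in for RasNode and WorkerSlot:

import java.util.HashMap;
import java.util.Map;

public class CompCountSketch {
    public static void main(String[] args) {
        Map<String, Map<String, Integer>> nodeCompCnts = new HashMap<>();
        // create the inner map on first sight of the node, as createSearcherState does
        Map<String, Integer> compCnts = nodeCompCnts.computeIfAbsent("node-1", k -> new HashMap<>());
        // equivalent to compCnts.put("bolt-1", compCnts.getOrDefault("bolt-1", 0) + 1)
        compCnts.merge("bolt-1", 1, Integer::sum);
        compCnts.merge("bolt-1", 1, Integer::sum);
        System.out.println(nodeCompCnts); // prints: {node-1={bolt-1=2}}
    }
}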
Also used : Acker(org.apache.storm.daemon.Acker) IExecSorter(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.IExecSorter) ExecSorterByConnectionCount(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.ExecSorterByConnectionCount) RasNode(org.apache.storm.scheduler.resource.RasNode) LoggerFactory(org.slf4j.LoggerFactory) HashMap(java.util.HashMap) NodeSorter(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.NodeSorter) RasNodes(org.apache.storm.scheduler.resource.RasNodes) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) ExecSorterByProximity(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.ExecSorterByProximity) DaemonConfig(org.apache.storm.DaemonConfig) Map(java.util.Map) WorkerSlot(org.apache.storm.scheduler.WorkerSlot) LinkedList(java.util.LinkedList) NodeSorterHostProximity(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.NodeSorterHostProximity) SchedulerAssignment(org.apache.storm.scheduler.SchedulerAssignment) Logger(org.slf4j.Logger) TopologyDetails(org.apache.storm.scheduler.TopologyDetails) Set(java.util.Set) INodeSorter(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.INodeSorter) SchedulingStatus(org.apache.storm.scheduler.resource.SchedulingStatus) Cluster(org.apache.storm.scheduler.Cluster) Time(org.apache.storm.utils.Time) SchedulingResult(org.apache.storm.scheduler.resource.SchedulingResult) List(java.util.List) ObjectReader(org.apache.storm.utils.ObjectReader) Config(org.apache.storm.Config) Collections(java.util.Collections) ExecutorDetails(org.apache.storm.scheduler.ExecutorDetails)

Example 9 with RasNode

use of org.apache.storm.scheduler.resource.RasNode in project storm by apache.

the class TestDefaultResourceAwareStrategy method testMultipleRacks.

/**
 * Test whether the strategy will choose the correct rack.
 */
@Test
public void testMultipleRacks() {
    final Map<String, SupervisorDetails> supMap = new HashMap<>();
    final Map<String, SupervisorDetails> supMapRack0 = genSupervisors(10, 4, 0, 400, 8000);
    // generate another rack of supervisors with less resources
    final Map<String, SupervisorDetails> supMapRack1 = genSupervisors(10, 4, 10, 200, 4000);
    // generate some supervisors that are depleted of one resource
    final Map<String, SupervisorDetails> supMapRack2 = genSupervisors(10, 4, 20, 0, 8000);
    // generate some that have a lot of memory but little CPU
    final Map<String, SupervisorDetails> supMapRack3 = genSupervisors(10, 4, 30, 10, 8000 * 2 + 4000);
    // generate some that have a lot of CPU but little memory
    final Map<String, SupervisorDetails> supMapRack4 = genSupervisors(10, 4, 40, 400 + 200 + 10, 1000);
    // Generate some that have neither resource, to verify that the strategy ranks these last
    // Also put a generic resource with 0 value in the resources list, to verify that it doesn't affect the sorting
    final Map<String, SupervisorDetails> supMapRack5 = genSupervisors(10, 4, 50, 0.0, 0.0, Collections.singletonMap("gpu.count", 0.0));
    supMap.putAll(supMapRack0);
    supMap.putAll(supMapRack1);
    supMap.putAll(supMapRack2);
    supMap.putAll(supMapRack3);
    supMap.putAll(supMapRack4);
    supMap.putAll(supMapRack5);
    Config config = createClusterConfig(100, 500, 500, null);
    config.put(Config.TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB, Double.MAX_VALUE);
    INimbus iNimbus = new INimbusTest();
    // create test DNSToSwitchMapping plugin
    DNSToSwitchMapping TestNetworkTopographyPlugin = new TestDNSToSwitchMapping(supMapRack0, supMapRack1, supMapRack2, supMapRack3, supMapRack4, supMapRack5);
    // generate topologies
    TopologyDetails topo1 = genTopology("topo-1", config, 8, 0, 2, 0, CURRENT_TIME - 2, 10, "user");
    TopologyDetails topo2 = genTopology("topo-2", config, 8, 0, 2, 0, CURRENT_TIME - 2, 10, "user");
    Topologies topologies = new Topologies(topo1, topo2);
    Cluster cluster = new Cluster(iNimbus, new ResourceMetrics(new StormMetricsRegistry()), supMap, new HashMap<>(), topologies, config);
    List<String> supHostnames = new LinkedList<>();
    for (SupervisorDetails sup : supMap.values()) {
        supHostnames.add(sup.getHost());
    }
    Map<String, List<String>> rackToNodes = new HashMap<>();
    Map<String, String> resolvedSuperVisors = TestNetworkTopographyPlugin.resolve(supHostnames);
    for (Map.Entry<String, String> entry : resolvedSuperVisors.entrySet()) {
        String hostName = entry.getKey();
        String rack = entry.getValue();
        rackToNodes.computeIfAbsent(rack, rid -> new ArrayList<>()).add(hostName);
    }
    cluster.setNetworkTopography(rackToNodes);
    DefaultResourceAwareStrategyOld rs = new DefaultResourceAwareStrategyOld();
    rs.prepareForScheduling(cluster, topo1);
    INodeSorter nodeSorter = new NodeSorterHostProximity(cluster, topo1, BaseResourceAwareStrategy.NodeSortType.DEFAULT_RAS);
    nodeSorter.prepare(null);
    Iterable<ObjectResourcesItem> sortedRacks = nodeSorter.getSortedRacks();
    Iterator<ObjectResourcesItem> it = sortedRacks.iterator();
    // Ranked first since rack-0 has the most balanced set of resources
    Assert.assertEquals("rack-0 should be ordered first", "rack-0", it.next().id);
    // Ranked second since rack-1 has a balanced set of resources but less than rack-0
    Assert.assertEquals("rack-1 should be ordered second", "rack-1", it.next().id);
    // Ranked third since rack-4 has a lot of CPU but not a lot of memory
    Assert.assertEquals("rack-4 should be ordered third", "rack-4", it.next().id);
    // Ranked fourth since rack-3 has a lot of memory but little CPU
    Assert.assertEquals("rack-3 should be ordered fourth", "rack-3", it.next().id);
    // Ranked fifth since rack-2 has no CPU resources
    Assert.assertEquals("rack-2 should be ordered fifth", "rack-2", it.next().id);
    // Ranked last since rack-5 has neither CPU nor memory available
    Assert.assertEquals("rack-5 should be ordered sixth", "rack-5", it.next().id);
    SchedulingResult schedulingResult = rs.schedule(cluster, topo1);
    assert (schedulingResult.isSuccess());
    SchedulerAssignment assignment = cluster.getAssignmentById(topo1.getId());
    for (WorkerSlot ws : assignment.getSlotToExecutors().keySet()) {
        // make sure all workers are scheduled on rack-0
        Assert.assertEquals("assert worker scheduled on rack-0", "rack-0", resolvedSuperVisors.get(rs.idToNode(ws.getNodeId()).getHostname()));
    }
    Assert.assertEquals("All executors in topo-1 scheduled", 0, cluster.getUnassignedExecutors(topo1).size());
    // Test if topology is already partially scheduled on one rack
    Iterator<ExecutorDetails> executorIterator = topo2.getExecutors().iterator();
    List<String> nodeHostnames = rackToNodes.get("rack-1");
    for (int i = 0; i < topo2.getExecutors().size() / 2; i++) {
        String nodeHostname = nodeHostnames.get(i % nodeHostnames.size());
        RasNode node = rs.hostnameToNodes(nodeHostname).get(0);
        WorkerSlot targetSlot = node.getFreeSlots().iterator().next();
        ExecutorDetails targetExec = executorIterator.next();
        // assign via the node so it keeps track of its free slots
        node.assign(targetSlot, topo2, Arrays.asList(targetExec));
    }
    rs = new DefaultResourceAwareStrategyOld();
    // schedule topo2
    schedulingResult = rs.schedule(cluster, topo2);
    assert (schedulingResult.isSuccess());
    assignment = cluster.getAssignmentById(topo2.getId());
    for (WorkerSlot ws : assignment.getSlotToExecutors().keySet()) {
        // make sure all workers are scheduled on rack-1
        Assert.assertEquals("assert worker scheduled on rack-1", "rack-1", resolvedSuperVisors.get(rs.idToNode(ws.getNodeId()).getHostname()));
    }
    Assert.assertEquals("All executors in topo-2 scheduled", 0, cluster.getUnassignedExecutors(topo2).size());
}
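The rackToNodes map in this test is built entry by entry with computeIfAbsent; the same grouping could be expressed with Collectors.groupingBy. A minimal, self-contained sketch over hypothetical host-to-rack data:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RackGroupingSketch {
    public static void main(String[] args) {
        // hypothetical resolver output: hostname -> rack id
        Map<String, String> resolvedSuperVisors = Map.of(
                "host-1", "rack-0",
                "host-2", "rack-0",
                "host-11", "rack-1");
        // group hostnames by the rack they resolve to
        Map<String, List<String>> rackToNodes = resolvedSuperVisors.entrySet().stream()
                .collect(Collectors.groupingBy(Map.Entry::getValue,
                        Collectors.mapping(Map.Entry::getKey, Collectors.toList())));
        System.out.println(rackToNodes); // e.g. {rack-0=[host-1, host-2], rack-1=[host-11]}
    }
}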
Also used : Arrays(java.util.Arrays) LoggerFactory(org.slf4j.LoggerFactory) INimbus(org.apache.storm.scheduler.INimbus) SupervisorResources(org.apache.storm.scheduler.SupervisorResources) ExtendWith(org.junit.jupiter.api.extension.ExtendWith) Matchers.closeTo(org.hamcrest.Matchers.closeTo) ResourceMetrics(org.apache.storm.scheduler.resource.normalization.ResourceMetrics) WorkerSlot(org.apache.storm.scheduler.WorkerSlot) Map(java.util.Map) TopologyBuilder(org.apache.storm.topology.TopologyBuilder) NodeSorterHostProximity(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.NodeSorterHostProximity) SchedulerAssignment(org.apache.storm.scheduler.SchedulerAssignment) DNSToSwitchMapping(org.apache.storm.networktopography.DNSToSwitchMapping) Collection(java.util.Collection) TopologyDetails(org.apache.storm.scheduler.TopologyDetails) Collectors(java.util.stream.Collectors) SharedOnHeap(org.apache.storm.topology.SharedOnHeap) Test(org.junit.jupiter.api.Test) WorkerResources(org.apache.storm.generated.WorkerResources) List(java.util.List) TestUtilsForResourceAwareScheduler(org.apache.storm.scheduler.resource.TestUtilsForResourceAwareScheduler) Entry(java.util.Map.Entry) Config(org.apache.storm.Config) Matchers.is(org.hamcrest.Matchers.is) InvalidTopologyException(org.apache.storm.generated.InvalidTopologyException) StormCommon(org.apache.storm.daemon.StormCommon) ExecutorDetails(org.apache.storm.scheduler.ExecutorDetails) IScheduler(org.apache.storm.scheduler.IScheduler) RasNode(org.apache.storm.scheduler.resource.RasNode) SharedOffHeapWithinNode(org.apache.storm.topology.SharedOffHeapWithinNode) NodeSorter(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.NodeSorter) EnumSource(org.junit.jupiter.params.provider.EnumSource) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) HashSet(java.util.HashSet) Topologies(org.apache.storm.scheduler.Topologies) ServerUtils(org.apache.storm.utils.ServerUtils) StormTopology(org.apache.storm.generated.StormTopology) NormalizedResourcesExtension(org.apache.storm.scheduler.resource.normalization.NormalizedResourcesExtension) LinkedList(java.util.LinkedList) StormMetricsRegistry(org.apache.storm.metric.StormMetricsRegistry) ValueSource(org.junit.jupiter.params.provider.ValueSource) Logger(org.slf4j.Logger) Iterator(java.util.Iterator) SharedOffHeapWithinWorker(org.apache.storm.topology.SharedOffHeapWithinWorker) INodeSorter(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.INodeSorter) SupervisorDetails(org.apache.storm.scheduler.SupervisorDetails) TopologyResources(org.apache.storm.daemon.nimbus.TopologyResources) Cluster(org.apache.storm.scheduler.Cluster) ResourceAwareScheduler(org.apache.storm.scheduler.resource.ResourceAwareScheduler) SchedulingResult(org.apache.storm.scheduler.resource.SchedulingResult) Nimbus(org.apache.storm.daemon.nimbus.Nimbus) AfterEach(org.junit.jupiter.api.AfterEach) ParameterizedTest(org.junit.jupiter.params.ParameterizedTest) Assert(org.junit.Assert) Collections(java.util.Collections)

Example 10 with RasNode

use of org.apache.storm.scheduler.resource.RasNode in project storm by apache.

the class TestDefaultResourceAwareStrategy method testMultipleRacksWithFavoritism.

/**
 * Test whether the strategy will choose the correct rack.
 */
@Test
public void testMultipleRacksWithFavoritism() {
    final Map<String, SupervisorDetails> supMap = new HashMap<>();
    final Map<String, SupervisorDetails> supMapRack0 = genSupervisors(10, 4, 0, 400, 8000);
    // generate another rack of supervisors with less resources
    final Map<String, SupervisorDetails> supMapRack1 = genSupervisors(10, 4, 10, 200, 4000);
    // generate some supervisors that are depleted of one resource
    final Map<String, SupervisorDetails> supMapRack2 = genSupervisors(10, 4, 20, 0, 8000);
    // generate some that have a lot of memory but little CPU
    final Map<String, SupervisorDetails> supMapRack3 = genSupervisors(10, 4, 30, 10, 8000 * 2 + 4000);
    // generate some that have a lot of CPU but little memory
    final Map<String, SupervisorDetails> supMapRack4 = genSupervisors(10, 4, 40, 400 + 200 + 10, 1000);
    supMap.putAll(supMapRack0);
    supMap.putAll(supMapRack1);
    supMap.putAll(supMapRack2);
    supMap.putAll(supMapRack3);
    supMap.putAll(supMapRack4);
    Config config = createClusterConfig(100, 500, 500, null);
    config.put(Config.TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB, Double.MAX_VALUE);
    INimbus iNimbus = new INimbusTest();
    // create test DNSToSwitchMapping plugin
    DNSToSwitchMapping TestNetworkTopographyPlugin = new TestDNSToSwitchMapping(supMapRack0, supMapRack1, supMapRack2, supMapRack3, supMapRack4);
    Config t1Conf = new Config();
    t1Conf.putAll(config);
    final List<String> t1FavoredHostNames = Arrays.asList("host-41", "host-42", "host-43");
    t1Conf.put(Config.TOPOLOGY_SCHEDULER_FAVORED_NODES, t1FavoredHostNames);
    final List<String> t1UnfavoredHostIds = Arrays.asList("host-1", "host-2", "host-3");
    t1Conf.put(Config.TOPOLOGY_SCHEDULER_UNFAVORED_NODES, t1UnfavoredHostIds);
    // generate topologies
    TopologyDetails topo1 = genTopology("topo-1", t1Conf, 8, 0, 2, 0, CURRENT_TIME - 2, 10, "user");
    Config t2Conf = new Config();
    t2Conf.putAll(config);
    t2Conf.put(Config.TOPOLOGY_SCHEDULER_FAVORED_NODES, Arrays.asList("host-31", "host-32", "host-33"));
    t2Conf.put(Config.TOPOLOGY_SCHEDULER_UNFAVORED_NODES, Arrays.asList("host-11", "host-12", "host-13"));
    TopologyDetails topo2 = genTopology("topo-2", t2Conf, 8, 0, 2, 0, CURRENT_TIME - 2, 10, "user");
    Topologies topologies = new Topologies(topo1, topo2);
    Cluster cluster = new Cluster(iNimbus, new ResourceMetrics(new StormMetricsRegistry()), supMap, new HashMap<>(), topologies, config);
    List<String> supHostnames = new LinkedList<>();
    for (SupervisorDetails sup : supMap.values()) {
        supHostnames.add(sup.getHost());
    }
    Map<String, List<String>> rackToNodes = new HashMap<>();
    Map<String, String> resolvedSuperVisors = TestNetworkTopographyPlugin.resolve(supHostnames);
    for (Map.Entry<String, String> entry : resolvedSuperVisors.entrySet()) {
        String hostName = entry.getKey();
        String rack = entry.getValue();
        rackToNodes.computeIfAbsent(rack, rid -> new ArrayList<>()).add(hostName);
    }
    cluster.setNetworkTopography(rackToNodes);
    DefaultResourceAwareStrategyOld rs = new DefaultResourceAwareStrategyOld();
    rs.prepareForScheduling(cluster, topo1);
    INodeSorter nodeSorter = new NodeSorterHostProximity(cluster, topo1, BaseResourceAwareStrategy.NodeSortType.DEFAULT_RAS);
    nodeSorter.prepare(null);
    Iterable<ObjectResourcesItem> sortedRacks = nodeSorter.getSortedRacks();
    Iterator<ObjectResourcesItem> it = sortedRacks.iterator();
    // Ranked first since rack-0 has the most balanced set of resources
    Assert.assertEquals("rack-0 should be ordered first", "rack-0", it.next().id);
    // Ranked second since rack-1 has a balanced set of resources but less than rack-0
    Assert.assertEquals("rack-1 should be ordered second", "rack-1", it.next().id);
    // Ranked third since rack-4 has a lot of CPU but not a lot of memory
    Assert.assertEquals("rack-4 should be ordered third", "rack-4", it.next().id);
    // Ranked fourth since rack-3 has a lot of memory but little CPU
    Assert.assertEquals("rack-3 should be ordered fourth", "rack-3", it.next().id);
    // Ranked last since rack-2 has no CPU resources
    Assert.assertEquals("rack-2 should be ordered fifth", "rack-2", it.next().id);
    SchedulingResult schedulingResult = rs.schedule(cluster, topo1);
    assert (schedulingResult.isSuccess());
    SchedulerAssignment assignment = cluster.getAssignmentById(topo1.getId());
    for (WorkerSlot ws : assignment.getSlotToExecutors().keySet()) {
        String hostName = rs.idToNode(ws.getNodeId()).getHostname();
        String rackId = resolvedSuperVisors.get(hostName);
        Assert.assertTrue(ws + " is neither on a favored node " + t1FavoredHostNames + " nor the highest priority rack (rack-0)", t1FavoredHostNames.contains(hostName) || "rack-0".equals(rackId));
        Assert.assertFalse(ws + " is a part of an unfavored node " + t1UnfavoredHostIds, t1UnfavoredHostIds.contains(hostName));
    }
    Assert.assertEquals("All executors in topo-1 scheduled", 0, cluster.getUnassignedExecutors(topo1).size());
    // Test if topology is already partially scheduled on one rack
    Iterator<ExecutorDetails> executorIterator = topo2.getExecutors().iterator();
    List<String> nodeHostnames = rackToNodes.get("rack-1");
    for (int i = 0; i < topo2.getExecutors().size() / 2; i++) {
        String nodeHostname = nodeHostnames.get(i % nodeHostnames.size());
        RasNode node = rs.hostnameToNodes(nodeHostname).get(0);
        WorkerSlot targetSlot = node.getFreeSlots().iterator().next();
        ExecutorDetails targetExec = executorIterator.next();
        // assign via the node so it keeps track of its free slots
        node.assign(targetSlot, topo2, Arrays.asList(targetExec));
    }
    rs = new DefaultResourceAwareStrategyOld();
    // schedule topo2
    schedulingResult = rs.schedule(cluster, topo2);
    assert (schedulingResult.isSuccess());
    assignment = cluster.getAssignmentById(topo2.getId());
    for (WorkerSlot ws : assignment.getSlotToExecutors().keySet()) {
        // make sure all workers are scheduled on rack-1
        // The favored nodes would have put it on a different rack, but because that rack does not have free space to run the
        // topology it falls back to this rack
        Assert.assertEquals("assert worker scheduled on rack-1", "rack-1", resolvedSuperVisors.get(rs.idToNode(ws.getNodeId()).getHostname()));
    }
    Assert.assertEquals("All executors in topo-2 scheduled", 0, cluster.getUnassignedExecutors(topo2).size());
}
Also used : ExecutorDetails(org.apache.storm.scheduler.ExecutorDetails) HashMap(java.util.HashMap) Config(org.apache.storm.Config) StormMetricsRegistry(org.apache.storm.metric.StormMetricsRegistry) NodeSorterHostProximity(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.NodeSorterHostProximity) SchedulingResult(org.apache.storm.scheduler.resource.SchedulingResult) ResourceMetrics(org.apache.storm.scheduler.resource.normalization.ResourceMetrics) INodeSorter(org.apache.storm.scheduler.resource.strategies.scheduling.sorter.INodeSorter) WorkerSlot(org.apache.storm.scheduler.WorkerSlot) RasNode(org.apache.storm.scheduler.resource.RasNode) DNSToSwitchMapping(org.apache.storm.networktopography.DNSToSwitchMapping) Topologies(org.apache.storm.scheduler.Topologies) List(java.util.List) ArrayList(java.util.ArrayList) LinkedList(java.util.LinkedList) SupervisorDetails(org.apache.storm.scheduler.SupervisorDetails) Cluster(org.apache.storm.scheduler.Cluster) INimbus(org.apache.storm.scheduler.INimbus) TopologyDetails(org.apache.storm.scheduler.TopologyDetails) SchedulerAssignment(org.apache.storm.scheduler.SchedulerAssignment) Map(java.util.Map) Test(org.junit.jupiter.api.Test) ParameterizedTest(org.junit.jupiter.params.ParameterizedTest)

Aggregations

RasNode (org.apache.storm.scheduler.resource.RasNode): 10
ArrayList (java.util.ArrayList): 7
HashMap (java.util.HashMap): 7
List (java.util.List): 7
Map (java.util.Map): 7
LinkedList (java.util.LinkedList): 5
ExecutorDetails (org.apache.storm.scheduler.ExecutorDetails): 5
TopologyDetails (org.apache.storm.scheduler.TopologyDetails): 5
WorkerSlot (org.apache.storm.scheduler.WorkerSlot): 5
SchedulingResult (org.apache.storm.scheduler.resource.SchedulingResult): 5
HashSet (java.util.HashSet): 4
Config (org.apache.storm.Config): 4
Cluster (org.apache.storm.scheduler.Cluster): 4
SchedulerAssignment (org.apache.storm.scheduler.SchedulerAssignment): 4
Logger (org.slf4j.Logger): 4
LoggerFactory (org.slf4j.LoggerFactory): 4
Set (java.util.Set): 3
INodeSorter (org.apache.storm.scheduler.resource.strategies.scheduling.sorter.INodeSorter): 3
NodeSorterHostProximity (org.apache.storm.scheduler.resource.strategies.scheduling.sorter.NodeSorterHostProximity): 3
Collection (java.util.Collection): 2