
Example 16 with LeafQueue

Use of org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue in project hadoop by apache.

From the class SLSCapacityScheduler, method initQueueMetrics.

private void initQueueMetrics(CSQueue queue) {
    if (queue instanceof LeafQueue) {
        SortedMap<String, Counter> counterMap = metrics.getCounters();
        String queueName = queue.getQueueName();
        // Per-queue counters for pending and allocated memory/cores.
        String[] names = new String[] {
            QUEUE_COUNTER_PREFIX + queueName + ".pending.memory",
            QUEUE_COUNTER_PREFIX + queueName + ".pending.cores",
            QUEUE_COUNTER_PREFIX + queueName + ".allocated.memory",
            QUEUE_COUNTER_PREFIX + queueName + ".allocated.cores" };
        // Register any counter that is not yet present in the registry.
        for (int i = names.length - 1; i >= 0; i--) {
            if (!counterMap.containsKey(names[i])) {
                metrics.counter(names[i]);
                counterMap = metrics.getCounters();
            }
        }
        // Ensure the scheduler metrics track this queue exactly once.
        queueLock.lock();
        try {
            if (!schedulerMetrics.isTracked(queueName)) {
                schedulerMetrics.trackQueue(queueName);
            }
        } finally {
            queueLock.unlock();
        }
        return;
    }
    // Not a leaf queue: recurse into the child queues.
    for (CSQueue child : queue.getChildQueues()) {
        initQueueMetrics(child);
    }
}
Also used : Counter(com.codahale.metrics.Counter) LeafQueue(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue) CSQueue(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue)
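
The counters above are Dropwizard (codahale) Metrics Counter objects registered under per-queue names. The following is a minimal standalone sketch of the same registration pattern; the QUEUE_COUNTER_PREFIX value and the queue name are placeholders assumed for illustration, not taken from the SLS source.

import java.util.SortedMap;
import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;

public class QueueCounterSketch {
    // Assumed prefix; SLSCapacityScheduler defines its own constant.
    private static final String QUEUE_COUNTER_PREFIX = "counter.queue.";

    public static void main(String[] args) {
        MetricRegistry metrics = new MetricRegistry();
        String queueName = "default"; // hypothetical leaf queue name
        String[] names = new String[] {
            QUEUE_COUNTER_PREFIX + queueName + ".pending.memory",
            QUEUE_COUNTER_PREFIX + queueName + ".pending.cores",
            QUEUE_COUNTER_PREFIX + queueName + ".allocated.memory",
            QUEUE_COUNTER_PREFIX + queueName + ".allocated.cores" };
        SortedMap<String, Counter> counterMap = metrics.getCounters();
        for (String name : names) {
            if (!counterMap.containsKey(name)) {
                // counter() registers the counter if absent and returns it.
                metrics.counter(name);
                counterMap = metrics.getCounters();
            }
        }
        System.out.println(counterMap.keySet());
    }
}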

Example 17 with LeafQueue

Use of org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue in project hadoop by apache.

From the class IntraQueueCandidatesSelector, method selectCandidates.

@Override
public Map<ApplicationAttemptId, Set<RMContainer>> selectCandidates(Map<ApplicationAttemptId, Set<RMContainer>> selectedCandidates, Resource clusterResource, Resource totalPreemptedResourceAllowed) {
    // 1. Calculate the abnormality within each queue one by one.
    computeIntraQueuePreemptionDemand(clusterResource, totalPreemptedResourceAllowed, selectedCandidates);
    // 2. Previous selectors (with higher priority) could have already
    // selected containers. We need to deduct preemptable resources
    // based on already selected candidates.
    CapacitySchedulerPreemptionUtils.deductPreemptableResourcesBasedSelectedCandidates(preemptionContext, selectedCandidates);
    // 3. Loop through all partitions to select containers for preemption.
    for (String partition : preemptionContext.getAllPartitions()) {
        LinkedHashSet<String> queueNames = preemptionContext.getUnderServedQueuesPerPartition(partition);
        // Error check to handle non-mapped labels to queue.
        if (null == queueNames) {
            continue;
        }
        // 4. Iterate from most under-served queue in order.
        for (String queueName : queueNames) {
            LeafQueue leafQueue = preemptionContext.getQueueByPartition(queueName, RMNodeLabelsManager.NO_LABEL).leafQueue;
            // skip if not a leafqueue
            if (null == leafQueue) {
                continue;
            }
            // Don't preempt if disabled for this queue.
            if (leafQueue.getPreemptionDisabled()) {
                continue;
            }
            // 5. Calculate the resource to obtain per partition
            Map<String, Resource> resToObtainByPartition = fifoPreemptionComputePlugin.getResourceDemandFromAppsPerQueue(queueName, partition);
            // 6. Based on the selected resource demand per partition, select
            // containers with known policy from inter-queue preemption.
            try {
                leafQueue.getReadLock().lock();
                Iterator<FiCaSchedulerApp> desc = leafQueue.getOrderingPolicy().getPreemptionIterator();
                while (desc.hasNext()) {
                    FiCaSchedulerApp app = desc.next();
                    preemptFromLeastStarvedApp(selectedCandidates, clusterResource, totalPreemptedResourceAllowed, resToObtainByPartition, leafQueue, app);
                }
            } finally {
                leafQueue.getReadLock().unlock();
            }
        }
    }
    return selectedCandidates;
}
Also used : FiCaSchedulerApp(org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp) Resource(org.apache.hadoop.yarn.api.records.Resource) LeafQueue(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue)
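
The walk over the ordering policy's preemption iterator is guarded by the leaf queue's read lock. Below is a minimal standalone sketch of the same lock-then-iterate pattern using java.util.concurrent directly; the application list and names are placeholders, not YARN types.

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockIterationSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    // Placeholder for the queue's applications; the real selector uses
    // the ordering policy's preemption iterator instead of a fixed list.
    private final List<String> apps = Arrays.asList("app-3", "app-2", "app-1");

    public void visitApps() {
        lock.readLock().lock();
        try {
            Iterator<String> it = apps.iterator();
            while (it.hasNext()) {
                // In the real selector this is where containers are chosen
                // for preemption from the least-prioritized application.
                System.out.println("visiting " + it.next());
            }
        } finally {
            // Always release the read lock, even if visiting throws.
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        new ReadLockIterationSketch().visitApps();
    }
}

Acquiring the lock before entering the try block keeps the unlock() in the finally clause from running in a state where the lock was never held.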

Aggregations

LeafQueue (org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue) 17
Resource (org.apache.hadoop.yarn.api.records.Resource) 9
FiCaSchedulerApp (org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp) 7
ArrayList (java.util.ArrayList) 5
Matchers.anyString (org.mockito.Matchers.anyString) 5
TreeSet (java.util.TreeSet) 4
HashMap (java.util.HashMap) 3
RMContainer (org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer) 3
ResourceUsage (org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage) 3
CSQueue (org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue) 3
ParentQueue (org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue) 3
DominantResourceCalculator (org.apache.hadoop.yarn.util.resource.DominantResourceCalculator) 3
Test (org.junit.Test) 3
InvocationOnMock (org.mockito.invocation.InvocationOnMock) 3
ReentrantReadWriteLock (java.util.concurrent.locks.ReentrantReadWriteLock) 2
ApplicationAttemptId (org.apache.hadoop.yarn.api.records.ApplicationAttemptId) 2
ApplicationId (org.apache.hadoop.yarn.api.records.ApplicationId) 2
QueueMetrics (org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics) 2
CapacityScheduler (org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler) 2
CapacitySchedulerConfiguration (org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration) 2