
Example 51 with NavigableMap

use of java.util.NavigableMap in project hbase by apache.

the class AsyncAggregationClient method findMedian.

private static <R, S, P extends Message, Q extends Message, T extends Message> void findMedian(CompletableFuture<R> future, RawAsyncTable table, ColumnInterpreter<R, S, P, Q, T> ci, Scan scan, NavigableMap<byte[], S> sumByRegion) {
    double halfSum = ci.divideForAvg(sumByRegion.values().stream().reduce(ci::add).get(), 2L);
    S movingSum = null;
    byte[] startRow = null;
    for (Map.Entry<byte[], S> entry : sumByRegion.entrySet()) {
        startRow = entry.getKey();
        S newMovingSum = ci.add(movingSum, entry.getValue());
        if (ci.divideForAvg(newMovingSum, 1L) > halfSum) {
            break;
        }
        movingSum = newMovingSum;
    }
    if (startRow != null) {
        scan.withStartRow(startRow);
    }
    // we cannot pass movingSum directly to the anonymous class below, as it is not effectively final.
    S baseSum = movingSum;
    byte[] family = scan.getFamilies()[0];
    NavigableSet<byte[]> qualifiers = scan.getFamilyMap().get(family);
    byte[] weightQualifier = qualifiers.last();
    byte[] valueQualifier = qualifiers.first();
    table.scan(scan, new RawScanResultConsumer() {

        private S sum = baseSum;

        private R value = null;

        @Override
        public void onNext(Result[] results, ScanController controller) {
            try {
                for (Result result : results) {
                    Cell weightCell = result.getColumnLatestCell(family, weightQualifier);
                    R weight = ci.getValue(family, weightQualifier, weightCell);
                    sum = ci.add(sum, ci.castToReturnType(weight));
                    if (ci.divideForAvg(sum, 1L) > halfSum) {
                        if (value != null) {
                            future.complete(value);
                        } else {
                            future.completeExceptionally(new NoSuchElementException());
                        }
                        controller.terminate();
                        return;
                    }
                    Cell valueCell = result.getColumnLatestCell(family, valueQualifier);
                    value = ci.getValue(family, valueQualifier, valueCell);
                }
            } catch (IOException e) {
                future.completeExceptionally(e);
                controller.terminate();
            }
        }

        @Override
        public void onError(Throwable error) {
            future.completeExceptionally(error);
        }

        @Override
        public void onComplete() {
            if (!future.isDone()) {
                // we should not reach here as the future should be completed in onNext.
                future.completeExceptionally(new NoSuchElementException());
            }
        }
    });
}
Also used : RawScanResultConsumer(org.apache.hadoop.hbase.client.RawScanResultConsumer) IOException(java.io.IOException) Result(org.apache.hadoop.hbase.client.Result) Map(java.util.Map) NavigableMap(java.util.NavigableMap) TreeMap(java.util.TreeMap) Cell(org.apache.hadoop.hbase.Cell) NoSuchElementException(java.util.NoSuchElementException)
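
The NavigableMap is keyed by region start row, so iterating entrySet() visits the regions in row order and a running sum pinpoints the region containing the median. Below is a minimal standalone sketch of that walk, assuming plain Long sums in place of the generic S and Arrays::compare (Java 9+) in place of Bytes.BYTES_COMPARATOR:

import java.util.Arrays;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class MedianRegionSketch {

    // Returns the start row of the region whose cumulative sum first
    // crosses half of the grand total; null only if the map is empty.
    static byte[] medianRegionStartRow(NavigableMap<byte[], Long> sumByRegion) {
        double halfSum = sumByRegion.values().stream().mapToLong(Long::longValue).sum() / 2.0;
        long movingSum = 0;
        byte[] startRow = null;
        for (Map.Entry<byte[], Long> entry : sumByRegion.entrySet()) {
            startRow = entry.getKey();
            long newMovingSum = movingSum + entry.getValue();
            if (newMovingSum > halfSum) {
                break; // the median row lives in this region
            }
            movingSum = newMovingSum;
        }
        return startRow;
    }

    public static void main(String[] args) {
        NavigableMap<byte[], Long> sums = new TreeMap<>(Arrays::compare);
        sums.put(new byte[] { 'a' }, 10L);
        sums.put(new byte[] { 'm' }, 50L);
        sums.put(new byte[] { 't' }, 10L);
        // total = 70, half = 35, crossed inside the region starting at 'm'
        System.out.println((char) medianRegionStartRow(sums)[0]); // m
    }
}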

Example 52 with NavigableMap

use of java.util.NavigableMap in project hbase by apache.

the class Increment method getFamilyMapOfLongs.

/**
   * Before 0.95, when you called Increment#getFamilyMap(), you got back
   * a map of families to a list of Longs. Now, {@link #getFamilyCellMap()} returns
   * a map of families to a list of Cells. This method has been added so you can
   * get the old behavior.
   * @return Map of families to a Map of qualifiers and their Long increments.
   * @since 0.95.0
   */
public Map<byte[], NavigableMap<byte[], Long>> getFamilyMapOfLongs() {
    NavigableMap<byte[], List<Cell>> map = super.getFamilyCellMap();
    Map<byte[], NavigableMap<byte[], Long>> results = new TreeMap<>(Bytes.BYTES_COMPARATOR);
    for (Map.Entry<byte[], List<Cell>> entry : map.entrySet()) {
        NavigableMap<byte[], Long> longs = new TreeMap<>(Bytes.BYTES_COMPARATOR);
        for (Cell cell : entry.getValue()) {
            longs.put(CellUtil.cloneQualifier(cell), Bytes.toLong(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
        }
        results.put(entry.getKey(), longs);
    }
    return results;
}
Also used : NavigableMap(java.util.NavigableMap) List(java.util.List) TreeMap(java.util.TreeMap) NavigableMap(java.util.NavigableMap) TreeMap(java.util.TreeMap) Map(java.util.Map) Cell(org.apache.hadoop.hbase.Cell)
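
A hypothetical usage of the method above, staging two counter updates and reading the deltas back; the row, family, and qualifier names are made up for illustration:

import java.util.Map;
import java.util.NavigableMap;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementReadBack {

    public static void main(String[] args) {
        // Stage two counter updates on one row (names are illustrative).
        Increment inc = new Increment(Bytes.toBytes("row1"));
        inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("clicks"), 1L);
        inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("views"), 5L);
        // getFamilyMapOfLongs() restores the pre-0.95 view:
        // family -> (qualifier -> long delta), both levels sorted
        // by Bytes.BYTES_COMPARATOR.
        for (Map.Entry<byte[], NavigableMap<byte[], Long>> family : inc.getFamilyMapOfLongs().entrySet()) {
            for (Map.Entry<byte[], Long> qual : family.getValue().entrySet()) {
                System.out.println(Bytes.toString(family.getKey()) + ":"
                        + Bytes.toString(qual.getKey()) + " += " + qual.getValue());
            }
        }
    }
}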

Example 53 with NavigableMap

use of java.util.NavigableMap in project hadoop by apache.

the class StageAllocatorGreedyRLE method computeStageAllocation.

@Override
public Map<ReservationInterval, Resource> computeStageAllocation(Plan plan, Map<Long, Resource> planLoads, RLESparseResourceAllocation planModifications, ReservationRequest rr, long stageEarliestStart, long stageDeadline, String user, ReservationId oldId) throws PlanningException {
    // abort early if the interval is not satisfiable
    if (stageEarliestStart + rr.getDuration() > stageDeadline) {
        return null;
    }
    Map<ReservationInterval, Resource> allocationRequests = new HashMap<ReservationInterval, Resource>();
    Resource totalCapacity = plan.getTotalCapacity();
    // compute the gang as a resource and get the duration
    Resource sizeOfGang = Resources.multiply(rr.getCapability(), rr.getConcurrency());
    long dur = rr.getDuration();
    long step = plan.getStep();
    // ceil the duration to the next multiple of the plan step
    if (dur % step != 0) {
        dur += (step - (dur % step));
    }
    // we know for sure that this division has no remainder (part of the
    // contract with the user, validated beforehand)
    int gangsToPlace = rr.getNumContainers() / rr.getConcurrency();
    // get available resources from plan
    RLESparseResourceAllocation netRLERes = plan.getAvailableResourceOverTime(user, oldId, stageEarliestStart, stageDeadline);
    // remove plan modifications
    netRLERes = RLESparseResourceAllocation.merge(plan.getResourceCalculator(), totalCapacity, netRLERes, planModifications, RLEOperator.subtract, stageEarliestStart, stageDeadline);
    // loop trying to place gangs until we are done, or the remaining
    // range of times becomes invalid
    while (gangsToPlace > 0 && stageEarliestStart + dur <= stageDeadline) {
        // as we run along we remember how many gangs we can fit, and what
        // was the most constraining moment in time (we will restart just
        // after that to place the next batch)
        int maxGang = gangsToPlace;
        long minPoint = -1;
        // focus our attention to a time-range under consideration
        NavigableMap<Long, Resource> partialMap = netRLERes.getRangeOverlapping(stageEarliestStart, stageDeadline).getCumulative();
        // revert the map for right-to-left allocation
        if (!allocateLeft) {
            partialMap = partialMap.descendingMap();
        }
        Iterator<Entry<Long, Resource>> netIt = partialMap.entrySet().iterator();
        long oldT = stageDeadline;
        // inner loop: walk the RLE points of this interval, tracking how many
        // gangs fit (the outer loop restarts just after the most constraining point)
        while (maxGang > 0 && netIt.hasNext()) {
            long t;
            Resource curAvailRes;
            Entry<Long, Resource> e = netIt.next();
            if (allocateLeft) {
                t = Math.max(e.getKey(), stageEarliestStart);
                curAvailRes = e.getValue();
            } else {
                t = oldT;
                oldT = e.getKey();
                //attention: higher means lower, because we reversed the map direction
                curAvailRes = partialMap.higherEntry(t).getValue();
            }
            // check exit/skip conditions
            if (curAvailRes == null) {
                // skip undefined regions (should not happen except at the borders)
                continue;
            }
            if (exitCondition(t, stageEarliestStart, stageDeadline, dur)) {
                break;
            }
            // compute maximum number of gangs we could fit
            int curMaxGang = (int) Math.floor(Resources.divide(plan.getResourceCalculator(), totalCapacity, curAvailRes, sizeOfGang));
            curMaxGang = Math.min(gangsToPlace, curMaxGang);
            // keep the most constraining (minimum) gang count seen so far, and
            // remember where it occurred (useful for the next attempt)
            if (curMaxGang <= maxGang) {
                maxGang = curMaxGang;
                minPoint = t;
            }
        }
        // update data structures that retain the progress made so far
        gangsToPlace = trackProgress(planModifications, rr, stageEarliestStart, stageDeadline, allocationRequests, dur, gangsToPlace, maxGang);
        // reset the next range of time-intervals to deal with
        if (allocateLeft) {
            // end of this allocation
            if (partialMap.higherKey(minPoint) == null) {
                stageEarliestStart = stageEarliestStart + dur;
            } else {
                stageEarliestStart = Math.min(partialMap.higherKey(minPoint), stageEarliestStart + dur);
            }
        } else {
            // same as above moving right-to-left
            if (partialMap.higherKey(minPoint) == null) {
                stageDeadline = stageDeadline - dur;
            } else {
                stageDeadline = Math.max(partialMap.higherKey(minPoint), stageDeadline - dur);
            }
        }
    }
    // if no gangs are left to place we succeed and return the allocation
    if (gangsToPlace == 0) {
        return allocationRequests;
    } else {
        // we failed to place all gangs, so undo the tentative changes
        // recorded in planModifications (needed for ANY).
        for (Map.Entry<ReservationInterval, Resource> tempAllocation : allocationRequests.entrySet()) {
            planModifications.removeInterval(tempAllocation.getKey(), tempAllocation.getValue());
        }
        // and return null to signal failure in this allocation
        return null;
    }
}
Also used : HashMap(java.util.HashMap) Resource(org.apache.hadoop.yarn.api.records.Resource) ReservationInterval(org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval) Entry(java.util.Map.Entry) RLESparseResourceAllocation(org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation) HashMap(java.util.HashMap) NavigableMap(java.util.NavigableMap) Map(java.util.Map)
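
The subtle step above is the reversed view: after descendingMap(), "higher" means a strictly smaller original key, so higherEntry(t) fetches the RLE value that took effect just before t, which is exactly what the right-to-left scan needs. A minimal JDK-only sketch of that semantics:

import java.util.NavigableMap;
import java.util.TreeMap;

public class ReversedViewSketch {

    public static void main(String[] args) {
        // time -> value in effect from that time until the next key
        NavigableMap<Long, String> timeline = new TreeMap<>();
        timeline.put(0L, "A");
        timeline.put(10L, "B");
        timeline.put(20L, "C");

        // Reversed view iterates 20, 10, 0 -- no copying involved.
        NavigableMap<Long, String> reversed = timeline.descendingMap();

        // In the reversed ordering, "higher" means a strictly smaller
        // original key: higherEntry(20) is the entry at 10, i.e. the
        // value in effect just before t = 20.
        System.out.println(reversed.higherEntry(20L)); // 10=B
    }
}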

Example 54 with NavigableMap

use of java.util.NavigableMap in project hadoop by apache.

the class CapacityOverTimePolicy method validate.

/**
   * The validation algorithm walks over the RLE-encoded allocation and, at
   * every transition point (where the start or end of the checking window
   * encounters a value in the RLE), checks whether the computed integral
   * exceeds the quota limit. Note that this might not find the exact time of
   * a violation, but if a violation exists it will find it. The advantage is
   * a much lower number of checks compared to time-slot by time-slot checks.
   *
   * @param plan the plan to validate against
   * @param reservation the reservation allocation to test.
   * @throws PlanningException if the validation fails.
   */
@Override
public void validate(Plan plan, ReservationAllocation reservation) throws PlanningException {
    // rely on the super-class checks for: 1) user-match, 2) physical
    // cluster limits, and 3) maxInst (via override of available)
    try {
        super.validate(plan, reservation);
    } catch (PlanningException p) {
        // wrap it in the proper quota exception
        throw new PlanningQuotaException(p);
    }
    //---- check for integral violations of capacity --------
    // Gather a view of what to check (curr allocation of user, minus old
    // version of this reservation, plus new version)
    RLESparseResourceAllocation consumptionForUserOverTime = plan.getConsumptionForUserOverTime(reservation.getUser(), reservation.getStartTime() - validWindow, reservation.getEndTime() + validWindow);
    ReservationAllocation old = plan.getReservationById(reservation.getReservationId());
    if (old != null) {
        consumptionForUserOverTime = RLESparseResourceAllocation.merge(plan.getResourceCalculator(), plan.getTotalCapacity(), consumptionForUserOverTime, old.getResourcesOverTime(), RLEOperator.add, reservation.getStartTime() - validWindow, reservation.getEndTime() + validWindow);
    }
    RLESparseResourceAllocation resRLE = reservation.getResourcesOverTime();
    RLESparseResourceAllocation toCheck = RLESparseResourceAllocation.merge(plan.getResourceCalculator(), plan.getTotalCapacity(), consumptionForUserOverTime, resRLE, RLEOperator.add, Long.MIN_VALUE, Long.MAX_VALUE);
    NavigableMap<Long, Resource> integralUp = new TreeMap<>();
    NavigableMap<Long, Resource> integralDown = new TreeMap<>();
    long prevTime = toCheck.getEarliestStartTime();
    IntegralResource prevResource = new IntegralResource(0L, 0L);
    IntegralResource runningTot = new IntegralResource(0L, 0L);
    // add intermediate check points, one every validWindow, inside long RLE steps
    Map<Long, Resource> temp = new TreeMap<>();
    for (Map.Entry<Long, Resource> pointToCheck : toCheck.getCumulative().entrySet()) {
        Long timeToCheck = pointToCheck.getKey();
        Resource resourceToCheck = pointToCheck.getValue();
        Long nextPoint = toCheck.getCumulative().higherKey(timeToCheck);
        if (nextPoint == null || toCheck.getCumulative().get(nextPoint) == null) {
            continue;
        }
        for (int i = 1; i <= (nextPoint - timeToCheck) / validWindow; i++) {
            temp.put(timeToCheck + (i * validWindow), resourceToCheck);
        }
    }
    temp.putAll(toCheck.getCumulative());
    // compute point-wise integral for the up-fronts and down-fronts
    for (Map.Entry<Long, Resource> currPoint : temp.entrySet()) {
        Long currTime = currPoint.getKey();
        Resource currResource = currPoint.getValue();
        //add to running total current contribution
        prevResource.multiplyBy(currTime - prevTime);
        runningTot.add(prevResource);
        integralUp.put(currTime, normalizeToResource(runningTot, validWindow));
        integralDown.put(currTime + validWindow, normalizeToResource(runningTot, validWindow));
        if (currResource != null) {
            prevResource.memory = currResource.getMemorySize();
            prevResource.vcores = currResource.getVirtualCores();
        } else {
            prevResource.memory = 0L;
            prevResource.vcores = 0L;
        }
        prevTime = currTime;
    }
    // compute final integral as delta of up minus down transitions
    RLESparseResourceAllocation intUp = new RLESparseResourceAllocation(integralUp, plan.getResourceCalculator());
    RLESparseResourceAllocation intDown = new RLESparseResourceAllocation(integralDown, plan.getResourceCalculator());
    RLESparseResourceAllocation integral = RLESparseResourceAllocation.merge(plan.getResourceCalculator(), plan.getTotalCapacity(), intUp, intDown, RLEOperator.subtract, Long.MIN_VALUE, Long.MAX_VALUE);
    // define over-time integral limit
    // note: this is aligned with the normalization done above
    NavigableMap<Long, Resource> tlimit = new TreeMap<>();
    Resource maxAvgRes = Resources.multiply(plan.getTotalCapacity(), maxAvg);
    tlimit.put(toCheck.getEarliestStartTime() - validWindow, maxAvgRes);
    RLESparseResourceAllocation targetLimit = new RLESparseResourceAllocation(tlimit, plan.getResourceCalculator());
    // compare using merge() limit with integral
    try {
        RLESparseResourceAllocation.merge(plan.getResourceCalculator(), plan.getTotalCapacity(), targetLimit, integral, RLEOperator.subtractTestNonNegative, reservation.getStartTime() - validWindow, reservation.getEndTime() + validWindow);
    } catch (PlanningException p) {
        throw new PlanningQuotaException("Integral (avg over time) quota capacity " + maxAvg + " over a window of " + validWindow / 1000 + " seconds would be exceeded by accepting reservation: " + reservation.getReservationId(), p);
    }
}
Also used : PlanningQuotaException(org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningQuotaException) Resource(org.apache.hadoop.yarn.api.records.Resource) TreeMap(java.util.TreeMap) PlanningException(org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException) TreeMap(java.util.TreeMap) Map(java.util.Map) NavigableMap(java.util.NavigableMap)
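
integralUp and integralDown hold the same running total, the latter shifted right by validWindow, and their difference is the integral over a sliding window. A scalar sketch of that identity, assuming a single long-valued resource in place of Resource:

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class SlidingIntegralSketch {

    // Cumulative total in effect at time t (0 before the first key).
    static long at(NavigableMap<Long, Long> cumulative, long t) {
        Map.Entry<Long, Long> e = cumulative.floorEntry(t);
        return e == null ? 0L : e.getValue();
    }

    // Integral over the window (t - window, t] as an up/down difference.
    static long windowedSum(NavigableMap<Long, Long> cumulative, long t, long window) {
        return at(cumulative, t) - at(cumulative, t - window);
    }

    public static void main(String[] args) {
        NavigableMap<Long, Long> cumulative = new TreeMap<>();
        cumulative.put(0L, 5L);   // 5 units arrived at t = 0
        cumulative.put(10L, 8L);  // 3 more at t = 10
        System.out.println(windowedSum(cumulative, 12L, 5L)); // 8 - 5 = 3
    }
}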

Example 55 with NavigableMap

use of java.util.NavigableMap in project flink by apache.

the class FlinkRelDecorrelator method decorrelateRel.

/**
	 * Rewrites a {@link LogicalAggregate}.
	 *
	 * @param rel Aggregate to rewrite
	 */
public Frame decorrelateRel(LogicalAggregate rel) {
    if (rel.getGroupType() != Aggregate.Group.SIMPLE) {
        throw new AssertionError(Bug.CALCITE_461_FIXED);
    }
    // Aggregate itself should not reference cor vars.
    assert !cm.mapRefRelToCorVar.containsKey(rel);
    final RelNode oldInput = rel.getInput();
    final Frame frame = getInvoke(oldInput, rel);
    if (frame == null) {
        // If input has not been rewritten, do not rewrite this rel.
        return null;
    }
    final RelNode newInput = frame.r;
    // map from newInput positions to the new Project's output positions
    Map<Integer, Integer> mapNewInputToProjOutputPos = Maps.newHashMap();
    final int oldGroupKeyCount = rel.getGroupSet().cardinality();
    // Project projects the original expressions,
    // plus any correlated variables the input wants to pass along.
    final List<Pair<RexNode, String>> projects = Lists.newArrayList();
    List<RelDataTypeField> newInputOutput = newInput.getRowType().getFieldList();
    int newPos = 0;
    // oldInput has the original group by keys in the front.
    final NavigableMap<Integer, RexLiteral> omittedConstants = new TreeMap<>();
    for (int i = 0; i < oldGroupKeyCount; i++) {
        final RexLiteral constant = projectedLiteral(newInput, i);
        if (constant != null) {
            // Exclude constants. Aggregate({true}) occurs because Aggregate({})
            // would generate 1 row even when applied to an empty table.
            omittedConstants.put(i, constant);
            continue;
        }
        int newInputPos = frame.oldToNewOutputPos.get(i);
        projects.add(RexInputRef.of2(newInputPos, newInputOutput));
        mapNewInputToProjOutputPos.put(newInputPos, newPos);
        newPos++;
    }
    final SortedMap<Correlation, Integer> mapCorVarToOutputPos = new TreeMap<>();
    if (!frame.corVarOutputPos.isEmpty()) {
        // project the correlated variables right after the group keys, i.e.
        // starting at position oldGroupKeyCount (less any omitted constants).
        for (Map.Entry<Correlation, Integer> entry : frame.corVarOutputPos.entrySet()) {
            projects.add(RexInputRef.of2(entry.getValue(), newInputOutput));
            mapCorVarToOutputPos.put(entry.getKey(), newPos);
            mapNewInputToProjOutputPos.put(entry.getValue(), newPos);
            newPos++;
        }
    }
    // add the remaining fields
    final int newGroupKeyCount = newPos;
    for (int i = 0; i < newInputOutput.size(); i++) {
        if (!mapNewInputToProjOutputPos.containsKey(i)) {
            projects.add(RexInputRef.of2(i, newInputOutput));
            mapNewInputToProjOutputPos.put(i, newPos);
            newPos++;
        }
    }
    assert newPos == newInputOutput.size();
    // This Project will be what the old input maps to,
    // replacing any previous mapping from the old input.
    RelNode newProject = RelOptUtil.createProject(newInput, projects, false);
    // update mappings:
    // oldInput ----> newInput
    //
    //                newProject
    //                   |
    // oldInput ----> newInput
    //
    // is transformed to
    //
    // oldInput ----> newProject
    //                   |
    //                newInput
    Map<Integer, Integer> combinedMap = Maps.newHashMap();
    for (Integer oldInputPos : frame.oldToNewOutputPos.keySet()) {
        combinedMap.put(oldInputPos, mapNewInputToProjOutputPos.get(frame.oldToNewOutputPos.get(oldInputPos)));
    }
    register(oldInput, newProject, combinedMap, mapCorVarToOutputPos);
    // now it's time to rewrite the Aggregate
    final ImmutableBitSet newGroupSet = ImmutableBitSet.range(newGroupKeyCount);
    List<AggregateCall> newAggCalls = Lists.newArrayList();
    List<AggregateCall> oldAggCalls = rel.getAggCallList();
    int oldInputOutputFieldCount = rel.getGroupSet().cardinality();
    int newInputOutputFieldCount = newGroupSet.cardinality();
    int i = -1;
    for (AggregateCall oldAggCall : oldAggCalls) {
        ++i;
        List<Integer> oldAggArgs = oldAggCall.getArgList();
        List<Integer> aggArgs = Lists.newArrayList();
        // Adjust the aggregate argument positions. Note: Aggregate does not
        // change input ordering, so the input-output position mapping can be
        // used to derive the new positions for the argument.
        for (int oldPos : oldAggArgs) {
            aggArgs.add(combinedMap.get(oldPos));
        }
        final int filterArg = oldAggCall.filterArg < 0 ? oldAggCall.filterArg : combinedMap.get(oldAggCall.filterArg);
        newAggCalls.add(oldAggCall.adaptTo(newProject, aggArgs, filterArg, oldGroupKeyCount, newGroupKeyCount));
        // The old to new output position mapping will be the same as that
        // of newProject, plus any aggregates that the oldAgg produces.
        combinedMap.put(oldInputOutputFieldCount + i, newInputOutputFieldCount + i);
    }
    relBuilder.push(LogicalAggregate.create(newProject, false, newGroupSet, null, newAggCalls));
    if (!omittedConstants.isEmpty()) {
        final List<RexNode> postProjects = new ArrayList<>(relBuilder.fields());
        for (Map.Entry<Integer, RexLiteral> entry : omittedConstants.descendingMap().entrySet()) {
            postProjects.add(entry.getKey() + frame.corVarOutputPos.size(), entry.getValue());
        }
        relBuilder.project(postProjects);
    }
    // Aggregate does not change input ordering, so the correlated variables are
    // located at the same position as in the input newProject.
    return register(rel, relBuilder.build(), combinedMap, mapCorVarToOutputPos);
}
Also used : RexLiteral(org.apache.calcite.rex.RexLiteral) ImmutableBitSet(org.apache.calcite.util.ImmutableBitSet) ArrayList(java.util.ArrayList) Pair(org.apache.calcite.util.Pair) TreeMap(java.util.TreeMap) AggregateCall(org.apache.calcite.rel.core.AggregateCall) RelDataTypeField(org.apache.calcite.rel.type.RelDataTypeField) RelNode(org.apache.calcite.rel.RelNode) Map(java.util.Map) ImmutableMap(com.google.common.collect.ImmutableMap) NavigableMap(java.util.NavigableMap) SortedMap(java.util.SortedMap) HashMap(java.util.HashMap) ImmutableSortedMap(com.google.common.collect.ImmutableSortedMap) TreeMap(java.util.TreeMap) RexNode(org.apache.calcite.rex.RexNode)
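
Because omittedConstants is a TreeMap, the excluded group-key positions stay sorted, and descendingMap() hands the re-insertion loop a reversed view without copying. A minimal JDK-only sketch of the two views, with strings standing in for the RexLiterals:

import java.util.NavigableMap;
import java.util.TreeMap;

public class OmittedConstantsSketch {

    public static void main(String[] args) {
        // group-key position -> constant projected at that position
        NavigableMap<Integer, String> omitted = new TreeMap<>();
        omitted.put(2, "lit2");
        omitted.put(0, "lit0");

        System.out.println(omitted);                 // {0=lit0, 2=lit2}
        System.out.println(omitted.descendingMap()); // {2=lit2, 0=lit0}
    }
}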

Aggregations

NavigableMap (java.util.NavigableMap): 173
Map (java.util.Map): 85
TreeMap (java.util.TreeMap): 62
SortedMap (java.util.SortedMap): 35
ArrayList (java.util.ArrayList): 34
List (java.util.List): 27
HashMap (java.util.HashMap): 21
Iterator (java.util.Iterator): 21
Cell (org.apache.hadoop.hbase.Cell): 20
Result (org.apache.hadoop.hbase.client.Result): 19
Set (java.util.Set): 14
Get (org.apache.hadoop.hbase.client.Get): 14
IOException (java.io.IOException): 12
KeyValue (org.apache.hadoop.hbase.KeyValue): 11
Test (org.junit.Test): 11
Put (org.apache.hadoop.hbase.client.Put): 10
Entry (java.util.Map.Entry): 9
Update (co.cask.cdap.data2.dataset2.lib.table.Update): 7
ImmutableMap (com.google.common.collect.ImmutableMap): 7
TestSuite (junit.framework.TestSuite): 7