
Example 1 with HadoopDefaultMapReducePlan

Use of org.apache.ignite.internal.processors.hadoop.planner.HadoopDefaultMapReducePlan in the Apache Ignite project.

From the class HadoopTestRoundRobinMrPlanner, method preparePlan:

/** {@inheritDoc} */
@Override
public HadoopMapReducePlan preparePlan(HadoopJob job, Collection<ClusterNode> top, @Nullable HadoopMapReducePlan oldPlan) throws IgniteCheckedException {
    if (top.isEmpty())
        throw new IllegalArgumentException("Topology is empty");
    // Has at least one element.
    Iterator<ClusterNode> it = top.iterator();
    Map<UUID, Collection<HadoopInputSplit>> mappers = new HashMap<>();
    for (HadoopInputSplit block : job.input()) {
        ClusterNode node = it.next();
        Collection<HadoopInputSplit> nodeBlocks = mappers.get(node.id());
        if (nodeBlocks == null) {
            nodeBlocks = new ArrayList<>();
            mappers.put(node.id(), nodeBlocks);
        }
        nodeBlocks.add(block);
        if (!it.hasNext())
            it = top.iterator();
    }
    // Reducer indices 0..n-1.
    int[] rdc = new int[job.reducers()];

    for (int i = 0; i < rdc.length; i++)
        rdc[i] = i;

    // All reducers are assigned to a single node (the next one in round-robin order).
    return new HadoopDefaultMapReducePlan(mappers, Collections.singletonMap(it.next().id(), rdc));
}
Also used: ClusterNode (org.apache.ignite.cluster.ClusterNode), HashMap (java.util.HashMap), HadoopInputSplit (org.apache.ignite.hadoop.HadoopInputSplit), HadoopDefaultMapReducePlan (org.apache.ignite.internal.processors.hadoop.planner.HadoopDefaultMapReducePlan), Collection (java.util.Collection), UUID (java.util.UUID)
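The round-robin assignment above can be sketched in isolation. The following standalone version uses plain strings in place of ClusterNode and HadoopInputSplit; the class name RoundRobinSketch and its signature are illustrative, not part of Ignite:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class RoundRobinSketch {
    /** Assigns each split to a node in round-robin order, cycling back to the first node. */
    static Map<String, List<String>> assign(List<String> nodes, List<String> splits) {
        if (nodes.isEmpty())
            throw new IllegalArgumentException("Topology is empty");

        Map<String, List<String>> mappers = new HashMap<>();
        Iterator<String> it = nodes.iterator();

        for (String split : splits) {
            String node = it.next();

            mappers.computeIfAbsent(node, k -> new ArrayList<>()).add(split);

            // Wrap around when the node list is exhausted, mirroring the planner above.
            if (!it.hasNext())
                it = nodes.iterator();
        }

        return mappers;
    }
}
```

With two nodes and three splits, the first node receives splits 1 and 3 and the second node receives split 2.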

Example 2 with HadoopDefaultMapReducePlan

Use of org.apache.ignite.internal.processors.hadoop.planner.HadoopDefaultMapReducePlan in the Apache Ignite project.

From the class IgniteHadoopWeightedMapReducePlanner, method preparePlan:

/** {@inheritDoc} */
@Override
public HadoopMapReducePlan preparePlan(HadoopJob job, Collection<ClusterNode> nodes, @Nullable HadoopMapReducePlan oldPlan) throws IgniteCheckedException {
    List<HadoopInputSplit> splits = HadoopCommonUtils.sortInputSplits(job.input());
    int reducerCnt = job.reducers();
    if (reducerCnt < 0)
        throw new IgniteCheckedException("Number of reducers must be non-negative, actual: " + reducerCnt);
    HadoopMapReducePlanTopology top = topology(nodes);
    Mappers mappers = assignMappers(splits, top);
    Map<UUID, int[]> reducers = assignReducers(splits, top, mappers, reducerCnt);
    return new HadoopDefaultMapReducePlan(mappers.nodeToSplits, reducers);
}
Also used: HadoopDefaultMapReducePlan (org.apache.ignite.internal.processors.hadoop.planner.HadoopDefaultMapReducePlan), IgniteCheckedException (org.apache.ignite.IgniteCheckedException), HadoopInputSplit (org.apache.ignite.hadoop.HadoopInputSplit), UUID (java.util.UUID), HadoopMapReducePlanTopology (org.apache.ignite.internal.processors.hadoop.planner.HadoopMapReducePlanTopology), HadoopIgfsEndpoint (org.apache.ignite.internal.processors.hadoop.igfs.HadoopIgfsEndpoint)
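The weighted planner's assignReducers depends on HadoopMapReducePlanTopology internals not shown here. As a simplified stand-in, the sketch below spreads reducer indices as evenly as possible across nodes; ReducerSpreadSketch is hypothetical and is not the actual weighted logic:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ReducerSpreadSketch {
    /** Spreads reducerCnt reducer indices across nodes as evenly as possible,
     *  returning node -> array of reducer indices (a simplified stand-in for
     *  the planner's weight-based assignment). */
    static Map<String, int[]> spread(List<String> nodes, int reducerCnt) {
        if (reducerCnt < 0)
            throw new IllegalArgumentException("Number of reducers must be non-negative, actual: " + reducerCnt);

        int base = reducerCnt / nodes.size();   // every node gets at least this many
        int extra = reducerCnt % nodes.size();  // the first 'extra' nodes get one more

        Map<String, int[]> reducers = new LinkedHashMap<>();
        int next = 0;

        for (int i = 0; i < nodes.size(); i++) {
            int cnt = base + (i < extra ? 1 : 0);

            int[] idx = new int[cnt];

            for (int j = 0; j < cnt; j++)
                idx[j] = next++;

            if (cnt > 0)
                reducers.put(nodes.get(i), idx);
        }

        return reducers;
    }
}
```

With three nodes and four reducers, the first node receives indices {0, 1} and the other two receive {2} and {3} respectively.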

Aggregations

UUID (java.util.UUID): 2
HadoopInputSplit (org.apache.ignite.hadoop.HadoopInputSplit): 2
HadoopDefaultMapReducePlan (org.apache.ignite.internal.processors.hadoop.planner.HadoopDefaultMapReducePlan): 2
Collection (java.util.Collection): 1
HashMap (java.util.HashMap): 1
IgniteCheckedException (org.apache.ignite.IgniteCheckedException): 1
ClusterNode (org.apache.ignite.cluster.ClusterNode): 1
HadoopIgfsEndpoint (org.apache.ignite.internal.processors.hadoop.igfs.HadoopIgfsEndpoint): 1
HadoopMapReducePlanTopology (org.apache.ignite.internal.processors.hadoop.planner.HadoopMapReducePlanTopology): 1