
Example 1 with JobGraphGenerator

Use of org.apache.flink.optimizer.plantranslate.JobGraphGenerator in project flink by apache.

From class LocalExecutor, method executePlan.

/**
	 * Executes the given program on a local runtime and waits for the job to finish.
	 * 
	 * <p>If the executor has not been started before, this starts the executor and shuts it down
	 * after the job finishes. If the job runs in session mode, the executor is kept alive until
	 * no more references to the executor exist.</p>
	 * 
	 * @param plan The plan of the program to execute.
	 * @return The net runtime of the program, in milliseconds.
	 * 
	 * @throws Exception Thrown, if either the startup of the local execution context, or the execution
	 *                   caused an exception.
	 */
@Override
public JobExecutionResult executePlan(Plan plan) throws Exception {
    if (plan == null) {
        throw new IllegalArgumentException("The plan may not be null.");
    }
    synchronized (this.lock) {
        // check if we start a session dedicated for this execution
        final boolean shutDownAtEnd;
        if (flink == null) {
            shutDownAtEnd = true;
            // configure the number of local slots equal to the parallelism of the local plan
            if (this.taskManagerNumSlots == DEFAULT_TASK_MANAGER_NUM_SLOTS) {
                int maxParallelism = plan.getMaximumParallelism();
                if (maxParallelism > 0) {
                    this.taskManagerNumSlots = maxParallelism;
                }
            }
            // start the cluster for us
            start();
        } else {
            // we use the existing session
            shutDownAtEnd = false;
        }
        try {
            Configuration configuration = this.flink.configuration();
            Optimizer pc = new Optimizer(new DataStatistics(), configuration);
            OptimizedPlan op = pc.compile(plan);
            JobGraphGenerator jgg = new JobGraphGenerator(configuration);
            JobGraph jobGraph = jgg.compileJobGraph(op, plan.getJobId());
            boolean sysoutPrint = isPrintingStatusDuringExecution();
            return flink.submitJobAndWait(jobGraph, sysoutPrint);
        } finally {
            if (shutDownAtEnd) {
                stop();
            }
        }
    }
}
Also used : JobGraph(org.apache.flink.runtime.jobgraph.JobGraph) Configuration(org.apache.flink.configuration.Configuration) Optimizer(org.apache.flink.optimizer.Optimizer) JobGraphGenerator(org.apache.flink.optimizer.plantranslate.JobGraphGenerator) DataStatistics(org.apache.flink.optimizer.DataStatistics) OptimizedPlan(org.apache.flink.optimizer.plan.OptimizedPlan)
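
For context, here is a minimal sketch of how executePlan might be driven from a user program. It is illustrative rather than taken from the Flink sources: it assumes the legacy LocalExecutor API shown above (org.apache.flink.client.LocalExecutor with a no-argument constructor) and obtains the Plan via ExecutionEnvironment#createProgramPlan, the same call Example 5 below uses.

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.Plan;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;
import org.apache.flink.client.LocalExecutor;

public class LocalExecutorSketch {

    public static void main(String[] args) throws Exception {
        // Build a trivial DataSet program; any program that ends in a sink will do.
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(1, 2, 3).output(new DiscardingOutputFormat<Integer>());

        // Turn the program into the Plan that executePlan(Plan) expects.
        Plan plan = env.createProgramPlan("local-executor-sketch");

        // No session was started beforehand, so executePlan starts a local cluster,
        // runs the job, and shuts the cluster down again (see the javadoc above).
        LocalExecutor executor = new LocalExecutor();
        JobExecutionResult result = executor.executePlan(plan);
        System.out.println("Net runtime: " + result.getNetRuntime() + " ms");
    }
}

Note that when executePlan starts the dedicated session itself, it sizes the local task slots from the plan's maximum parallelism, as the code above shows.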

Example 2 with JobGraphGenerator

Use of org.apache.flink.optimizer.plantranslate.JobGraphGenerator in project flink by apache.

From class AccumulatorLiveITCase, method getOptimizedPlan.

/**
	 * Helper to generate the JobGraph.
	 */
private static JobGraph getOptimizedPlan(Plan plan) {
    Optimizer pc = new Optimizer(new DataStatistics(), new Configuration());
    JobGraphGenerator jgg = new JobGraphGenerator();
    OptimizedPlan op = pc.compile(plan);
    return jgg.compileJobGraph(op);
}
Also used : Configuration(org.apache.flink.configuration.Configuration) Optimizer(org.apache.flink.optimizer.Optimizer) JobGraphGenerator(org.apache.flink.optimizer.plantranslate.JobGraphGenerator) DataStatistics(org.apache.flink.optimizer.DataStatistics) OptimizedPlan(org.apache.flink.optimizer.plan.OptimizedPlan)

Example 3 with JobGraphGenerator

Use of org.apache.flink.optimizer.plantranslate.JobGraphGenerator in project flink by apache.

From class TestEnvironment, method execute.

@Override
public JobExecutionResult execute(String jobName) throws Exception {
    OptimizedPlan op = compileProgram(jobName);
    JobGraphGenerator jgg = new JobGraphGenerator();
    JobGraph jobGraph = jgg.compileJobGraph(op);
    this.lastJobExecutionResult = executor.submitJobAndWait(jobGraph, false);
    return this.lastJobExecutionResult;
}
Also used : JobGraph(org.apache.flink.runtime.jobgraph.JobGraph) JobGraphGenerator(org.apache.flink.optimizer.plantranslate.JobGraphGenerator) OptimizedPlan(org.apache.flink.optimizer.plan.OptimizedPlan)
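
The compileProgram helper called above is part of TestEnvironment and is not shown here. Presumably it builds the program Plan from the environment's registered sinks and compiles it with the Optimizer, roughly along these lines; this is an assumption-laden sketch, not the actual Flink implementation, which may for instance reuse the mini cluster's configuration.

// Sketch only: shows the assumed shape of TestEnvironment#compileProgram.
private OptimizedPlan compileProgram(String jobName) {
    // Build the Plan from the sinks registered on this ExecutionEnvironment.
    Plan plan = createProgramPlan(jobName);
    // Compile without data statistics, as the other examples on this page do.
    Optimizer optimizer = new Optimizer(new DataStatistics(), new Configuration());
    return optimizer.compile(plan);
}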

Example 4 with JobGraphGenerator

Use of org.apache.flink.optimizer.plantranslate.JobGraphGenerator in project flink by apache.

From class WorksetIterationsJavaApiCompilerTest, method testJavaApiWithDeferredSoltionSetUpdateWithNonPreservingJoin.

@Test
public void testJavaApiWithDeferredSoltionSetUpdateWithNonPreservingJoin() {
    try {
        Plan plan = getJavaTestPlan(false, false);
        OptimizedPlan oPlan = compileNoStats(plan);
        OptimizerPlanNodeResolver resolver = getOptimizerPlanNodeResolver(oPlan);
        DualInputPlanNode joinWithInvariantNode = resolver.getNode(JOIN_WITH_INVARIANT_NAME);
        DualInputPlanNode joinWithSolutionSetNode = resolver.getNode(JOIN_WITH_SOLUTION_SET);
        SingleInputPlanNode worksetReducer = resolver.getNode(NEXT_WORKSET_REDUCER_NAME);
        // The iteration preserves partitioning in the reducer, so the first partitioning
        // happens outside the loop; the in-loop partitioning sits before the final reducer.
        // verify joinWithInvariant
        assertEquals(ShipStrategyType.FORWARD, joinWithInvariantNode.getInput1().getShipStrategy());
        assertEquals(ShipStrategyType.PARTITION_HASH, joinWithInvariantNode.getInput2().getShipStrategy());
        assertEquals(new FieldList(1, 2), joinWithInvariantNode.getKeysForInput1());
        assertEquals(new FieldList(1, 2), joinWithInvariantNode.getKeysForInput2());
        // verify joinWithSolutionSet
        assertEquals(ShipStrategyType.PARTITION_HASH, joinWithSolutionSetNode.getInput1().getShipStrategy());
        assertEquals(ShipStrategyType.FORWARD, joinWithSolutionSetNode.getInput2().getShipStrategy());
        assertEquals(new FieldList(1, 0), joinWithSolutionSetNode.getKeysForInput1());
        // verify reducer
        assertEquals(ShipStrategyType.PARTITION_HASH, worksetReducer.getInput().getShipStrategy());
        assertEquals(new FieldList(1, 2), worksetReducer.getKeys(0));
        // verify solution delta
        assertEquals(2, joinWithSolutionSetNode.getOutgoingChannels().size());
        assertEquals(ShipStrategyType.PARTITION_HASH, joinWithSolutionSetNode.getOutgoingChannels().get(0).getShipStrategy());
        assertEquals(ShipStrategyType.PARTITION_HASH, joinWithSolutionSetNode.getOutgoingChannels().get(1).getShipStrategy());
        new JobGraphGenerator().compileJobGraph(oPlan);
    } catch (Exception e) {
        System.err.println(e.getMessage());
        e.printStackTrace();
        fail("Test errored: " + e.getMessage());
    }
}
Also used : DualInputPlanNode(org.apache.flink.optimizer.plan.DualInputPlanNode) SingleInputPlanNode(org.apache.flink.optimizer.plan.SingleInputPlanNode) JobGraphGenerator(org.apache.flink.optimizer.plantranslate.JobGraphGenerator) Plan(org.apache.flink.api.common.Plan) OptimizedPlan(org.apache.flink.optimizer.plan.OptimizedPlan) InvalidProgramException(org.apache.flink.api.common.InvalidProgramException) OptimizedPlan(org.apache.flink.optimizer.plan.OptimizedPlan) FieldList(org.apache.flink.api.common.operators.util.FieldList) Test(org.junit.Test)

Example 5 with JobGraphGenerator

Use of org.apache.flink.optimizer.plantranslate.JobGraphGenerator in project flink by apache.

From class UnionClosedBranchingTest, method testUnionClosedBranchingTest.

@Test
public void testUnionClosedBranchingTest() throws Exception {
    // -----------------------------------------------------------------------------------------
    // Build test program
    // -----------------------------------------------------------------------------------------
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().setExecutionMode(executionMode);
    env.setParallelism(4);
    DataSet<Tuple1<Integer>> src1 = env.fromElements(new Tuple1<>(0), new Tuple1<>(1));
    DataSet<Tuple1<Integer>> src2 = env.fromElements(new Tuple1<>(0), new Tuple1<>(1));
    DataSet<Tuple1<Integer>> union = src1.union(src2);
    DataSet<Tuple2<Integer, Integer>> join = union.join(union).where(0).equalTo(0).projectFirst(0).projectSecond(0);
    join.output(new DiscardingOutputFormat<Tuple2<Integer, Integer>>());
    // -----------------------------------------------------------------------------------------
    // Verify optimized plan
    // -----------------------------------------------------------------------------------------
    OptimizedPlan optimizedPlan = compileNoStats(env.createProgramPlan());
    SinkPlanNode sinkNode = optimizedPlan.getDataSinks().iterator().next();
    DualInputPlanNode joinNode = (DualInputPlanNode) sinkNode.getPredecessor();
    // Verify that the compiler correctly sets the expected data exchange modes.
    for (Channel channel : joinNode.getInputs()) {
        assertEquals("Unexpected data exchange mode between union and join node.", unionToJoin, channel.getDataExchangeMode());
        assertEquals("Unexpected ship strategy between union and join node.", unionToJoinStrategy, channel.getShipStrategy());
    }
    for (SourcePlanNode src : optimizedPlan.getDataSources()) {
        for (Channel channel : src.getOutgoingChannels()) {
            assertEquals("Unexpected data exchange mode between source and union node.", sourceToUnion, channel.getDataExchangeMode());
            assertEquals("Unexpected ship strategy between source and union node.", sourceToUnionStrategy, channel.getShipStrategy());
        }
    }
    // -----------------------------------------------------------------------------------------
    // Verify generated JobGraph
    // -----------------------------------------------------------------------------------------
    JobGraphGenerator jgg = new JobGraphGenerator();
    JobGraph jobGraph = jgg.compileJobGraph(optimizedPlan);
    List<JobVertex> vertices = jobGraph.getVerticesSortedTopologicallyFromSources();
    // Sanity check for the test setup
    assertEquals("Unexpected number of vertices created.", 4, vertices.size());
    // Verify all sources
    JobVertex[] sources = new JobVertex[] { vertices.get(0), vertices.get(1) };
    for (JobVertex src : sources) {
        // Sanity check
        assertTrue("Unexpected vertex type. Test setup is broken.", src.isInputVertex());
        // The union is not translated to an extra union task; instead, the join uses a union
        // input gate to read multiple inputs. Each source creates a single result per consumer.
        assertEquals("Unexpected number of created results.", 2, src.getNumberOfProducedIntermediateDataSets());
        for (IntermediateDataSet dataSet : src.getProducedDataSets()) {
            ResultPartitionType dsType = dataSet.getResultType();
            // Ensure batch exchange unless PIPELINED_FORCE is enabled.
            if (!executionMode.equals(ExecutionMode.PIPELINED_FORCED)) {
                assertTrue("Expected batch exchange, but result type is " + dsType + ".", dsType.isBlocking());
            } else {
                assertFalse("Expected non-batch exchange, but result type is " + dsType + ".", dsType.isBlocking());
            }
        }
    }
}
Also used : ExecutionEnvironment(org.apache.flink.api.java.ExecutionEnvironment) IntermediateDataSet(org.apache.flink.runtime.jobgraph.IntermediateDataSet) Channel(org.apache.flink.optimizer.plan.Channel) OptimizedPlan(org.apache.flink.optimizer.plan.OptimizedPlan) DualInputPlanNode(org.apache.flink.optimizer.plan.DualInputPlanNode) JobGraph(org.apache.flink.runtime.jobgraph.JobGraph) JobVertex(org.apache.flink.runtime.jobgraph.JobVertex) ResultPartitionType(org.apache.flink.runtime.io.network.partition.ResultPartitionType) Tuple1(org.apache.flink.api.java.tuple.Tuple1) Tuple2(org.apache.flink.api.java.tuple.Tuple2) JobGraphGenerator(org.apache.flink.optimizer.plantranslate.JobGraphGenerator) SinkPlanNode(org.apache.flink.optimizer.plan.SinkPlanNode) SourcePlanNode(org.apache.flink.optimizer.plan.SourcePlanNode) Test(org.junit.Test)
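
The fields executionMode, sourceToUnion, sourceToUnionStrategy, unionToJoin and unionToJoinStrategy used in the assertions are supplied by the test's JUnit parameterization, which is not part of the snippet above. The sketch below shows what such a setup can look like; the field names match the test, but the parameter rows and the CompilerTestBase superclass wiring are illustrative assumptions, not a copy of the Flink sources.

import java.util.Arrays;
import java.util.Collection;

import org.apache.flink.api.common.ExecutionMode;
import org.apache.flink.optimizer.util.CompilerTestBase;
import org.apache.flink.runtime.io.network.DataExchangeMode;
import org.apache.flink.runtime.operators.shipping.ShipStrategyType;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

// Illustrative parameterization sketch; the real test defines its own parameter matrix.
@RunWith(Parameterized.class)
public class UnionClosedBranchingTest extends CompilerTestBase {

    private final ExecutionMode executionMode;
    private final DataExchangeMode sourceToUnion;
    private final ShipStrategyType sourceToUnionStrategy;
    private final DataExchangeMode unionToJoin;
    private final ShipStrategyType unionToJoinStrategy;

    public UnionClosedBranchingTest(ExecutionMode executionMode, DataExchangeMode sourceToUnion, ShipStrategyType sourceToUnionStrategy, DataExchangeMode unionToJoin, ShipStrategyType unionToJoinStrategy) {
        this.executionMode = executionMode;
        this.sourceToUnion = sourceToUnion;
        this.sourceToUnionStrategy = sourceToUnionStrategy;
        this.unionToJoin = unionToJoin;
        this.unionToJoinStrategy = unionToJoinStrategy;
    }

    @Parameterized.Parameters
    public static Collection<Object[]> parameters() {
        return Arrays.asList(new Object[][] {
            // executionMode, sourceToUnion, sourceToUnionStrategy, unionToJoin, unionToJoinStrategy
            { ExecutionMode.PIPELINED, DataExchangeMode.PIPELINED, ShipStrategyType.FORWARD, DataExchangeMode.BATCH, ShipStrategyType.PARTITION_HASH },
            { ExecutionMode.BATCH, DataExchangeMode.BATCH, ShipStrategyType.FORWARD, DataExchangeMode.BATCH, ShipStrategyType.PARTITION_HASH }
        });
    }

    // testUnionClosedBranchingTest() as shown above.
}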

Aggregations

OptimizedPlan (org.apache.flink.optimizer.plan.OptimizedPlan): 57 usages
JobGraphGenerator (org.apache.flink.optimizer.plantranslate.JobGraphGenerator): 57 usages
Test (org.junit.Test): 49 usages
Plan (org.apache.flink.api.common.Plan): 47 usages
ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment): 35 usages
DualInputPlanNode (org.apache.flink.optimizer.plan.DualInputPlanNode): 19 usages
Tuple2 (org.apache.flink.api.java.tuple.Tuple2): 17 usages
SingleInputPlanNode (org.apache.flink.optimizer.plan.SingleInputPlanNode): 17 usages
SinkPlanNode (org.apache.flink.optimizer.plan.SinkPlanNode): 11 usages
Channel (org.apache.flink.optimizer.plan.Channel): 9 usages
WorksetIterationPlanNode (org.apache.flink.optimizer.plan.WorksetIterationPlanNode): 8 usages
DataStatistics (org.apache.flink.optimizer.DataStatistics): 6 usages
Optimizer (org.apache.flink.optimizer.Optimizer): 6 usages
BulkIterationPlanNode (org.apache.flink.optimizer.plan.BulkIterationPlanNode): 6 usages
NAryUnionPlanNode (org.apache.flink.optimizer.plan.NAryUnionPlanNode): 6 usages
IdentityMapper (org.apache.flink.optimizer.testfunctions.IdentityMapper): 6 usages
IdentityGroupReducer (org.apache.flink.optimizer.testfunctions.IdentityGroupReducer): 5 usages
JobGraph (org.apache.flink.runtime.jobgraph.JobGraph): 5 usages
Configuration (org.apache.flink.configuration.Configuration): 4 usages
SourcePlanNode (org.apache.flink.optimizer.plan.SourcePlanNode): 4 usages