
Example 61 with ExecutionGraph

Use of org.apache.flink.runtime.executiongraph.ExecutionGraph in project flink by apache.

The class LocalInputPreferredSlotSharingStrategyTest, method testInputLocalityIsRespectedWithTwoEdgesBetweenTwoVertices.

/**
 * In this test case, there are two JobEdges between two JobVertices. There will be no
 * ExecutionSlotSharingGroup that contains two vertices with the same JobVertexID.
 */
@Test
public void testInputLocalityIsRespectedWithTwoEdgesBetweenTwoVertices() throws Exception {
    int parallelism = 4;
    JobVertex v1 = createJobVertex("v1", JOB_VERTEX_ID_1, parallelism);
    JobVertex v2 = createJobVertex("v2", JOB_VERTEX_ID_2, parallelism);
    v2.connectNewDataSetAsInput(v1, DistributionPattern.ALL_TO_ALL, ResultPartitionType.BLOCKING);
    v2.connectNewDataSetAsInput(v1, DistributionPattern.ALL_TO_ALL, ResultPartitionType.BLOCKING);
    assertEquals(2, v1.getProducedDataSets().size());
    assertEquals(2, v2.getInputs().size());
    final JobGraph jobGraph = JobGraphTestUtils.batchJobGraph(v1, v2);
    final ExecutionGraph executionGraph = TestingDefaultExecutionGraphBuilder.newBuilder().setJobGraph(jobGraph).build();
    final SchedulingTopology topology = executionGraph.getSchedulingTopology();
    final SlotSharingStrategy strategy = new LocalInputPreferredSlotSharingStrategy(topology, slotSharingGroups, Collections.emptySet());
    assertThat(strategy.getExecutionSlotSharingGroups(), hasSize(4));
    ExecutionVertex[] ev1 = Objects.requireNonNull(executionGraph.getJobVertex(JOB_VERTEX_ID_1)).getTaskVertices();
    ExecutionVertex[] ev2 = Objects.requireNonNull(executionGraph.getJobVertex(JOB_VERTEX_ID_2)).getTaskVertices();
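    // Local input preference should co-locate each producer subtask with the consumer subtask of the same index.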
    for (int i = 0; i < parallelism; i++) {
        assertThat(strategy.getExecutionSlotSharingGroup(ev1[i].getID()).getExecutionVertexIds(), containsInAnyOrder(ev1[i].getID(), ev2[i].getID()));
    }
}
Also used : JobGraph(org.apache.flink.runtime.jobgraph.JobGraph) JobVertex(org.apache.flink.runtime.jobgraph.JobVertex) ExecutionGraph(org.apache.flink.runtime.executiongraph.ExecutionGraph) SchedulingTopology(org.apache.flink.runtime.scheduler.strategy.SchedulingTopology) TestingSchedulingTopology(org.apache.flink.runtime.scheduler.strategy.TestingSchedulingTopology) TestingSchedulingExecutionVertex(org.apache.flink.runtime.scheduler.strategy.TestingSchedulingExecutionVertex) ExecutionVertex(org.apache.flink.runtime.executiongraph.ExecutionVertex) Test(org.junit.Test)
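
The assertion loop above pairs ev1[i] with ev2[i]. A minimal sketch of that index-aligned pairing idea, assuming equal parallelism on both vertices (this is not Flink's actual LocalInputPreferredSlotSharingStrategy; the groupByIndex helper is hypothetical):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Sketch only: with equal parallelism and all-to-all edges, an
 * input-locality-preferring strategy can co-locate the producer and consumer
 * subtasks that share the same index.
 */
class IndexAlignedGroupingSketch {

    static List<List<String>> groupByIndex(String[] producerSubtasks, String[] consumerSubtasks) {
        if (producerSubtasks.length != consumerSubtasks.length) {
            throw new IllegalArgumentException("sketch assumes equal parallelism");
        }
        List<List<String>> groups = new ArrayList<>();
        for (int i = 0; i < producerSubtasks.length; i++) {
            // One group per subtask index, mirroring the test's expectation
            // that ev1[i] and ev2[i] end up in the same ExecutionSlotSharingGroup.
            groups.add(Arrays.asList(producerSubtasks[i], consumerSubtasks[i]));
        }
        return groups;
    }

    public static void main(String[] args) {
        String[] v1 = {"v1_0", "v1_1", "v1_2", "v1_3"};
        String[] v2 = {"v2_0", "v2_1", "v2_2", "v2_3"};
        // Prints four groups: [[v1_0, v2_0], [v1_1, v2_1], [v1_2, v2_2], [v1_3, v2_3]]
        System.out.println(groupByIndex(v1, v2));
    }
}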

Example 62 with ExecutionGraph

Use of org.apache.flink.runtime.executiongraph.ExecutionGraph in project flink by apache.

The class ExecutingTest, method testExecutionVertexMarkedAsFailedOnDeploymentFailure.

@Test
public void testExecutionVertexMarkedAsFailedOnDeploymentFailure() throws Exception {
    try (MockExecutingContext ctx = new MockExecutingContext()) {
        MockExecutionJobVertex mejv = new MockExecutionJobVertex(FailOnDeployMockExecutionVertex::new);
        ExecutionGraph executionGraph = new MockExecutionGraph(() -> Collections.singletonList(mejv));
        Executing exec = new ExecutingStateBuilder().setExecutionGraph(executionGraph).build(ctx);
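        // Building the Executing state deploys the vertices; the FailOnDeployMockExecutionVertex throws on deploy.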
        assertThat(((FailOnDeployMockExecutionVertex) mejv.getMockExecutionVertex()).getMarkedFailure(), is(instanceOf(JobException.class)));
    }
}
Also used : ExecutionGraph(org.apache.flink.runtime.executiongraph.ExecutionGraph) ArchivedExecutionGraph(org.apache.flink.runtime.executiongraph.ArchivedExecutionGraph) Test(org.junit.Test)
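
The behavior under test can be summarized as: if deploying an execution vertex throws, the vertex is marked failed with the deployment error. A simplified, hypothetical sketch (the FailableVertex interface and deployOrMarkFailed helper are not Flink APIs):

interface FailableVertex {
    void deploy() throws Exception;
    void markFailed(Throwable cause);
}

static void deployOrMarkFailed(FailableVertex vertex) {
    try {
        vertex.deploy();
    } catch (Exception e) {
        // The test above expects the recorded failure to be a JobException;
        // this sketch simply records whatever deploy() threw.
        vertex.markFailed(e);
    }
}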

Example 63 with ExecutionGraph

Use of org.apache.flink.runtime.executiongraph.ExecutionGraph in project flink by apache.

The class ExecutingTest, method testExecutionGraphDeploymentOnEnter.

@Test
public void testExecutionGraphDeploymentOnEnter() throws Exception {
    try (MockExecutingContext ctx = new MockExecutingContext()) {
        MockExecutionJobVertex mockExecutionJobVertex = new MockExecutionJobVertex(MockExecutionVertex::new);
        MockExecutionVertex mockExecutionVertex = (MockExecutionVertex) mockExecutionJobVertex.getMockExecutionVertex();
        mockExecutionVertex.setMockedExecutionState(ExecutionState.CREATED);
        ExecutionGraph executionGraph = new MockExecutionGraph(() -> Collections.singletonList(mockExecutionJobVertex));
        Executing exec = new ExecutingStateBuilder().setExecutionGraph(executionGraph).build(ctx);
        assertThat(mockExecutionVertex.isDeployCalled(), is(true));
        assertThat(executionGraph.getState(), is(JobStatus.RUNNING));
    }
}
Also used : ExecutionGraph(org.apache.flink.runtime.executiongraph.ExecutionGraph) ArchivedExecutionGraph(org.apache.flink.runtime.executiongraph.ArchivedExecutionGraph) Test(org.junit.Test)
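
On entering the Executing state, every vertex still in the CREATED state gets deployed, after which the job is expected to be RUNNING. A minimal sketch of that rule (the Vertex interface and State enum below are hypothetical stand-ins, not Flink's types):

import java.util.List;

class DeployOnEnterSketch {

    enum State { CREATED, DEPLOYING, RUNNING }

    interface Vertex {
        State getState();
        void deploy();
    }

    /** On entering the executing state, deploy everything that is still CREATED. */
    static void deployAllCreated(List<Vertex> vertices) {
        for (Vertex vertex : vertices) {
            if (vertex.getState() == State.CREATED) {
                vertex.deploy();
            }
        }
    }
}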

Example 64 with ExecutionGraph

Use of org.apache.flink.runtime.executiongraph.ExecutionGraph in project flink by apache.

The class ExecutingTest, method testIllegalStateExceptionOnNotRunningExecutionGraph.

@Test(expected = IllegalStateException.class)
public void testIllegalStateExceptionOnNotRunningExecutionGraph() throws Exception {
    try (MockExecutingContext ctx = new MockExecutingContext()) {
        ExecutionGraph notRunningExecutionGraph = new StateTrackingMockExecutionGraph();
        assertThat(notRunningExecutionGraph.getState(), is(not(JobStatus.RUNNING)));
        new Executing(notRunningExecutionGraph, getExecutionGraphHandler(notRunningExecutionGraph, ctx.getMainThreadExecutor()), new TestingOperatorCoordinatorHandler(), log, ctx, ClassLoader.getSystemClassLoader(), new ArrayList<>());
    }
}
Also used : ExecutionGraph(org.apache.flink.runtime.executiongraph.ExecutionGraph) ArchivedExecutionGraph(org.apache.flink.runtime.executiongraph.ArchivedExecutionGraph) Test(org.junit.Test)
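
The guard this test exercises amounts to a precondition on the graph's job status. A simplified sketch of that check (the real Executing constructor takes more collaborators, and the exact message is an assumption):

if (executionGraph.getState() != JobStatus.RUNNING) {
    // Refuse to enter the Executing state for a graph that is not RUNNING.
    throw new IllegalStateException(
            "Expected the ExecutionGraph to be in state RUNNING, but it was " + executionGraph.getState());
}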

Example 65 with ExecutionGraph

Use of org.apache.flink.runtime.executiongraph.ExecutionGraph in project flink by apache.

The class BackPressureStatsTrackerTest, method testTriggerStackTraceSample.

/** Tests simple statistics with fake stack traces. */
@Test
@SuppressWarnings("unchecked")
public void testTriggerStackTraceSample() throws Exception {
    CompletableFuture<StackTraceSample> sampleFuture = new FlinkCompletableFuture<>();
    StackTraceSampleCoordinator sampleCoordinator = mock(StackTraceSampleCoordinator.class);
    when(sampleCoordinator.triggerStackTraceSample(any(ExecutionVertex[].class), anyInt(), any(Time.class), anyInt())).thenReturn(sampleFuture);
    ExecutionGraph graph = mock(ExecutionGraph.class);
    when(graph.getState()).thenReturn(JobStatus.RUNNING);
    // Same Thread execution context
    when(graph.getFutureExecutor()).thenReturn(new Executor() {

        @Override
        public void execute(Runnable runnable) {
            runnable.run();
        }
    });
    ExecutionVertex[] taskVertices = new ExecutionVertex[4];
    ExecutionJobVertex jobVertex = mock(ExecutionJobVertex.class);
    when(jobVertex.getJobId()).thenReturn(new JobID());
    when(jobVertex.getJobVertexId()).thenReturn(new JobVertexID());
    when(jobVertex.getGraph()).thenReturn(graph);
    when(jobVertex.getTaskVertices()).thenReturn(taskVertices);
    taskVertices[0] = mockExecutionVertex(jobVertex, 0);
    taskVertices[1] = mockExecutionVertex(jobVertex, 1);
    taskVertices[2] = mockExecutionVertex(jobVertex, 2);
    taskVertices[3] = mockExecutionVertex(jobVertex, 3);
    int numSamples = 100;
    Time delayBetweenSamples = Time.milliseconds(100L);
    BackPressureStatsTracker tracker = new BackPressureStatsTracker(sampleCoordinator, 9999, numSamples, delayBetweenSamples);
    // Trigger
    assertTrue("Failed to trigger", tracker.triggerStackTraceSample(jobVertex));
    verify(sampleCoordinator).triggerStackTraceSample(eq(taskVertices), eq(numSamples), eq(delayBetweenSamples), eq(BackPressureStatsTracker.MAX_STACK_TRACE_DEPTH));
    // Trigger again for pending request, should not fire
    assertFalse("Unexpected trigger", tracker.triggerStackTraceSample(jobVertex));
    assertTrue(tracker.getOperatorBackPressureStats(jobVertex).isEmpty());
    verify(sampleCoordinator).triggerStackTraceSample(eq(taskVertices), eq(numSamples), eq(delayBetweenSamples), eq(BackPressureStatsTracker.MAX_STACK_TRACE_DEPTH));
    assertTrue(tracker.getOperatorBackPressureStats(jobVertex).isEmpty());
    // Complete the future
    Map<ExecutionAttemptID, List<StackTraceElement[]>> traces = new HashMap<>();
    for (ExecutionVertex vertex : taskVertices) {
        List<StackTraceElement[]> taskTraces = new ArrayList<>();
        for (int i = 0; i < taskVertices.length; i++) {
            // Traces until sub task index are back pressured
            taskTraces.add(createStackTrace(i <= vertex.getParallelSubtaskIndex()));
        }
        traces.put(vertex.getCurrentExecutionAttempt().getAttemptId(), taskTraces);
    }
    int sampleId = 1231;
    int endTime = 841;
    StackTraceSample sample = new StackTraceSample(sampleId, 0, endTime, traces);
    // Succeed the promise
    sampleFuture.complete(sample);
    assertTrue(tracker.getOperatorBackPressureStats(jobVertex).isDefined());
    OperatorBackPressureStats stats = tracker.getOperatorBackPressureStats(jobVertex).get();
    // Verify the stats
    assertEquals(sampleId, stats.getSampleId());
    assertEquals(endTime, stats.getEndTimestamp());
    assertEquals(taskVertices.length, stats.getNumberOfSubTasks());
    for (int i = 0; i < taskVertices.length; i++) {
        double ratio = stats.getBackPressureRatio(i);
        // Traces until sub task index are back pressured
        assertEquals((i + 1) / ((double) 4), ratio, 0.0);
    }
}
Also used : HashMap(java.util.HashMap) JobVertexID(org.apache.flink.runtime.jobgraph.JobVertexID) ArrayList(java.util.ArrayList) Time(org.apache.flink.api.common.time.Time) FlinkCompletableFuture(org.apache.flink.runtime.concurrent.impl.FlinkCompletableFuture) ExecutionVertex(org.apache.flink.runtime.executiongraph.ExecutionVertex) Executor(java.util.concurrent.Executor) ExecutionJobVertex(org.apache.flink.runtime.executiongraph.ExecutionJobVertex) ArrayList(java.util.ArrayList) List(java.util.List) ExecutionAttemptID(org.apache.flink.runtime.executiongraph.ExecutionAttemptID) ExecutionGraph(org.apache.flink.runtime.executiongraph.ExecutionGraph) JobID(org.apache.flink.api.common.JobID) Test(org.junit.Test)
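
The final loop checks that a subtask's back pressure ratio equals the fraction of its sampled stack traces that were back pressured. A self-contained sketch of that computation (a hypothetical helper, not the BackPressureStatsTracker API):

/** Share of samples that showed a back pressured stack trace. */
static double backPressureRatio(boolean[] backPressuredSamples) {
    if (backPressuredSamples.length == 0) {
        return 0.0;
    }
    int backPressured = 0;
    for (boolean sample : backPressuredSamples) {
        if (sample) {
            backPressured++;
        }
    }
    return backPressured / (double) backPressuredSamples.length;
}

// In the test above, subtask i has (i + 1) back pressured traces out of 4,
// so the expected ratio is (i + 1) / 4.0.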

Aggregations

ExecutionGraph (org.apache.flink.runtime.executiongraph.ExecutionGraph) 120
Test (org.junit.Test) 96
JobVertexID (org.apache.flink.runtime.jobgraph.JobVertexID) 77
ExecutionVertex (org.apache.flink.runtime.executiongraph.ExecutionVertex) 53
CheckpointCoordinatorBuilder (org.apache.flink.runtime.checkpoint.CheckpointCoordinatorTestingUtils.CheckpointCoordinatorBuilder) 40
ExecutionAttemptID (org.apache.flink.runtime.executiongraph.ExecutionAttemptID) 36
AcknowledgeCheckpoint (org.apache.flink.runtime.messages.checkpoint.AcknowledgeCheckpoint) 35
ExecutionJobVertex (org.apache.flink.runtime.executiongraph.ExecutionJobVertex) 31
JobVertex (org.apache.flink.runtime.jobgraph.JobVertex) 24
OperatorID (org.apache.flink.runtime.jobgraph.OperatorID) 24
HashMap (java.util.HashMap) 20
CompletableFuture (java.util.concurrent.CompletableFuture) 19
JobID (org.apache.flink.api.common.JobID) 19
ArrayList (java.util.ArrayList) 17
HashSet (java.util.HashSet) 17
JobGraph (org.apache.flink.runtime.jobgraph.JobGraph) 17
DeclineCheckpoint (org.apache.flink.runtime.messages.checkpoint.DeclineCheckpoint) 17
ExecutionException (java.util.concurrent.ExecutionException) 13
Executor (java.util.concurrent.Executor) 13
IOException (java.io.IOException) 12