Example 1 with Accumulator

use of org.apache.flink.api.common.accumulators.Accumulator in project flink by apache.

the class ExecutionGraphDeploymentTest method testAccumulatorsAndMetricsForwarding.

/**
	 * Verifies that {@link ExecutionGraph#updateState(TaskExecutionState)} updates the accumulators and metrics for an
	 * execution that failed or was canceled.
	 */
@Test
public void testAccumulatorsAndMetricsForwarding() throws Exception {
    final JobVertexID jid1 = new JobVertexID();
    final JobVertexID jid2 = new JobVertexID();
    JobVertex v1 = new JobVertex("v1", jid1);
    JobVertex v2 = new JobVertex("v2", jid2);
    Tuple2<ExecutionGraph, Map<ExecutionAttemptID, Execution>> graphAndExecutions = setupExecution(v1, 1, v2, 1);
    ExecutionGraph graph = graphAndExecutions.f0;
    // verify behavior for canceled executions
    Execution execution1 = graphAndExecutions.f1.values().iterator().next();
    IOMetrics ioMetrics = new IOMetrics(0, 0, 0, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.0);
    Map<String, Accumulator<?, ?>> accumulators = new HashMap<>();
    accumulators.put("acc", new IntCounter(4));
    AccumulatorSnapshot accumulatorSnapshot = new AccumulatorSnapshot(graph.getJobID(), execution1.getAttemptId(), accumulators);
    TaskExecutionState state = new TaskExecutionState(graph.getJobID(), execution1.getAttemptId(), ExecutionState.CANCELED, null, accumulatorSnapshot, ioMetrics);
    graph.updateState(state);
    assertEquals(ioMetrics, execution1.getIOMetrics());
    assertNotNull(execution1.getUserAccumulators());
    assertEquals(4, execution1.getUserAccumulators().get("acc").getLocalValue());
    // verify behavior for failed executions
    Execution execution2 = graphAndExecutions.f1.values().iterator().next();
    IOMetrics ioMetrics2 = new IOMetrics(0, 0, 0, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.0);
    Map<String, Accumulator<?, ?>> accumulators2 = new HashMap<>();
    accumulators2.put("acc", new IntCounter(8));
    AccumulatorSnapshot accumulatorSnapshot2 = new AccumulatorSnapshot(graph.getJobID(), execution2.getAttemptId(), accumulators2);
    TaskExecutionState state2 = new TaskExecutionState(graph.getJobID(), execution2.getAttemptId(), ExecutionState.FAILED, null, accumulatorSnapshot2, ioMetrics2);
    graph.updateState(state2);
    assertEquals(ioMetrics2, execution2.getIOMetrics());
    assertNotNull(execution2.getUserAccumulators());
    assertEquals(8, execution2.getUserAccumulators().get("acc").getLocalValue());
}
Also used : Accumulator(org.apache.flink.api.common.accumulators.Accumulator) HashMap(java.util.HashMap) JobVertexID(org.apache.flink.runtime.jobgraph.JobVertexID) TaskExecutionState(org.apache.flink.runtime.taskmanager.TaskExecutionState) JobVertex(org.apache.flink.runtime.jobgraph.JobVertex) AccumulatorSnapshot(org.apache.flink.runtime.accumulators.AccumulatorSnapshot) IntCounter(org.apache.flink.api.common.accumulators.IntCounter) Map(java.util.Map) HashMap(java.util.HashMap) Test(org.junit.Test)
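The test above pushes an AccumulatorSnapshot through ExecutionGraph#updateState from the runtime side. On the user side, the same IntCounter would normally be registered through the RuntimeContext of a rich function. A minimal sketch, assuming a plain RichMapFunction (the class and accumulator names are illustrative, not taken from the Flink sources):

import org.apache.flink.api.common.accumulators.IntCounter;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Hypothetical user function: counts processed records via an IntCounter accumulator.
public class CountingMapper extends RichMapFunction<String, String> {

    private final IntCounter recordCount = new IntCounter();

    @Override
    public void open(Configuration parameters) {
        // register the accumulator under a job-wide name; partial results are merged per attempt
        getRuntimeContext().addAccumulator("recordCount", recordCount);
    }

    @Override
    public String map(String value) {
        recordCount.add(1);
        return value;
    }
}

After the job completes, the merged value can be read from the JobExecutionResult, e.g. getAccumulatorResult("recordCount").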

Example 2 with Accumulator

use of org.apache.flink.api.common.accumulators.Accumulator in project flink by apache.

the class SourceFunctionUtil method runSourceFunction.

public static <T extends Serializable> List<T> runSourceFunction(SourceFunction<T> sourceFunction) throws Exception {
    final List<T> outputs = new ArrayList<T>();
    if (sourceFunction instanceof RichFunction) {
        AbstractStreamOperator<?> operator = mock(AbstractStreamOperator.class);
        when(operator.getExecutionConfig()).thenReturn(new ExecutionConfig());
        RuntimeContext runtimeContext = new StreamingRuntimeContext(operator, new MockEnvironment("MockTask", 3 * 1024 * 1024, new MockInputSplitProvider(), 1024), new HashMap<String, Accumulator<?, ?>>());
        ((RichFunction) sourceFunction).setRuntimeContext(runtimeContext);
        ((RichFunction) sourceFunction).open(new Configuration());
    }
    try {
        SourceFunction.SourceContext<T> ctx = new CollectingSourceContext<T>(new Object(), outputs);
        sourceFunction.run(ctx);
    } catch (Exception e) {
        throw new RuntimeException("Cannot invoke source.", e);
    }
    return outputs;
}
Also used : Accumulator(org.apache.flink.api.common.accumulators.Accumulator) SourceFunction(org.apache.flink.streaming.api.functions.source.SourceFunction) StreamingRuntimeContext(org.apache.flink.streaming.api.operators.StreamingRuntimeContext) Configuration(org.apache.flink.configuration.Configuration) RichFunction(org.apache.flink.api.common.functions.RichFunction) ArrayList(java.util.ArrayList) ExecutionConfig(org.apache.flink.api.common.ExecutionConfig) MockEnvironment(org.apache.flink.runtime.operators.testutils.MockEnvironment) RuntimeContext(org.apache.flink.api.common.functions.RuntimeContext) StreamingRuntimeContext(org.apache.flink.streaming.api.operators.StreamingRuntimeContext) MockInputSplitProvider(org.apache.flink.runtime.operators.testutils.MockInputSplitProvider)
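SourceFunctionUtil#runSourceFunction simply collects whatever the source emits; the mocked StreamingRuntimeContext above is only wired in for RichFunction sources. A hedged sketch of how a test might drive a trivial, non-rich source through this helper (the test name and emitted values are illustrative):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.junit.Test;

// Hypothetical test: drives a trivial source through the helper above and checks the collected output.
@Test
public void testRunSimpleSource() throws Exception {
    SourceFunction<Long> source = new SourceFunction<Long>() {

        @Override
        public void run(SourceContext<Long> ctx) {
            for (long i = 0; i < 3; i++) {
                ctx.collect(i);
            }
        }

        @Override
        public void cancel() {
            // nothing to do, run() terminates on its own
        }
    };
    List<Long> collected = SourceFunctionUtil.runSourceFunction(source);
    assertEquals(Arrays.asList(0L, 1L, 2L), collected);
}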

Example 3 with Accumulator

use of org.apache.flink.api.common.accumulators.Accumulator in project flink by apache.

the class ExecutionGraph method updateState.

// --------------------------------------------------------------------------------------------
//  Callbacks and Callback Utilities
// --------------------------------------------------------------------------------------------
/**
	 * Updates the state of one of the ExecutionVertex's Execution attempts.
	 * If the new status is "FINISHED", this also updates the accumulators.
	 * 
	 * @param state The state update.
	 * @return True, if the task update was properly applied, false, if the execution attempt was not found.
	 */
public boolean updateState(TaskExecutionState state) {
    Execution attempt = this.currentExecutions.get(state.getID());
    if (attempt != null) {
        switch(state.getExecutionState()) {
            case RUNNING:
                return attempt.switchToRunning();
            case FINISHED:
                try {
                    Map<String, Accumulator<?, ?>> userAccumulators = deserializeAccumulators(state);
                    attempt.markFinished(userAccumulators, state.getIOMetrics());
                } catch (Exception e) {
                    LOG.error("Failed to deserialize final accumulator results.", e);
                    attempt.markFailed(e);
                }
                return true;
            case CANCELED:
                Map<String, Accumulator<?, ?>> userAcc1 = deserializeAccumulators(state);
                attempt.cancelingComplete(userAcc1, state.getIOMetrics());
                return true;
            case FAILED:
                Map<String, Accumulator<?, ?>> userAcc2 = deserializeAccumulators(state);
                attempt.markFailed(state.getError(userClassLoader), userAcc2, state.getIOMetrics());
                return true;
            default:
                // we mark as failed and return false, which triggers the TaskManager
                // to remove the task
                attempt.fail(new Exception("TaskManager sent illegal state update: " + state.getExecutionState()));
                return false;
        }
    } else {
        return false;
    }
}
Also used : Accumulator(org.apache.flink.api.common.accumulators.Accumulator) SuppressRestartsException(org.apache.flink.runtime.execution.SuppressRestartsException) StoppingException(org.apache.flink.runtime.StoppingException) NoResourceAvailableException(org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException) JobException(org.apache.flink.runtime.JobException) NoSuchElementException(java.util.NoSuchElementException) IOException(java.io.IOException) ExecutionException(java.util.concurrent.ExecutionException)
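For the terminal states, updateState hands the deserialized user accumulators to the execution attempt, and the per-attempt values are later merged into the job-wide result. The Accumulator contract behind that merging is small; a minimal, runtime-free sketch using IntCounter (the values are illustrative):

import static org.junit.Assert.assertEquals;

import org.apache.flink.api.common.accumulators.IntCounter;
import org.junit.Test;

// Minimal sketch of the Accumulator contract used above: add locally, merge partial results.
@Test
public void testIntCounterMerge() {
    IntCounter first = new IntCounter();
    first.add(4);

    IntCounter second = new IntCounter();
    second.add(8);

    // merging combines the partial results of two attempts/subtasks
    first.merge(second);
    assertEquals(12, first.getLocalValue().intValue());

    // resetLocal() clears the local partial value
    first.resetLocal();
    assertEquals(0, first.getLocalValue().intValue());
}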

Example 4 with Accumulator

use of org.apache.flink.api.common.accumulators.Accumulator in project flink by apache.

the class AccumulatingAlignedProcessingTimeWindowOperatorTest method createMockTask.

// ------------------------------------------------------------------------
private static StreamTask<?, ?> createMockTask() {
    Configuration configuration = new Configuration();
    configuration.setString(CoreOptions.STATE_BACKEND, "jobmanager");
    StreamTask<?, ?> task = mock(StreamTask.class);
    when(task.getAccumulatorMap()).thenReturn(new HashMap<String, Accumulator<?, ?>>());
    when(task.getName()).thenReturn("Test task name");
    when(task.getExecutionConfig()).thenReturn(new ExecutionConfig());
    final TaskManagerRuntimeInfo mockTaskManagerRuntimeInfo = mock(TaskManagerRuntimeInfo.class);
    when(mockTaskManagerRuntimeInfo.getConfiguration()).thenReturn(configuration);
    final Environment env = mock(Environment.class);
    when(env.getTaskInfo()).thenReturn(new TaskInfo("Test task name", 1, 0, 1, 0));
    when(env.getUserClassLoader()).thenReturn(AggregatingAlignedProcessingTimeWindowOperatorTest.class.getClassLoader());
    when(env.getMetricGroup()).thenReturn(new UnregisteredTaskMetricsGroup());
    when(env.getTaskManagerInfo()).thenReturn(new TestingTaskManagerRuntimeInfo());
    when(task.getEnvironment()).thenReturn(env);
    return task;
}
Also used : Accumulator(org.apache.flink.api.common.accumulators.Accumulator) TaskInfo(org.apache.flink.api.common.TaskInfo) UnregisteredTaskMetricsGroup(org.apache.flink.runtime.operators.testutils.UnregisteredTaskMetricsGroup) TestingTaskManagerRuntimeInfo(org.apache.flink.runtime.util.TestingTaskManagerRuntimeInfo) Configuration(org.apache.flink.configuration.Configuration) TestingTaskManagerRuntimeInfo(org.apache.flink.runtime.util.TestingTaskManagerRuntimeInfo) TaskManagerRuntimeInfo(org.apache.flink.runtime.taskmanager.TaskManagerRuntimeInfo) Environment(org.apache.flink.runtime.execution.Environment) ExecutionConfig(org.apache.flink.api.common.ExecutionConfig)
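The stub above answers getAccumulatorMap() with an empty map, which is all the operator under test needs. If a test wanted the mocked task to expose pre-registered accumulators, the same Mockito setup could return a populated map; a hypothetical variant of the helper (the accumulator name is made up):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.accumulators.Accumulator;
import org.apache.flink.api.common.accumulators.IntCounter;
import org.apache.flink.streaming.runtime.tasks.StreamTask;

// Hypothetical variant: the mocked task exposes one pre-registered accumulator.
private static StreamTask<?, ?> createMockTaskWithAccumulator() {
    Map<String, Accumulator<?, ?>> accumulatorMap = new HashMap<>();
    accumulatorMap.put("windowCount", new IntCounter());

    StreamTask<?, ?> task = mock(StreamTask.class);
    when(task.getAccumulatorMap()).thenReturn(accumulatorMap);
    when(task.getName()).thenReturn("Test task name");
    when(task.getExecutionConfig()).thenReturn(new ExecutionConfig());
    return task;
}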

Example 5 with Accumulator

use of org.apache.flink.api.common.accumulators.Accumulator in project flink by apache.

the class GenericDataSourceBaseTest method testDataSourceWithRuntimeContext.

@Test
public void testDataSourceWithRuntimeContext() {
    try {
        TestRichInputFormat in = new TestRichInputFormat();
        GenericDataSourceBase<String, TestRichInputFormat> source = new GenericDataSourceBase<String, TestRichInputFormat>(in, new OperatorInformation<String>(BasicTypeInfo.STRING_TYPE_INFO), "testSource");
        final HashMap<String, Accumulator<?, ?>> accumulatorMap = new HashMap<String, Accumulator<?, ?>>();
        final HashMap<String, Future<Path>> cpTasks = new HashMap<>();
        final TaskInfo taskInfo = new TaskInfo("test_source", 1, 0, 1, 0);
        ExecutionConfig executionConfig = new ExecutionConfig();
        executionConfig.disableObjectReuse();
        assertEquals(false, in.hasBeenClosed());
        assertEquals(false, in.hasBeenOpened());
        List<String> resultMutableSafe = source.executeOnCollections(new RuntimeUDFContext(taskInfo, null, executionConfig, cpTasks, accumulatorMap, UnregisteredMetricsGroup.createOperatorMetricGroup()), executionConfig);
        assertEquals(true, in.hasBeenClosed());
        assertEquals(true, in.hasBeenOpened());
        in.reset();
        executionConfig.enableObjectReuse();
        assertEquals(false, in.hasBeenClosed());
        assertEquals(false, in.hasBeenOpened());
        List<String> resultRegular = source.executeOnCollections(new RuntimeUDFContext(taskInfo, null, executionConfig, cpTasks, accumulatorMap, UnregisteredMetricsGroup.createOperatorMetricGroup()), executionConfig);
        assertEquals(true, in.hasBeenClosed());
        assertEquals(true, in.hasBeenOpened());
        assertEquals(asList(TestIOData.RICH_NAMES), resultMutableSafe);
        assertEquals(asList(TestIOData.RICH_NAMES), resultRegular);
    } catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
}
Also used : Accumulator(org.apache.flink.api.common.accumulators.Accumulator) HashMap(java.util.HashMap) ExecutionConfig(org.apache.flink.api.common.ExecutionConfig) TaskInfo(org.apache.flink.api.common.TaskInfo) TestRichInputFormat(org.apache.flink.api.common.operators.util.TestRichInputFormat) RuntimeUDFContext(org.apache.flink.api.common.functions.util.RuntimeUDFContext) Future(java.util.concurrent.Future) Test(org.junit.Test)
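Because executeOnCollections receives the accumulatorMap through the RuntimeUDFContext, any Accumulator implementation can participate, not just the built-in counters. A minimal sketch of a custom implementation of the org.apache.flink.api.common.accumulators.Accumulator interface (the class name is illustrative):

import org.apache.flink.api.common.accumulators.Accumulator;

// Hypothetical accumulator that tracks the maximum int value seen so far.
public class MaxIntAccumulator implements Accumulator<Integer, Integer> {

    private int max = Integer.MIN_VALUE;

    @Override
    public void add(Integer value) {
        max = Math.max(max, value);
    }

    @Override
    public Integer getLocalValue() {
        return max;
    }

    @Override
    public void resetLocal() {
        max = Integer.MIN_VALUE;
    }

    @Override
    public void merge(Accumulator<Integer, Integer> other) {
        max = Math.max(max, other.getLocalValue());
    }

    @Override
    public Accumulator<Integer, Integer> clone() {
        MaxIntAccumulator copy = new MaxIntAccumulator();
        copy.max = this.max;
        return copy;
    }
}

It would be registered the same way as the built-in counters, e.g. getRuntimeContext().addAccumulator("maxSeen", new MaxIntAccumulator()).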

Aggregations

Accumulator (org.apache.flink.api.common.accumulators.Accumulator) 17
Test (org.junit.Test) 10
ExecutionConfig (org.apache.flink.api.common.ExecutionConfig) 9
HashMap (java.util.HashMap) 8
TaskInfo (org.apache.flink.api.common.TaskInfo) 7
Future (java.util.concurrent.Future) 6
RuntimeUDFContext (org.apache.flink.api.common.functions.util.RuntimeUDFContext) 6
RuntimeContext (org.apache.flink.api.common.functions.RuntimeContext) 4
Configuration (org.apache.flink.configuration.Configuration) 4
JobVertex (org.apache.flink.runtime.jobgraph.JobVertex) 4
JobVertexID (org.apache.flink.runtime.jobgraph.JobVertexID) 4
IOException (java.io.IOException) 3
ArrayList (java.util.ArrayList) 3
NoSuchElementException (java.util.NoSuchElementException) 3
ExecutionException (java.util.concurrent.ExecutionException) 3
JobException (org.apache.flink.runtime.JobException) 3
AccumulatorSnapshot (org.apache.flink.runtime.accumulators.AccumulatorSnapshot) 3
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean) 2
IntCounter (org.apache.flink.api.common.accumulators.IntCounter) 2
StoppingException (org.apache.flink.runtime.StoppingException) 2