
Example 6 with JobExecutionResult

Use of org.apache.flink.api.common.JobExecutionResult in project flink by apache.

From class JobClientActorRecoveryITCase, method testJobClientRecovery.

/**
	 * Tests whether the JobClientActor can connect to a newly elected leading job manager to obtain
	 * the JobExecutionResult. The submitted job blocks for the first execution attempt. The
	 * leading job manager will be killed so that the second job manager will be elected as the
	 * leader. The newly elected leader has to retrieve the checkpointed job from ZooKeeper
	 * and continue its execution. This time, the job does not block and, thus, can be finished.
	 * The execution result should be sent to the JobClientActor which originally submitted the
	 * job.
	 *
	 * @throws Exception
	 */
@Test
public void testJobClientRecovery() throws Exception {
    File rootFolder = tempFolder.getRoot();
    Configuration config = ZooKeeperTestUtils.createZooKeeperHAConfig(zkServer.getConnectString(), rootFolder.getPath());
    config.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, 2);
    config.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, 1);
    final TestingCluster cluster = new TestingCluster(config);
    cluster.start();
    JobVertex blockingVertex = new JobVertex("Blocking Vertex");
    blockingVertex.setInvokableClass(BlockingTask.class);
    blockingVertex.setParallelism(1);
    final JobGraph jobGraph = new JobGraph("Blocking Test Job", blockingVertex);
    final Promise<JobExecutionResult> promise = new scala.concurrent.impl.Promise.DefaultPromise<>();
    Deadline deadline = new FiniteDuration(2, TimeUnit.MINUTES).fromNow();
    try {
        Thread submitter = new Thread(new Runnable() {

            @Override
            public void run() {
                try {
                    JobExecutionResult result = cluster.submitJobAndWait(jobGraph, false);
                    promise.success(result);
                } catch (Exception e) {
                    promise.failure(e);
                }
            }
        });
        submitter.start();
        synchronized (BlockingTask.waitLock) {
            while (BlockingTask.HasBlockedExecution < 1 && deadline.hasTimeLeft()) {
                BlockingTask.waitLock.wait(deadline.timeLeft().toMillis());
            }
        }
        if (deadline.isOverdue()) {
            Assert.fail("The job has not blocked within the given deadline.");
        }
        ActorGateway gateway = cluster.getLeaderGateway(deadline.timeLeft());
        // suppress the actor's postStop() cleanup so the kill resembles a hard crash
        gateway.tell(TestingJobManagerMessages.getDisablePostStop());
        // kill the leading job manager to trigger a new leader election
        gateway.tell(PoisonPill.getInstance());
        // if the job fails then an exception is thrown here
        Await.result(promise.future(), deadline.timeLeft());
    } finally {
        cluster.shutdown();
    }
}
Also used: Configuration (org.apache.flink.configuration.Configuration), Deadline (scala.concurrent.duration.Deadline), FiniteDuration (scala.concurrent.duration.FiniteDuration), JobExecutionResult (org.apache.flink.api.common.JobExecutionResult), JobGraph (org.apache.flink.runtime.jobgraph.JobGraph), TestingCluster (org.apache.flink.runtime.testingUtils.TestingCluster), JobVertex (org.apache.flink.runtime.jobgraph.JobVertex), ActorGateway (org.apache.flink.runtime.instance.ActorGateway), File (java.io.File), Test (org.junit.Test)
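
The BlockingTask invokable that the test deploys is not reproduced above. The following is a minimal sketch of what it plausibly looks like, assuming AbstractInvokable as the base class; waitLock and HasBlockedExecution match the names the test uses, while the blockedOnce flag and the exact blocking mechanics are illustrative assumptions, not the actual Flink source.

import org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable;

// Hypothetical sketch: blocks on the first execution attempt, finishes
// immediately on the recovery attempt after the JobManager failover.
public static class BlockingTask extends AbstractInvokable {

    public static final Object waitLock = new Object();
    public static volatile int HasBlockedExecution = 0;

    private static volatile boolean blockedOnce = false;

    @Override
    public void invoke() throws Exception {
        if (!blockedOnce) {
            blockedOnce = true;
            synchronized (waitLock) {
                HasBlockedExecution++;
                // wake the test thread that waits for the job to block
                waitLock.notifyAll();
                // stay blocked until this task is cancelled, which happens
                // when the leading job manager is killed
                waitLock.wait();
            }
        }
        // second attempt after recovery: no blocking, the job can finish
    }
}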

Example 7 with JobExecutionResult

Use of org.apache.flink.api.common.JobExecutionResult in project flink by apache.

From class KafkaConsumerTestBase, method runEndOfStreamTest.

/**
	 * Test that ensures that DeserializationSchema.isEndOfStream() is properly evaluated.
	 *
	 * @throws Exception
	 */
public void runEndOfStreamTest() throws Exception {
    final int ELEMENT_COUNT = 300;
    final String topic = writeSequence("testEndOfStream", ELEMENT_COUNT, 1, 1);
    // read using custom schema
    final StreamExecutionEnvironment env1 = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
    env1.setParallelism(1);
    env1.getConfig().setRestartStrategy(RestartStrategies.noRestart());
    env1.getConfig().disableSysoutLogging();
    Properties props = new Properties();
    props.putAll(standardProps);
    props.putAll(secureProps);
    DataStream<Tuple2<Integer, Integer>> fromKafka = env1.addSource(kafkaServer.getConsumer(topic, new FixedNumberDeserializationSchema(ELEMENT_COUNT), props));
    fromKafka.flatMap(new FlatMapFunction<Tuple2<Integer, Integer>, Void>() {

        @Override
        public void flatMap(Tuple2<Integer, Integer> value, Collector<Void> out) throws Exception {
        // no-op: the test only needs to drain the elements
        }
    });
    // tryExecute() propagates a job failure as an exception; the returned
    // result is not inspected further by this test
    JobExecutionResult result = tryExecute(env1, "Consume " + ELEMENT_COUNT + " elements from Kafka");
    deleteTestTopic(topic);
}
Also used: Properties (java.util.Properties), TypeHint (org.apache.flink.api.common.typeinfo.TypeHint), RetryOnException (org.apache.flink.testutils.junit.RetryOnException), ProgramInvocationException (org.apache.flink.client.program.ProgramInvocationException), SuccessException (org.apache.flink.test.util.SuccessException), NoResourceAvailableException (org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException), JobExecutionException (org.apache.flink.runtime.client.JobExecutionException), TimeoutException (org.apache.kafka.common.errors.TimeoutException), JobCancellationException (org.apache.flink.runtime.client.JobCancellationException), IOException (java.io.IOException), JobExecutionResult (org.apache.flink.api.common.JobExecutionResult), Tuple2 (org.apache.flink.api.java.tuple.Tuple2), StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment)
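
FixedNumberDeserializationSchema is a helper defined in the test base and not shown above. A minimal sketch of such a schema follows; the field names, the counting logic, and the import location of DeserializationSchema (its package moved across Flink versions) are assumptions. The essential behavior is that isEndOfStream() flips to true after a fixed number of elements, which is exactly what this test exercises.

import java.io.IOException;

import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;

// Hypothetical sketch of a schema that ends the stream after finalCount elements.
public class FixedNumberDeserializationSchema
        implements DeserializationSchema<Tuple2<Integer, Integer>> {

    private final int finalCount;
    private int count = 0;

    public FixedNumberDeserializationSchema(int finalCount) {
        this.finalCount = finalCount;
    }

    @Override
    public Tuple2<Integer, Integer> deserialize(byte[] message) throws IOException {
        // real decoding of the Kafka record bytes is omitted; only the
        // element count matters for the end-of-stream check
        count++;
        return new Tuple2<>(count, count);
    }

    @Override
    public boolean isEndOfStream(Tuple2<Integer, Integer> nextElement) {
        // returning true lets the Kafka source terminate cleanly
        return count >= finalCount;
    }

    @Override
    public TypeInformation<Tuple2<Integer, Integer>> getProducedType() {
        return TypeInformation.of(new TypeHint<Tuple2<Integer, Integer>>() {});
    }
}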

Example 8 with JobExecutionResult

Use of org.apache.flink.api.common.JobExecutionResult in project flink by apache.

From class DataSet, method collect.

/**
	 * Convenience method to get the elements of a DataSet as a List.
	 * As a DataSet can contain a lot of data, this method should be used with caution.
	 *
	 * @return A List containing the elements of the DataSet
	 */
public List<T> collect() throws Exception {
    final String id = new AbstractID().toString();
    final TypeSerializer<T> serializer = getType().createSerializer(getExecutionEnvironment().getConfig());
    this.output(new Utils.CollectHelper<>(id, serializer)).name("collect()");
    JobExecutionResult res = getExecutionEnvironment().execute();
    ArrayList<byte[]> accResult = res.getAccumulatorResult(id);
    if (accResult != null) {
        try {
            return SerializedListAccumulator.deserializeList(accResult, serializer);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException("Cannot find type class of collected data type.", e);
        } catch (IOException e) {
            throw new RuntimeException("Serialization error while deserializing collected data", e);
        }
    } else {
        throw new RuntimeException("The call to collect() could not retrieve the DataSet.");
    }
}
Also used: JobExecutionResult (org.apache.flink.api.common.JobExecutionResult), IOException (java.io.IOException), AbstractID (org.apache.flink.util.AbstractID)
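
For illustration (not part of the Flink sources above), the accumulator round trip is invisible to the caller: collect() triggers execute() internally and hands the deserialized elements back as a local List. A minimal usage sketch:

import java.util.List;

import org.apache.flink.api.java.ExecutionEnvironment;

public class CollectExample {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // collect() runs the job and fetches the elements through the
        // JobExecutionResult accumulator map, as implemented above
        List<Long> values = env.generateSequence(1, 10).collect();
        // prints the ten generated values; ordering may vary with parallelism
        System.out.println(values);
    }
}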

Example 9 with JobExecutionResult

Use of org.apache.flink.api.common.JobExecutionResult in project flink by apache.

From class RemoteEnvironment, method execute.

// ------------------------------------------------------------------------
@Override
public JobExecutionResult execute(String jobName) throws Exception {
    PlanExecutor executor = getExecutor();
    Plan p = createProgramPlan(jobName);
    // Session management is disabled, revert this commit to enable
    //p.setJobId(jobID);
    //p.setSessionTimeout(sessionTimeout);
    JobExecutionResult result = executor.executePlan(p);
    this.lastJobExecutionResult = result;
    return result;
}
Also used: JobExecutionResult (org.apache.flink.api.common.JobExecutionResult), PlanExecutor (org.apache.flink.api.common.PlanExecutor), Plan (org.apache.flink.api.common.Plan)
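
For context, a RemoteEnvironment is usually obtained through ExecutionEnvironment.createRemoteEnvironment(); the execute() override above is what ultimately produces the JobExecutionResult at such a call site. The host, port, and jar path below are placeholders, not values from the Flink sources:

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;

public class RemoteExecuteExample {

    public static void main(String[] args) throws Exception {
        // placeholder connection details and user jar
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host", 6123, "/path/to/program.jar");

        env.generateSequence(1, 100).output(new DiscardingOutputFormat<Long>());

        // dispatches to RemoteEnvironment.execute(String) as shown above
        JobExecutionResult result = env.execute("remote example");
        System.out.println("net runtime: " + result.getNetRuntime() + " ms");
    }
}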

Example 10 with JobExecutionResult

Use of org.apache.flink.api.common.JobExecutionResult in project flink by apache.

From class CollectionExecutionAccumulatorsTest, method testAccumulator.

@Test
public void testAccumulator() {
    try {
        final int NUM_ELEMENTS = 100;
        ExecutionEnvironment env = ExecutionEnvironment.createCollectionsEnvironment();
        env.generateSequence(1, NUM_ELEMENTS).map(new CountingMapper()).output(new DiscardingOutputFormat<Long>());
        JobExecutionResult result = env.execute();
        assertTrue(result.getNetRuntime() >= 0);
        assertEquals(NUM_ELEMENTS, (int) result.getAccumulatorResult(ACCUMULATOR_NAME));
    } catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
}
Also used: JobExecutionResult (org.apache.flink.api.common.JobExecutionResult), ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment), Test (org.junit.Test)
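
CountingMapper and ACCUMULATOR_NAME are declared elsewhere in the test class and are not shown above. A plausible sketch follows, assuming an IntCounter accumulator (consistent with the assertion casting the result to int); the constant's actual value is an assumption:

import org.apache.flink.api.common.accumulators.IntCounter;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class CountingMapperSketch {

    // the real constant lives in CollectionExecutionAccumulatorsTest
    private static final String ACCUMULATOR_NAME = "count";

    public static class CountingMapper extends RichMapFunction<Long, Long> {

        private final IntCounter accumulator = new IntCounter();

        @Override
        public void open(Configuration parameters) {
            // register the accumulator so its value appears in the
            // JobExecutionResult under ACCUMULATOR_NAME
            getRuntimeContext().addAccumulator(ACCUMULATOR_NAME, accumulator);
        }

        @Override
        public Long map(Long value) {
            accumulator.add(1);
            return value;
        }
    }
}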

Aggregations

JobExecutionResult (org.apache.flink.api.common.JobExecutionResult): 28
ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment): 10
ParameterTool (org.apache.flink.api.java.utils.ParameterTool): 7
ProgramParametrizationException (org.apache.flink.client.program.ProgramParametrizationException): 7
NumberFormat (java.text.NumberFormat): 6
JDKRandomGeneratorFactory (org.apache.flink.graph.generator.random.JDKRandomGeneratorFactory): 6
LongValue (org.apache.flink.types.LongValue): 6
NullValue (org.apache.flink.types.NullValue): 6
Graph (org.apache.flink.graph.Graph): 5
GraphCsvReader (org.apache.flink.graph.GraphCsvReader): 5
LongValueToUnsignedIntValue (org.apache.flink.graph.asm.translate.translators.LongValueToUnsignedIntValue): 5
RMatGraph (org.apache.flink.graph.generator.RMatGraph): 5
RandomGenerableFactory (org.apache.flink.graph.generator.random.RandomGenerableFactory): 5
IntValue (org.apache.flink.types.IntValue): 5
StringValue (org.apache.flink.types.StringValue): 5
Test (org.junit.Test): 5
IOException (java.io.IOException): 4
DataSet (org.apache.flink.api.java.DataSet): 4
ProgramInvocationException (org.apache.flink.client.program.ProgramInvocationException): 3
GraphAnalytic (org.apache.flink.graph.GraphAnalytic): 3