Example 6 with JobExecutionException

Use of org.apache.flink.runtime.client.JobExecutionException in project flink by apache.

The class AccumulatorErrorITCase, method testInvalidTypeAccumulator.

@Test
public void testInvalidTypeAccumulator() throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment("localhost", cluster.getLeaderRPCPort());
    env.getConfig().disableSysoutLogging();
    // Test Exception forwarding with faulty Accumulator implementation
    DataSet<Long> input = env.generateSequence(0, 10000);
    DataSet<Long> mappers = input.map(new IncompatibleAccumulatorTypesMapper()).map(new IncompatibleAccumulatorTypesMapper2());
    mappers.output(new DiscardingOutputFormat<Long>());
    try {
        env.execute();
        fail("Should have failed.");
    } catch (ProgramInvocationException e) {
        Assert.assertTrue("Exception should be passed:", e.getCause() instanceof JobExecutionException);
        Assert.assertTrue("Root cause should be:", e.getCause().getCause() instanceof Exception);
        Assert.assertTrue("Root cause should be:", e.getCause().getCause().getCause() instanceof UnsupportedOperationException);
    }
}
Also used : ExecutionEnvironment(org.apache.flink.api.java.ExecutionEnvironment) JobExecutionException(org.apache.flink.runtime.client.JobExecutionException) ProgramInvocationException(org.apache.flink.client.program.ProgramInvocationException) Test(org.junit.Test)
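
The assertions above pin the exact nesting depth of the failure cause, which is brittle whenever a wrapping layer is added or removed. Below is a minimal sketch of a helper that scans the whole cause chain for an expected exception type; the class and method names are hypothetical and not part of Flink's test utilities.

import java.util.Optional;

public final class ExceptionChainAssert {

    private ExceptionChainAssert() {}

    /**
     * Returns the first throwable in the cause chain of {@code root} that is
     * assignable to {@code type}. The walk is depth-bounded to guard against
     * cyclic cause chains.
     */
    public static <T extends Throwable> Optional<T> findCause(Throwable root, Class<T> type) {
        Throwable current = root;
        for (int depth = 0; current != null && depth < 20; depth++) {
            if (type.isInstance(current)) {
                return Optional.of(type.cast(current));
            }
            current = current.getCause();
        }
        return Optional.empty();
    }
}

With such a helper, the catch block could assert ExceptionChainAssert.findCause(e, UnsupportedOperationException.class).isPresent() instead of hard-coding three getCause() calls.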

Example 7 with JobExecutionException

Use of org.apache.flink.runtime.client.JobExecutionException in project flink by apache.

The class AccumulatorErrorITCase, method testFaultyAccumulator.

@Test
public void testFaultyAccumulator() throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment("localhost", cluster.getLeaderRPCPort());
    env.getConfig().disableSysoutLogging();
    // Test Exception forwarding with faulty Accumulator implementation
    DataSet<Long> input = env.generateSequence(0, 10000);
    DataSet<Long> map = input.map(new FaultyAccumulatorUsingMapper());
    map.output(new DiscardingOutputFormat<Long>());
    try {
        env.execute();
        fail("Should have failed.");
    } catch (ProgramInvocationException e) {
        Assert.assertTrue("Exception should be passed:", e.getCause() instanceof JobExecutionException);
        Assert.assertTrue("Root cause should be:", e.getCause().getCause() instanceof CustomException);
    }
}
Also used : ExecutionEnvironment(org.apache.flink.api.java.ExecutionEnvironment) JobExecutionException(org.apache.flink.runtime.client.JobExecutionException) ProgramInvocationException(org.apache.flink.client.program.ProgramInvocationException) Test(org.junit.Test)

Example 8 with JobExecutionException

Use of org.apache.flink.runtime.client.JobExecutionException in project flink by apache.

The class JobSubmissionFailsITCase, method testExceptionInInitializeOnMaster.

@Test
public void testExceptionInInitializeOnMaster() {
    try {
        final JobVertex failingJobVertex = new FailingJobVertex("Failing job vertex");
        failingJobVertex.setInvokableClass(NoOpInvokable.class);
        final JobGraph failingJobGraph = new JobGraph("Failing testing job", failingJobVertex);
        try {
            submitJob(failingJobGraph);
            fail("Expected JobExecutionException.");
        } catch (JobExecutionException e) {
            assertEquals("Test exception.", e.getCause().getMessage());
        } catch (Throwable t) {
            t.printStackTrace();
            fail("Caught wrong exception of type " + t.getClass() + ".");
        }
        cluster.submitJobAndWait(workingJobGraph, false);
    } catch (Exception e) {
        e.printStackTrace();
        fail(e.getMessage());
    }
}
Also used : JobGraph(org.apache.flink.runtime.jobgraph.JobGraph) JobVertex(org.apache.flink.runtime.jobgraph.JobVertex) JobExecutionException(org.apache.flink.runtime.client.JobExecutionException) JobSubmissionException(org.apache.flink.runtime.client.JobSubmissionException) Test(org.junit.Test)
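
The FailingJobVertex referenced above is not shown on this page. A plausible sketch, assuming (as the test name suggests) that it overrides the master-side initialization hook to throw the "Test exception." the assertion checks for; the real definition lives in JobSubmissionFailsITCase.

// Sketch only: a vertex that fails during JobManager-side initialization.
public static class FailingJobVertex extends JobVertex {

    public FailingJobVertex(String name) {
        super(name);
    }

    @Override
    public void initializeOnMaster(ClassLoader loader) throws Exception {
        // Thrown while the JobManager initializes the vertex, i.e. before any
        // task runs, so submission fails with a JobExecutionException.
        throw new Exception("Test exception.");
    }
}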

Example 9 with JobExecutionException

Use of org.apache.flink.runtime.client.JobExecutionException in project flink by apache.

The class ClusterClient, method retrieveJob.

/**
	 * Reattaches to a running job with the supplied job ID.
	 * @param jobID The job ID of the job to attach to
	 * @return The JobExecutionResult for the given job ID
	 * @throws JobExecutionException if an error occurs while monitoring the job execution
	 */
public JobExecutionResult retrieveJob(JobID jobID) throws JobExecutionException {
    final LeaderRetrievalService leaderRetrievalService;
    try {
        leaderRetrievalService = LeaderRetrievalUtils.createLeaderRetrievalService(flinkConfig);
    } catch (Exception e) {
        throw new JobRetrievalException(jobID, "Could not create the leader retrieval service", e);
    }
    ActorGateway jobManagerGateway;
    try {
        jobManagerGateway = getJobManagerGateway();
    } catch (Exception e) {
        throw new JobRetrievalException(jobID, "Could not retrieve the JobManager Gateway", e);
    }
    final JobListeningContext listeningContext = JobClient.attachToRunningJob(jobID, jobManagerGateway, flinkConfig, actorSystemLoader.get(), leaderRetrievalService, timeout, printStatusDuringExecution);
    return JobClient.awaitJobResult(listeningContext);
}
Also used : JobListeningContext(org.apache.flink.runtime.client.JobListeningContext) JobRetrievalException(org.apache.flink.runtime.client.JobRetrievalException) LeaderRetrievalService(org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService) ActorGateway(org.apache.flink.runtime.instance.ActorGateway) URISyntaxException(java.net.URISyntaxException) JobExecutionException(org.apache.flink.runtime.client.JobExecutionException) IOException(java.io.IOException) CompilerException(org.apache.flink.optimizer.CompilerException)
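
A caller-side sketch of retrieveJob, for re-attaching to a job that was submitted in detached mode. The client variable and the job ID literal below are hypothetical placeholders, not taken from the Flink sources.

// Hypothetical usage: client is an existing ClusterClient instance.
JobID jobID = JobID.fromHexString("a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"); // placeholder ID
try {
    JobExecutionResult result = client.retrieveJob(jobID);
    System.out.println("Job finished after " + result.getNetRuntime() + " ms");
} catch (JobRetrievalException e) {
    // Raised when the leader retrieval service or gateway cannot be created,
    // or the job is unknown to the JobManager.
    System.err.println("Could not re-attach to job " + jobID + ": " + e.getMessage());
} catch (JobExecutionException e) {
    // The job was found but finished exceptionally while being monitored.
    System.err.println("Job " + jobID + " failed: " + e.getMessage());
}

Note that JobRetrievalException extends JobExecutionException, so the more specific catch clause must come first.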

Example 10 with JobExecutionException

Use of org.apache.flink.runtime.client.JobExecutionException in project flink by apache.

The class KafkaConsumerTestBase, method runSimpleConcurrentProducerConsumerTopology.

/**
	 * Ensure Kafka is working on both producer and consumer side.
	 * This executes a job that contains two Flink pipelines.
	 *
	 * <pre>
	 * (generator source) --> (kafka sink)-[KAFKA-TOPIC]-(kafka source) --> (validating sink)
	 * </pre>
	 * 
	 * We need to externally retry this test. We cannot let Flink's retry mechanism do it, because the Kafka producer
	 * does not guarantee exactly-once output. Hence a recovery would introduce duplicates that
	 * cause the test to fail.
	 *
	 * This test also ensures that FLINK-3156 doesn't happen again:
	 *
	 * The following situation caused an NPE in the FlinkKafkaConsumer:
	 *
	 * topic-1 <-- elements are only produced into topic-1.
	 * topic-2
	 *
	 * Therefore, this test also consumes from an empty topic.
	 *
	 */
@RetryOnException(times = 2, exception = kafka.common.NotLeaderForPartitionException.class)
public void runSimpleConcurrentProducerConsumerTopology() throws Exception {
    final String topic = "concurrentProducerConsumerTopic_" + UUID.randomUUID().toString();
    final String additionalEmptyTopic = "additionalEmptyTopic_" + UUID.randomUUID().toString();
    final int parallelism = 3;
    final int elementsPerPartition = 100;
    final int totalElements = parallelism * elementsPerPartition;
    createTestTopic(topic, parallelism, 2);
    // create an empty topic which will remain empty all the time
    createTestTopic(additionalEmptyTopic, parallelism, 1);
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
    env.setParallelism(parallelism);
    env.enableCheckpointing(500);
    // fail immediately
    env.setRestartStrategy(RestartStrategies.noRestart());
    env.getConfig().disableSysoutLogging();
    TypeInformation<Tuple2<Long, String>> longStringType = TypeInfoParser.parse("Tuple2<Long, String>");
    TypeInformationSerializationSchema<Tuple2<Long, String>> sourceSchema = new TypeInformationSerializationSchema<>(longStringType, env.getConfig());
    TypeInformationSerializationSchema<Tuple2<Long, String>> sinkSchema = new TypeInformationSerializationSchema<>(longStringType, env.getConfig());
    // ----------- add producer dataflow ----------
    DataStream<Tuple2<Long, String>> stream = env.addSource(new RichParallelSourceFunction<Tuple2<Long, String>>() {

        private boolean running = true;

        @Override
        public void run(SourceContext<Tuple2<Long, String>> ctx) throws InterruptedException {
            int cnt = getRuntimeContext().getIndexOfThisSubtask() * elementsPerPartition;
            int limit = cnt + elementsPerPartition;
            while (running && cnt < limit) {
                ctx.collect(new Tuple2<>(1000L + cnt, "kafka-" + cnt));
                cnt++;
                // we delay data generation a bit so that we are sure that some checkpoints are
                // triggered (for FLINK-3156)
                Thread.sleep(50);
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    });
    Properties producerProperties = FlinkKafkaProducerBase.getPropertiesFromBrokerList(brokerConnectionStrings);
    producerProperties.setProperty("retries", "3");
    producerProperties.putAll(secureProps);
    kafkaServer.produceIntoKafka(stream, topic, new KeyedSerializationSchemaWrapper<>(sinkSchema), producerProperties, null);
    // ----------- add consumer dataflow ----------
    List<String> topics = new ArrayList<>();
    topics.add(topic);
    topics.add(additionalEmptyTopic);
    Properties props = new Properties();
    props.putAll(standardProps);
    props.putAll(secureProps);
    FlinkKafkaConsumerBase<Tuple2<Long, String>> source = kafkaServer.getConsumer(topics, sourceSchema, props);
    DataStreamSource<Tuple2<Long, String>> consuming = env.addSource(source).setParallelism(parallelism);
    consuming.addSink(new RichSinkFunction<Tuple2<Long, String>>() {

        private int elCnt = 0;

        private BitSet validator = new BitSet(totalElements);

        @Override
        public void invoke(Tuple2<Long, String> value) throws Exception {
            String[] sp = value.f1.split("-");
            int v = Integer.parseInt(sp[1]);
            assertEquals(value.f0 - 1000, (long) v);
            assertFalse("Received tuple twice", validator.get(v));
            validator.set(v);
            elCnt++;
            if (elCnt == totalElements) {
                // check if everything in the bitset is set to true
                int nc;
                if ((nc = validator.nextClearBit(0)) != totalElements) {
                    fail("The bitset was not set to 1 on all elements. Next clear:" + nc + " Set: " + validator);
                }
                throw new SuccessException();
            }
        }

        @Override
        public void close() throws Exception {
            super.close();
        }
    }).setParallelism(1);
    try {
        tryExecutePropagateExceptions(env, "runSimpleConcurrentProducerConsumerTopology");
    } catch (ProgramInvocationException | JobExecutionException e) {
        // look for NotLeaderForPartitionException
        Throwable cause = e.getCause();
        // walk the cause chain (depth-bounded to avoid cycles)
        int depth = 0;
        while (cause != null && depth++ < 20) {
            if (cause instanceof kafka.common.NotLeaderForPartitionException) {
                throw (Exception) cause;
            }
            cause = cause.getCause();
        }
        throw e;
    }
    deleteTestTopic(topic);
}
Also used : ArrayList(java.util.ArrayList) Properties(java.util.Properties) JobExecutionException(org.apache.flink.runtime.client.JobExecutionException) RichSinkFunction(org.apache.flink.streaming.api.functions.sink.RichSinkFunction) BitSet(java.util.BitSet) TypeHint(org.apache.flink.api.common.typeinfo.TypeHint) TypeInformationSerializationSchema(org.apache.flink.streaming.util.serialization.TypeInformationSerializationSchema) Tuple2(org.apache.flink.api.java.tuple.Tuple2) SuccessException(org.apache.flink.test.util.SuccessException) ProgramInvocationException(org.apache.flink.client.program.ProgramInvocationException) StreamExecutionEnvironment(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) RetryOnException(org.apache.flink.testutils.junit.RetryOnException)
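
The validating sink signals success by throwing SuccessException once every element has arrived exactly once, so the job is expected to fail with that exception buried in the cause chain. A minimal sketch of that pattern, assuming the shape (but not the exact code) of Flink's tryExecutePropagateExceptions helper; the method name below is a hypothetical stand-in.

// Sketch: run the job and treat a SuccessException anywhere in the cause
// chain as "test passed"; anything else is a genuine failure.
public static void tryExecuteExpectingSuccess(StreamExecutionEnvironment env, String jobName) throws Exception {
    try {
        env.execute(jobName);
    } catch (Exception root) {
        Throwable cause = root;
        for (int depth = 0; cause != null && depth < 20; depth++) {
            if (cause instanceof SuccessException) {
                return; // the sink saw every element exactly once
            }
            cause = cause.getCause();
        }
        throw root; // real failure: let the @RetryOnException logic decide
    }
    throw new AssertionError("Job finished without signaling SuccessException");
}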

Aggregations

JobExecutionException (org.apache.flink.runtime.client.JobExecutionException): 21 usages
Test (org.junit.Test): 10 usages
IOException (java.io.IOException): 8 usages
StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment): 7 usages
ProgramInvocationException (org.apache.flink.client.program.ProgramInvocationException): 5 usages
JobGraph (org.apache.flink.runtime.jobgraph.JobGraph): 5 usages
JobVertex (org.apache.flink.runtime.jobgraph.JobVertex): 4 usages
URISyntaxException (java.net.URISyntaxException): 3 usages
JobID (org.apache.flink.api.common.JobID): 3 usages
Tuple2 (org.apache.flink.api.java.tuple.Tuple2): 3 usages
CompilerException (org.apache.flink.optimizer.CompilerException): 3 usages
JobRetrievalException (org.apache.flink.runtime.client.JobRetrievalException): 3 usages
ActorGateway (org.apache.flink.runtime.instance.ActorGateway): 3 usages
JobManagerMessages (org.apache.flink.runtime.messages.JobManagerMessages): 3 usages
TimerException (org.apache.flink.streaming.runtime.tasks.TimerException): 3 usages
Properties (java.util.Properties): 2 usages
TimeoutException (java.util.concurrent.TimeoutException): 2 usages
JobExecutionResult (org.apache.flink.api.common.JobExecutionResult): 2 usages
ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment): 2 usages
JobSubmissionException (org.apache.flink.runtime.client.JobSubmissionException): 2 usages